Oct  3 04:40:18 np0005468397 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  3 04:40:18 np0005468397 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  3 04:40:18 np0005468397 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  3 04:40:18 np0005468397 kernel: BIOS-provided physical RAM map:
Oct  3 04:40:18 np0005468397 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  3 04:40:18 np0005468397 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  3 04:40:18 np0005468397 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  3 04:40:18 np0005468397 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  3 04:40:18 np0005468397 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  3 04:40:18 np0005468397 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  3 04:40:18 np0005468397 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  3 04:40:18 np0005468397 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
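The e820 map is firmware's inventory of physical memory; everything the kernel can use must come from the "usable" ranges. A minimal Python sketch (illustrative only; the regex assumes the exact message format shown above) that totals them:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(log_lines):
        """Sum the inclusive 'usable' ranges from BIOS-e820 lines."""
        total = 0
        for line in log_lines:
            m = E820_RE.search(line)
            if m and m.group(3) == "usable":
                start, end = (int(g, 16) for g in m.group(1, 2))
                total += end - start + 1  # e820 ranges are inclusive
        return total

Applied to the three usable ranges above this yields 8,589,388,800 bytes, just under 8 GiB, consistent with an 8 GB guest.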
Oct  3 04:40:18 np0005468397 kernel: NX (Execute Disable) protection: active
Oct  3 04:40:18 np0005468397 kernel: APIC: Static calls initialized
Oct  3 04:40:18 np0005468397 kernel: SMBIOS 2.8 present.
Oct  3 04:40:18 np0005468397 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  3 04:40:18 np0005468397 kernel: Hypervisor detected: KVM
Oct  3 04:40:18 np0005468397 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  3 04:40:18 np0005468397 kernel: kvm-clock: using sched offset of 4350074189 cycles
Oct  3 04:40:18 np0005468397 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  3 04:40:18 np0005468397 kernel: tsc: Detected 2800.000 MHz processor
Oct  3 04:40:18 np0005468397 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  3 04:40:18 np0005468397 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  3 04:40:18 np0005468397 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  3 04:40:18 np0005468397 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  3 04:40:18 np0005468397 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  3 04:40:18 np0005468397 kernel: Using GB pages for direct mapping
Oct  3 04:40:18 np0005468397 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  3 04:40:18 np0005468397 kernel: ACPI: Early table checksum verification disabled
Oct  3 04:40:18 np0005468397 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  3 04:40:18 np0005468397 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  3 04:40:18 np0005468397 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  3 04:40:18 np0005468397 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  3 04:40:18 np0005468397 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  3 04:40:18 np0005468397 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  3 04:40:18 np0005468397 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  3 04:40:18 np0005468397 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  3 04:40:18 np0005468397 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  3 04:40:18 np0005468397 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  3 04:40:18 np0005468397 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  3 04:40:18 np0005468397 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  3 04:40:18 np0005468397 kernel: No NUMA configuration found
Oct  3 04:40:18 np0005468397 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  3 04:40:18 np0005468397 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct  3 04:40:18 np0005468397 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
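The 256 MB figure is selected by the crashkernel= parameter on the command line: each range:size entry applies when total system RAM falls within that range. A hedged sketch of the selection rule (helper names are illustrative, not kernel code):

    UNITS = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}

    def size(s):
        return int(s[:-1]) * UNITS[s[-1]]

    def crashkernel_reservation(spec, system_ram):
        """spec like '1G-2G:192M,2G-64G:256M,64G-:512M'; the matching range wins."""
        for entry in spec.split(","):
            rng, _, res = entry.partition(":")
            lo, _, hi = rng.partition("-")
            if size(lo) <= system_ram < (size(hi) if hi else float("inf")):
                return size(res)
        return 0

    # An ~8 GiB guest lands in the 2G-64G band, matching the 256 MB reserved above:
    assert crashkernel_reservation("1G-2G:192M,2G-64G:256M,64G-:512M", 8 << 30) == 256 << 20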
Oct  3 04:40:18 np0005468397 kernel: Zone ranges:
Oct  3 04:40:18 np0005468397 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  3 04:40:18 np0005468397 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  3 04:40:18 np0005468397 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  3 04:40:18 np0005468397 kernel:  Device   empty
Oct  3 04:40:18 np0005468397 kernel: Movable zone start for each node
Oct  3 04:40:18 np0005468397 kernel: Early memory node ranges
Oct  3 04:40:18 np0005468397 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  3 04:40:18 np0005468397 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  3 04:40:18 np0005468397 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  3 04:40:18 np0005468397 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  3 04:40:18 np0005468397 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  3 04:40:18 np0005468397 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  3 04:40:18 np0005468397 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  3 04:40:18 np0005468397 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  3 04:40:18 np0005468397 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  3 04:40:18 np0005468397 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  3 04:40:18 np0005468397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  3 04:40:18 np0005468397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  3 04:40:18 np0005468397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  3 04:40:18 np0005468397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  3 04:40:18 np0005468397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  3 04:40:18 np0005468397 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  3 04:40:18 np0005468397 kernel: TSC deadline timer available
Oct  3 04:40:18 np0005468397 kernel: CPU topo: Max. logical packages:   8
Oct  3 04:40:18 np0005468397 kernel: CPU topo: Max. logical dies:       8
Oct  3 04:40:18 np0005468397 kernel: CPU topo: Max. dies per package:   1
Oct  3 04:40:18 np0005468397 kernel: CPU topo: Max. threads per core:   1
Oct  3 04:40:18 np0005468397 kernel: CPU topo: Num. cores per package:     1
Oct  3 04:40:18 np0005468397 kernel: CPU topo: Num. threads per package:   1
Oct  3 04:40:18 np0005468397 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  3 04:40:18 np0005468397 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  3 04:40:18 np0005468397 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  3 04:40:18 np0005468397 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  3 04:40:18 np0005468397 kernel: Booting paravirtualized kernel on KVM
Oct  3 04:40:18 np0005468397 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  3 04:40:18 np0005468397 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  3 04:40:18 np0005468397 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  3 04:40:18 np0005468397 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  3 04:40:18 np0005468397 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  3 04:40:18 np0005468397 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
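BOOT_IMAGE is a token GRUB prepends to the command line rather than a kernel parameter, so the kernel forwards it untouched to user space, exactly as this message says. A small standard-library sketch for splitting the live command line into key/value pairs:

    import shlex

    def kernel_cmdline(path="/proc/cmdline"):
        params = {}
        for token in shlex.split(open(path).read()):
            key, _, value = token.partition("=")
            params[key] = value  # bare flags such as 'ro' get an empty value
        return params

    # e.g. kernel_cmdline()["crashkernel"] -> '1G-2G:192M,2G-64G:256M,64G-:512M'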
Oct  3 04:40:18 np0005468397 kernel: random: crng init done
Oct  3 04:40:18 np0005468397 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: Fallback order for Node 0: 0 
Oct  3 04:40:18 np0005468397 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  3 04:40:18 np0005468397 kernel: Policy zone: Normal
Oct  3 04:40:18 np0005468397 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  3 04:40:18 np0005468397 kernel: software IO TLB: area num 8.
Oct  3 04:40:18 np0005468397 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  3 04:40:18 np0005468397 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  3 04:40:18 np0005468397 kernel: ftrace: allocated 193 pages with 3 groups
Oct  3 04:40:18 np0005468397 kernel: Dynamic Preempt: voluntary
Oct  3 04:40:18 np0005468397 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  3 04:40:18 np0005468397 kernel: rcu: RCU event tracing is enabled.
Oct  3 04:40:18 np0005468397 kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  3 04:40:18 np0005468397 kernel: Trampoline variant of Tasks RCU enabled.
Oct  3 04:40:18 np0005468397 kernel: Rude variant of Tasks RCU enabled.
Oct  3 04:40:18 np0005468397 kernel: Tracing variant of Tasks RCU enabled.
Oct  3 04:40:18 np0005468397 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  3 04:40:18 np0005468397 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  3 04:40:18 np0005468397 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  3 04:40:18 np0005468397 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  3 04:40:18 np0005468397 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  3 04:40:18 np0005468397 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  3 04:40:18 np0005468397 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  3 04:40:18 np0005468397 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  3 04:40:18 np0005468397 kernel: Console: colour VGA+ 80x25
Oct  3 04:40:18 np0005468397 kernel: printk: console [ttyS0] enabled
Oct  3 04:40:18 np0005468397 kernel: ACPI: Core revision 20230331
Oct  3 04:40:18 np0005468397 kernel: APIC: Switch to symmetric I/O mode setup
Oct  3 04:40:18 np0005468397 kernel: x2apic enabled
Oct  3 04:40:18 np0005468397 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  3 04:40:18 np0005468397 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  3 04:40:18 np0005468397 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct  3 04:40:18 np0005468397 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  3 04:40:18 np0005468397 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  3 04:40:18 np0005468397 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  3 04:40:18 np0005468397 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  3 04:40:18 np0005468397 kernel: Spectre V2 : Mitigation: Retpolines
Oct  3 04:40:18 np0005468397 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  3 04:40:18 np0005468397 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  3 04:40:18 np0005468397 kernel: RETBleed: Mitigation: untrained return thunk
Oct  3 04:40:18 np0005468397 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  3 04:40:18 np0005468397 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  3 04:40:18 np0005468397 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  3 04:40:18 np0005468397 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  3 04:40:18 np0005468397 kernel: x86/bugs: return thunk changed
Oct  3 04:40:18 np0005468397 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  3 04:40:18 np0005468397 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  3 04:40:18 np0005468397 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  3 04:40:18 np0005468397 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  3 04:40:18 np0005468397 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  3 04:40:18 np0005468397 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  3 04:40:18 np0005468397 kernel: Freeing SMP alternatives memory: 40K
Oct  3 04:40:18 np0005468397 kernel: pid_max: default: 32768 minimum: 301
Oct  3 04:40:18 np0005468397 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  3 04:40:18 np0005468397 kernel: landlock: Up and running.
Oct  3 04:40:18 np0005468397 kernel: Yama: becoming mindful.
Oct  3 04:40:18 np0005468397 kernel: SELinux:  Initializing.
Oct  3 04:40:18 np0005468397 kernel: LSM support for eBPF active
Oct  3 04:40:18 np0005468397 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  3 04:40:18 np0005468397 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  3 04:40:18 np0005468397 kernel: ... version:                0
Oct  3 04:40:18 np0005468397 kernel: ... bit width:              48
Oct  3 04:40:18 np0005468397 kernel: ... generic registers:      6
Oct  3 04:40:18 np0005468397 kernel: ... value mask:             0000ffffffffffff
Oct  3 04:40:18 np0005468397 kernel: ... max period:             00007fffffffffff
Oct  3 04:40:18 np0005468397 kernel: ... fixed-purpose events:   0
Oct  3 04:40:18 np0005468397 kernel: ... event mask:             000000000000003f
Oct  3 04:40:18 np0005468397 kernel: signal: max sigframe size: 1776
Oct  3 04:40:18 np0005468397 kernel: rcu: Hierarchical SRCU implementation.
Oct  3 04:40:18 np0005468397 kernel: rcu: Max phase no-delay instances is 400.
Oct  3 04:40:18 np0005468397 kernel: smp: Bringing up secondary CPUs ...
Oct  3 04:40:18 np0005468397 kernel: smpboot: x86: Booting SMP configuration:
Oct  3 04:40:18 np0005468397 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  3 04:40:18 np0005468397 kernel: smp: Brought up 1 node, 8 CPUs
Oct  3 04:40:18 np0005468397 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
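The total is straightforward arithmetic over the per-CPU calibration above: each vCPU was preset to 5600.00 BogoMIPS (lpj=2800000 on a 2800 MHz guest), and eight of them give 44800. As a check:

    per_cpu = 5600.00  # "Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS"
    cpus = 8
    assert per_cpu * cpus == 44800.00  # "Total of 8 processors activated (44800.00 BogoMIPS)"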
Oct  3 04:40:18 np0005468397 kernel: node 0 deferred pages initialised in 27ms
Oct  3 04:40:18 np0005468397 kernel: Memory: 7765256K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616508K reserved, 0K cma-reserved)
Oct  3 04:40:18 np0005468397 kernel: devtmpfs: initialized
Oct  3 04:40:18 np0005468397 kernel: x86/mm: Memory block size: 128MB
Oct  3 04:40:18 np0005468397 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  3 04:40:18 np0005468397 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: pinctrl core: initialized pinctrl subsystem
Oct  3 04:40:18 np0005468397 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  3 04:40:18 np0005468397 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  3 04:40:18 np0005468397 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  3 04:40:18 np0005468397 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  3 04:40:18 np0005468397 kernel: audit: initializing netlink subsys (disabled)
Oct  3 04:40:18 np0005468397 kernel: audit: type=2000 audit(1759480817.437:1): state=initialized audit_enabled=0 res=1
Oct  3 04:40:18 np0005468397 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  3 04:40:18 np0005468397 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  3 04:40:18 np0005468397 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  3 04:40:18 np0005468397 kernel: cpuidle: using governor menu
Oct  3 04:40:18 np0005468397 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  3 04:40:18 np0005468397 kernel: PCI: Using configuration type 1 for base access
Oct  3 04:40:18 np0005468397 kernel: PCI: Using configuration type 1 for extended access
Oct  3 04:40:18 np0005468397 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  3 04:40:18 np0005468397 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  3 04:40:18 np0005468397 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  3 04:40:18 np0005468397 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  3 04:40:18 np0005468397 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  3 04:40:18 np0005468397 kernel: Demotion targets for Node 0: null
Oct  3 04:40:18 np0005468397 kernel: cryptd: max_cpu_qlen set to 1000
Oct  3 04:40:18 np0005468397 kernel: ACPI: Added _OSI(Module Device)
Oct  3 04:40:18 np0005468397 kernel: ACPI: Added _OSI(Processor Device)
Oct  3 04:40:18 np0005468397 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  3 04:40:18 np0005468397 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  3 04:40:18 np0005468397 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  3 04:40:18 np0005468397 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  3 04:40:18 np0005468397 kernel: ACPI: Interpreter enabled
Oct  3 04:40:18 np0005468397 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  3 04:40:18 np0005468397 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  3 04:40:18 np0005468397 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  3 04:40:18 np0005468397 kernel: PCI: Using E820 reservations for host bridge windows
Oct  3 04:40:18 np0005468397 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  3 04:40:18 np0005468397 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  3 04:40:18 np0005468397 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [3] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [4] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [5] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [6] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [7] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [8] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [9] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [10] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [11] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [12] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [13] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [14] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [15] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [16] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [17] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [18] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [19] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [20] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [21] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [22] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [23] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [24] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [25] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [26] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [27] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [28] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [29] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [30] registered
Oct  3 04:40:18 np0005468397 kernel: acpiphp: Slot [31] registered
Oct  3 04:40:18 np0005468397 kernel: PCI host bridge to bus 0000:00
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  3 04:40:18 np0005468397 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  3 04:40:18 np0005468397 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  3 04:40:18 np0005468397 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  3 04:40:18 np0005468397 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  3 04:40:18 np0005468397 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  3 04:40:18 np0005468397 kernel: iommu: Default domain type: Translated
Oct  3 04:40:18 np0005468397 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  3 04:40:18 np0005468397 kernel: SCSI subsystem initialized
Oct  3 04:40:18 np0005468397 kernel: ACPI: bus type USB registered
Oct  3 04:40:18 np0005468397 kernel: usbcore: registered new interface driver usbfs
Oct  3 04:40:18 np0005468397 kernel: usbcore: registered new interface driver hub
Oct  3 04:40:18 np0005468397 kernel: usbcore: registered new device driver usb
Oct  3 04:40:18 np0005468397 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  3 04:40:18 np0005468397 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  3 04:40:18 np0005468397 kernel: PTP clock support registered
Oct  3 04:40:18 np0005468397 kernel: EDAC MC: Ver: 3.0.0
Oct  3 04:40:18 np0005468397 kernel: NetLabel: Initializing
Oct  3 04:40:18 np0005468397 kernel: NetLabel:  domain hash size = 128
Oct  3 04:40:18 np0005468397 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  3 04:40:18 np0005468397 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  3 04:40:18 np0005468397 kernel: PCI: Using ACPI for IRQ routing
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  3 04:40:18 np0005468397 kernel: vgaarb: loaded
Oct  3 04:40:18 np0005468397 kernel: clocksource: Switched to clocksource kvm-clock
Oct  3 04:40:18 np0005468397 kernel: VFS: Disk quotas dquot_6.6.0
Oct  3 04:40:18 np0005468397 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  3 04:40:18 np0005468397 kernel: pnp: PnP ACPI init
Oct  3 04:40:18 np0005468397 kernel: pnp: PnP ACPI: found 5 devices
Oct  3 04:40:18 np0005468397 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  3 04:40:18 np0005468397 kernel: NET: Registered PF_INET protocol family
Oct  3 04:40:18 np0005468397 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  3 04:40:18 np0005468397 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  3 04:40:18 np0005468397 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
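The byte counts in these hash-table lines follow from page-order allocations: an order-n allocation is 2**n contiguous 4 KiB pages. For the TCP established table, for instance (the bytes-per-bucket figure is an inference from the numbers, not taken from kernel source):

    PAGE = 4096
    order, reported_bytes, entries = 7, 524288, 65536
    assert (1 << order) * PAGE == reported_bytes  # order-7 = 128 pages = 512 KiB
    assert reported_bytes // entries == 8         # implies ~8 bytes per bucket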
Oct  3 04:40:18 np0005468397 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  3 04:40:18 np0005468397 kernel: NET: Registered PF_XDP protocol family
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  3 04:40:18 np0005468397 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  3 04:40:18 np0005468397 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  3 04:40:18 np0005468397 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 78713 usecs
Oct  3 04:40:18 np0005468397 kernel: PCI: CLS 0 bytes, default 64
Oct  3 04:40:18 np0005468397 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  3 04:40:18 np0005468397 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  3 04:40:18 np0005468397 kernel: ACPI: bus type thunderbolt registered
Oct  3 04:40:18 np0005468397 kernel: Trying to unpack rootfs image as initramfs...
Oct  3 04:40:18 np0005468397 kernel: Initialise system trusted keyrings
Oct  3 04:40:18 np0005468397 kernel: Key type blacklist registered
Oct  3 04:40:18 np0005468397 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  3 04:40:18 np0005468397 kernel: zbud: loaded
Oct  3 04:40:18 np0005468397 kernel: integrity: Platform Keyring initialized
Oct  3 04:40:18 np0005468397 kernel: integrity: Machine keyring initialized
Oct  3 04:40:18 np0005468397 kernel: Freeing initrd memory: 86104K
Oct  3 04:40:18 np0005468397 kernel: NET: Registered PF_ALG protocol family
Oct  3 04:40:18 np0005468397 kernel: xor: automatically using best checksumming function   avx       
Oct  3 04:40:18 np0005468397 kernel: Key type asymmetric registered
Oct  3 04:40:18 np0005468397 kernel: Asymmetric key parser 'x509' registered
Oct  3 04:40:18 np0005468397 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  3 04:40:18 np0005468397 kernel: io scheduler mq-deadline registered
Oct  3 04:40:18 np0005468397 kernel: io scheduler kyber registered
Oct  3 04:40:18 np0005468397 kernel: io scheduler bfq registered
Oct  3 04:40:18 np0005468397 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  3 04:40:18 np0005468397 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  3 04:40:18 np0005468397 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  3 04:40:18 np0005468397 kernel: ACPI: button: Power Button [PWRF]
Oct  3 04:40:18 np0005468397 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  3 04:40:18 np0005468397 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  3 04:40:18 np0005468397 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  3 04:40:18 np0005468397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  3 04:40:18 np0005468397 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  3 04:40:18 np0005468397 kernel: Non-volatile memory driver v1.3
Oct  3 04:40:18 np0005468397 kernel: rdac: device handler registered
Oct  3 04:40:18 np0005468397 kernel: hp_sw: device handler registered
Oct  3 04:40:18 np0005468397 kernel: emc: device handler registered
Oct  3 04:40:18 np0005468397 kernel: alua: device handler registered
Oct  3 04:40:18 np0005468397 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  3 04:40:18 np0005468397 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  3 04:40:18 np0005468397 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  3 04:40:18 np0005468397 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  3 04:40:18 np0005468397 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  3 04:40:18 np0005468397 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  3 04:40:18 np0005468397 kernel: usb usb1: Product: UHCI Host Controller
Oct  3 04:40:18 np0005468397 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  3 04:40:18 np0005468397 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  3 04:40:18 np0005468397 kernel: hub 1-0:1.0: USB hub found
Oct  3 04:40:18 np0005468397 kernel: hub 1-0:1.0: 2 ports detected
Oct  3 04:40:18 np0005468397 kernel: usbcore: registered new interface driver usbserial_generic
Oct  3 04:40:18 np0005468397 kernel: usbserial: USB Serial support registered for generic
Oct  3 04:40:18 np0005468397 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  3 04:40:18 np0005468397 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  3 04:40:18 np0005468397 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  3 04:40:18 np0005468397 kernel: mousedev: PS/2 mouse device common for all mice
Oct  3 04:40:18 np0005468397 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  3 04:40:18 np0005468397 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  3 04:40:18 np0005468397 kernel: rtc_cmos 00:04: registered as rtc0
Oct  3 04:40:18 np0005468397 kernel: rtc_cmos 00:04: setting system clock to 2025-10-03T08:40:17 UTC (1759480817)
Oct  3 04:40:18 np0005468397 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
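The epoch in parentheses decodes to the UTC time shown; the syslog prefixes read 04:40 because the logger stamps in local time, apparently UTC-4 here. A quick check:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1759480817, tz=timezone.utc).isoformat())
    # -> 2025-10-03T08:40:17+00:00, matching the rtc_cmos line above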
Oct  3 04:40:18 np0005468397 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  3 04:40:18 np0005468397 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  3 04:40:18 np0005468397 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  3 04:40:18 np0005468397 kernel: usbcore: registered new interface driver usbhid
Oct  3 04:40:18 np0005468397 kernel: usbhid: USB HID core driver
Oct  3 04:40:18 np0005468397 kernel: drop_monitor: Initializing network drop monitor service
Oct  3 04:40:18 np0005468397 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  3 04:40:18 np0005468397 kernel: Initializing XFRM netlink socket
Oct  3 04:40:18 np0005468397 kernel: NET: Registered PF_INET6 protocol family
Oct  3 04:40:18 np0005468397 kernel: Segment Routing with IPv6
Oct  3 04:40:18 np0005468397 kernel: NET: Registered PF_PACKET protocol family
Oct  3 04:40:18 np0005468397 kernel: mpls_gso: MPLS GSO support
Oct  3 04:40:18 np0005468397 kernel: IPI shorthand broadcast: enabled
Oct  3 04:40:18 np0005468397 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  3 04:40:18 np0005468397 kernel: AES CTR mode by8 optimization enabled
Oct  3 04:40:18 np0005468397 kernel: sched_clock: Marking stable (1290004185, 145784368)->(1519969607, -84181054)
Oct  3 04:40:18 np0005468397 kernel: registered taskstats version 1
Oct  3 04:40:18 np0005468397 kernel: Loading compiled-in X.509 certificates
Oct  3 04:40:18 np0005468397 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  3 04:40:18 np0005468397 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  3 04:40:18 np0005468397 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  3 04:40:18 np0005468397 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  3 04:40:18 np0005468397 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  3 04:40:18 np0005468397 kernel: Demotion targets for Node 0: null
Oct  3 04:40:18 np0005468397 kernel: page_owner is disabled
Oct  3 04:40:18 np0005468397 kernel: Key type .fscrypt registered
Oct  3 04:40:18 np0005468397 kernel: Key type fscrypt-provisioning registered
Oct  3 04:40:18 np0005468397 kernel: Key type big_key registered
Oct  3 04:40:18 np0005468397 kernel: Key type encrypted registered
Oct  3 04:40:18 np0005468397 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  3 04:40:18 np0005468397 kernel: Loading compiled-in module X.509 certificates
Oct  3 04:40:18 np0005468397 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  3 04:40:18 np0005468397 kernel: ima: Allocated hash algorithm: sha256
Oct  3 04:40:18 np0005468397 kernel: ima: No architecture policies found
Oct  3 04:40:18 np0005468397 kernel: evm: Initialising EVM extended attributes:
Oct  3 04:40:18 np0005468397 kernel: evm: security.selinux
Oct  3 04:40:18 np0005468397 kernel: evm: security.SMACK64 (disabled)
Oct  3 04:40:18 np0005468397 kernel: evm: security.SMACK64EXEC (disabled)
Oct  3 04:40:18 np0005468397 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  3 04:40:18 np0005468397 kernel: evm: security.SMACK64MMAP (disabled)
Oct  3 04:40:18 np0005468397 kernel: evm: security.apparmor (disabled)
Oct  3 04:40:18 np0005468397 kernel: evm: security.ima
Oct  3 04:40:18 np0005468397 kernel: evm: security.capability
Oct  3 04:40:18 np0005468397 kernel: evm: HMAC attrs: 0x1
Oct  3 04:40:18 np0005468397 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  3 04:40:18 np0005468397 kernel: Running certificate verification RSA selftest
Oct  3 04:40:18 np0005468397 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  3 04:40:18 np0005468397 kernel: Running certificate verification ECDSA selftest
Oct  3 04:40:18 np0005468397 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  3 04:40:18 np0005468397 kernel: clk: Disabling unused clocks
Oct  3 04:40:18 np0005468397 kernel: Freeing unused decrypted memory: 2028K
Oct  3 04:40:18 np0005468397 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  3 04:40:18 np0005468397 kernel: Write protecting the kernel read-only data: 30720k
Oct  3 04:40:18 np0005468397 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  3 04:40:18 np0005468397 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  3 04:40:18 np0005468397 kernel: Run /init as init process
Oct  3 04:40:18 np0005468397 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  3 04:40:18 np0005468397 systemd: Detected virtualization kvm.
Oct  3 04:40:18 np0005468397 systemd: Detected architecture x86-64.
Oct  3 04:40:18 np0005468397 systemd: Running in initrd.
Oct  3 04:40:18 np0005468397 systemd: No hostname configured, using default hostname.
Oct  3 04:40:18 np0005468397 systemd: Hostname set to <localhost>.
Oct  3 04:40:18 np0005468397 systemd: Initializing machine ID from VM UUID.
Oct  3 04:40:18 np0005468397 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  3 04:40:18 np0005468397 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  3 04:40:18 np0005468397 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  3 04:40:18 np0005468397 kernel: usb 1-1: Manufacturer: QEMU
Oct  3 04:40:18 np0005468397 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  3 04:40:18 np0005468397 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  3 04:40:18 np0005468397 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  3 04:40:18 np0005468397 systemd: Queued start job for default target Initrd Default Target.
Oct  3 04:40:18 np0005468397 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  3 04:40:18 np0005468397 systemd: Reached target Local Encrypted Volumes.
Oct  3 04:40:18 np0005468397 systemd: Reached target Initrd /usr File System.
Oct  3 04:40:18 np0005468397 systemd: Reached target Local File Systems.
Oct  3 04:40:18 np0005468397 systemd: Reached target Path Units.
Oct  3 04:40:18 np0005468397 systemd: Reached target Slice Units.
Oct  3 04:40:18 np0005468397 systemd: Reached target Swaps.
Oct  3 04:40:18 np0005468397 systemd: Reached target Timer Units.
Oct  3 04:40:18 np0005468397 systemd: Listening on D-Bus System Message Bus Socket.
Oct  3 04:40:18 np0005468397 systemd: Listening on Journal Socket (/dev/log).
Oct  3 04:40:18 np0005468397 systemd: Listening on Journal Socket.
Oct  3 04:40:18 np0005468397 systemd: Listening on udev Control Socket.
Oct  3 04:40:18 np0005468397 systemd: Listening on udev Kernel Socket.
Oct  3 04:40:18 np0005468397 systemd: Reached target Socket Units.
Oct  3 04:40:18 np0005468397 systemd: Starting Create List of Static Device Nodes...
Oct  3 04:40:18 np0005468397 systemd: Starting Journal Service...
Oct  3 04:40:18 np0005468397 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  3 04:40:18 np0005468397 systemd: Starting Apply Kernel Variables...
Oct  3 04:40:18 np0005468397 systemd: Starting Create System Users...
Oct  3 04:40:18 np0005468397 systemd: Starting Setup Virtual Console...
Oct  3 04:40:18 np0005468397 systemd: Finished Create List of Static Device Nodes.
Oct  3 04:40:18 np0005468397 systemd: Finished Apply Kernel Variables.
Oct  3 04:40:18 np0005468397 systemd: Finished Create System Users.
Oct  3 04:40:18 np0005468397 systemd-journald[307]: Journal started
Oct  3 04:40:18 np0005468397 systemd-journald[307]: Runtime Journal (/run/log/journal/1cc15826d1a94f5b875a64915f5c099d) is 8.0M, max 153.5M, 145.5M free.
Oct  3 04:40:18 np0005468397 systemd-sysusers[312]: Creating group 'users' with GID 100.
Oct  3 04:40:18 np0005468397 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Oct  3 04:40:18 np0005468397 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  3 04:40:18 np0005468397 systemd: Started Journal Service.
Oct  3 04:40:18 np0005468397 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  3 04:40:18 np0005468397 systemd[1]: Starting Create Volatile Files and Directories...
Oct  3 04:40:18 np0005468397 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  3 04:40:18 np0005468397 systemd[1]: Finished Setup Virtual Console.
Oct  3 04:40:18 np0005468397 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  3 04:40:18 np0005468397 systemd[1]: Starting dracut cmdline hook...
Oct  3 04:40:18 np0005468397 systemd[1]: Finished Create Volatile Files and Directories.
Oct  3 04:40:18 np0005468397 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Oct  3 04:40:18 np0005468397 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  3 04:40:18 np0005468397 systemd[1]: Finished dracut cmdline hook.
Oct  3 04:40:18 np0005468397 systemd[1]: Starting dracut pre-udev hook...
Oct  3 04:40:18 np0005468397 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  3 04:40:18 np0005468397 kernel: device-mapper: uevent: version 1.0.3
Oct  3 04:40:18 np0005468397 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  3 04:40:18 np0005468397 kernel: RPC: Registered named UNIX socket transport module.
Oct  3 04:40:18 np0005468397 kernel: RPC: Registered udp transport module.
Oct  3 04:40:18 np0005468397 kernel: RPC: Registered tcp transport module.
Oct  3 04:40:18 np0005468397 kernel: RPC: Registered tcp-with-tls transport module.
Oct  3 04:40:18 np0005468397 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  3 04:40:18 np0005468397 rpc.statd[445]: Version 2.5.4 starting
Oct  3 04:40:18 np0005468397 rpc.statd[445]: Initializing NSM state
Oct  3 04:40:18 np0005468397 rpc.idmapd[450]: Setting log level to 0
Oct  3 04:40:18 np0005468397 systemd[1]: Finished dracut pre-udev hook.
Oct  3 04:40:18 np0005468397 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  3 04:40:18 np0005468397 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Oct  3 04:40:18 np0005468397 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  3 04:40:18 np0005468397 systemd[1]: Starting dracut pre-trigger hook...
Oct  3 04:40:18 np0005468397 systemd[1]: Finished dracut pre-trigger hook.
Oct  3 04:40:18 np0005468397 systemd[1]: Starting Coldplug All udev Devices...
Oct  3 04:40:18 np0005468397 systemd[1]: Created slice Slice /system/modprobe.
Oct  3 04:40:18 np0005468397 systemd[1]: Starting Load Kernel Module configfs...
Oct  3 04:40:18 np0005468397 systemd[1]: Finished Coldplug All udev Devices.
Oct  3 04:40:18 np0005468397 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  3 04:40:18 np0005468397 systemd[1]: Finished Load Kernel Module configfs.
Oct  3 04:40:18 np0005468397 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  3 04:40:18 np0005468397 systemd[1]: Reached target Network.
Oct  3 04:40:18 np0005468397 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  3 04:40:18 np0005468397 systemd[1]: Starting dracut initqueue hook...
Oct  3 04:40:18 np0005468397 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  3 04:40:18 np0005468397 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  3 04:40:18 np0005468397 kernel: vda: vda1
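Both capacities reported for vda describe the same disk: 167,772,160 sectors of 512 bytes is 85.9 GB in decimal units and exactly 80 GiB in binary units. Verified:

    sectors, sector_size = 167772160, 512
    size_bytes = sectors * sector_size
    assert size_bytes == 85_899_345_920  # ~85.9 GB (decimal)
    assert size_bytes == 80 * 2**30      # exactly 80 GiB (binary)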
Oct  3 04:40:18 np0005468397 systemd-udevd[468]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 04:40:18 np0005468397 kernel: scsi host0: ata_piix
Oct  3 04:40:18 np0005468397 kernel: scsi host1: ata_piix
Oct  3 04:40:18 np0005468397 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  3 04:40:18 np0005468397 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  3 04:40:18 np0005468397 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  3 04:40:18 np0005468397 systemd[1]: Reached target Initrd Root Device.
Oct  3 04:40:19 np0005468397 systemd[1]: Mounting Kernel Configuration File System...
Oct  3 04:40:19 np0005468397 systemd[1]: Mounted Kernel Configuration File System.
Oct  3 04:40:19 np0005468397 kernel: ata1: found unknown device (class 0)
Oct  3 04:40:19 np0005468397 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  3 04:40:19 np0005468397 systemd[1]: Reached target System Initialization.
Oct  3 04:40:19 np0005468397 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  3 04:40:19 np0005468397 systemd[1]: Reached target Basic System.
Oct  3 04:40:19 np0005468397 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  3 04:40:19 np0005468397 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  3 04:40:19 np0005468397 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  3 04:40:19 np0005468397 systemd[1]: Finished dracut initqueue hook.
Oct  3 04:40:19 np0005468397 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  3 04:40:19 np0005468397 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  3 04:40:19 np0005468397 systemd[1]: Reached target Remote File Systems.
Oct  3 04:40:19 np0005468397 systemd[1]: Starting dracut pre-mount hook...
Oct  3 04:40:19 np0005468397 systemd[1]: Finished dracut pre-mount hook.
Oct  3 04:40:19 np0005468397 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  3 04:40:19 np0005468397 systemd-fsck[559]: /usr/sbin/fsck.xfs: XFS file system.
Oct  3 04:40:19 np0005468397 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  3 04:40:19 np0005468397 systemd[1]: Mounting /sysroot...
Oct  3 04:40:19 np0005468397 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  3 04:40:19 np0005468397 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  3 04:40:19 np0005468397 kernel: XFS (vda1): Ending clean mount
Oct  3 04:40:19 np0005468397 systemd[1]: Mounted /sysroot.
Oct  3 04:40:19 np0005468397 systemd[1]: Reached target Initrd Root File System.
Oct  3 04:40:19 np0005468397 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  3 04:40:19 np0005468397 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  3 04:40:19 np0005468397 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  3 04:40:19 np0005468397 systemd[1]: Reached target Initrd File Systems.
Oct  3 04:40:19 np0005468397 systemd[1]: Reached target Initrd Default Target.
Oct  3 04:40:19 np0005468397 systemd[1]: Starting dracut mount hook...
Oct  3 04:40:19 np0005468397 systemd[1]: Finished dracut mount hook.
Oct  3 04:40:20 np0005468397 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  3 04:40:20 np0005468397 rpc.idmapd[450]: exiting on signal 15
Oct  3 04:40:20 np0005468397 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  3 04:40:20 np0005468397 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Network.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Timer Units.
Oct  3 04:40:20 np0005468397 systemd[1]: dbus.socket: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  3 04:40:20 np0005468397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Initrd Default Target.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Basic System.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Initrd Root Device.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Initrd /usr File System.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Path Units.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Remote File Systems.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Slice Units.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Socket Units.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target System Initialization.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Local File Systems.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Swaps.
Oct  3 04:40:20 np0005468397 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped dracut mount hook.
Oct  3 04:40:20 np0005468397 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped dracut pre-mount hook.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  3 04:40:20 np0005468397 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped dracut initqueue hook.
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Apply Kernel Variables.
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Coldplug All udev Devices.
Oct  3 04:40:20 np0005468397 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped dracut pre-trigger hook.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Setup Virtual Console.
Oct  3 04:40:20 np0005468397 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Closed udev Control Socket.
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Closed udev Kernel Socket.
Oct  3 04:40:20 np0005468397 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped dracut pre-udev hook.
Oct  3 04:40:20 np0005468397 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped dracut cmdline hook.
Oct  3 04:40:20 np0005468397 systemd[1]: Starting Cleanup udev Database...
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  3 04:40:20 np0005468397 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  3 04:40:20 np0005468397 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Stopped Create System Users.
Oct  3 04:40:20 np0005468397 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  3 04:40:20 np0005468397 systemd[1]: Finished Cleanup udev Database.
Oct  3 04:40:20 np0005468397 systemd[1]: Reached target Switch Root.
Oct  3 04:40:20 np0005468397 systemd[1]: Starting Switch Root...
Oct  3 04:40:20 np0005468397 systemd[1]: Switching root.
Oct  3 04:40:20 np0005468397 systemd-journald[307]: Journal stopped
Oct  3 04:40:21 np0005468397 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  3 04:40:21 np0005468397 kernel: audit: type=1404 audit(1759480820.423:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  3 04:40:21 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 04:40:21 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 04:40:21 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 04:40:21 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 04:40:21 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 04:40:21 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 04:40:21 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 04:40:21 np0005468397 kernel: audit: type=1403 audit(1759480820.587:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  3 04:40:21 np0005468397 systemd: Successfully loaded SELinux policy in 167.787ms.
Oct  3 04:40:21 np0005468397 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.738ms.
Oct  3 04:40:21 np0005468397 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  3 04:40:21 np0005468397 systemd: Detected virtualization kvm.
Oct  3 04:40:21 np0005468397 systemd: Detected architecture x86-64.
Oct  3 04:40:21 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 04:40:21 np0005468397 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  3 04:40:21 np0005468397 systemd: Stopped Switch Root.
Oct  3 04:40:21 np0005468397 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  3 04:40:21 np0005468397 systemd: Created slice Slice /system/getty.
Oct  3 04:40:21 np0005468397 systemd: Created slice Slice /system/serial-getty.
Oct  3 04:40:21 np0005468397 systemd: Created slice Slice /system/sshd-keygen.
Oct  3 04:40:21 np0005468397 systemd: Created slice User and Session Slice.
Oct  3 04:40:21 np0005468397 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  3 04:40:21 np0005468397 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  3 04:40:21 np0005468397 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  3 04:40:21 np0005468397 systemd: Reached target Local Encrypted Volumes.
Oct  3 04:40:21 np0005468397 systemd: Stopped target Switch Root.
Oct  3 04:40:21 np0005468397 systemd: Stopped target Initrd File Systems.
Oct  3 04:40:21 np0005468397 systemd: Stopped target Initrd Root File System.
Oct  3 04:40:21 np0005468397 systemd: Reached target Local Integrity Protected Volumes.
Oct  3 04:40:21 np0005468397 systemd: Reached target Path Units.
Oct  3 04:40:21 np0005468397 systemd: Reached target rpc_pipefs.target.
Oct  3 04:40:21 np0005468397 systemd: Reached target Slice Units.
Oct  3 04:40:21 np0005468397 systemd: Reached target Swaps.
Oct  3 04:40:21 np0005468397 systemd: Reached target Local Verity Protected Volumes.
Oct  3 04:40:21 np0005468397 systemd: Listening on RPCbind Server Activation Socket.
Oct  3 04:40:21 np0005468397 systemd: Reached target RPC Port Mapper.
Oct  3 04:40:21 np0005468397 systemd: Listening on Process Core Dump Socket.
Oct  3 04:40:21 np0005468397 systemd: Listening on initctl Compatibility Named Pipe.
Oct  3 04:40:21 np0005468397 systemd: Listening on udev Control Socket.
Oct  3 04:40:21 np0005468397 systemd: Listening on udev Kernel Socket.
Oct  3 04:40:21 np0005468397 systemd: Mounting Huge Pages File System...
Oct  3 04:40:21 np0005468397 systemd: Mounting POSIX Message Queue File System...
Oct  3 04:40:21 np0005468397 systemd: Mounting Kernel Debug File System...
Oct  3 04:40:21 np0005468397 systemd: Mounting Kernel Trace File System...
Oct  3 04:40:21 np0005468397 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  3 04:40:21 np0005468397 systemd: Starting Create List of Static Device Nodes...
Oct  3 04:40:21 np0005468397 systemd: Starting Load Kernel Module configfs...
Oct  3 04:40:21 np0005468397 systemd: Starting Load Kernel Module drm...
Oct  3 04:40:21 np0005468397 systemd: Starting Load Kernel Module efi_pstore...
Oct  3 04:40:21 np0005468397 systemd: Starting Load Kernel Module fuse...
Oct  3 04:40:21 np0005468397 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  3 04:40:21 np0005468397 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  3 04:40:21 np0005468397 systemd: Stopped File System Check on Root Device.
Oct  3 04:40:21 np0005468397 systemd: Stopped Journal Service.
Oct  3 04:40:21 np0005468397 systemd: Starting Journal Service...
Oct  3 04:40:21 np0005468397 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  3 04:40:21 np0005468397 systemd: Starting Generate network units from Kernel command line...
Oct  3 04:40:21 np0005468397 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  3 04:40:21 np0005468397 systemd: Starting Remount Root and Kernel File Systems...
Oct  3 04:40:21 np0005468397 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  3 04:40:21 np0005468397 systemd: Starting Apply Kernel Variables...
Oct  3 04:40:21 np0005468397 systemd: Starting Coldplug All udev Devices...
Oct  3 04:40:21 np0005468397 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  3 04:40:21 np0005468397 systemd: Mounted Huge Pages File System.
Oct  3 04:40:21 np0005468397 systemd-journald[682]: Journal started
Oct  3 04:40:21 np0005468397 systemd-journald[682]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  3 04:40:21 np0005468397 systemd[1]: Queued start job for default target Multi-User System.
Oct  3 04:40:21 np0005468397 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  3 04:40:21 np0005468397 systemd: Started Journal Service.
Oct  3 04:40:21 np0005468397 systemd[1]: Mounted POSIX Message Queue File System.
Oct  3 04:40:21 np0005468397 systemd[1]: Mounted Kernel Debug File System.
Oct  3 04:40:21 np0005468397 systemd[1]: Mounted Kernel Trace File System.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Create List of Static Device Nodes.
Oct  3 04:40:21 np0005468397 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Load Kernel Module configfs.
Oct  3 04:40:21 np0005468397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  3 04:40:21 np0005468397 kernel: ACPI: bus type drm_connector registered
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  3 04:40:21 np0005468397 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Load Kernel Module drm.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Generate network units from Kernel command line.
Oct  3 04:40:21 np0005468397 kernel: fuse: init (API version 7.37)
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  3 04:40:21 np0005468397 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Rebuild Hardware Database...
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  3 04:40:21 np0005468397 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Load/Save OS Random Seed...
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Create System Users...
Oct  3 04:40:21 np0005468397 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Load Kernel Module fuse.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Apply Kernel Variables.
Oct  3 04:40:21 np0005468397 systemd-journald[682]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  3 04:40:21 np0005468397 systemd-journald[682]: Received client request to flush runtime journal.
Oct  3 04:40:21 np0005468397 systemd[1]: Mounting FUSE Control File System...
Oct  3 04:40:21 np0005468397 systemd[1]: Mounted FUSE Control File System.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Load/Save OS Random Seed.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  3 04:40:21 np0005468397 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Create System Users.
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Coldplug All udev Devices.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  3 04:40:21 np0005468397 systemd[1]: Reached target Preparation for Local File Systems.
Oct  3 04:40:21 np0005468397 systemd[1]: Reached target Local File Systems.
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  3 04:40:21 np0005468397 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  3 04:40:21 np0005468397 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  3 04:40:21 np0005468397 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Automatic Boot Loader Update...
Oct  3 04:40:21 np0005468397 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Create Volatile Files and Directories...
Oct  3 04:40:21 np0005468397 bootctl[704]: Couldn't find EFI system partition, skipping.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Automatic Boot Loader Update.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Create Volatile Files and Directories.
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Security Auditing Service...
Oct  3 04:40:21 np0005468397 systemd[1]: Starting RPC Bind...
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Rebuild Journal Catalog...
Oct  3 04:40:21 np0005468397 auditd[710]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  3 04:40:21 np0005468397 auditd[710]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  3 04:40:21 np0005468397 systemd[1]: Started RPC Bind.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Rebuild Journal Catalog.
Oct  3 04:40:21 np0005468397 augenrules[715]: /sbin/augenrules: No change
Oct  3 04:40:21 np0005468397 augenrules[730]: No rules
Oct  3 04:40:21 np0005468397 augenrules[730]: enabled 1
Oct  3 04:40:21 np0005468397 augenrules[730]: failure 1
Oct  3 04:40:21 np0005468397 augenrules[730]: pid 710
Oct  3 04:40:21 np0005468397 augenrules[730]: rate_limit 0
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_limit 8192
Oct  3 04:40:21 np0005468397 augenrules[730]: lost 0
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog 4
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_wait_time 60000
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_wait_time_actual 0
Oct  3 04:40:21 np0005468397 augenrules[730]: enabled 1
Oct  3 04:40:21 np0005468397 augenrules[730]: failure 1
Oct  3 04:40:21 np0005468397 augenrules[730]: pid 710
Oct  3 04:40:21 np0005468397 augenrules[730]: rate_limit 0
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_limit 8192
Oct  3 04:40:21 np0005468397 augenrules[730]: lost 0
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog 0
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_wait_time 60000
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_wait_time_actual 0
Oct  3 04:40:21 np0005468397 augenrules[730]: enabled 1
Oct  3 04:40:21 np0005468397 augenrules[730]: failure 1
Oct  3 04:40:21 np0005468397 augenrules[730]: pid 710
Oct  3 04:40:21 np0005468397 augenrules[730]: rate_limit 0
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_limit 8192
Oct  3 04:40:21 np0005468397 augenrules[730]: lost 0
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog 3
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_wait_time 60000
Oct  3 04:40:21 np0005468397 augenrules[730]: backlog_wait_time_actual 0
Oct  3 04:40:21 np0005468397 systemd[1]: Started Security Auditing Service.
Oct  3 04:40:21 np0005468397 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  3 04:40:21 np0005468397 systemd[1]: Finished Rebuild Hardware Database.
Oct  3 04:40:22 np0005468397 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  3 04:40:22 np0005468397 systemd[1]: Starting Update is Completed...
Oct  3 04:40:22 np0005468397 systemd[1]: Finished Update is Completed.
Oct  3 04:40:22 np0005468397 systemd-udevd[738]: Using default interface naming scheme 'rhel-9.0'.
Oct  3 04:40:22 np0005468397 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  3 04:40:22 np0005468397 systemd[1]: Reached target System Initialization.
Oct  3 04:40:22 np0005468397 systemd[1]: Started dnf makecache --timer.
Oct  3 04:40:22 np0005468397 systemd[1]: Started Daily rotation of log files.
Oct  3 04:40:22 np0005468397 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  3 04:40:22 np0005468397 systemd[1]: Reached target Timer Units.
Oct  3 04:40:22 np0005468397 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  3 04:40:22 np0005468397 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  3 04:40:22 np0005468397 systemd[1]: Reached target Socket Units.
Oct  3 04:40:22 np0005468397 systemd[1]: Starting D-Bus System Message Bus...
Oct  3 04:40:22 np0005468397 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  3 04:40:22 np0005468397 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  3 04:40:22 np0005468397 systemd[1]: Starting Load Kernel Module configfs...
Oct  3 04:40:22 np0005468397 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  3 04:40:22 np0005468397 systemd[1]: Finished Load Kernel Module configfs.
Oct  3 04:40:22 np0005468397 systemd-udevd[746]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 04:40:22 np0005468397 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  3 04:40:22 np0005468397 systemd[1]: Started D-Bus System Message Bus.
Oct  3 04:40:22 np0005468397 systemd[1]: Reached target Basic System.
Oct  3 04:40:22 np0005468397 dbus-broker-lau[777]: Ready
Oct  3 04:40:22 np0005468397 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  3 04:40:22 np0005468397 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  3 04:40:22 np0005468397 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  3 04:40:22 np0005468397 systemd[1]: Starting NTP client/server...
Oct  3 04:40:22 np0005468397 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  3 04:40:22 np0005468397 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  3 04:40:22 np0005468397 systemd[1]: Starting IPv4 firewall with iptables...
Oct  3 04:40:22 np0005468397 systemd[1]: Started irqbalance daemon.
Oct  3 04:40:22 np0005468397 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  3 04:40:22 np0005468397 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  3 04:40:22 np0005468397 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  3 04:40:22 np0005468397 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  3 04:40:22 np0005468397 systemd[1]: Reached target sshd-keygen.target.
Oct  3 04:40:22 np0005468397 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  3 04:40:22 np0005468397 systemd[1]: Reached target User and Group Name Lookups.
Oct  3 04:40:22 np0005468397 systemd[1]: Starting User Login Management...
Oct  3 04:40:22 np0005468397 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  3 04:40:22 np0005468397 chronyd[807]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  3 04:40:22 np0005468397 chronyd[807]: Loaded 0 symmetric keys
Oct  3 04:40:22 np0005468397 chronyd[807]: Using right/UTC timezone to obtain leap second data
Oct  3 04:40:22 np0005468397 chronyd[807]: Loaded seccomp filter (level 2)
Oct  3 04:40:22 np0005468397 systemd[1]: Started NTP client/server.
Oct  3 04:40:22 np0005468397 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  3 04:40:22 np0005468397 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  3 04:40:22 np0005468397 systemd-logind[798]: New seat seat0.
Oct  3 04:40:22 np0005468397 systemd[1]: Started User Login Management.
Oct  3 04:40:22 np0005468397 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  3 04:40:22 np0005468397 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  3 04:40:22 np0005468397 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  3 04:40:22 np0005468397 kernel: Console: switching to colour dummy device 80x25
Oct  3 04:40:22 np0005468397 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  3 04:40:22 np0005468397 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  3 04:40:22 np0005468397 kernel: [drm] features: -context_init
Oct  3 04:40:22 np0005468397 kernel: [drm] number of scanouts: 1
Oct  3 04:40:22 np0005468397 kernel: [drm] number of cap sets: 0
Oct  3 04:40:22 np0005468397 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  3 04:40:22 np0005468397 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  3 04:40:22 np0005468397 kernel: Console: switching to colour frame buffer device 128x48
Oct  3 04:40:22 np0005468397 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  3 04:40:22 np0005468397 kernel: kvm_amd: TSC scaling supported
Oct  3 04:40:22 np0005468397 kernel: kvm_amd: Nested Virtualization enabled
Oct  3 04:40:22 np0005468397 kernel: kvm_amd: Nested Paging enabled
Oct  3 04:40:22 np0005468397 kernel: kvm_amd: LBR virtualization supported
Oct  3 04:40:22 np0005468397 iptables.init[790]: iptables: Applying firewall rules: [  OK  ]
Oct  3 04:40:22 np0005468397 systemd[1]: Finished IPv4 firewall with iptables.
Oct  3 04:40:22 np0005468397 cloud-init[846]: Cloud-init v. 24.4-7.el9 running 'init-local' at Fri, 03 Oct 2025 08:40:22 +0000. Up 6.65 seconds.
Oct  3 04:40:23 np0005468397 systemd[1]: run-cloud\x2dinit-tmp-tmpuqyinnj5.mount: Deactivated successfully.
Oct  3 04:40:23 np0005468397 systemd[1]: Starting Hostname Service...
Oct  3 04:40:23 np0005468397 systemd[1]: Started Hostname Service.
Oct  3 04:40:23 np0005468397 systemd-hostnamed[860]: Hostname set to <np0005468397.novalocal> (static)
Oct  3 04:40:23 np0005468397 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  3 04:40:23 np0005468397 systemd[1]: Reached target Preparation for Network.
Oct  3 04:40:23 np0005468397 systemd[1]: Starting Network Manager...
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.4986] NetworkManager (version 1.54.1-1.el9) is starting... (boot:0f78053e-e0d8-4b4c-acfe-b75004689112)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.4992] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5174] manager[0x564a4b488080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5243] hostname: hostname: using hostnamed
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5243] hostname: static hostname changed from (none) to "np0005468397.novalocal"
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5250] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5404] manager[0x564a4b488080]: rfkill: Wi-Fi hardware radio set enabled
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5405] manager[0x564a4b488080]: rfkill: WWAN hardware radio set enabled
Oct  3 04:40:23 np0005468397 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5521] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5521] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5522] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5523] manager: Networking is enabled by state file
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5525] settings: Loaded settings plugin: keyfile (internal)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5555] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5580] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5605] dhcp: init: Using DHCP client 'internal'
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5609] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5624] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5637] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5646] device (lo): Activation: starting connection 'lo' (e79ef2b9-7dd2-4056-b7de-d12dc9e46160)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5657] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5661] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 04:40:23 np0005468397 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  3 04:40:23 np0005468397 systemd[1]: Started Network Manager.
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5720] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5725] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5729] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5731] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5733] device (eth0): carrier: link connected
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5737] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5742] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  3 04:40:23 np0005468397 systemd[1]: Reached target Network.
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5747] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5752] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5753] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5755] manager: NetworkManager state is now CONNECTING
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5756] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5762] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5765] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  3 04:40:23 np0005468397 systemd[1]: Starting Network Manager Wait Online...
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5799] dhcp4 (eth0): state changed new lease, address=38.129.56.154
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5804] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5819] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 04:40:23 np0005468397 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5851] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5852] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5858] device (lo): Activation: successful, device activated.
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5865] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5866] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 04:40:23 np0005468397 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5869] manager: NetworkManager state is now CONNECTED_SITE
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5872] device (eth0): Activation: successful, device activated.
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5877] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  3 04:40:23 np0005468397 NetworkManager[864]: <info>  [1759480823.5880] manager: startup complete
Oct  3 04:40:23 np0005468397 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  3 04:40:23 np0005468397 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  3 04:40:23 np0005468397 systemd[1]: Reached target NFS client services.
Oct  3 04:40:23 np0005468397 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  3 04:40:23 np0005468397 systemd[1]: Reached target Remote File Systems.
Oct  3 04:40:23 np0005468397 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  3 04:40:23 np0005468397 systemd[1]: Finished Network Manager Wait Online.
Oct  3 04:40:23 np0005468397 systemd[1]: Starting Cloud-init: Network Stage...
Oct  3 04:40:23 np0005468397 cloud-init[927]: Cloud-init v. 24.4-7.el9 running 'init' at Fri, 03 Oct 2025 08:40:23 +0000. Up 7.67 seconds.
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |  eth0  | True |        38.129.56.154        | 255.255.255.0 | global | fa:16:3e:be:03:60 |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |  eth0  | True | fe80::f816:3eff:febe:360/64 |       .       |  link  | fa:16:3e:be:03:60 |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  3 04:40:23 np0005468397 cloud-init[927]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct  3 04:40:24 np0005468397 cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  3 04:40:25 np0005468397 cloud-init[927]: Generating public/private rsa key pair.
Oct  3 04:40:25 np0005468397 cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  3 04:40:25 np0005468397 cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  3 04:40:25 np0005468397 cloud-init[927]: The key fingerprint is:
Oct  3 04:40:25 np0005468397 cloud-init[927]: SHA256:1kNm56W157kSpgUJdmkHtLwOHskzvhiI8kXLoyBaSZA root@np0005468397.novalocal
Oct  3 04:40:25 np0005468397 cloud-init[927]: The key's randomart image is:
Oct  3 04:40:25 np0005468397 cloud-init[927]: +---[RSA 3072]----+
Oct  3 04:40:25 np0005468397 cloud-init[927]: |          .oo    |
Oct  3 04:40:25 np0005468397 cloud-init[927]: | .       o.+..   |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |E       . Bo+ o  |
Oct  3 04:40:25 np0005468397 cloud-init[927]: | .      .=.=.+ . |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |  .  .  SBo.+ . .|
Oct  3 04:40:25 np0005468397 cloud-init[927]: | . .+ o.o *. + o.|
Oct  3 04:40:25 np0005468397 cloud-init[927]: |.oo. * . o .+ ...|
Oct  3 04:40:25 np0005468397 cloud-init[927]: |o.+ o . o .. .  .|
Oct  3 04:40:25 np0005468397 cloud-init[927]: |.  o   . .    .. |
Oct  3 04:40:25 np0005468397 cloud-init[927]: +----[SHA256]-----+
Oct  3 04:40:25 np0005468397 cloud-init[927]: Generating public/private ecdsa key pair.
Oct  3 04:40:25 np0005468397 cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  3 04:40:25 np0005468397 cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  3 04:40:25 np0005468397 cloud-init[927]: The key fingerprint is:
Oct  3 04:40:25 np0005468397 cloud-init[927]: SHA256:k9bsksOt0XojFiI3r0wGwDT0HGUizzRE4aglL6yUMBw root@np0005468397.novalocal
Oct  3 04:40:25 np0005468397 cloud-init[927]: The key's randomart image is:
Oct  3 04:40:25 np0005468397 cloud-init[927]: +---[ECDSA 256]---+
Oct  3 04:40:25 np0005468397 cloud-init[927]: | E=oOoo          |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |.o.@ =           |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |+.= *            |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |o*..     +       |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |o+. .   S o      |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |o.  ..+o.*       |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |.    oo+*.+      |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |     +  +*o      |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |      oooo .     |
Oct  3 04:40:25 np0005468397 cloud-init[927]: +----[SHA256]-----+
Oct  3 04:40:25 np0005468397 cloud-init[927]: Generating public/private ed25519 key pair.
Oct  3 04:40:25 np0005468397 cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  3 04:40:25 np0005468397 cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  3 04:40:25 np0005468397 cloud-init[927]: The key fingerprint is:
Oct  3 04:40:25 np0005468397 cloud-init[927]: SHA256:jqSEWa2jBFP087zLLWQMmEMLCsk1znxWbiEyWZE1sok root@np0005468397.novalocal
Oct  3 04:40:25 np0005468397 cloud-init[927]: The key's randomart image is:
Oct  3 04:40:25 np0005468397 cloud-init[927]: +--[ED25519 256]--+
Oct  3 04:40:25 np0005468397 cloud-init[927]: |.o+=o=+=         |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |+o=o=oB o        |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |=o E+=.o         |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |.o=++=.          |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |  +.+o+ S        |
Oct  3 04:40:25 np0005468397 cloud-init[927]: | . o +++         |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |  . .oo .        |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |     ..o         |
Oct  3 04:40:25 np0005468397 cloud-init[927]: |      o..        |
Oct  3 04:40:25 np0005468397 cloud-init[927]: +----[SHA256]-----+
Oct  3 04:40:25 np0005468397 systemd[1]: Finished Cloud-init: Network Stage.
Oct  3 04:40:25 np0005468397 systemd[1]: Reached target Cloud-config availability.
Oct  3 04:40:25 np0005468397 systemd[1]: Reached target Network is Online.
Oct  3 04:40:25 np0005468397 systemd[1]: Starting Cloud-init: Config Stage...
Oct  3 04:40:25 np0005468397 systemd[1]: Starting Notify NFS peers of a restart...
Oct  3 04:40:25 np0005468397 systemd[1]: Starting System Logging Service...
Oct  3 04:40:25 np0005468397 systemd[1]: Starting OpenSSH server daemon...
Oct  3 04:40:25 np0005468397 sm-notify[1008]: Version 2.5.4 starting
Oct  3 04:40:25 np0005468397 systemd[1]: Starting Permit User Sessions...
Oct  3 04:40:25 np0005468397 systemd[1]: Started Notify NFS peers of a restart.
Oct  3 04:40:25 np0005468397 systemd[1]: Started OpenSSH server daemon.
Oct  3 04:40:25 np0005468397 systemd[1]: Finished Permit User Sessions.
Oct  3 04:40:25 np0005468397 systemd[1]: Started Command Scheduler.
Oct  3 04:40:25 np0005468397 systemd[1]: Started Getty on tty1.
Oct  3 04:40:25 np0005468397 systemd[1]: Started Serial Getty on ttyS0.
Oct  3 04:40:25 np0005468397 systemd[1]: Reached target Login Prompts.
Oct  3 04:40:25 np0005468397 rsyslogd[1009]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1009" x-info="https://www.rsyslog.com"] start
Oct  3 04:40:25 np0005468397 rsyslogd[1009]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  3 04:40:25 np0005468397 systemd[1]: Started System Logging Service.
Oct  3 04:40:25 np0005468397 systemd[1]: Reached target Multi-User System.
Oct  3 04:40:25 np0005468397 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  3 04:40:25 np0005468397 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  3 04:40:25 np0005468397 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  3 04:40:25 np0005468397 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 04:40:25 np0005468397 cloud-init[1021]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Fri, 03 Oct 2025 08:40:25 +0000. Up 9.33 seconds.
Oct  3 04:40:25 np0005468397 systemd[1]: Finished Cloud-init: Config Stage.
Oct  3 04:40:25 np0005468397 systemd[1]: Starting Cloud-init: Final Stage...
Oct  3 04:40:26 np0005468397 cloud-init[1043]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Fri, 03 Oct 2025 08:40:26 +0000. Up 9.79 seconds.
Oct  3 04:40:26 np0005468397 cloud-init[1045]: #############################################################
Oct  3 04:40:26 np0005468397 cloud-init[1046]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  3 04:40:26 np0005468397 cloud-init[1048]: 256 SHA256:k9bsksOt0XojFiI3r0wGwDT0HGUizzRE4aglL6yUMBw root@np0005468397.novalocal (ECDSA)
Oct  3 04:40:26 np0005468397 cloud-init[1050]: 256 SHA256:jqSEWa2jBFP087zLLWQMmEMLCsk1znxWbiEyWZE1sok root@np0005468397.novalocal (ED25519)
Oct  3 04:40:26 np0005468397 cloud-init[1052]: 3072 SHA256:1kNm56W157kSpgUJdmkHtLwOHskzvhiI8kXLoyBaSZA root@np0005468397.novalocal (RSA)
Oct  3 04:40:26 np0005468397 cloud-init[1053]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  3 04:40:26 np0005468397 cloud-init[1054]: #############################################################
Oct  3 04:40:26 np0005468397 cloud-init[1043]: Cloud-init v. 24.4-7.el9 finished at Fri, 03 Oct 2025 08:40:26 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.96 seconds
Oct  3 04:40:26 np0005468397 systemd[1]: Finished Cloud-init: Final Stage.
Oct  3 04:40:26 np0005468397 systemd[1]: Reached target Cloud-init target.
Oct  3 04:40:26 np0005468397 systemd[1]: Startup finished in 1.632s (kernel) + 2.526s (initrd) + 5.891s (userspace) = 10.049s.
Oct  3 04:40:28 np0005468397 chronyd[807]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Oct  3 04:40:29 np0005468397 chronyd[807]: System clock wrong by 1.084471 seconds
Oct  3 04:40:29 np0005468397 chronyd[807]: System clock was stepped by 1.084471 seconds
Oct  3 04:40:29 np0005468397 chronyd[807]: System clock TAI offset set to 37 seconds
Oct  3 04:40:33 np0005468397 irqbalance[794]: Cannot change IRQ 25 affinity: Operation not permitted
Oct  3 04:40:33 np0005468397 irqbalance[794]: IRQ 25 affinity is now unmanaged
Oct  3 04:40:33 np0005468397 irqbalance[794]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  3 04:40:33 np0005468397 irqbalance[794]: IRQ 31 affinity is now unmanaged
Oct  3 04:40:33 np0005468397 irqbalance[794]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  3 04:40:33 np0005468397 irqbalance[794]: IRQ 28 affinity is now unmanaged
Oct  3 04:40:33 np0005468397 irqbalance[794]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  3 04:40:33 np0005468397 irqbalance[794]: IRQ 32 affinity is now unmanaged
Oct  3 04:40:33 np0005468397 irqbalance[794]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  3 04:40:33 np0005468397 irqbalance[794]: IRQ 30 affinity is now unmanaged
Oct  3 04:40:33 np0005468397 irqbalance[794]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  3 04:40:33 np0005468397 irqbalance[794]: IRQ 29 affinity is now unmanaged
Oct  3 04:40:34 np0005468397 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  3 04:40:54 np0005468397 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  3 04:41:14 np0005468397 systemd[1]: Created slice User Slice of UID 1000.
Oct  3 04:41:14 np0005468397 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  3 04:41:14 np0005468397 systemd-logind[798]: New session 1 of user zuul.
Oct  3 04:41:14 np0005468397 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  3 04:41:14 np0005468397 systemd[1]: Starting User Manager for UID 1000...
Oct  3 04:41:14 np0005468397 systemd[1064]: Queued start job for default target Main User Target.
Oct  3 04:41:14 np0005468397 systemd[1064]: Created slice User Application Slice.
Oct  3 04:41:14 np0005468397 systemd[1064]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  3 04:41:14 np0005468397 systemd[1064]: Started Daily Cleanup of User's Temporary Directories.
Oct  3 04:41:14 np0005468397 systemd[1064]: Reached target Paths.
Oct  3 04:41:14 np0005468397 systemd[1064]: Reached target Timers.
Oct  3 04:41:14 np0005468397 systemd[1064]: Starting D-Bus User Message Bus Socket...
Oct  3 04:41:14 np0005468397 systemd[1064]: Starting Create User's Volatile Files and Directories...
Oct  3 04:41:14 np0005468397 systemd[1064]: Finished Create User's Volatile Files and Directories.
Oct  3 04:41:14 np0005468397 systemd[1064]: Listening on D-Bus User Message Bus Socket.
Oct  3 04:41:14 np0005468397 systemd[1064]: Reached target Sockets.
Oct  3 04:41:14 np0005468397 systemd[1064]: Reached target Basic System.
Oct  3 04:41:14 np0005468397 systemd[1064]: Reached target Main User Target.
Oct  3 04:41:14 np0005468397 systemd[1064]: Startup finished in 119ms.
Oct  3 04:41:14 np0005468397 systemd[1]: Started User Manager for UID 1000.
Oct  3 04:41:14 np0005468397 systemd[1]: Started Session 1 of User zuul.
Oct  3 04:41:15 np0005468397 python3[1146]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 04:41:18 np0005468397 python3[1174]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 04:41:24 np0005468397 python3[1232]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 04:41:25 np0005468397 python3[1272]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  3 04:41:27 np0005468397 python3[1298]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyvtEJ9Fuz0oWlNXcfVqO6f2Kr6dq7j5/bHZBtR5F/su9W18j2w85uEeDDwI8umr64DuLanlpEAuYmn6jlqYfdGc+LbAWTx+VGA+peZYySZ+Du2JskjlZi3JsWx5mmDxJVJzoJeQBMOtvOWVJeSy5x13WQgv5Jo9o4Iu2vAjbBKTeSoKdf30bfuoeuVNcJEaka+MnehZnUXRAqYmKeqx9UwVRp59u89N4MuR48zbCJ/A1aYFX4nqg1QVzR0m9iJdgb/MA2jvRF+1dr5dSU5co7vPoc3yQS3ZmmqoCw2gFqL9Yq0Wan5MyRjmcMhrS3CYre7ChCH3e+u6jIVvMMgjsSzbAsUCjHkLKfnrW9lN0LwbwA8tg4Wrh/V0pSyEEsdIldbbewTvas4fbsiR3KNgKr+favNLKdrNg4Zve6PazqI0yN0njwIatXE7BnKQ/wUvcZrgD23BXA1PbbSAuzWu2KgEQiHpVTvTkPrGBahpdsvP9XjRzvgIgf17f1MINVekU= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:27 np0005468397 python3[1322]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:27 np0005468397 python3[1421]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:41:28 np0005468397 python3[1492]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759480887.7012827-207-246686638811989/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=3ec2a3437db24543904963318e92a537_id_rsa follow=False checksum=11469752411c2c8f79326fe14c65148d16f3c6d8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:28 np0005468397 python3[1615]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:41:29 np0005468397 python3[1686]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759480888.5159993-240-156797902661914/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=3ec2a3437db24543904963318e92a537_id_rsa.pub follow=False checksum=100704fecc55c1b5c1f586d1ec71d83dda13d0f6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:30 np0005468397 python3[1734]: ansible-ping Invoked with data=pong
Oct  3 04:41:31 np0005468397 python3[1758]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 04:41:32 np0005468397 python3[1816]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  3 04:41:34 np0005468397 python3[1848]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:34 np0005468397 python3[1872]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:34 np0005468397 python3[1896]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:34 np0005468397 python3[1920]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:35 np0005468397 python3[1944]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:35 np0005468397 python3[1968]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:37 np0005468397 python3[1994]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:38 np0005468397 python3[2072]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:41:38 np0005468397 python3[2145]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759480897.7587264-21-17963429868286/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:39 np0005468397 python3[2193]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:39 np0005468397 python3[2217]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:39 np0005468397 python3[2241]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:40 np0005468397 python3[2265]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:40 np0005468397 python3[2289]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:40 np0005468397 python3[2313]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:40 np0005468397 python3[2337]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:41 np0005468397 python3[2361]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:41 np0005468397 python3[2385]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:41 np0005468397 python3[2409]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:42 np0005468397 python3[2433]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:42 np0005468397 python3[2457]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:42 np0005468397 python3[2481]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:42 np0005468397 python3[2505]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:43 np0005468397 python3[2529]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:43 np0005468397 python3[2553]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:43 np0005468397 python3[2577]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:44 np0005468397 python3[2601]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:44 np0005468397 python3[2625]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:44 np0005468397 python3[2649]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:44 np0005468397 python3[2673]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:45 np0005468397 python3[2697]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:45 np0005468397 python3[2721]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:45 np0005468397 python3[2745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:45 np0005468397 python3[2769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:41:46 np0005468397 python3[2793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
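The run of ansible-authorized_key calls above is a standard Zuul node bootstrap: one module invocation per maintainer key, all targeting user=zuul with state=present. A minimal sketch of the kind of task that would emit this sequence, assuming a list variable holding the public keys (the name ci_authorized_keys is hypothetical, not taken from the log):

    # Sketch only: reconstructs the loop implied by the repeated
    # authorized_key invocations; "ci_authorized_keys" is hypothetical.
    - name: Authorize maintainer SSH keys for the zuul user
      ansible.posix.authorized_key:
        user: zuul
        state: present
        key: "{{ item }}"
      loop: "{{ ci_authorized_keys }}"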
Oct  3 04:41:48 np0005468397 python3[2819]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  3 04:41:48 np0005468397 systemd[1]: Starting Time & Date Service...
Oct  3 04:41:48 np0005468397 systemd[1]: Started Time & Date Service.
Oct  3 04:41:49 np0005468397 systemd-timedated[2821]: Changed time zone to 'UTC' (UTC).
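The timezone switch at 04:41:48-49 is a single community.general.timezone call; the module delegates to systemd-timedated, which is why the Time & Date Service starts on demand here and is deactivated once idle at 04:42:19. The task form is simply:

    # Sketch of the logged "ansible-community.general.timezone
    # Invoked with name=UTC" call.
    - name: Set the node clock to UTC
      community.general.timezone:
        name: UTC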
Oct  3 04:41:49 np0005468397 python3[2850]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:49 np0005468397 python3[2926]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:41:50 np0005468397 python3[2997]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759480909.538861-153-273485527609053/source _original_basename=tmpocm1kzoe follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:50 np0005468397 python3[3097]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:41:50 np0005468397 python3[3168]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759480910.3698585-183-195826725868187/source _original_basename=tmp_emqv8s3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:51 np0005468397 python3[3270]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:41:52 np0005468397 python3[3343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759480911.3686087-231-219652915068661/source _original_basename=tmpxz0hqw8u follow=False checksum=7505069c79f0f05c557725b6c9065b30a260291f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:52 np0005468397 python3[3391]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:41:52 np0005468397 python3[3417]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
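The /etc/nodepool block above creates the metadata directory, writes empty sub_nodes files (da39a3ee... is the SHA-1 of an empty file), and copies the zuul keypair into place; mode=511 in the file call is Ansible logging the decimal value of octal 0777. A sketch of the two distinctive steps:

    # Sketch of the directory creation and keypair copy logged above.
    - name: Create the nodepool metadata directory
      ansible.builtin.file:
        path: /etc/nodepool
        state: directory
        mode: "0777"      # logged as the decimal 511

    - name: Expose the node's SSH keypair under /etc/nodepool
      ansible.builtin.command: cp .ssh/{{ item }} /etc/nodepool/{{ item }}
      loop:
        - id_rsa
        - id_rsa.pub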
Oct  3 04:41:53 np0005468397 python3[3497]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:41:53 np0005468397 python3[3570]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759480913.0510643-273-227273166274682/source _original_basename=tmpe2nlxty2 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:41:54 np0005468397 python3[3621]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-4220-0d4e-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
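The sudoers drop-in lands with mode=288, the decimal form of octal 0440, and is then checked with /usr/sbin/visudo -c. An idiomatic alternative (not what the log shows, which runs the check as a separate command) folds the syntax check into the copy itself:

    # Sketch, assuming a controller-side file named "zuul-sudo-grep".
    - name: Install the zuul sudoers drop-in, refusing invalid syntax
      ansible.builtin.copy:
        src: zuul-sudo-grep          # hypothetical source name
        dest: /etc/sudoers.d/zuul-sudo-grep
        mode: "0440"                 # logged as the decimal 288
        validate: /usr/sbin/visudo -cf %s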
Oct  3 04:41:54 np0005468397 python3[3649]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-4220-0d4e-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  3 04:41:56 np0005468397 python3[3678]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:42:14 np0005468397 python3[3704]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:42:19 np0005468397 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  3 04:42:48 np0005468397 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  3 04:42:48 np0005468397 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5018] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  3 04:42:48 np0005468397 systemd-udevd[3707]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5213] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5243] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5248] device (eth1): carrier: link connected
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5251] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5258] policy: auto-activating connection 'Wired connection 1' (d95e5de3-2db4-3998-8047-40bafb2966b1)
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5263] device (eth1): Activation: starting connection 'Wired connection 1' (d95e5de3-2db4-3998-8047-40bafb2966b1)
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5265] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5267] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5272] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 04:42:48 np0005468397 NetworkManager[864]: <info>  [1759480968.5277] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  3 04:42:49 np0005468397 python3[3734]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-3c22-2ca0-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:42:59 np0005468397 python3[3814]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:42:59 np0005468397 python3[3887]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759480979.0133064-102-95829793346418/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=70170c996dc9e4d7ed9478166f93fdc6f09b2162 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:43:00 np0005468397 python3[3937]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
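Given the .j2 _original_basename, the connection profile above was most likely rendered by a template task; NetworkManager requires keyfiles under system-connections to be root-owned and mode 0600, which matches the logged parameters. A sketch of the pair, followed by the restart whose effects fill the next stretch of the log:

    # Sketch inferred from the copy/systemd invocations above.
    - name: Install the CI private network profile
      ansible.builtin.template:
        src: bootstrap-ci-network-nm-connection.nmconnection.j2
        dest: /etc/NetworkManager/system-connections/ci-private-network.nmconnection
        owner: root
        group: root
        mode: "0600"   # NetworkManager ignores keyfiles readable by others

    - name: Restart NetworkManager to load the new profile
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted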
Oct  3 04:43:00 np0005468397 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  3 04:43:00 np0005468397 systemd[1]: Stopped Network Manager Wait Online.
Oct  3 04:43:00 np0005468397 systemd[1]: Stopping Network Manager Wait Online...
Oct  3 04:43:00 np0005468397 NetworkManager[864]: <info>  [1759480980.5905] caught SIGTERM, shutting down normally.
Oct  3 04:43:00 np0005468397 systemd[1]: Stopping Network Manager...
Oct  3 04:43:00 np0005468397 NetworkManager[864]: <info>  [1759480980.5916] dhcp4 (eth0): canceled DHCP transaction
Oct  3 04:43:00 np0005468397 NetworkManager[864]: <info>  [1759480980.5916] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  3 04:43:00 np0005468397 NetworkManager[864]: <info>  [1759480980.5916] dhcp4 (eth0): state changed no lease
Oct  3 04:43:00 np0005468397 NetworkManager[864]: <info>  [1759480980.5918] manager: NetworkManager state is now CONNECTING
Oct  3 04:43:00 np0005468397 NetworkManager[864]: <info>  [1759480980.6012] dhcp4 (eth1): canceled DHCP transaction
Oct  3 04:43:00 np0005468397 NetworkManager[864]: <info>  [1759480980.6013] dhcp4 (eth1): state changed no lease
Oct  3 04:43:00 np0005468397 NetworkManager[864]: <info>  [1759480980.6055] exiting (success)
Oct  3 04:43:00 np0005468397 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  3 04:43:00 np0005468397 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  3 04:43:00 np0005468397 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  3 04:43:00 np0005468397 systemd[1]: Stopped Network Manager.
Oct  3 04:43:00 np0005468397 systemd[1]: NetworkManager.service: Consumed 1.076s CPU time, 10.0M memory peak.
Oct  3 04:43:00 np0005468397 systemd[1]: Starting Network Manager...
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.6742] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0f78053e-e0d8-4b4c-acfe-b75004689112)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.6745] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.6797] manager[0x5564be845070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  3 04:43:00 np0005468397 systemd[1]: Starting Hostname Service...
Oct  3 04:43:00 np0005468397 systemd[1]: Started Hostname Service.
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7718] hostname: hostname: using hostnamed
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7723] hostname: static hostname changed from (none) to "np0005468397.novalocal"
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7728] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7733] manager[0x5564be845070]: rfkill: Wi-Fi hardware radio set enabled
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7733] manager[0x5564be845070]: rfkill: WWAN hardware radio set enabled
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7761] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7761] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7762] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7762] manager: Networking is enabled by state file
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7764] settings: Loaded settings plugin: keyfile (internal)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7768] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7790] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7798] dhcp: init: Using DHCP client 'internal'
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7807] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7821] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7833] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7848] device (lo): Activation: starting connection 'lo' (e79ef2b9-7dd2-4056-b7de-d12dc9e46160)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7861] device (eth0): carrier: link connected
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7869] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7878] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7879] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7890] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7903] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7914] device (eth1): carrier: link connected
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7921] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7929] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d95e5de3-2db4-3998-8047-40bafb2966b1) (indicated)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7929] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7939] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7953] device (eth1): Activation: starting connection 'Wired connection 1' (d95e5de3-2db4-3998-8047-40bafb2966b1)
Oct  3 04:43:00 np0005468397 systemd[1]: Started Network Manager.
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7962] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7969] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7974] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7977] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7981] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7994] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7997] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.7999] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8003] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8010] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8013] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8020] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8022] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8051] dhcp4 (eth0): state changed new lease, address=38.129.56.154
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8065] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  3 04:43:00 np0005468397 systemd[1]: Starting Network Manager Wait Online...
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8220] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8226] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8233] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8239] device (lo): Activation: successful, device activated.
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8269] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8271] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8274] manager: NetworkManager state is now CONNECTED_SITE
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8278] device (eth0): Activation: successful, device activated.
Oct  3 04:43:00 np0005468397 NetworkManager[3949]: <info>  [1759480980.8281] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  3 04:43:01 np0005468397 python3[4022]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-3c22-2ca0-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:43:10 np0005468397 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  3 04:43:30 np0005468397 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3333] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  3 04:43:46 np0005468397 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  3 04:43:46 np0005468397 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3694] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3697] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3705] device (eth1): Activation: successful, device activated.
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3711] manager: startup complete
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3714] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <warn>  [1759481026.3718] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3724] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  3 04:43:46 np0005468397 systemd[1]: Finished Network Manager Wait Online.
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3873] dhcp4 (eth1): canceled DHCP transaction
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3873] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3873] dhcp4 (eth1): state changed no lease
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3887] policy: auto-activating connection 'ci-private-network' (57abeb53-2533-54e2-a6fe-262ec4fd2b38)
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3892] device (eth1): Activation: starting connection 'ci-private-network' (57abeb53-2533-54e2-a6fe-262ec4fd2b38)
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3893] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3895] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3902] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3909] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3944] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3945] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 04:43:46 np0005468397 NetworkManager[3949]: <info>  [1759481026.3949] device (eth1): Activation: successful, device activated.
Oct  3 04:43:56 np0005468397 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  3 04:44:01 np0005468397 systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Oct  3 04:44:01 np0005468397 systemd-logind[798]: New session 3 of user zuul.
Oct  3 04:44:01 np0005468397 systemd[1]: Started Session 3 of User zuul.
Oct  3 04:44:01 np0005468397 python3[4131]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:44:01 np0005468397 systemd[1064]: Starting Mark boot as successful...
Oct  3 04:44:01 np0005468397 systemd[1064]: Finished Mark boot as successful.
Oct  3 04:44:02 np0005468397 python3[4205]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759481041.3491046-267-102629536233413/source _original_basename=tmp6a3pvsvg follow=False checksum=ea6ccf61e31b4a63e328b96de0c1db08665c2746 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:44:04 np0005468397 systemd[1]: session-3.scope: Deactivated successfully.
Oct  3 04:44:04 np0005468397 systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Oct  3 04:44:04 np0005468397 systemd-logind[798]: Removed session 3.
Oct  3 04:47:01 np0005468397 systemd[1064]: Created slice User Background Tasks Slice.
Oct  3 04:47:01 np0005468397 systemd[1064]: Starting Cleanup of User's Temporary Files and Directories...
Oct  3 04:47:01 np0005468397 systemd[1064]: Finished Cleanup of User's Temporary Files and Directories.
Oct  3 04:49:38 np0005468397 systemd-logind[798]: New session 4 of user zuul.
Oct  3 04:49:38 np0005468397 systemd[1]: Started Session 4 of User zuul.
Oct  3 04:49:39 np0005468397 python3[4263]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-d01c-5813-000000001ce6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:49:39 np0005468397 python3[4292]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:49:39 np0005468397 python3[4318]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:49:39 np0005468397 python3[4344]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:49:40 np0005468397 python3[4370]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:49:40 np0005468397 python3[4396]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:49:40 np0005468397 python3[4396]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  3 04:49:41 np0005468397 python3[4422]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 04:49:41 np0005468397 systemd[1]: Reloading.
Oct  3 04:49:41 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 04:49:43 np0005468397 python3[4478]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  3 04:49:43 np0005468397 python3[4504]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:49:44 np0005468397 python3[4532]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:49:44 np0005468397 python3[4560]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:49:44 np0005468397 python3[4588]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:49:45 np0005468397 python3[4615]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-d01c-5813-000000001cec-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:49:45 np0005468397 python3[4645]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
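The session above throttles block I/O per top-level cgroup: lsblk resolves /dev/vda to MAJ:MIN 252:0, DefaultIOAccounting is switched on in /etc/systemd/system.conf, and then each slice's io.max receives the same limit line (18000 read/write IOPS, 262144000 B/s = 250 MiB/s each way). A sketch of the write loop:

    # Sketch of the four io.max writes logged above; 252:0 is the
    # MAJ:MIN lsblk reported for /dev/vda.
    - name: Cap IOPS and bandwidth for the top-level slices
      ansible.builtin.shell: echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/{{ item }}/io.max
      loop:
        - init.scope
        - machine.slice
        - system.slice
        - user.slice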
Oct  3 04:49:47 np0005468397 systemd[1]: session-4.scope: Deactivated successfully.
Oct  3 04:49:47 np0005468397 systemd[1]: session-4.scope: Consumed 3.495s CPU time.
Oct  3 04:49:47 np0005468397 systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Oct  3 04:49:47 np0005468397 systemd-logind[798]: Removed session 4.
Oct  3 04:49:49 np0005468397 systemd-logind[798]: New session 5 of user zuul.
Oct  3 04:49:49 np0005468397 systemd[1]: Started Session 5 of User zuul.
Oct  3 04:49:49 np0005468397 python3[4679]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
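The dnf task above installs the container tooling; the SELinux SID-table conversions and the virt_use_nfs / virt_sandbox_use_all_caps boolean flips that follow are the scriptlets of container-selinux, pulled in as a dependency of podman and buildah. In task form:

    # Sketch of the logged dnf invocation.
    - name: Install container tooling
      ansible.builtin.dnf:
        name:
          - podman
          - buildah
        state: present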
Oct  3 04:50:05 np0005468397 kernel: SELinux:  Converting 363 SID table entries...
Oct  3 04:50:05 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 04:50:05 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 04:50:05 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 04:50:05 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 04:50:05 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 04:50:05 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 04:50:05 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 04:50:15 np0005468397 kernel: SELinux:  Converting 363 SID table entries...
Oct  3 04:50:15 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 04:50:15 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 04:50:15 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 04:50:15 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 04:50:15 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 04:50:15 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 04:50:15 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 04:50:25 np0005468397 kernel: SELinux:  Converting 363 SID table entries...
Oct  3 04:50:25 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 04:50:25 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 04:50:25 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 04:50:25 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 04:50:25 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 04:50:25 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 04:50:25 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 04:50:26 np0005468397 setsebool[4745]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  3 04:50:26 np0005468397 setsebool[4745]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  3 04:50:37 np0005468397 kernel: SELinux:  Converting 366 SID table entries...
Oct  3 04:50:37 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 04:50:37 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 04:50:37 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 04:50:37 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 04:50:37 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 04:50:37 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 04:50:37 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 04:50:56 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  3 04:50:56 np0005468397 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 04:50:56 np0005468397 systemd[1]: Starting man-db-cache-update.service...
Oct  3 04:50:56 np0005468397 systemd[1]: Reloading.
Oct  3 04:50:56 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 04:50:56 np0005468397 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  3 04:50:57 np0005468397 systemd[1]: Starting PackageKit Daemon...
Oct  3 04:50:57 np0005468397 systemd[1]: Starting Authorization Manager...
Oct  3 04:50:57 np0005468397 polkitd[6285]: Started polkitd version 0.117
Oct  3 04:50:57 np0005468397 systemd[1]: Started Authorization Manager.
Oct  3 04:50:57 np0005468397 systemd[1]: Started PackageKit Daemon.
Oct  3 04:51:00 np0005468397 python3[9279]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-551e-1f7a-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 04:51:01 np0005468397 kernel: evm: overlay not supported
Oct  3 04:51:01 np0005468397 systemd[1064]: Starting D-Bus User Message Bus...
Oct  3 04:51:01 np0005468397 dbus-broker-launch[10209]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  3 04:51:01 np0005468397 dbus-broker-launch[10209]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  3 04:51:01 np0005468397 systemd[1064]: Started D-Bus User Message Bus.
Oct  3 04:51:01 np0005468397 dbus-broker-lau[10209]: Ready
Oct  3 04:51:01 np0005468397 systemd[1064]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  3 04:51:01 np0005468397 systemd[1064]: Created slice Slice /user.
Oct  3 04:51:01 np0005468397 systemd[1064]: podman-10100.scope: unit configures an IP firewall, but not running as root.
Oct  3 04:51:01 np0005468397 systemd[1064]: (This warning is only shown for the first unit using IP firewalling.)
Oct  3 04:51:01 np0005468397 systemd[1064]: Started podman-10100.scope.
Oct  3 04:51:01 np0005468397 systemd[1064]: Started podman-pause-63dc9364.scope.
Oct  3 04:51:02 np0005468397 python3[10528]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.70:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.70:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
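The blockinfile call above appends a TOML stanza to /etc/containers/registries.conf so podman will pull from the job-local registry over plain HTTP. A sketch:

    # Sketch of the logged blockinfile call; the block is the exact
    # TOML fragment from the log.
    - name: Trust the job-local insecure registry
      ansible.builtin.blockinfile:
        path: /etc/containers/registries.conf
        block: |
          [[registry]]
          location = "38.102.83.70:5001"
          insecure = true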
Oct  3 04:51:02 np0005468397 systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Oct  3 04:51:02 np0005468397 systemd[1]: session-5.scope: Deactivated successfully.
Oct  3 04:51:02 np0005468397 systemd[1]: session-5.scope: Consumed 1min 2.191s CPU time.
Oct  3 04:51:02 np0005468397 systemd-logind[798]: Removed session 5.
Oct  3 04:51:13 np0005468397 irqbalance[794]: Cannot change IRQ 27 affinity: Operation not permitted
Oct  3 04:51:13 np0005468397 irqbalance[794]: IRQ 27 affinity is now unmanaged
Oct  3 04:51:25 np0005468397 systemd-logind[798]: New session 6 of user zuul.
Oct  3 04:51:25 np0005468397 systemd[1]: Started Session 6 of User zuul.
Oct  3 04:51:26 np0005468397 python3[19986]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKj6kz6ZHx9S7xKAtGuMwHjzDtuGNqgW3AJmjU4aWxs3Dv1hmYrzLFzH+mkWXoWM+n5TD+NFAJNMUDefULYs3G0= zuul@np0005468396.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:51:26 np0005468397 python3[20136]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKj6kz6ZHx9S7xKAtGuMwHjzDtuGNqgW3AJmjU4aWxs3Dv1hmYrzLFzH+mkWXoWM+n5TD+NFAJNMUDefULYs3G0= zuul@np0005468396.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:51:27 np0005468397 python3[20442]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005468397.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  3 04:51:27 np0005468397 python3[20660]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKj6kz6ZHx9S7xKAtGuMwHjzDtuGNqgW3AJmjU4aWxs3Dv1hmYrzLFzH+mkWXoWM+n5TD+NFAJNMUDefULYs3G0= zuul@np0005468396.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  3 04:51:28 np0005468397 python3[20932]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:51:28 np0005468397 python3[21197]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759481488.0752873-135-230933758564482/source _original_basename=tmp9zukeali follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:51:29 np0005468397 python3[21531]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct  3 04:51:29 np0005468397 systemd[1]: Starting Hostname Service...
Oct  3 04:51:29 np0005468397 systemd[1]: Started Hostname Service.
Oct  3 04:51:29 np0005468397 systemd-hostnamed[21622]: Changed pretty hostname to 'compute-0'
Oct  3 04:51:29 np0005468397 systemd-hostnamed[21622]: Hostname set to <compute-0> (static)
Oct  3 04:51:29 np0005468397 NetworkManager[3949]: <info>  [1759481489.8334] hostname: static hostname changed from "np0005468397.novalocal" to "compute-0"
Oct  3 04:51:29 np0005468397 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  3 04:51:29 np0005468397 systemd[1]: Started Network Manager Script Dispatcher Service.
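The rename above uses the hostname module with use=systemd, so the change is applied by systemd-hostnamed (hence the on-demand Hostname Service) and NetworkManager picks it up immediately. In task form:

    # Sketch of the logged hostname invocation.
    - name: Rename the node for the job inventory
      ansible.builtin.hostname:
        name: compute-0
        use: systemd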
Oct  3 04:51:30 np0005468397 systemd[1]: session-6.scope: Deactivated successfully.
Oct  3 04:51:30 np0005468397 systemd[1]: session-6.scope: Consumed 2.173s CPU time.
Oct  3 04:51:30 np0005468397 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Oct  3 04:51:30 np0005468397 systemd-logind[798]: Removed session 6.
Oct  3 04:51:39 np0005468397 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  3 04:51:41 np0005468397 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 04:51:41 np0005468397 systemd[1]: Finished man-db-cache-update.service.
Oct  3 04:51:41 np0005468397 systemd[1]: man-db-cache-update.service: Consumed 54.267s CPU time.
Oct  3 04:51:41 np0005468397 systemd[1]: run-rfea3004eeb944ccfa056395dcd6d96e1.service: Deactivated successfully.
Oct  3 04:51:59 np0005468397 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  3 04:55:58 np0005468397 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  3 04:55:58 np0005468397 systemd-logind[798]: New session 7 of user zuul.
Oct  3 04:55:58 np0005468397 systemd[1]: Started Session 7 of User zuul.
Oct  3 04:55:58 np0005468397 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  3 04:55:58 np0005468397 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  3 04:55:58 np0005468397 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  3 04:55:59 np0005468397 python3[26652]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 04:56:00 np0005468397 python3[26768]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:56:01 np0005468397 python3[26841]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759481760.4433699-30273-115504999742709/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:56:01 np0005468397 python3[26867]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:56:01 np0005468397 python3[26940]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759481760.4433699-30273-115504999742709/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:56:01 np0005468397 python3[26966]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:56:02 np0005468397 python3[27039]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759481760.4433699-30273-115504999742709/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:56:02 np0005468397 systemd[1]: packagekit.service: Deactivated successfully.
Oct  3 04:56:02 np0005468397 python3[27065]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:56:02 np0005468397 python3[27138]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759481760.4433699-30273-115504999742709/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:56:02 np0005468397 python3[27164]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:56:03 np0005468397 python3[27237]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759481760.4433699-30273-115504999742709/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:56:03 np0005468397 python3[27263]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:56:03 np0005468397 python3[27336]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759481760.4433699-30273-115504999742709/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:56:04 np0005468397 python3[27362]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 04:56:04 np0005468397 python3[27435]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759481760.4433699-30273-115504999742709/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 04:57:01 np0005468397 systemd[1]: Starting dnf makecache...
Oct  3 04:57:01 np0005468397 dnf[27471]: Failed determining last makecache time.
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-barbican-42b4c41831408a8e323 402 kB/s |  13 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.9 MB/s |  65 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.4 MB/s |  32 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-python-stevedore-c4acc5639fd2329372142 5.2 MB/s | 131 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-python-cloudkitty-tests-tempest-3961dc 801 kB/s |  25 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-os-net-config-28598c2978b9e2207dd19fc4  11 MB/s | 356 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.5 MB/s |  42 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-python-designate-tests-tempest-347fdbc 759 kB/s |  18 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-glance-1fd12c29b339f30fe823e 699 kB/s |  18 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.1 MB/s |  29 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-manila-3c01b7181572c95dac462 1.2 MB/s |  25 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-python-whitebox-neutron-tests-tempest- 6.1 MB/s | 154 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-octavia-ba397f07a7331190208c 1.1 MB/s |  26 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-watcher-c014f81a8647287f6dcc 623 kB/s |  16 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-edpm-image-builder-55ba53cf215b14ed95b 291 kB/s | 7.4 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 5.1 MB/s | 144 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-swift-dc98a8463506ac520c469a 492 kB/s |  14 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-python-tempestconf-8515371b7cceebd4282 2.3 MB/s |  53 kB     00:00
Oct  3 04:57:02 np0005468397 dnf[27471]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.5 MB/s |  96 kB     00:00
Oct  3 04:57:03 np0005468397 dnf[27471]: CentOS Stream 9 - BaseOS                         59 kB/s | 6.7 kB     00:00
Oct  3 04:57:03 np0005468397 dnf[27471]: CentOS Stream 9 - AppStream                      76 kB/s | 6.8 kB     00:00
Oct  3 04:57:03 np0005468397 dnf[27471]: CentOS Stream 9 - CRB                            27 kB/s | 6.6 kB     00:00
Oct  3 04:57:03 np0005468397 dnf[27471]: CentOS Stream 9 - Extras packages                73 kB/s | 8.0 kB     00:00
Oct  3 04:57:03 np0005468397 dnf[27471]: dlrn-antelope-testing                            21 MB/s | 1.1 MB     00:00
Oct  3 04:57:04 np0005468397 dnf[27471]: dlrn-antelope-build-deps                         17 MB/s | 461 kB     00:00
Oct  3 04:57:04 np0005468397 dnf[27471]: centos9-rabbitmq                                8.8 MB/s | 123 kB     00:00
Oct  3 04:57:04 np0005468397 dnf[27471]: centos9-storage                                  21 MB/s | 415 kB     00:00
Oct  3 04:57:04 np0005468397 dnf[27471]: centos9-opstools                                3.7 MB/s |  51 kB     00:00
Oct  3 04:57:04 np0005468397 dnf[27471]: NFV SIG OpenvSwitch                              22 MB/s | 447 kB     00:00
Oct  3 04:57:05 np0005468397 dnf[27471]: repo-setup-centos-appstream                      81 MB/s |  25 MB     00:00
Oct  3 04:57:10 np0005468397 dnf[27471]: repo-setup-centos-baseos                         51 MB/s | 8.8 MB     00:00
Oct  3 04:57:12 np0005468397 dnf[27471]: repo-setup-centos-highavailability               32 MB/s | 744 kB     00:00
Oct  3 04:57:12 np0005468397 dnf[27471]: repo-setup-centos-powertools                     84 MB/s | 7.1 MB     00:00
Oct  3 04:57:15 np0005468397 dnf[27471]: Extra Packages for Enterprise Linux 9 - x86_64   18 MB/s |  20 MB     00:01
Oct  3 04:57:29 np0005468397 dnf[27471]: Metadata cache created.
Oct  3 04:57:29 np0005468397 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  3 04:57:29 np0005468397 systemd[1]: Finished dnf makecache.
Oct  3 04:57:29 np0005468397 systemd[1]: dnf-makecache.service: Consumed 25.607s CPU time.
Oct  3 04:58:42 np0005468397 python3[27596]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:03:42 np0005468397 systemd[1]: session-7.scope: Deactivated successfully.
Oct  3 05:03:42 np0005468397 systemd[1]: session-7.scope: Consumed 4.502s CPU time.
Oct  3 05:03:42 np0005468397 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Oct  3 05:03:42 np0005468397 systemd-logind[798]: Removed session 7.
Oct  3 05:10:01 np0005468397 systemd-logind[798]: New session 8 of user zuul.
Oct  3 05:10:01 np0005468397 systemd[1]: Started Session 8 of User zuul.
Oct  3 05:10:02 np0005468397 python3.9[27772]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:10:03 np0005468397 python3.9[27953]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:10:11 np0005468397 systemd[1]: session-8.scope: Deactivated successfully.
Oct  3 05:10:11 np0005468397 systemd[1]: session-8.scope: Consumed 7.889s CPU time.
Oct  3 05:10:11 np0005468397 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Oct  3 05:10:11 np0005468397 systemd-logind[798]: Removed session 8.
Oct  3 05:10:26 np0005468397 systemd-logind[798]: New session 9 of user zuul.
Oct  3 05:10:26 np0005468397 systemd[1]: Started Session 9 of User zuul.
Oct  3 05:10:27 np0005468397 python3.9[28165]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  3 05:10:28 np0005468397 python3.9[28339]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:10:29 np0005468397 python3.9[28491]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:10:29 np0005468397 python3.9[28644]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:10:30 np0005468397 python3.9[28796]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:10:31 np0005468397 python3.9[28948]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:10:31 np0005468397 python3.9[29071]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482630.8023255-73-254271064269431/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:10:32 np0005468397 python3.9[29223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:10:33 np0005468397 python3.9[29379]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:10:34 np0005468397 python3.9[29529]: ansible-ansible.builtin.service_facts Invoked
Oct  3 05:10:37 np0005468397 python3.9[29784]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:10:38 np0005468397 python3.9[29934]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:10:39 np0005468397 python3.9[30088]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:10:40 np0005468397 python3.9[30246]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:10:40 np0005468397 python3.9[30330]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:11:23 np0005468397 systemd[1]: Reloading.
Oct  3 05:11:23 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:11:23 np0005468397 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  3 05:11:23 np0005468397 systemd[1]: Reloading.
Oct  3 05:11:23 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:11:23 np0005468397 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  3 05:11:23 np0005468397 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  3 05:11:23 np0005468397 systemd[1]: Reloading.
Oct  3 05:11:23 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:11:24 np0005468397 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  3 05:11:24 np0005468397 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct  3 05:11:24 np0005468397 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct  3 05:11:24 np0005468397 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct  3 05:12:25 np0005468397 kernel: SELinux:  Converting 2716 SID table entries...
Oct  3 05:12:25 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 05:12:25 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 05:12:25 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 05:12:25 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 05:12:25 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 05:12:25 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 05:12:25 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 05:12:26 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  3 05:12:26 np0005468397 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 05:12:26 np0005468397 systemd[1]: Starting man-db-cache-update.service...
Oct  3 05:12:26 np0005468397 systemd[1]: Reloading.
Oct  3 05:12:26 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:12:26 np0005468397 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  3 05:12:26 np0005468397 systemd[1]: Starting PackageKit Daemon...
Oct  3 05:12:26 np0005468397 systemd[1]: Started PackageKit Daemon.
Oct  3 05:12:27 np0005468397 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 05:12:27 np0005468397 systemd[1]: Finished man-db-cache-update.service.
Oct  3 05:12:27 np0005468397 systemd[1]: man-db-cache-update.service: Consumed 1.034s CPU time.
Oct  3 05:12:27 np0005468397 systemd[1]: run-r4dfc98e5d63845319358d56d16377085.service: Deactivated successfully.
Oct  3 05:12:27 np0005468397 python3.9[31855]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:12:29 np0005468397 python3.9[32136]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  3 05:12:30 np0005468397 python3.9[32288]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  3 05:12:32 np0005468397 python3.9[32441]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:12:33 np0005468397 python3.9[32593]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  3 05:12:34 np0005468397 python3.9[32745]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:12:34 np0005468397 python3.9[32897]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:12:35 np0005468397 python3.9[33020]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759482754.3397882-227-70474559450402/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:12:38 np0005468397 python3.9[33172]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  3 05:12:39 np0005468397 python3.9[33325]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  3 05:12:39 np0005468397 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 05:12:40 np0005468397 python3.9[33485]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  3 05:12:41 np0005468397 python3.9[33645]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  3 05:12:41 np0005468397 python3.9[33798]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  3 05:12:42 np0005468397 python3.9[33956]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  3 05:12:43 np0005468397 python3.9[34108]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:12:45 np0005468397 python3.9[34261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:12:46 np0005468397 python3.9[34413]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:12:46 np0005468397 python3.9[34536]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759482765.6968973-322-36155490780372/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:12:47 np0005468397 python3.9[34688]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:12:47 np0005468397 systemd[1]: Starting Load Kernel Modules...
Oct  3 05:12:48 np0005468397 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  3 05:12:48 np0005468397 kernel: Bridge firewalling registered
Oct  3 05:12:48 np0005468397 systemd-modules-load[34692]: Inserted module 'br_netfilter'
Oct  3 05:12:48 np0005468397 systemd[1]: Finished Load Kernel Modules.
Oct  3 05:12:48 np0005468397 python3.9[34847]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:12:49 np0005468397 python3.9[34970]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759482768.1929417-345-2105964165052/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:12:49 np0005468397 python3.9[35122]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:12:52 np0005468397 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct  3 05:12:53 np0005468397 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct  3 05:12:53 np0005468397 irqbalance[794]: Cannot change IRQ 26 affinity: Operation not permitted
Oct  3 05:12:53 np0005468397 irqbalance[794]: IRQ 26 affinity is now unmanaged
Oct  3 05:12:53 np0005468397 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 05:12:53 np0005468397 systemd[1]: Starting man-db-cache-update.service...
Oct  3 05:12:53 np0005468397 systemd[1]: Reloading.
Oct  3 05:12:53 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:12:53 np0005468397 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  3 05:12:54 np0005468397 python3.9[36252]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:12:55 np0005468397 python3.9[37223]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  3 05:12:56 np0005468397 python3.9[37910]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:12:57 np0005468397 python3.9[38737]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:12:57 np0005468397 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  3 05:12:57 np0005468397 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  3 05:12:57 np0005468397 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 05:12:57 np0005468397 systemd[1]: Finished man-db-cache-update.service.
Oct  3 05:12:57 np0005468397 systemd[1]: man-db-cache-update.service: Consumed 5.204s CPU time.
Oct  3 05:12:57 np0005468397 systemd[1]: run-r42815ba9067547ab92dceb177281690c.service: Deactivated successfully.
Oct  3 05:12:58 np0005468397 python3.9[39659]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:12:58 np0005468397 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  3 05:12:58 np0005468397 systemd[1]: tuned.service: Deactivated successfully.
Oct  3 05:12:58 np0005468397 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  3 05:12:58 np0005468397 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  3 05:12:58 np0005468397 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  3 05:12:59 np0005468397 python3.9[39820]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  3 05:13:01 np0005468397 python3.9[39972]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:13:01 np0005468397 systemd[1]: Reloading.
Oct  3 05:13:01 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:13:02 np0005468397 python3.9[40161]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:13:02 np0005468397 systemd[1]: Reloading.
Oct  3 05:13:02 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:13:03 np0005468397 python3.9[40350]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:13:03 np0005468397 python3.9[40503]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:13:03 np0005468397 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  3 05:13:04 np0005468397 python3.9[40656]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:13:06 np0005468397 python3.9[40818]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:13:07 np0005468397 python3.9[40971]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:13:07 np0005468397 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  3 05:13:07 np0005468397 systemd[1]: Stopped Apply Kernel Variables.
Oct  3 05:13:07 np0005468397 systemd[1]: Stopping Apply Kernel Variables...
Oct  3 05:13:07 np0005468397 systemd[1]: Starting Apply Kernel Variables...
Oct  3 05:13:07 np0005468397 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  3 05:13:07 np0005468397 systemd[1]: Finished Apply Kernel Variables.
Oct  3 05:13:07 np0005468397 systemd[1]: session-9.scope: Deactivated successfully.
Oct  3 05:13:07 np0005468397 systemd[1]: session-9.scope: Consumed 2min 7.022s CPU time.
Oct  3 05:13:07 np0005468397 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Oct  3 05:13:07 np0005468397 systemd-logind[798]: Removed session 9.
Oct  3 05:13:13 np0005468397 systemd-logind[798]: New session 10 of user zuul.
Oct  3 05:13:13 np0005468397 systemd[1]: Started Session 10 of User zuul.
Oct  3 05:13:14 np0005468397 python3.9[41155]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:13:15 np0005468397 python3.9[41311]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  3 05:13:16 np0005468397 python3.9[41464]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  3 05:13:17 np0005468397 python3.9[41622]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  3 05:13:18 np0005468397 python3.9[41782]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:13:19 np0005468397 python3.9[41867]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  3 05:13:22 np0005468397 python3.9[42030]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:13:33 np0005468397 kernel: SELinux:  Converting 2726 SID table entries...
Oct  3 05:13:33 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 05:13:33 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 05:13:33 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 05:13:33 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 05:13:33 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 05:13:33 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 05:13:33 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 05:13:34 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  3 05:13:34 np0005468397 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  3 05:13:35 np0005468397 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 05:13:35 np0005468397 systemd[1]: Starting man-db-cache-update.service...
Oct  3 05:13:35 np0005468397 systemd[1]: Reloading.
Oct  3 05:13:35 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:13:35 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:13:35 np0005468397 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  3 05:13:36 np0005468397 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 05:13:36 np0005468397 systemd[1]: Finished man-db-cache-update.service.
Oct  3 05:13:36 np0005468397 systemd[1]: run-ra53840a2e4c04597a745e6f2350d6603.service: Deactivated successfully.
Oct  3 05:13:37 np0005468397 python3.9[43133]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 05:13:37 np0005468397 systemd[1]: Reloading.
Oct  3 05:13:37 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:13:37 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:13:37 np0005468397 systemd[1]: Starting Open vSwitch Database Unit...
Oct  3 05:13:37 np0005468397 chown[43176]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  3 05:13:37 np0005468397 ovs-ctl[43181]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  3 05:13:37 np0005468397 ovs-ctl[43181]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  3 05:13:37 np0005468397 ovs-ctl[43181]: Starting ovsdb-server [  OK  ]
Oct  3 05:13:37 np0005468397 ovs-vsctl[43230]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  3 05:13:37 np0005468397 ovs-vsctl[43250]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"41fabae1-2dc7-46e2-b697-d9133d158399\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  3 05:13:37 np0005468397 ovs-ctl[43181]: Configuring Open vSwitch system IDs [  OK  ]
Oct  3 05:13:37 np0005468397 ovs-vsctl[43256]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  3 05:13:37 np0005468397 ovs-ctl[43181]: Enabling remote OVSDB managers [  OK  ]
Oct  3 05:13:37 np0005468397 systemd[1]: Started Open vSwitch Database Unit.
Oct  3 05:13:37 np0005468397 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  3 05:13:37 np0005468397 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  3 05:13:37 np0005468397 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  3 05:13:37 np0005468397 kernel: openvswitch: Open vSwitch switching datapath
Oct  3 05:13:37 np0005468397 ovs-ctl[43302]: Inserting openvswitch module [  OK  ]
Oct  3 05:13:37 np0005468397 ovs-ctl[43270]: Starting ovs-vswitchd [  OK  ]
Oct  3 05:13:37 np0005468397 ovs-vsctl[43320]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  3 05:13:37 np0005468397 ovs-ctl[43270]: Enabling remote OVSDB managers [  OK  ]
Oct  3 05:13:37 np0005468397 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  3 05:13:37 np0005468397 systemd[1]: Starting Open vSwitch...
Oct  3 05:13:37 np0005468397 systemd[1]: Finished Open vSwitch.
Oct  3 05:13:38 np0005468397 python3.9[43471]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:13:39 np0005468397 python3.9[43623]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  3 05:13:40 np0005468397 kernel: SELinux:  Converting 2740 SID table entries...
Oct  3 05:13:40 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 05:13:40 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 05:13:40 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 05:13:40 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 05:13:40 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 05:13:40 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 05:13:40 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 05:13:41 np0005468397 python3.9[43778]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:13:42 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  3 05:13:42 np0005468397 python3.9[43936]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:13:44 np0005468397 python3.9[44089]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:13:45 np0005468397 python3.9[44376]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  3 05:13:46 np0005468397 python3.9[44526]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:13:47 np0005468397 python3.9[44680]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:13:49 np0005468397 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 05:13:49 np0005468397 systemd[1]: Starting man-db-cache-update.service...
Oct  3 05:13:49 np0005468397 systemd[1]: Reloading.
Oct  3 05:13:49 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:13:49 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:13:49 np0005468397 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  3 05:13:49 np0005468397 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 05:13:49 np0005468397 systemd[1]: Finished man-db-cache-update.service.
Oct  3 05:13:49 np0005468397 systemd[1]: run-rc62bde6a3f8042a2b8ebd493158d5ad7.service: Deactivated successfully.
Oct  3 05:13:50 np0005468397 python3.9[44997]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:13:50 np0005468397 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  3 05:13:50 np0005468397 systemd[1]: Stopped Network Manager Wait Online.
Oct  3 05:13:50 np0005468397 systemd[1]: Stopping Network Manager Wait Online...
Oct  3 05:13:50 np0005468397 systemd[1]: Stopping Network Manager...
Oct  3 05:13:50 np0005468397 NetworkManager[3949]: <info>  [1759482830.5257] caught SIGTERM, shutting down normally.
Oct  3 05:13:50 np0005468397 NetworkManager[3949]: <info>  [1759482830.5274] dhcp4 (eth0): canceled DHCP transaction
Oct  3 05:13:50 np0005468397 NetworkManager[3949]: <info>  [1759482830.5274] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  3 05:13:50 np0005468397 NetworkManager[3949]: <info>  [1759482830.5274] dhcp4 (eth0): state changed no lease
Oct  3 05:13:50 np0005468397 NetworkManager[3949]: <info>  [1759482830.5277] manager: NetworkManager state is now CONNECTED_SITE
Oct  3 05:13:50 np0005468397 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  3 05:13:50 np0005468397 NetworkManager[3949]: <info>  [1759482830.5541] exiting (success)
Oct  3 05:13:50 np0005468397 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  3 05:13:50 np0005468397 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  3 05:13:50 np0005468397 systemd[1]: Stopped Network Manager.
Oct  3 05:13:50 np0005468397 systemd[1]: NetworkManager.service: Consumed 10.274s CPU time, 4.1M memory peak, read 0B from disk, written 32.0K to disk.
Oct  3 05:13:50 np0005468397 systemd[1]: Starting Network Manager...
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.6278] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0f78053e-e0d8-4b4c-acfe-b75004689112)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.6281] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.6343] manager[0x563f5e843090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  3 05:13:50 np0005468397 systemd[1]: Starting Hostname Service...
Oct  3 05:13:50 np0005468397 systemd[1]: Started Hostname Service.
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7349] hostname: hostname: using hostnamed
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7351] hostname: static hostname changed from (none) to "compute-0"
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7359] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7364] manager[0x563f5e843090]: rfkill: Wi-Fi hardware radio set enabled
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7367] manager[0x563f5e843090]: rfkill: WWAN hardware radio set enabled
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7390] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7400] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7401] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7402] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7404] manager: Networking is enabled by state file
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7408] settings: Loaded settings plugin: keyfile (internal)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7413] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7438] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7459] dhcp: init: Using DHCP client 'internal'
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7461] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7466] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7471] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7476] device (lo): Activation: starting connection 'lo' (e79ef2b9-7dd2-4056-b7de-d12dc9e46160)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7482] device (eth0): carrier: link connected
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7485] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7489] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7490] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7495] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7499] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7504] device (eth1): carrier: link connected
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7507] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7510] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (57abeb53-2533-54e2-a6fe-262ec4fd2b38) (indicated)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7510] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7514] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7520] device (eth1): Activation: starting connection 'ci-private-network' (57abeb53-2533-54e2-a6fe-262ec4fd2b38)
Oct  3 05:13:50 np0005468397 systemd[1]: Started Network Manager.
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7535] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7544] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7546] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7548] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7550] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7553] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7555] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7557] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7561] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7565] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7567] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7583] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7604] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7615] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7619] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7628] device (lo): Activation: successful, device activated.
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7637] dhcp4 (eth0): state changed new lease, address=38.129.56.154
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7647] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  3 05:13:50 np0005468397 systemd[1]: Starting Network Manager Wait Online...
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7741] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7748] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7757] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7761] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7767] device (eth1): Activation: successful, device activated.
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7782] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7783] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7785] manager: NetworkManager state is now CONNECTED_SITE
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7787] device (eth0): Activation: successful, device activated.
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7791] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  3 05:13:50 np0005468397 NetworkManager[45015]: <info>  [1759482830.7793] manager: startup complete
Oct  3 05:13:50 np0005468397 systemd[1]: Finished Network Manager Wait Online.
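[annotation] The block above is NetworkManager walking each device through its activation state machine (disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated) until "startup complete", at which point "Network Manager Wait Online" can finish. A minimal Python sketch for reconstructing those per-device timelines from a journal like this one; the regex mirrors the line format above, and the log path is an assumption.

    import re
    from collections import defaultdict

    # Matches the NetworkManager lines above, e.g.
    #   [1759482830.7546] device (eth0): state change: ip-config -> ip-check ...
    #   [1759482850.0892] device (br-ex)[Open vSwitch Bridge]: state change: ...
    STATE = re.compile(
        r"\[(?P<ts>\d+\.\d+)\] device \((?P<dev>[^)]+)\)"
        r"(?:\[(?P<kind>[^\]]+)\])?: state change: (?P<old>\S+) -> (?P<new>\S+)"
    )

    def device_timelines(lines):
        """Return {device: [(timestamp, old_state, new_state), ...]}."""
        timelines = defaultdict(list)
        for line in lines:
            m = STATE.search(line)
            if m:
                key = m["dev"] + (f" [{m['kind']}]" if m["kind"] else "")
                timelines[key].append((float(m["ts"]), m["old"], m["new"]))
        return timelines

    # A healthy startup ends with every tracked device reaching 'activated'.
    with open("/var/log/messages") as fh:   # log path is an assumption
        for dev, steps in sorted(device_timelines(fh).items()):
            print(f"{dev}: {len(steps)} transitions, final state {steps[-1][2]}")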
Oct  3 05:13:51 np0005468397 python3.9[45224]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:13:56 np0005468397 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 05:13:56 np0005468397 systemd[1]: Starting man-db-cache-update.service...
Oct  3 05:13:56 np0005468397 systemd[1]: Reloading.
Oct  3 05:13:56 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:13:56 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:13:56 np0005468397 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  3 05:13:57 np0005468397 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 05:13:57 np0005468397 systemd[1]: Finished man-db-cache-update.service.
Oct  3 05:13:57 np0005468397 systemd[1]: run-r6f5f6b1fa3c14cba88434d315f22278c.service: Deactivated successfully.
Oct  3 05:13:58 np0005468397 python3.9[45687]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:13:58 np0005468397 python3.9[45839]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:13:59 np0005468397 python3.9[45993]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:00 np0005468397 python3.9[46145]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:00 np0005468397 python3.9[46297]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:00 np0005468397 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  3 05:14:01 np0005468397 python3.9[46449]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
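[annotation] The ini_file tasks above edit NetworkManager's configuration in place: no-auto-default=* is set in the [main] section of /etc/NetworkManager/NetworkManager.conf, while dns=none and rc-manager=unmanaged are removed from both that file and the 99-cloud-init.conf drop-in, handing DNS and resolv.conf management back to NetworkManager. A rough Python equivalent of the NetworkManager.conf edits using configparser; note that configparser writes "key = value" with spaces, whereas the tasks pass no_extra_spaces=True, so the real file reads no-auto-default=*.

    import configparser

    def edit_main_section(path, set_opts, drop_opts):
        """Set some [main] keys and remove others, like ini_file does above."""
        cfg = configparser.ConfigParser()
        cfg.read(path)
        if not cfg.has_section("main"):
            cfg.add_section("main")
        for key, value in set_opts.items():
            cfg.set("main", key, value)
        for key in drop_opts:
            cfg.remove_option("main", key)
        with open(path, "w") as fh:
            cfg.write(fh)

    # Mirrors tasks 45839 (state=present) and 45993/46297 (state=absent).
    edit_main_section(
        "/etc/NetworkManager/NetworkManager.conf",
        set_opts={"no-auto-default": "*"},
        drop_opts=["dns", "rc-manager"],
    )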
Oct  3 05:14:01 np0005468397 python3.9[46601]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:14:02 np0005468397 python3.9[46724]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482841.5447206-229-142688899386749/.source _original_basename=.gcy72c5e follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:03 np0005468397 python3.9[46876]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:04 np0005468397 python3.9[47028]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  3 05:14:04 np0005468397 python3.9[47180]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:06 np0005468397 python3.9[47607]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  3 05:14:07 np0005468397 ansible-async_wrapper.py[47782]: Invoked with j781269489794 300 /home/zuul/.ansible/tmp/ansible-tmp-1759482847.1595993-295-150378468786162/AnsiballZ_edpm_os_net_config.py _
Oct  3 05:14:08 np0005468397 ansible-async_wrapper.py[47785]: Starting module and watcher
Oct  3 05:14:08 np0005468397 ansible-async_wrapper.py[47785]: Start watching 47786 (300)
Oct  3 05:14:08 np0005468397 ansible-async_wrapper.py[47786]: Start module (47786)
Oct  3 05:14:08 np0005468397 ansible-async_wrapper.py[47782]: Return async_wrapper task started.
Oct  3 05:14:08 np0005468397 python3.9[47787]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
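[annotation] ansible-edpm_os_net_config runs asynchronously here (watcher 47785, module 47786) against /etc/os-net-config/config.yaml with use_nmstate=True, which is what drives the NetworkManager checkpoint and connection-add activity that follows. The device names below (br-ex as an OVS bridge, eth1 as its port, vlan20 through vlan23 as OVS interfaces) suggest a layout along the following lines; this is an illustrative guess at the os-net-config schema, not the contents of the slurped file, and a real deployment carries addresses, routes and MTUs omitted here.

    import yaml  # PyYAML, an assumption: pip install pyyaml

    # Illustrative only: an os-net-config layout consistent with the devices
    # in the NetworkManager audit trail below. NOT the real config.yaml.
    CONFIG = """
    network_config:
    - type: ovs_bridge
      name: br-ex
      use_dhcp: false
      members:
      - type: interface
        name: eth1
        primary: true
      - type: vlan
        vlan_id: 20
      - type: vlan
        vlan_id: 21
      - type: vlan
        vlan_id: 22
      - type: vlan
        vlan_id: 23
    """

    bridge = yaml.safe_load(CONFIG)["network_config"][0]
    assert bridge["type"] == "ovs_bridge" and bridge["name"] == "br-ex"
    print([m.get("name", f"vlan{m.get('vlan_id')}") for m in bridge["members"]])
    # -> ['eth1', 'vlan20', 'vlan21', 'vlan22', 'vlan23']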
Oct  3 05:14:08 np0005468397 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  3 05:14:08 np0005468397 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  3 05:14:08 np0005468397 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  3 05:14:08 np0005468397 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  3 05:14:08 np0005468397 kernel: cfg80211: failed to load regulatory.db
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0071] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0085] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0612] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0617] audit: op="connection-add" uuid="331cbe20-d04b-41ba-930b-719bc55f19b0" name="br-ex-br" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0633] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0636] audit: op="connection-add" uuid="431ffe38-2319-42b8-a309-e43a8474acd3" name="br-ex-port" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0650] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0653] audit: op="connection-add" uuid="08606dde-ec69-45f9-9a28-c7fdc2953586" name="eth1-port" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0665] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0667] audit: op="connection-add" uuid="5e8eaa07-bd5a-4b73-a27b-edecb5023189" name="vlan20-port" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0679] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0682] audit: op="connection-add" uuid="59b7c820-e512-4e1f-9fce-6b0d6331de05" name="vlan21-port" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0696] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0698] audit: op="connection-add" uuid="aa937610-1fca-49a3-969f-8bc9d3ef5a02" name="vlan22-port" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0712] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0715] audit: op="connection-add" uuid="3554fb98-4f9f-4a84-a314-a4dbd3a03f15" name="vlan23-port" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0734] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0751] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0753] audit: op="connection-add" uuid="d56904ca-f375-4cde-baa1-868e0c438778" name="br-ex-if" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0791] audit: op="connection-update" uuid="57abeb53-2533-54e2-a6fe-262ec4fd2b38" name="ci-private-network" args="ovs-external-ids.data,ovs-interface.type,connection.controller,connection.master,connection.port-type,connection.timestamp,connection.slave-type,ipv4.routing-rules,ipv4.method,ipv4.addresses,ipv4.never-default,ipv4.routes,ipv4.dns,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.dns" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0807] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0809] audit: op="connection-add" uuid="cae299bd-3c50-4e36-8200-a59011eab12a" name="vlan20-if" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0825] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0827] audit: op="connection-add" uuid="8f02a5fc-04a2-4124-9094-15b62ad4e946" name="vlan21-if" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0844] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0847] audit: op="connection-add" uuid="17fcc982-f8e2-4c45-9772-0274cc3edeb2" name="vlan22-if" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0862] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0865] audit: op="connection-add" uuid="97602870-5cd8-447c-96d3-48f592540beb" name="vlan23-if" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0879] audit: op="connection-delete" uuid="d95e5de3-2db4-3998-8047-40bafb2966b1" name="Wired connection 1" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0892] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0903] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0908] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (331cbe20-d04b-41ba-930b-719bc55f19b0)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0910] audit: op="connection-activate" uuid="331cbe20-d04b-41ba-930b-719bc55f19b0" name="br-ex-br" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0912] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0921] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0926] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (431ffe38-2319-42b8-a309-e43a8474acd3)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0928] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0936] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0941] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (08606dde-ec69-45f9-9a28-c7fdc2953586)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0944] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0952] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0957] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (5e8eaa07-bd5a-4b73-a27b-edecb5023189)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0960] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0968] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0974] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (59b7c820-e512-4e1f-9fce-6b0d6331de05)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0976] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0985] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0989] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (aa937610-1fca-49a3-969f-8bc9d3ef5a02)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0991] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.0997] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1000] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (3554fb98-4f9f-4a84-a314-a4dbd3a03f15)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1001] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1003] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1005] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1010] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1014] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1017] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (d56904ca-f375-4cde-baa1-868e0c438778)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1017] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1020] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1021] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1023] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1024] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1032] device (eth1): disconnecting for new activation request.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1032] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1034] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1036] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1037] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1041] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1045] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1048] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (cae299bd-3c50-4e36-8200-a59011eab12a)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1048] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1050] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1052] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1053] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1055] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1059] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1062] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (8f02a5fc-04a2-4124-9094-15b62ad4e946)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1063] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1065] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1067] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1068] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1071] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1075] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1078] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (17fcc982-f8e2-4c45-9772-0274cc3edeb2)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1078] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1080] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1081] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1082] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1084] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1087] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1090] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (97602870-5cd8-447c-96d3-48f592540beb)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1091] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1094] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1095] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1096] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1097] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1110] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.method,ipv6.addr-gen-mode" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1111] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1114] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1115] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1124] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1127] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1133] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1136] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1138] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1143] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1147] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1149] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1151] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1156] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1160] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1163] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1165] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1170] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1173] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1176] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 kernel: ovs-system: entered promiscuous mode
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1177] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1181] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1185] dhcp4 (eth0): canceled DHCP transaction
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1187] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1187] dhcp4 (eth0): state changed no lease
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1188] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  3 05:14:10 np0005468397 systemd-udevd[47794]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1202] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1205] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47788 uid=0 result="fail" reason="Device is not activated"
Oct  3 05:14:10 np0005468397 kernel: Timeout policy base is empty
Oct  3 05:14:10 np0005468397 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1242] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1246] dhcp4 (eth0): state changed new lease, address=38.129.56.154
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1254] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1333] device (eth1): disconnecting for new activation request.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1334] audit: op="connection-activate" uuid="57abeb53-2533-54e2-a6fe-262ec4fd2b38" name="ci-private-network" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1338] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1346] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  3 05:14:10 np0005468397 kernel: br-ex: entered promiscuous mode
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1510] device (eth1): Activation: starting connection 'ci-private-network' (57abeb53-2533-54e2-a6fe-262ec4fd2b38)
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1514] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1518] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1528] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1531] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1537] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1540] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1548] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47788 uid=0 result="success"
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1548] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1550] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1553] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1555] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1556] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1557] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1559] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1566] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1569] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1572] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1577] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1580] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 kernel: vlan22: entered promiscuous mode
Oct  3 05:14:10 np0005468397 systemd-udevd[47792]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1586] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1589] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1593] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1596] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1600] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1604] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1608] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1613] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1616] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 kernel: vlan21: entered promiscuous mode
Oct  3 05:14:10 np0005468397 systemd-udevd[47793]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1673] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1678] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1684] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1689] device (eth1): Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1695] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1717] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 kernel: vlan23: entered promiscuous mode
Oct  3 05:14:10 np0005468397 systemd-udevd[47894]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1737] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1748] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1752] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1753] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1759] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1765] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1770] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1774] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 kernel: vlan20: entered promiscuous mode
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1790] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1857] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1858] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1862] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1868] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1897] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1908] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1915] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1924] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1930] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1936] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1967] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1969] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  3 05:14:10 np0005468397 NetworkManager[45015]: <info>  [1759482850.1975] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  3 05:14:11 np0005468397 NetworkManager[45015]: <info>  [1759482851.3340] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47788 uid=0 result="success"
Oct  3 05:14:11 np0005468397 NetworkManager[45015]: <info>  [1759482851.5357] checkpoint[0x563f5e819950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  3 05:14:11 np0005468397 NetworkManager[45015]: <info>  [1759482851.5359] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47788 uid=0 result="success"
Oct  3 05:14:11 np0005468397 NetworkManager[45015]: <info>  [1759482851.8920] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47788 uid=0 result="success"
Oct  3 05:14:11 np0005468397 python3.9[48147]: ansible-ansible.legacy.async_status Invoked with jid=j781269489794.47782 mode=status _async_dir=/root/.ansible_async
Oct  3 05:14:11 np0005468397 NetworkManager[45015]: <info>  [1759482851.8947] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47788 uid=0 result="success"
Oct  3 05:14:12 np0005468397 NetworkManager[45015]: <info>  [1759482852.0955] audit: op="networking-control" arg="global-dns-configuration" pid=47788 uid=0 result="success"
Oct  3 05:14:12 np0005468397 NetworkManager[45015]: <info>  [1759482852.0992] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  3 05:14:12 np0005468397 NetworkManager[45015]: <info>  [1759482852.1018] audit: op="networking-control" arg="global-dns-configuration" pid=47788 uid=0 result="success"
Oct  3 05:14:12 np0005468397 NetworkManager[45015]: <info>  [1759482852.1034] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47788 uid=0 result="success"
Oct  3 05:14:12 np0005468397 NetworkManager[45015]: <info>  [1759482852.2468] checkpoint[0x563f5e819a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  3 05:14:12 np0005468397 NetworkManager[45015]: <info>  [1759482852.2474] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47788 uid=0 result="success"
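[annotation] The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy audit ops bracketing the reconfiguration correspond to NetworkManager's D-Bus checkpoint API: snapshot the devices, keep extending the rollback deadline while changes are applied and verified, then destroy the checkpoint to commit. If the tool lost connectivity and never got that far, the checkpoint would roll back on its own when the timeout lapsed. A minimal sketch of the same pattern with the python dbus bindings; apply_network_changes() and verify_connectivity() are hypothetical stand-ins.

    import dbus  # dbus-python, an assumption on the client side

    NM = "org.freedesktop.NetworkManager"
    bus = dbus.SystemBus()
    nm = dbus.Interface(bus.get_object(NM, "/org/freedesktop/NetworkManager"), NM)

    # Empty device list = checkpoint every device; roll back after 60 s
    # unless the checkpoint is destroyed first.
    cp = nm.CheckpointCreate(dbus.Array([], signature="o"),
                             dbus.UInt32(60), dbus.UInt32(0))
    try:
        apply_network_changes()                       # hypothetical stand-in
        nm.CheckpointAdjustRollbackTimeout(cp, dbus.UInt32(120))
        verify_connectivity()                         # hypothetical stand-in
        nm.CheckpointDestroy(cp)                      # commit: cancel rollback
    except Exception:
        nm.CheckpointRollback(cp)                     # restore the snapshot
        raise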
Oct  3 05:14:12 np0005468397 ansible-async_wrapper.py[47786]: Module complete (47786)
Oct  3 05:14:13 np0005468397 ansible-async_wrapper.py[47785]: Done in kid B.
Oct  3 05:14:15 np0005468397 python3.9[48252]: ansible-ansible.legacy.async_status Invoked with jid=j781269489794.47782 mode=status _async_dir=/root/.ansible_async
Oct  3 05:14:15 np0005468397 python3.9[48352]: ansible-ansible.legacy.async_status Invoked with jid=j781269489794.47782 mode=cleanup _async_dir=/root/.ansible_async
Oct  3 05:14:16 np0005468397 python3.9[48504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:14:17 np0005468397 python3.9[48627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482855.9987304-322-49071398172227/.source.returncode _original_basename=.ijwhx6es follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:17 np0005468397 python3.9[48779]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:14:18 np0005468397 python3.9[48902]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482857.334561-338-220669415856242/.source.cfg _original_basename=.1n88tibu follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
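[annotation] The file copied above is the usual cloud-init kill switch: it stops cloud-init from rewriting the network layout os-net-config just applied on later boots. Its payload is masked in the log (content=NOT_LOGGING_PARAMETER); the conventional content for such a drop-in is the single stanza below, shown as an assumption rather than a quote from this host.

    import yaml  # PyYAML, as above

    # Conventional payload for 99-edpm-disable-network-config.cfg; the real
    # content is masked in the log (content=NOT_LOGGING_PARAMETER).
    DISABLE = "network: {config: disabled}\n"
    assert yaml.safe_load(DISABLE) == {"network": {"config": "disabled"}}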
Oct  3 05:14:19 np0005468397 python3.9[49055]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:14:19 np0005468397 systemd[1]: Reloading Network Manager...
Oct  3 05:14:19 np0005468397 NetworkManager[45015]: <info>  [1759482859.2785] audit: op="reload" arg="0" pid=49059 uid=0 result="success"
Oct  3 05:14:19 np0005468397 NetworkManager[45015]: <info>  [1759482859.2794] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  3 05:14:19 np0005468397 systemd[1]: Reloaded Network Manager.
Oct  3 05:14:19 np0005468397 systemd[1]: session-10.scope: Deactivated successfully.
Oct  3 05:14:19 np0005468397 systemd[1]: session-10.scope: Consumed 48.187s CPU time.
Oct  3 05:14:19 np0005468397 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Oct  3 05:14:19 np0005468397 systemd-logind[798]: Removed session 10.
Oct  3 05:14:20 np0005468397 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  3 05:14:25 np0005468397 systemd-logind[798]: New session 11 of user zuul.
Oct  3 05:14:25 np0005468397 systemd[1]: Started Session 11 of User zuul.
Oct  3 05:14:26 np0005468397 python3.9[49245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:14:27 np0005468397 python3.9[49399]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:14:28 np0005468397 python3.9[49593]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:14:29 np0005468397 systemd[1]: session-11.scope: Deactivated successfully.
Oct  3 05:14:29 np0005468397 systemd[1]: session-11.scope: Consumed 2.245s CPU time.
Oct  3 05:14:29 np0005468397 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Oct  3 05:14:29 np0005468397 systemd-logind[798]: Removed session 11.
Oct  3 05:14:29 np0005468397 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  3 05:14:34 np0005468397 systemd-logind[798]: New session 12 of user zuul.
Oct  3 05:14:34 np0005468397 systemd[1]: Started Session 12 of User zuul.
Oct  3 05:14:35 np0005468397 python3.9[49775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:14:36 np0005468397 python3.9[49929]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:14:37 np0005468397 python3.9[50085]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:14:38 np0005468397 python3.9[50170]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:14:40 np0005468397 python3.9[50323]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:14:41 np0005468397 python3.9[50519]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:42 np0005468397 python3.9[50671]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:14:42 np0005468397 systemd[1]: var-lib-containers-storage-overlay-compat2504504294-merged.mount: Deactivated successfully.
Oct  3 05:14:42 np0005468397 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2473377256-merged.mount: Deactivated successfully.
Oct  3 05:14:42 np0005468397 podman[50672]: 2025-10-03 09:14:42.123568011 +0000 UTC m=+0.040176980 system refresh
Oct  3 05:14:42 np0005468397 python3.9[50835]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:14:43 np0005468397 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  3 05:14:43 np0005468397 python3.9[50958]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759482882.3062835-79-27189500265295/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ba907af0b38e2a2c592a41d48f6b0acf5ae99f85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:14:44 np0005468397 python3.9[51110]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:14:45 np0005468397 python3.9[51233]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759482883.9211462-94-59299890548669/.source.conf follow=False _original_basename=registries.conf.j2 checksum=a4d0af73e82956a82115da1152ffa584e292554a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:14:45 np0005468397 python3.9[51385]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:14:46 np0005468397 python3.9[51537]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:14:47 np0005468397 python3.9[51689]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:14:47 np0005468397 python3.9[51841]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
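
Four community.general.ini_file calls then shape /etc/containers/containers.conf: a pids_limit in the [containers] table, journald event logging and the crun runtime in [engine], and netavark as the [network] backend. A sketch condensing them into one looped task (the loop grouping is editorial; the log shows four separate invocations, and the inner double quotes are literal because containers.conf is TOML):

    - name: Tune containers.conf for EDPM
      community.general.ini_file:
        path: /etc/containers/containers.conf
        section: "{{ item.section }}"
        option: "{{ item.option }}"
        value: "{{ item.value }}"
        owner: root
        group: root
        mode: "0644"
        setype: etc_t
        create: true
      loop:
        - { section: containers, option: pids_limit, value: "4096" }
        - { section: engine, option: events_logger, value: '"journald"' }
        - { section: engine, option: runtime, value: '"crun"' }
        - { section: network, option: network_backend, value: '"netavark"' }
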
Oct  3 05:14:48 np0005468397 python3.9[51993]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
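
Package installs in this run are plain dnf module calls with default options, as here for openssh-server (chrony follows the same pattern a few lines down):

    - name: Ensure openssh-server is installed
      ansible.builtin.dnf:
        name: openssh-server
        state: present
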
Oct  3 05:14:50 np0005468397 python3.9[52146]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:14:51 np0005468397 python3.9[52300]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:14:52 np0005468397 python3.9[52452]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:14:53 np0005468397 python3.9[52604]: ansible-service_facts Invoked
Oct  3 05:14:53 np0005468397 network[52621]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 05:14:53 np0005468397 network[52622]: 'network-scripts' will be removed from distribution in near future.
Oct  3 05:14:53 np0005468397 network[52623]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 05:14:57 np0005468397 python3.9[53077]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:14:59 np0005468397 python3.9[53230]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  3 05:15:00 np0005468397 python3.9[53382]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:01 np0005468397 python3.9[53507]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482900.3659456-226-179390383982602/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:02 np0005468397 python3.9[53661]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:02 np0005468397 python3.9[53786]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482901.6933248-241-122355029969130/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:03 np0005468397 python3.9[53940]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
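
The chrony configuration step writes /etc/chrony.conf and /etc/sysconfig/chronyd from templates (keeping backups) and pins PEERNTP=no so DHCP-supplied NTP servers cannot override the configured ones. Reconstructed from the logged parameters; template names are taken from the _original_basename fields:

    - name: Deploy /etc/chrony.conf from template (keeping a backup)
      ansible.builtin.template:
        src: chrony.conf.j2
        dest: /etc/chrony.conf
        mode: "0644"
        backup: true

    - name: Deploy /etc/sysconfig/chronyd from template
      ansible.builtin.template:
        src: chronyd.sysconfig.j2
        dest: /etc/sysconfig/chronyd
        mode: "0644"
        backup: true

    - name: Stop DHCP from supplying NTP servers
      ansible.builtin.lineinfile:
        path: /etc/sysconfig/network
        regexp: '^PEERNTP='
        line: PEERNTP=no
        create: true
        mode: "0644"
        backup: true
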
Oct  3 05:15:04 np0005468397 python3.9[54094]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:15:05 np0005468397 python3.9[54178]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:15:07 np0005468397 python3.9[54332]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:15:07 np0005468397 python3.9[54416]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:15:07 np0005468397 chronyd[807]: chronyd exiting
Oct  3 05:15:07 np0005468397 systemd[1]: Stopping NTP client/server...
Oct  3 05:15:07 np0005468397 systemd[1]: chronyd.service: Deactivated successfully.
Oct  3 05:15:07 np0005468397 systemd[1]: Stopped NTP client/server.
Oct  3 05:15:07 np0005468397 systemd[1]: Starting NTP client/server...
Oct  3 05:15:08 np0005468397 chronyd[54425]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  3 05:15:08 np0005468397 chronyd[54425]: Frequency -32.457 +/- 0.135 ppm read from /var/lib/chrony/drift
Oct  3 05:15:08 np0005468397 chronyd[54425]: Loaded seccomp filter (level 2)
Oct  3 05:15:08 np0005468397 systemd[1]: Started NTP client/server.
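
The service handling that produced the stop/start messages above is two systemd module calls: one enabling and starting chronyd, and a second forcing a restart so the freshly written configuration is actually read. A sketch:

    - name: Enable and start chronyd
      ansible.builtin.systemd:
        name: chronyd
        enabled: true
        state: started

    - name: Restart chronyd so the new configuration is read
      ansible.builtin.systemd:
        name: chronyd
        state: restarted
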
Oct  3 05:15:08 np0005468397 systemd[1]: session-12.scope: Deactivated successfully.
Oct  3 05:15:08 np0005468397 systemd[1]: session-12.scope: Consumed 24.368s CPU time.
Oct  3 05:15:08 np0005468397 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Oct  3 05:15:08 np0005468397 systemd-logind[798]: Removed session 12.
Oct  3 05:15:14 np0005468397 systemd-logind[798]: New session 13 of user zuul.
Oct  3 05:15:14 np0005468397 systemd[1]: Started Session 13 of User zuul.
Oct  3 05:15:14 np0005468397 python3.9[54606]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:15 np0005468397 python3.9[54758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:16 np0005468397 python3.9[54881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482915.1418993-34-85118838456876/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
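
EDPM firewall rules are staged as YAML fragments under /var/lib/edpm-config/firewall and compiled later (see the edpm_nftables_from_files call further down). The directory creation and the Ceph fragment installed above reduce to:

    - name: Ensure the EDPM firewall fragment directory exists
      ansible.builtin.file:
        path: /var/lib/edpm-config/firewall
        state: directory
        owner: root
        group: root
        mode: "0750"

    - name: Stage the Ceph network firewall fragment
      ansible.builtin.template:
        src: firewall.yaml.j2
        dest: /var/lib/edpm-config/firewall/ceph-networks.yaml
        owner: root
        group: root
        mode: "0644"
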
Oct  3 05:15:17 np0005468397 systemd[1]: session-13.scope: Deactivated successfully.
Oct  3 05:15:17 np0005468397 systemd[1]: session-13.scope: Consumed 1.684s CPU time.
Oct  3 05:15:17 np0005468397 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Oct  3 05:15:17 np0005468397 systemd-logind[798]: Removed session 13.
Oct  3 05:15:22 np0005468397 systemd-logind[798]: New session 14 of user zuul.
Oct  3 05:15:22 np0005468397 systemd[1]: Started Session 14 of User zuul.
Oct  3 05:15:23 np0005468397 python3.9[55059]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:15:24 np0005468397 python3.9[55215]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:25 np0005468397 python3.9[55390]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:26 np0005468397 python3.9[55513]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759482924.508195-41-229360858889535/.source.json _original_basename=.f2_oum7q follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:26 np0005468397 python3.9[55665]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:27 np0005468397 python3.9[55788]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482926.3177345-64-13372811572195/.source _original_basename=.e6kvfiik follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:28 np0005468397 python3.9[55940]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:15:28 np0005468397 python3.9[56092]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:29 np0005468397 python3.9[56215]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759482928.2157645-88-183562410707954/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:15:29 np0005468397 python3.9[56367]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:30 np0005468397 python3.9[56490]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759482929.4223723-88-74235933505009/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
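
The container lifecycle helpers are installed as root-only executables labeled container_file_t so podman-managed units can execute them. A sketch condensing the two logged copies into a loop (the loop grouping is editorial):

    - name: Install the EDPM container lifecycle helper scripts
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: "/var/local/libexec/{{ item }}"
        owner: root
        group: root
        mode: "0700"
        setype: container_file_t
      loop:
        - edpm-container-shutdown
        - edpm-start-podman-container
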
Oct  3 05:15:31 np0005468397 python3.9[56642]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:31 np0005468397 python3.9[56794]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:32 np0005468397 python3.9[56917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759482931.303647-125-250992208131035/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:33 np0005468397 python3.9[57069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:33 np0005468397 python3.9[57192]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759482932.6540656-140-65009829978056/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:34 np0005468397 python3.9[57344]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:15:34 np0005468397 systemd[1]: Reloading.
Oct  3 05:15:34 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:15:34 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:15:35 np0005468397 systemd[1]: Reloading.
Oct  3 05:15:35 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:15:35 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:15:35 np0005468397 systemd[1]: Starting EDPM Container Shutdown...
Oct  3 05:15:35 np0005468397 systemd[1]: Finished EDPM Container Shutdown.
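
After the unit file and its preset land, a single systemd module call activates the service; daemon_reload=True accounts for the "Reloading." passes logged above. Sketch:

    - name: Enable and start the EDPM container shutdown service
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        daemon_reload: true
        enabled: true
        state: started
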
Oct  3 05:15:36 np0005468397 python3.9[57570]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:36 np0005468397 python3.9[57693]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759482935.5622067-163-228729751255785/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:37 np0005468397 python3.9[57845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:37 np0005468397 python3.9[57968]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759482936.8693871-178-154216128355326/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:38 np0005468397 python3.9[58120]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:15:38 np0005468397 systemd[1]: Reloading.
Oct  3 05:15:38 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:15:38 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:15:38 np0005468397 systemd[1]: Reloading.
Oct  3 05:15:39 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:15:39 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:15:39 np0005468397 systemd[1]: Starting Create netns directory...
Oct  3 05:15:39 np0005468397 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  3 05:15:39 np0005468397 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  3 05:15:39 np0005468397 systemd[1]: Finished Create netns directory.
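
netns-placeholder follows the identical unit-plus-preset pattern. The preset body is not in the log; the sketch below assumes the standard one-line systemd.preset(5) form. (Aside: the preset directory task earlier logged mode=420, which is decimal for octal 0644, the familiar result of leaving a mode unquoted in YAML.)

    - name: Install the netns-placeholder preset (content assumed, not logged)
      ansible.builtin.copy:
        dest: /etc/systemd/system-preset/91-netns-placeholder.preset
        owner: root
        group: root
        mode: "0644"
        content: |
          enable netns-placeholder.service
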
Oct  3 05:15:40 np0005468397 python3.9[58347]: ansible-ansible.builtin.service_facts Invoked
Oct  3 05:15:40 np0005468397 network[58364]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 05:15:40 np0005468397 network[58365]: 'network-scripts' will be removed from distribution in near future.
Oct  3 05:15:40 np0005468397 network[58366]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 05:15:44 np0005468397 python3.9[58630]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:15:44 np0005468397 systemd[1]: Reloading.
Oct  3 05:15:44 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:15:44 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:15:44 np0005468397 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  3 05:15:45 np0005468397 iptables.init[58670]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  3 05:15:45 np0005468397 iptables.init[58670]: iptables: Flushing firewall rules: [  OK  ]
Oct  3 05:15:45 np0005468397 systemd[1]: iptables.service: Deactivated successfully.
Oct  3 05:15:45 np0005468397 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  3 05:15:46 np0005468397 python3.9[58866]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:15:46 np0005468397 python3.9[59020]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:15:46 np0005468397 systemd[1]: Reloading.
Oct  3 05:15:47 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:15:47 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:15:47 np0005468397 systemd[1]: Starting Netfilter Tables...
Oct  3 05:15:47 np0005468397 systemd[1]: Finished Netfilter Tables.
Oct  3 05:15:48 np0005468397 python3.9[59212]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
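
The firewall backend switch above stops and disables the legacy iptables services, enables nftables, and starts from an empty ruleset. Condensed sketch (the loop is editorial; the log shows one call per service):

    - name: Stop and disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - iptables.service
        - ip6tables.service

    - name: Enable and start nftables
      ansible.builtin.systemd:
        name: nftables
        state: started
        enabled: true

    - name: Begin from an empty ruleset
      ansible.builtin.command: nft flush ruleset
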
Oct  3 05:15:49 np0005468397 python3.9[59365]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:15:49 np0005468397 python3.9[59490]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759482948.6289494-247-174334667558420/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:15:50 np0005468397 python3.9[59641]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
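
sshd_config is replaced with a validated copy: the validate argument runs sshd in test mode against the temporary file, so a broken render can never displace the live configuration, and the service is then reloaded rather than restarted so existing sessions stay up. Sketch from the logged parameters:

    - name: Install sshd_config only if it validates
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s

    - name: Reload sshd to pick up the new configuration
      ansible.builtin.systemd:
        name: sshd
        state: reloaded
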
Oct  3 05:16:16 np0005468397 systemd[1]: session-14.scope: Deactivated successfully.
Oct  3 05:16:16 np0005468397 systemd[1]: session-14.scope: Consumed 19.629s CPU time.
Oct  3 05:16:16 np0005468397 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Oct  3 05:16:16 np0005468397 systemd-logind[798]: Removed session 14.
Oct  3 05:16:28 np0005468397 systemd-logind[798]: New session 15 of user zuul.
Oct  3 05:16:28 np0005468397 systemd[1]: Started Session 15 of User zuul.
Oct  3 05:16:29 np0005468397 python3.9[59834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:16:31 np0005468397 python3.9[59990]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:32 np0005468397 python3.9[60165]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:32 np0005468397 python3.9[60243]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.utdv5qsv recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:33 np0005468397 python3.9[60395]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:33 np0005468397 python3.9[60473]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ah2ubta5 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:34 np0005468397 python3.9[60625]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:16:35 np0005468397 python3.9[60777]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:35 np0005468397 python3.9[60855]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:16:36 np0005468397 python3.9[61007]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:36 np0005468397 python3.9[61085]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:16:37 np0005468397 python3.9[61237]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:37 np0005468397 python3.9[61389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:38 np0005468397 python3.9[61467]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:38 np0005468397 python3.9[61619]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:39 np0005468397 python3.9[61697]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:40 np0005468397 python3.9[61849]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:16:40 np0005468397 systemd[1]: Reloading.
Oct  3 05:16:40 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:16:40 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:16:41 np0005468397 python3.9[62038]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:41 np0005468397 python3.9[62116]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:42 np0005468397 python3.9[62268]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:43 np0005468397 python3.9[62346]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:43 np0005468397 python3.9[62498]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:16:43 np0005468397 systemd[1]: Reloading.
Oct  3 05:16:44 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:16:44 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:16:44 np0005468397 systemd[1]: Starting Create netns directory...
Oct  3 05:16:44 np0005468397 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  3 05:16:44 np0005468397 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  3 05:16:44 np0005468397 systemd[1]: Finished Create netns directory.
Oct  3 05:16:44 np0005468397 python3.9[62689]: ansible-ansible.builtin.service_facts Invoked
Oct  3 05:16:45 np0005468397 network[62706]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 05:16:45 np0005468397 network[62707]: 'network-scripts' will be removed from distribution in near future.
Oct  3 05:16:45 np0005468397 network[62708]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 05:16:49 np0005468397 python3.9[62971]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:50 np0005468397 python3.9[63049]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:51 np0005468397 python3.9[63201]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:51 np0005468397 python3.9[63353]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:52 np0005468397 python3.9[63476]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483011.1885278-216-92374046264732/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:53 np0005468397 python3.9[63628]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  3 05:16:53 np0005468397 systemd[1]: Starting Time & Date Service...
Oct  3 05:16:53 np0005468397 systemd[1]: Started Time & Date Service.
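
The timezone step that woke systemd-timedated is a single module call:

    - name: Set the system timezone to UTC
      community.general.timezone:
        name: UTC
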
Oct  3 05:16:54 np0005468397 python3.9[63784]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:54 np0005468397 python3.9[63936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:55 np0005468397 python3.9[64059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483014.3529527-251-227741828606309/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:55 np0005468397 python3.9[64211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:56 np0005468397 python3.9[64334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483015.550304-266-81241269088741/.source.yaml _original_basename=.20qdzag_ follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:57 np0005468397 python3.9[64486]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:16:57 np0005468397 python3.9[64609]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483016.7866712-281-254300622885635/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:16:58 np0005468397 python3.9[64761]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:16:59 np0005468397 python3.9[64914]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:17:00 np0005468397 python3[65067]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  3 05:17:01 np0005468397 python3.9[65219]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:17:01 np0005468397 python3.9[65342]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483020.5370286-320-217192301911376/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:02 np0005468397 python3.9[65494]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:17:02 np0005468397 python3.9[65617]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483021.8050442-335-36944126049770/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:03 np0005468397 python3.9[65769]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:17:04 np0005468397 python3.9[65892]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483023.0992146-350-147934479292379/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:04 np0005468397 python3.9[66044]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:17:05 np0005468397 python3.9[66167]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483024.42432-365-242649913637003/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:06 np0005468397 python3.9[66319]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:17:06 np0005468397 python3.9[66442]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483025.6576684-380-184067038810704/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:07 np0005468397 python3.9[66594]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:08 np0005468397 python3.9[66746]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:17:08 np0005468397 python3.9[66905]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
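
The sequence above generates the jump, update-jump, flush, chain, and rule files, drops a .changed marker, syntax-checks the concatenated set with `nft -c -f -`, and persists the include order in /etc/sysconfig/nftables.conf so the nftables service restores the ruleset at boot. The check and persistence steps reduce to:

    - name: Verify the generated EDPM ruleset parses as a whole
      ansible.builtin.shell: >
        set -o pipefail;
        cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft
        /etc/nftables/edpm-jumps.nft | nft -c -f -

    - name: Persist the include chain for the nftables service
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        validate: nft -c -f %s
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"
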
Oct  3 05:17:09 np0005468397 python3.9[67058]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:10 np0005468397 python3.9[67210]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:11 np0005468397 python3.9[67362]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  3 05:17:11 np0005468397 python3.9[67515]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
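
The hugepage mounts are declared with ansible.posix.mount; state=mounted both mounts the filesystem immediately and records it in fstab (boot=True). The mount points were pre-created above as zuul:hugetlbfs with mode 0775. Condensed sketch:

    - name: Mount hugepage filesystems and persist them in fstab
      ansible.posix.mount:
        path: "{{ item.path }}"
        src: none
        fstype: hugetlbfs
        opts: "pagesize={{ item.size }}"
        state: mounted
        boot: true
      loop:
        - { path: /dev/hugepages1G, size: 1G }
        - { path: /dev/hugepages2M, size: 2M }
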
Oct  3 05:17:12 np0005468397 systemd[1]: session-15.scope: Deactivated successfully.
Oct  3 05:17:12 np0005468397 systemd[1]: session-15.scope: Consumed 32.533s CPU time.
Oct  3 05:17:12 np0005468397 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Oct  3 05:17:12 np0005468397 systemd-logind[798]: Removed session 15.
Oct  3 05:17:17 np0005468397 chronyd[54425]: Selected source 54.39.23.64 (pool.ntp.org)
Oct  3 05:17:18 np0005468397 systemd-logind[798]: New session 16 of user zuul.
Oct  3 05:17:18 np0005468397 systemd[1]: Started Session 16 of User zuul.
Oct  3 05:17:19 np0005468397 python3.9[67696]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  3 05:17:20 np0005468397 python3.9[67848]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:17:21 np0005468397 python3.9[68000]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:17:22 np0005468397 python3.9[68152]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDK4WTt4xfXLVwcRwaUjEcQuTzZaA6+xfZoKvSEmlHXlteBKLg49zRr9WMOm7VuGvF7VdZSGZC2rKIhWoMtv/Znw8t2sgD9s7fQHlhJarA5UzZOuA8UEmlEVuJiIuxqO0U/3vocfIPfFsINVOJJSQcsXmBmar2rJHMSLTcxSZ1gIJKbt4zWALA2xd4rm0RJPMmAbCVBx//Q3Tq/agJ2+esCcGprB3rJZ1KETzXEaZTnp1ea7xZsb4B+QM07L7PAvMed0ELxdUlDDtPWDl3nVmt8mTFmVUF8XkQMWDrXfT8L5r9vBDYFTXbmUT6hwYElNZuSJRsz2AKj8T1Ww4RjWM/3+nwJzUIFYQ1qDgTnfO/gQb2hkSPHxm+uYPCy8XJUvX0JTq4Dy9phAnvTBBiZRUBL7IJWCoAUrQqgQDzz/cmBGY/h/9WXab5t62pvyq5GjSyKQeCgl6C+LNizUU5DJqjNzstWIz1tzFVetJjz68d7g9MMwlniuw/XjIuijz9+zj8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC/sH/Biub/ue+Yt01F7tQoZjOq2HzQ6x0hqVBc5qpVY#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKW8xTeb68zZuBoaFZxe0+liZSD/t9iQ2YlLG27C9NpUXcRSwJq28L2aw0M8BztmsIWjN+83014f6s2TAnQ4raE=#012 create=True mode=0644 path=/tmp/ansible.pha9nsqv state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:17:23 np0005468397 python3.9[68304]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.pha9nsqv' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:17:23 np0005468397 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  3 05:17:23 np0005468397 python3.9[68460]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.pha9nsqv state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
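
The known_hosts distribution pattern in session 16: assemble every node's host keys into a scratch file, overwrite /etc/ssh/ssh_known_hosts from it in one shot, then delete the scratch file. A sketch; the registered variable and assembled_host_keys are hypothetical names (the log shows the rendered keys passed inline to blockinfile):

    - name: Create a scratch file for the assembled host keys
      ansible.builtin.tempfile:
        state: file
        prefix: ansible.
      register: known_hosts_tmp             # hypothetical name

    - name: Write all gathered host keys into the scratch file
      ansible.builtin.blockinfile:
        path: "{{ known_hosts_tmp.path }}"
        create: true
        mode: "0644"
        block: "{{ assembled_host_keys }}"  # hypothetical variable

    - name: Replace the system-wide known_hosts in one step
      ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Remove the scratch file
      ansible.builtin.file:
        path: "{{ known_hosts_tmp.path }}"
        state: absent
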
Oct  3 05:17:24 np0005468397 systemd[1]: session-16.scope: Deactivated successfully.
Oct  3 05:17:24 np0005468397 systemd[1]: session-16.scope: Consumed 3.501s CPU time.
Oct  3 05:17:24 np0005468397 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Oct  3 05:17:24 np0005468397 systemd-logind[798]: Removed session 16.
Oct  3 05:17:29 np0005468397 systemd-logind[798]: New session 17 of user zuul.
Oct  3 05:17:29 np0005468397 systemd[1]: Started Session 17 of User zuul.
Oct  3 05:17:30 np0005468397 python3.9[68638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:17:31 np0005468397 python3.9[68794]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  3 05:17:32 np0005468397 python3.9[68948]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:17:33 np0005468397 python3.9[69101]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:17:34 np0005468397 python3.9[69254]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:17:35 np0005468397 python3.9[69408]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:17:35 np0005468397 python3.9[69563]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
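
Session 17 applies the staged firewall: load the chain definitions, test for the .changed marker, pipe flushes + rules + jump updates into nft, and remove the marker so an unchanged rerun can skip the reload. The conditionals below are an assumption; the log records only the executed path:

    - name: Load the EDPM chain definitions
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Check whether the rule set changed since the last apply
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed          # hypothetical name

    - name: Apply the regenerated rules when the marker exists
      ansible.builtin.shell: >
        set -o pipefail;
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft
        /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists

    - name: Clear the change marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
      when: edpm_rules_changed.stat.exists
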
Oct  3 05:17:36 np0005468397 systemd[1]: session-17.scope: Deactivated successfully.
Oct  3 05:17:36 np0005468397 systemd[1]: session-17.scope: Consumed 4.541s CPU time.
Oct  3 05:17:36 np0005468397 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Oct  3 05:17:36 np0005468397 systemd-logind[798]: Removed session 17.
Oct  3 05:17:41 np0005468397 systemd-logind[798]: New session 18 of user zuul.
Oct  3 05:17:41 np0005468397 systemd[1]: Started Session 18 of User zuul.
Oct  3 05:17:42 np0005468397 python3.9[69741]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:17:43 np0005468397 python3.9[69897]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:17:44 np0005468397 python3.9[69981]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  3 05:17:46 np0005468397 python3.9[70132]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:17:47 np0005468397 python3.9[70283]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
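
The reboot check installs yum-utils (which ships needs-restarting) and runs `needs-restarting -r`, whose exit status is 0 when no reboot is needed and 1 when one is. A sketch; the register and failed_when handling are assumed:

    - name: Check whether updated packages require a reboot
      ansible.builtin.command: needs-restarting -r
      register: reboot_check                        # hypothetical name
      failed_when: reboot_check.rc not in [0, 1]    # rc 1 = reboot required
      changed_when: false
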
Oct  3 05:17:48 np0005468397 python3.9[70433]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:17:48 np0005468397 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 05:17:49 np0005468397 python3.9[70584]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:17:50 np0005468397 systemd[1]: session-18.scope: Deactivated successfully.
Oct  3 05:17:50 np0005468397 systemd[1]: session-18.scope: Consumed 5.917s CPU time.
Oct  3 05:17:50 np0005468397 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Oct  3 05:17:50 np0005468397 systemd-logind[798]: Removed session 18.
Oct  3 05:17:55 np0005468397 systemd-logind[798]: New session 19 of user zuul.
Oct  3 05:17:55 np0005468397 systemd[1]: Started Session 19 of User zuul.
Oct  3 05:17:56 np0005468397 python3.9[70762]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:17:58 np0005468397 python3.9[70918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:17:59 np0005468397 python3.9[71070]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:00 np0005468397 python3.9[71222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:00 np0005468397 python3.9[71345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483079.5093005-65-154897519485765/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d404cda1cfd269cdbafd6c3659cd99fc0f8ea50b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:01 np0005468397 python3.9[71497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:02 np0005468397 python3.9[71620]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483081.1076515-65-221417929006423/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5d509800cfa02434e1ea13173921eb544c4014eb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:03 np0005468397 python3.9[71772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:03 np0005468397 python3.9[71895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483082.448241-65-128269508956506/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=fc457d6cda71cd437dabf66a49d7bedd4f12caa6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
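Note: each service receives a tls.crt / ca.crt / tls.key trio under /var/lib/openstack/certs/<service>/default with mode 0600; the file contents are withheld from the log (content=NOT_LOGGING_PARAMETER) and only SHA-1 checksums are recorded. One way to confirm a deployed key matches its certificate (sketch):

    d=/var/lib/openstack/certs/telemetry/default
    openssl x509 -in "$d/tls.crt" -noout -pubkey | openssl sha256
    openssl pkey -in "$d/tls.key" -pubout        | openssl sha256   # digests must match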
Oct  3 05:18:04 np0005468397 python3.9[72047]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:05 np0005468397 python3.9[72199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:05 np0005468397 python3.9[72351]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:06 np0005468397 python3.9[72474]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483085.4191506-124-165997367077943/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0a7b54b34408a018b101c9089e8d5833448f7aac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:07 np0005468397 python3.9[72626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:07 np0005468397 python3.9[72749]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483086.5437257-124-234468637665146/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5d509800cfa02434e1ea13173921eb544c4014eb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:08 np0005468397 python3.9[72901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:08 np0005468397 python3.9[73025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483087.7637134-124-188789288162927/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6bcc3bb3ab6747a81531dbe85941cb6826808883 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:09 np0005468397 python3.9[73177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:10 np0005468397 python3.9[73329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:10 np0005468397 python3.9[73481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:11 np0005468397 python3.9[73604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483090.4685173-183-229149924513191/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a0302dcda93d9b61d8d87612905e54393e1f588d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:12 np0005468397 python3.9[73756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:13 np0005468397 python3.9[73879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483091.9100158-183-207984280110615/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d1986e54e6c7087f1316849f5b5c8fdeaa3146bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:13 np0005468397 python3.9[74031]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:14 np0005468397 python3.9[74154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483093.1418433-183-212825866467401/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9112400f5f93be8b7aabafe8d78f9b18ca6a950e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:14 np0005468397 python3.9[74306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:15 np0005468397 python3.9[74458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:16 np0005468397 python3.9[74610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:16 np0005468397 python3.9[74733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483095.777182-242-221381847071317/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6e5c74a4c5ab0d1ff684cac440b041c2c52cf240 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:17 np0005468397 python3.9[74885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:18 np0005468397 python3.9[75008]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483096.9811566-242-100156872680667/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=05088e0c1f92b3abd58f217858280bd4aaa04b30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:18 np0005468397 python3.9[75160]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:19 np0005468397 python3.9[75283]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483098.2738106-242-245825823307977/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=882c0401c031523f09e60454f52f7d6806b67692 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
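Note: the same stat-then-copy pattern repeats for telemetry-power-monitoring, libvirt, and ovn. The ca.crt checksums show telemetry and telemetry-power-monitoring share an issuing CA (5d509800...), while libvirt (d1986e54...) and ovn (05088e0c...) each get their own. To inspect which names a deployed certificate covers (sketch, OpenSSL 1.1.1+ syntax):

    openssl x509 -in /var/lib/openstack/certs/ovn/default/tls.crt \
      -noout -subject -issuer -ext subjectAltName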
Oct  3 05:18:20 np0005468397 python3.9[75435]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:21 np0005468397 python3.9[75587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:22 np0005468397 python3.9[75710]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483100.9743273-310-138604990088803/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:22 np0005468397 python3.9[75862]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:23 np0005468397 python3.9[76014]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:24 np0005468397 python3.9[76137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483103.1258953-334-215603014962166/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:25 np0005468397 python3.9[76289]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:25 np0005468397 python3.9[76441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:26 np0005468397 python3.9[76564]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483105.227198-358-274574797018313/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:27 np0005468397 python3.9[76716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:27 np0005468397 python3.9[76868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:28 np0005468397 python3.9[76991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483107.3939056-382-103205294922482/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:29 np0005468397 python3.9[77143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:30 np0005468397 python3.9[77295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:30 np0005468397 python3.9[77418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483109.6572516-406-145667290358463/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:31 np0005468397 python3.9[77570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:32 np0005468397 python3.9[77722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:32 np0005468397 python3.9[77845]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483111.67534-430-193445148581717/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
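Note: six identical CA bundles (checksum 0840bbf9...) are installed as /var/lib/openstack/cacerts/<service>/tls-ca-bundle.pem, mode 0644, one copy per service so each container can bind-mount only its own. Counting and listing the certificates in a bundle (sketch):

    b=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem
    grep -c 'BEGIN CERTIFICATE' "$b"
    openssl crl2pkcs7 -nocrl -certfile "$b" | openssl pkcs7 -print_certs -noout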
Oct  3 05:18:33 np0005468397 systemd[1]: session-19.scope: Deactivated successfully.
Oct  3 05:18:33 np0005468397 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Oct  3 05:18:33 np0005468397 systemd[1]: session-19.scope: Consumed 29.346s CPU time.
Oct  3 05:18:33 np0005468397 systemd-logind[798]: Removed session 19.
Oct  3 05:18:38 np0005468397 systemd-logind[798]: New session 20 of user zuul.
Oct  3 05:18:38 np0005468397 systemd[1]: Started Session 20 of User zuul.
Oct  3 05:18:39 np0005468397 python3.9[78024]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:18:40 np0005468397 python3.9[78180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:41 np0005468397 python3.9[78332]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:18:42 np0005468397 python3.9[78482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:18:42 np0005468397 python3.9[78634]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  3 05:18:44 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=11 res=1
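Note: the seboolean task persistently enables virt_sandbox_use_netlink so sandboxed container domains may use netlink sockets; the dbus-broker avc op=load_policy line is consistent with the policy reload a persistent boolean change triggers. Manual equivalent (sketch):

    setsebool -P virt_sandbox_use_netlink on   # -P writes the persistent local policy
    getsebool virt_sandbox_use_netlink         # expect: --> on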
Oct  3 05:18:44 np0005468397 python3.9[78790]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:18:45 np0005468397 python3.9[78874]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:18:48 np0005468397 python3.9[79027]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 05:18:48 np0005468397 python3[79182]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
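Note: the snippet content above is newline-escaped by syslog (#012). Decoded, the file written to /var/lib/edpm-config/firewall/ovn.yaml opens UDP 4789 (VXLAN) and 6081 (Geneve) and adds raw-table NOTRACK rules so Geneve traffic bypasses conntrack:

    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 119 neutron geneve networks
      rule:
        proto: udp
        dport: 6081
        state: ["UNTRACKED"]
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: OUTPUT
        jump: NOTRACK
        action: append
        state: []
    - rule_name: 121 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: PREROUTING
        jump: NOTRACK
        action: append
        state: []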
Oct  3 05:18:49 np0005468397 python3.9[79334]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:50 np0005468397 python3.9[79486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:50 np0005468397 python3.9[79564]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:51 np0005468397 python3.9[79716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:52 np0005468397 python3.9[79794]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.oofqof68 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:52 np0005468397 python3.9[79946]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:53 np0005468397 python3.9[80024]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:54 np0005468397 python3.9[80176]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:18:54 np0005468397 python3[80329]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  3 05:18:55 np0005468397 python3.9[80481]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:56 np0005468397 python3.9[80606]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483134.955004-157-163252348186648/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:56 np0005468397 python3.9[80758]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:57 np0005468397 python3.9[80883]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483136.396805-172-262261520798523/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:58 np0005468397 python3.9[81035]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:18:58 np0005468397 python3.9[81160]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483137.720218-187-206527041906912/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:18:59 np0005468397 python3.9[81312]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:00 np0005468397 python3.9[81437]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483139.0471766-202-14482821074472/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:00 np0005468397 python3.9[81589]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:01 np0005468397 python3.9[81714]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483140.2134025-217-244537637828964/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:01 np0005468397 python3.9[81867]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:02 np0005468397 systemd[1]: packagekit.service: Deactivated successfully.
Oct  3 05:19:02 np0005468397 python3.9[82021]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:03 np0005468397 python3.9[82176]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
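Note: decoded from the #012 escapes, the blockinfile task maintains this managed block in /etc/sysconfig/nftables.conf (markers per marker_begin/marker_end), validated with nft -c -f before the file is replaced:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK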
Oct  3 05:19:04 np0005468397 python3.9[82328]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:05 np0005468397 python3.9[82481]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:19:05 np0005468397 python3.9[82635]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:06 np0005468397 python3.9[82790]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
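Note: the apply sequence above is: load the chain definitions, and because the edpm-rules.nft.changed marker existed, flush and reload the rule set and jump chains, then remove the marker. Condensed to shell, the lifecycle is roughly:

    nft -f /etc/nftables/edpm-chains.nft      # ensure chains exist (idempotent)
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi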
Oct  3 05:19:07 np0005468397 python3.9[82940]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:19:08 np0005468397 python3.9[83093]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:08 np0005468397 ovs-vsctl[83094]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
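Note: these external_ids keys are how the (not yet started) ovn-controller discovers its configuration: ovn-remote points at the southbound database over SSL, ovn-encap-ip/-type pick the Geneve tunnel endpoint, and ovn-bridge-mappings ties the datacentre physical network to br-ex. Reading a value back (sketch):

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    # "ssl:ovsdbserver-sb.openstack.svc:6642"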
Oct  3 05:19:09 np0005468397 python3.9[83246]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:09 np0005468397 python3.9[83401]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:09 np0005468397 ovs-vsctl[83402]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
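Note: the ******** in the Invoked line above is Ansible log masking; the ovs-vsctl INFO line shows the real command, which creates a Manager record so the local OVSDB listens on passive TCP 127.0.0.1:6640 (the preceding ovs-vsctl show | grep -q "Manager" probe evidently found none). Verification sketch:

    ovs-vsctl get-manager        # expect: ptcp:6640:127.0.0.1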
Oct  3 05:19:10 np0005468397 python3.9[83552]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:19:11 np0005468397 python3.9[83706]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:19:12 np0005468397 python3.9[83858]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:12 np0005468397 python3.9[83936]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:19:13 np0005468397 python3.9[84088]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:13 np0005468397 python3.9[84166]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:19:14 np0005468397 python3.9[84318]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:15 np0005468397 python3.9[84470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:15 np0005468397 python3.9[84548]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:16 np0005468397 python3.9[84700]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:16 np0005468397 python3.9[84778]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
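Note: edpm-container-shutdown is installed as a unit file plus a 91-*.preset under /etc/systemd/system-preset; preset files declare whether a unit defaults to enabled when systemctl preset runs. The preset body is not shown in the log; a typical one-line content is assumed below:

    # assumed content of 91-edpm-container-shutdown.preset
    enable edpm-container-shutdown.service
    systemctl preset edpm-container-shutdown.service   # apply the preset policy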
Oct  3 05:19:17 np0005468397 python3.9[84930]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:19:17 np0005468397 systemd[1]: Reloading.
Oct  3 05:19:17 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:19:17 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:19:18 np0005468397 python3.9[85120]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:19 np0005468397 python3.9[85198]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:19 np0005468397 python3.9[85350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:20 np0005468397 python3.9[85428]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:21 np0005468397 python3.9[85580]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:19:21 np0005468397 systemd[1]: Reloading.
Oct  3 05:19:21 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:19:21 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:19:21 np0005468397 systemd[1]: Starting Create netns directory...
Oct  3 05:19:21 np0005468397 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  3 05:19:21 np0005468397 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  3 05:19:21 np0005468397 systemd[1]: Finished Create netns directory.
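Note: netns-placeholder appears to be a oneshot unit ("Create netns directory") that prepares /run/netns before containers start; the run-netns-placeholder.mount deactivation looks like its transient mount being cleaned up, though the unit body is not shown in the log. A hedged check:

    systemctl is-enabled netns-placeholder.service
    findmnt /run/netns   # may show a bind mount if the placeholder left one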
Oct  3 05:19:22 np0005468397 python3.9[85773]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:19:22 np0005468397 python3.9[85925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:23 np0005468397 python3.9[86048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483162.5011075-468-201465418358357/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:19:24 np0005468397 python3.9[86200]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:19:24 np0005468397 python3.9[86352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:19:25 np0005468397 python3.9[86475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483164.4437776-493-270794408347065/.source.json _original_basename=.aohgznii follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:26 np0005468397 python3.9[86627]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:28 np0005468397 python3.9[87054]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  3 05:19:28 np0005468397 python3.9[87206]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 05:19:29 np0005468397 python3.9[87358]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  3 05:19:29 np0005468397 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  3 05:19:30 np0005468397 python3[87521]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 05:19:31 np0005468397 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  3 05:19:33 np0005468397 systemd[1]: var-lib-containers-storage-overlay-compat1610488415-lower\x2dmapped.mount: Deactivated successfully.
Oct  3 05:19:37 np0005468397 podman[87534]: 2025-10-03 09:19:37.017364337 +0000 UTC m=+5.947004716 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  3 05:19:37 np0005468397 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  3 05:19:37 np0005468397 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  3 05:19:37 np0005468397 podman[87654]: 2025-10-03 09:19:37.194197612 +0000 UTC m=+0.071669826 container create e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct  3 05:19:37 np0005468397 podman[87654]: 2025-10-03 09:19:37.141602907 +0000 UTC m=+0.019075161 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  3 05:19:37 np0005468397 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  3 05:19:37 np0005468397 python3[87521]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
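Note: edpm_container_manage expands the kolla-style config_data into the podman create logged verbatim above; the whole config_data is also stored as a container label so later runs can detect configuration drift. Confirming the labels and state after creation (sketch):

    podman inspect ovn_controller --format '{{ index .Config.Labels "config_id" }}'   # ovn_controller
    podman inspect ovn_controller --format '{{ .State.Status }}'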
Oct  3 05:19:37 np0005468397 python3.9[87845]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:19:38 np0005468397 python3.9[87999]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:39 np0005468397 python3.9[88075]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:19:40 np0005468397 python3.9[88226]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483179.3128877-581-181182083930543/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:19:40 np0005468397 python3.9[88302]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:19:40 np0005468397 systemd[1]: Reloading.
Oct  3 05:19:40 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:19:40 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:19:41 np0005468397 python3.9[88413]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:19:41 np0005468397 systemd[1]: Reloading.
Oct  3 05:19:41 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:19:41 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
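The two ansible-systemd calls above map to a daemon reload followed by an enable-and-restart of the freshly copied unit; the CLI equivalent (unit name taken from the copy task at 05:19:40) would be:

  systemctl daemon-reload
  systemctl enable edpm_ovn_controller.service
  systemctl restart edpm_ovn_controller.service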
Oct  3 05:19:41 np0005468397 systemd[1]: Starting ovn_controller container...
Oct  3 05:19:41 np0005468397 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  3 05:19:41 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:19:41 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2ff24ecae7d4639a1632c070c44752716f7d9ec7dfee30ae4cd75c65954cd3/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  3 05:19:41 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.
Oct  3 05:19:41 np0005468397 podman[88455]: 2025-10-03 09:19:41.996524783 +0000 UTC m=+0.126633293 container init e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + sudo -E kolla_set_configs
Oct  3 05:19:42 np0005468397 podman[88455]: 2025-10-03 09:19:42.021457235 +0000 UTC m=+0.151565725 container start e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 05:19:42 np0005468397 edpm-start-podman-container[88455]: ovn_controller
Oct  3 05:19:42 np0005468397 systemd[1]: Created slice User Slice of UID 0.
Oct  3 05:19:42 np0005468397 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  3 05:19:42 np0005468397 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  3 05:19:42 np0005468397 systemd[1]: Starting User Manager for UID 0...
Oct  3 05:19:42 np0005468397 edpm-start-podman-container[88454]: Creating additional drop-in dependency for "ovn_controller" (e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4)
Oct  3 05:19:42 np0005468397 podman[88478]: 2025-10-03 09:19:42.092132818 +0000 UTC m=+0.062484721 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 05:19:42 np0005468397 systemd[1]: e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4-47346b62f1c4778c.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:19:42 np0005468397 systemd[1]: e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4-47346b62f1c4778c.service: Failed with result 'exit-code'.
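The transient unit that exits 1 here is, by its container-ID prefix, the per-container healthcheck run started at 05:19:41; its failure matches the health_status=starting / health_failing_streak=1 event logged just above, fired before ovn-controller had finished starting. The health state can be queried directly (container name from the log):

  podman healthcheck run ovn_controller   # exit 0 = healthy, 1 = unhealthy
  podman inspect --format '{{.State.Health.Status}}' ovn_controller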
Oct  3 05:19:42 np0005468397 systemd[1]: Reloading.
Oct  3 05:19:42 np0005468397 systemd[88509]: Queued start job for default target Main User Target.
Oct  3 05:19:42 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:19:42 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:19:42 np0005468397 systemd[88509]: Created slice User Application Slice.
Oct  3 05:19:42 np0005468397 systemd[88509]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  3 05:19:42 np0005468397 systemd[88509]: Started Daily Cleanup of User's Temporary Directories.
Oct  3 05:19:42 np0005468397 systemd[88509]: Reached target Paths.
Oct  3 05:19:42 np0005468397 systemd[88509]: Reached target Timers.
Oct  3 05:19:42 np0005468397 systemd[88509]: Starting D-Bus User Message Bus Socket...
Oct  3 05:19:42 np0005468397 systemd[88509]: Starting Create User's Volatile Files and Directories...
Oct  3 05:19:42 np0005468397 systemd[88509]: Finished Create User's Volatile Files and Directories.
Oct  3 05:19:42 np0005468397 systemd[88509]: Listening on D-Bus User Message Bus Socket.
Oct  3 05:19:42 np0005468397 systemd[88509]: Reached target Sockets.
Oct  3 05:19:42 np0005468397 systemd[88509]: Reached target Basic System.
Oct  3 05:19:42 np0005468397 systemd[88509]: Reached target Main User Target.
Oct  3 05:19:42 np0005468397 systemd[88509]: Startup finished in 107ms.
Oct  3 05:19:42 np0005468397 systemd[1]: Started User Manager for UID 0.
Oct  3 05:19:42 np0005468397 systemd[1]: Started ovn_controller container.
Oct  3 05:19:42 np0005468397 systemd[1]: Started Session c1 of User root.
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: INFO:__main__:Validating config file
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: INFO:__main__:Writing out command to execute
Oct  3 05:19:42 np0005468397 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: ++ cat /run_command
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + ARGS=
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + sudo kolla_copy_cacerts
Oct  3 05:19:42 np0005468397 systemd[1]: Started Session c2 of User root.
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + [[ ! -n '' ]]
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + . kolla_extend_start
Oct  3 05:19:42 np0005468397 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + umask 0022
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
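The 05:19:42 trace lines show the standard Kolla entrypoint: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json (strategy COPY_ALWAYS), the rendered command is written to /run_command, and the shell then execs it. Condensed, the entrypoint does the equivalent of:

  sudo -E kolla_set_configs   # copy config files per config.json
  CMD=$(cat /run_command)     # here: /usr/bin/ovn-controller --pidfile ...
  umask 0022
  exec $CMD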
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  3 05:19:42 np0005468397 NetworkManager[45015]: <info>  [1759483182.4591] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  3 05:19:42 np0005468397 NetworkManager[45015]: <info>  [1759483182.4598] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 05:19:42 np0005468397 NetworkManager[45015]: <info>  [1759483182.4607] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  3 05:19:42 np0005468397 NetworkManager[45015]: <info>  [1759483182.4611] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  3 05:19:42 np0005468397 NetworkManager[45015]: <info>  [1759483182.4613] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
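ovn-controller is now connected to both of its databases: the local ovsdb-server over unix:/run/openvswitch/db.sock and the OVN southbound DB over ssl:ovsdbserver-sb.openstack.svc:6642, authenticated with the key/cert/CA passed as -p/-c/-C above. The SB target is conventionally read from the Open_vSwitch table, so presumably:

  ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
  # expected, given the log: "ssl:ovsdbserver-sb.openstack.svc:6642"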
Oct  3 05:19:42 np0005468397 kernel: br-int: entered promiscuous mode
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00022|main|INFO|OVS OpenFlow connection reconnected, force recompute.

Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  3 05:19:42 np0005468397 ovn_controller[88471]: 2025-10-03T09:19:42Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  3 05:19:42 np0005468397 NetworkManager[45015]: <info>  [1759483182.4762] manager: (ovn-e5f102-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  3 05:19:42 np0005468397 kernel: genev_sys_6081: entered promiscuous mode
Oct  3 05:19:42 np0005468397 NetworkManager[45015]: <info>  [1759483182.4921] device (genev_sys_6081): carrier: link connected
Oct  3 05:19:42 np0005468397 NetworkManager[45015]: <info>  [1759483182.4923] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct  3 05:19:42 np0005468397 systemd-udevd[88624]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 05:19:42 np0005468397 systemd-udevd[88627]: Network interface NamePolicy= disabled on kernel command line.
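On first connection ovn-controller brings up its integration bridge and tunnel endpoint: the kernel and NetworkManager lines above show br-int entering promiscuous mode, the tunnel port ovn-e5f102-0, and the shared genev_sys_6081 device. A quick way to confirm the wiring (port name taken from the NetworkManager line; the geneve interface type is an assumption based on standard OVN tunnel ports):

  ovs-vsctl list-ports br-int                 # should include ovn-e5f102-0
  ovs-vsctl get Interface ovn-e5f102-0 type   # geneve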
Oct  3 05:19:42 np0005468397 python3.9[88736]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:42 np0005468397 ovs-vsctl[88737]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  3 05:19:43 np0005468397 python3.9[88889]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:43 np0005468397 ovs-vsctl[88891]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  3 05:19:44 np0005468397 python3.9[89044]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:19:44 np0005468397 ovs-vsctl[89045]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
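The ERR at 05:19:43 is expected: the playbook reads external_ids:ovn-cms-options before the key has ever been set, and ovs-vsctl get fails on a missing key unless told otherwise, while the remove of an absent key at 05:19:44 is already a silent no-op. An error-free probe would be:

  ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options
  ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options   # no-op if the key is absent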
Oct  3 05:19:44 np0005468397 systemd-logind[798]: Session 20 logged out. Waiting for processes to exit.
Oct  3 05:19:44 np0005468397 systemd[1]: session-20.scope: Deactivated successfully.
Oct  3 05:19:44 np0005468397 systemd[1]: session-20.scope: Consumed 57.554s CPU time.
Oct  3 05:19:44 np0005468397 systemd-logind[798]: Removed session 20.
Oct  3 05:19:49 np0005468397 systemd-logind[798]: New session 22 of user zuul.
Oct  3 05:19:50 np0005468397 systemd[1]: Started Session 22 of User zuul.
Oct  3 05:19:51 np0005468397 python3.9[89223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:19:52 np0005468397 python3.9[89379]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
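The podman ps filter above is logged with Ansible's shell escaping on the braces; unescaped, the check for an existing nova_virtlogd container reads as follows (name filters are regexes, hence the ^…$ anchors for an exact match):

  podman ps -a --filter 'name=^nova_virtlogd$' --format '{{.Names}}'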
Oct  3 05:19:52 np0005468397 systemd[1]: Stopping User Manager for UID 0...
Oct  3 05:19:52 np0005468397 systemd[88509]: Activating special unit Exit the Session...
Oct  3 05:19:52 np0005468397 systemd[88509]: Stopped target Main User Target.
Oct  3 05:19:52 np0005468397 systemd[88509]: Stopped target Basic System.
Oct  3 05:19:52 np0005468397 systemd[88509]: Stopped target Paths.
Oct  3 05:19:52 np0005468397 systemd[88509]: Stopped target Sockets.
Oct  3 05:19:52 np0005468397 systemd[88509]: Stopped target Timers.
Oct  3 05:19:52 np0005468397 systemd[88509]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  3 05:19:52 np0005468397 systemd[88509]: Closed D-Bus User Message Bus Socket.
Oct  3 05:19:52 np0005468397 systemd[88509]: Stopped Create User's Volatile Files and Directories.
Oct  3 05:19:52 np0005468397 systemd[88509]: Removed slice User Application Slice.
Oct  3 05:19:52 np0005468397 systemd[88509]: Reached target Shutdown.
Oct  3 05:19:52 np0005468397 systemd[88509]: Finished Exit the Session.
Oct  3 05:19:52 np0005468397 systemd[88509]: Reached target Exit the Session.
Oct  3 05:19:52 np0005468397 systemd[1]: user@0.service: Deactivated successfully.
Oct  3 05:19:52 np0005468397 systemd[1]: Stopped User Manager for UID 0.
Oct  3 05:19:52 np0005468397 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  3 05:19:52 np0005468397 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  3 05:19:52 np0005468397 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  3 05:19:52 np0005468397 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  3 05:19:52 np0005468397 systemd[1]: Removed slice User Slice of UID 0.
Oct  3 05:19:53 np0005468397 python3.9[89545]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:19:53 np0005468397 systemd[1]: Reloading.
Oct  3 05:19:54 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:19:54 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:19:55 np0005468397 python3.9[89730]: ansible-ansible.builtin.service_facts Invoked
Oct  3 05:19:55 np0005468397 network[89747]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 05:19:55 np0005468397 network[89748]: 'network-scripts' will be removed from distribution in near future.
Oct  3 05:19:55 np0005468397 network[89749]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 05:19:59 np0005468397 python3.9[90013]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:20:00 np0005468397 python3.9[90166]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:20:01 np0005468397 python3.9[90320]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:20:02 np0005468397 python3.9[90473]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:20:02 np0005468397 python3.9[90626]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:20:03 np0005468397 python3.9[90779]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:20:04 np0005468397 python3.9[90932]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
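The seven systemd_service calls from 05:19:59 onward stop and disable the old TripleO nova/libvirt units one by one; a compact CLI equivalent would be:

  systemctl disable --now tripleo_nova_libvirt.target
  for unit in tripleo_nova_virtlogd_wrapper tripleo_nova_virtnodedevd \
              tripleo_nova_virtproxyd tripleo_nova_virtqemud \
              tripleo_nova_virtsecretd tripleo_nova_virtstoraged; do
      systemctl disable --now "$unit.service"
  done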
Oct  3 05:20:05 np0005468397 python3.9[91085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:05 np0005468397 python3.9[91237]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:06 np0005468397 python3.9[91389]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:07 np0005468397 python3.9[91541]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:08 np0005468397 python3.9[91693]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:08 np0005468397 python3.9[91845]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:09 np0005468397 python3.9[91997]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:10 np0005468397 python3.9[92150]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:11 np0005468397 python3.9[92302]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:11 np0005468397 python3.9[92454]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:12 np0005468397 python3.9[92606]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:12 np0005468397 ovn_controller[88471]: 2025-10-03T09:20:12Z|00025|memory|INFO|16512 kB peak resident set size after 30.4 seconds
Oct  3 05:20:12 np0005468397 ovn_controller[88471]: 2025-10-03T09:20:12Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct  3 05:20:12 np0005468397 podman[92730]: 2025-10-03 09:20:12.836650699 +0000 UTC m=+0.097890806 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 05:20:13 np0005468397 python3.9[92777]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:13 np0005468397 python3.9[92936]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:14 np0005468397 python3.9[93088]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:20:14 np0005468397 python3.9[93240]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
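The #012 sequences in the command above are journald's octal escaping of embedded newlines; unescaped, the shell snippet that retires certmonger reads:

  if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
  fi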
Oct  3 05:20:15 np0005468397 python3.9[93392]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 05:20:16 np0005468397 python3.9[93544]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:20:16 np0005468397 systemd[1]: Reloading.
Oct  3 05:20:16 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:20:16 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:20:17 np0005468397 python3.9[93730]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:20:18 np0005468397 python3.9[93883]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:20:18 np0005468397 python3.9[94036]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:20:19 np0005468397 python3.9[94189]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:20:20 np0005468397 python3.9[94342]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:20:21 np0005468397 python3.9[94495]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:20:21 np0005468397 python3.9[94648]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
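After the unit files are removed from /usr/lib/systemd/system and /etc/systemd/system and the daemon is reloaded, the per-unit reset-failed calls clear any lingering failed state from the unit database; condensed (errors for units that are no longer loaded would need to be tolerated, hence the || true):

  systemctl daemon-reload
  for unit in tripleo_nova_libvirt.target tripleo_nova_virtlogd_wrapper.service \
              tripleo_nova_virtnodedevd.service tripleo_nova_virtproxyd.service \
              tripleo_nova_virtqemud.service tripleo_nova_virtsecretd.service \
              tripleo_nova_virtstoraged.service; do
      systemctl reset-failed "$unit" || true
  done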
Oct  3 05:20:22 np0005468397 python3.9[94801]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  3 05:20:23 np0005468397 python3.9[94954]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  3 05:20:24 np0005468397 python3.9[95112]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  3 05:20:25 np0005468397 python3.9[95272]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:20:26 np0005468397 python3.9[95356]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
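Note the stray trailing spaces inside several of the logged package names ('libvirt ', 'libvirt-admin ', …), carried over verbatim from the playbook's list. Trimmed, the equivalent dnf transaction is:

  dnf -y install libvirt libvirt-admin libvirt-client libvirt-daemon \
      qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
      edk2-ovmf ceph-common cyrus-sasl-scram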
Oct  3 05:20:43 np0005468397 podman[95541]: 2025-10-03 09:20:43.827262225 +0000 UTC m=+0.092630284 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  3 05:21:05 np0005468397 kernel: SELinux:  Converting 2754 SID table entries...
Oct  3 05:21:05 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 05:21:05 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 05:21:05 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 05:21:05 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 05:21:05 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 05:21:05 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 05:21:05 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 05:21:14 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  3 05:21:14 np0005468397 podman[95579]: 2025-10-03 09:21:14.865884896 +0000 UTC m=+0.105709743 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 05:21:16 np0005468397 kernel: SELinux:  Converting 2754 SID table entries...
Oct  3 05:21:16 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 05:21:16 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 05:21:16 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 05:21:16 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 05:21:16 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 05:21:16 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 05:21:16 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 05:21:45 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  3 05:21:45 np0005468397 podman[108033]: 2025-10-03 09:21:45.816977898 +0000 UTC m=+0.080928578 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  3 05:22:04 np0005468397 kernel: SELinux:  Converting 2755 SID table entries...
Oct  3 05:22:04 np0005468397 kernel: SELinux:  policy capability network_peer_controls=1
Oct  3 05:22:04 np0005468397 kernel: SELinux:  policy capability open_perms=1
Oct  3 05:22:04 np0005468397 kernel: SELinux:  policy capability extended_socket_class=1
Oct  3 05:22:04 np0005468397 kernel: SELinux:  policy capability always_check_network=0
Oct  3 05:22:04 np0005468397 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  3 05:22:04 np0005468397 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  3 05:22:04 np0005468397 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  3 05:22:05 np0005468397 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct  3 05:22:05 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  3 05:22:05 np0005468397 dbus-broker-launch[777]: Noticed file-system modification, trigger reload.
Oct  3 05:22:11 np0005468397 systemd[1]: Stopping OpenSSH server daemon...
Oct  3 05:22:11 np0005468397 systemd[1]: sshd.service: Deactivated successfully.
Oct  3 05:22:11 np0005468397 systemd[1]: Stopped OpenSSH server daemon.
Oct  3 05:22:11 np0005468397 systemd[1]: sshd.service: Consumed 1.122s CPU time, no IO.
Oct  3 05:22:11 np0005468397 systemd[1]: Stopped target sshd-keygen.target.
Oct  3 05:22:11 np0005468397 systemd[1]: Stopping sshd-keygen.target...
Oct  3 05:22:11 np0005468397 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  3 05:22:11 np0005468397 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  3 05:22:11 np0005468397 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  3 05:22:11 np0005468397 systemd[1]: Reached target sshd-keygen.target.
Oct  3 05:22:11 np0005468397 systemd[1]: Starting OpenSSH server daemon...
Oct  3 05:22:11 np0005468397 systemd[1]: Started OpenSSH server daemon.
Oct  3 05:22:13 np0005468397 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 05:22:13 np0005468397 systemd[1]: Starting man-db-cache-update.service...
Oct  3 05:22:13 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:13 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:13 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:14 np0005468397 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  3 05:22:16 np0005468397 systemd[1]: Starting PackageKit Daemon...
Oct  3 05:22:16 np0005468397 systemd[1]: Started PackageKit Daemon.
Oct  3 05:22:16 np0005468397 podman[115360]: 2025-10-03 09:22:16.178156736 +0000 UTC m=+0.113694912 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  3 05:22:17 np0005468397 python3.9[117202]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 05:22:17 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:17 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:17 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:18 np0005468397 python3.9[118559]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 05:22:18 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:18 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:18 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:19 np0005468397 python3.9[119741]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 05:22:19 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:19 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:19 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:20 np0005468397 python3.9[120948]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 05:22:20 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:21 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:21 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
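Between 05:22:17 and 05:22:21 the monolithic libvirtd service and its TCP/TLS sockets are stopped, disabled, and masked in favour of the modular daemons; the equivalent commands would be:

  systemctl disable --now libvirtd.service libvirtd-tcp.socket \
      libvirtd-tls.socket virtproxyd-tcp.socket
  systemctl mask libvirtd.service libvirtd-tcp.socket \
      libvirtd-tls.socket virtproxyd-tcp.socket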
Oct  3 05:22:21 np0005468397 python3.9[122224]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:22 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:22 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:22 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:22 np0005468397 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 05:22:22 np0005468397 systemd[1]: Finished man-db-cache-update.service.
Oct  3 05:22:22 np0005468397 systemd[1]: man-db-cache-update.service: Consumed 10.704s CPU time.
Oct  3 05:22:22 np0005468397 systemd[1]: run-re3773c7116a64ca3b2c01ee9208342d2.service: Deactivated successfully.
Oct  3 05:22:23 np0005468397 python3.9[122762]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:23 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:23 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:23 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:24 np0005468397 python3.9[122951]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:24 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:24 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:24 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:25 np0005468397 python3.9[123140]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:26 np0005468397 python3.9[123295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:26 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:26 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:26 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:27 np0005468397 python3.9[123485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 05:22:27 np0005468397 systemd[1]: Reloading.
Oct  3 05:22:27 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:22:27 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:22:27 np0005468397 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  3 05:22:27 np0005468397 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  3 05:22:28 np0005468397 python3.9[123678]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:29 np0005468397 python3.9[123833]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:30 np0005468397 python3.9[123988]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:31 np0005468397 python3.9[124143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:32 np0005468397 python3.9[124298]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:32 np0005468397 python3.9[124453]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:34 np0005468397 python3.9[124608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:35 np0005468397 python3.9[124763]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:36 np0005468397 python3.9[124918]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:37 np0005468397 python3.9[125073]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:38 np0005468397 python3.9[125228]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:38 np0005468397 python3.9[125383]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:39 np0005468397 python3.9[125538]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 05:22:40 np0005468397 python3.9[125693]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
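Between 05:22:28 and 05:22:40 the play enables the main, read-only, and admin socket units for every modular daemon, one systemd invocation per unit. Collapsed into a single looped task for illustration (the distinct PIDs in the log suggest the real role runs them as separate tasks):

    - name: Enable modular libvirt socket units
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: true
        masked: false
      loop:
        - virtlogd.socket
        - virtlogd-admin.socket
        - virtnodedevd.socket
        - virtnodedevd-ro.socket
        - virtnodedevd-admin.socket
        - virtproxyd.socket
        - virtproxyd-ro.socket
        - virtproxyd-admin.socket
        - virtqemud.socket
        - virtqemud-ro.socket
        - virtqemud-admin.socket
        - virtsecretd.socket
        - virtsecretd-ro.socket
        - virtsecretd-admin.socket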
Oct  3 05:22:41 np0005468397 python3.9[125848]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:22:42 np0005468397 python3.9[126000]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:22:43 np0005468397 python3.9[126152]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:22:43 np0005468397 python3.9[126304]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:22:44 np0005468397 python3.9[126456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:22:45 np0005468397 python3.9[126608]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:22:46 np0005468397 python3.9[126760]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:22:46 np0005468397 podman[126857]: 2025-10-03 09:22:46.779070733 +0000 UTC m=+0.100321883 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 05:22:46 np0005468397 python3.9[126902]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759483365.4066613-554-199875697168995/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:47 np0005468397 python3.9[127063]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:22:48 np0005468397 python3.9[127188]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759483367.1754348-554-40909681588748/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:49 np0005468397 python3.9[127340]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:22:49 np0005468397 python3.9[127465]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759483368.6337595-554-261089279061779/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:50 np0005468397 python3.9[127617]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:22:50 np0005468397 python3.9[127742]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759483369.9384463-554-184137375712736/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:51 np0005468397 python3.9[127894]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:22:52 np0005468397 python3.9[128019]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759483371.1893976-554-164014473168428/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:53 np0005468397 python3.9[128171]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:22:53 np0005468397 python3.9[128296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759483372.5685205-554-12482818682139/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:54 np0005468397 python3.9[128448]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:22:54 np0005468397 python3.9[128571]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759483373.8352635-554-108530844767220/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:55 np0005468397 python3.9[128723]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:22:56 np0005468397 python3.9[128848]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759483375.0672734-554-131221655681022/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:56 np0005468397 python3.9[129000]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
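This command entry seeds /etc/libvirt/passwd.db with a SASL credential (realm openstack, user migration) for authenticated live migration; the password arrives on stdin, and the 12345678 value is the CI secret captured by the logger, not a recommendation. A sketch of the equivalent task; the variable name and no_log are assumptions:

    - name: Create SASL credential for libvirt live migration
      ansible.builtin.command:
        cmd: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
        stdin: "{{ libvirt_migration_password }}"  # assumed variable; the log shows the literal value
      no_log: true  # assumption: keep the secret out of task output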
Oct  3 05:22:57 np0005468397 python3.9[129153]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:58 np0005468397 python3.9[129305]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:59 np0005468397 python3.9[129457]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:22:59 np0005468397 python3.9[129609]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:00 np0005468397 python3.9[129761]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:01 np0005468397 python3.9[129913]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:01 np0005468397 python3.9[130065]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:02 np0005468397 python3.9[130217]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:03 np0005468397 python3.9[130369]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:04 np0005468397 python3.9[130521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:04 np0005468397 python3.9[130673]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:05 np0005468397 python3.9[130825]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:06 np0005468397 python3.9[130977]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:07 np0005468397 python3.9[131129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:08 np0005468397 python3.9[131281]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:08 np0005468397 python3.9[131404]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483387.5294943-775-134355421260179/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:09 np0005468397 python3.9[131556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:10 np0005468397 python3.9[131679]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483389.1019304-775-102752062208779/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:11 np0005468397 python3.9[131831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:12 np0005468397 python3.9[131954]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483390.7324145-775-189951901191825/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:12 np0005468397 python3.9[132106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:13 np0005468397 python3.9[132229]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483392.224929-775-204374761617060/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:14 np0005468397 python3.9[132381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:14 np0005468397 python3.9[132504]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483393.6045406-775-9219885811240/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:15 np0005468397 python3.9[132656]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:16 np0005468397 python3.9[132779]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483395.1065712-775-4212283759474/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:16 np0005468397 podman[132903]: 2025-10-03 09:23:16.972614009 +0000 UTC m=+0.125638065 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  3 05:23:17 np0005468397 python3.9[132949]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:17 np0005468397 python3.9[133080]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483396.5434744-775-86603714763427/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:18 np0005468397 python3.9[133232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:19 np0005468397 python3.9[133355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483398.050282-775-2664218907593/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:19 np0005468397 python3.9[133507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:20 np0005468397 python3.9[133630]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483399.3674145-775-191931546050944/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:21 np0005468397 python3.9[133782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:22 np0005468397 python3.9[133905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483400.8372803-775-93172685639357/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:22 np0005468397 python3.9[134057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:23 np0005468397 python3.9[134180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483402.2399538-775-194589237863459/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:24 np0005468397 python3.9[134332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:24 np0005468397 python3.9[134455]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483403.594551-775-151195722053810/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:25 np0005468397 python3.9[134607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:26 np0005468397 python3.9[134730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483405.1500235-775-134756049640412/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:27 np0005468397 python3.9[134882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:27 np0005468397 python3.9[135005]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483406.55224-775-245908317036338/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
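Every socket unit prepared above receives an override.conf rendered from the same libvirt-socket.unit.j2 template; the identical checksum (0bad41f409b4...) on all fourteen copies confirms one shared template, though its contents are redacted as NOT_LOGGING_PARAMETER. A sketch of the deployment step, with only the template name and destinations taken from the log:

    - name: Install systemd socket drop-in overrides
      ansible.builtin.template:
        src: libvirt-socket.unit.j2   # template name as logged; contents unknown
        dest: "/etc/systemd/system/{{ item }}.socket.d/override.conf"
        owner: root
        group: root
        mode: "0644"
      loop:
        - virtlogd
        - virtlogd-admin
        - virtnodedevd
        - virtnodedevd-ro
        - virtnodedevd-admin
        - virtproxyd
        - virtproxyd-ro
        - virtproxyd-admin
        - virtqemud
        - virtqemud-ro
        - virtqemud-admin
        - virtsecretd
        - virtsecretd-ro
        - virtsecretd-admin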
Oct  3 05:23:28 np0005468397 python3.9[135155]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:23:29 np0005468397 python3.9[135310]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  3 05:23:31 np0005468397 dbus-broker-launch[784]: avc:  op=load_policy lsm=selinux seqno=15 res=1
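The seboolean invocation persistently enables os_enable_vtpm (required for emulated TPM devices under SELinux), and the policy rebuild it triggers is what the dbus-broker avc: op=load_policy line records. The task, with parameters exactly as logged:

    - name: Allow vTPM operations under SELinux
      ansible.posix.seboolean:
        name: os_enable_vtpm
        state: true
        persistent: true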
Oct  3 05:23:31 np0005468397 python3.9[135466]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:31 np0005468397 python3.9[135618]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:32 np0005468397 python3.9[135770]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:33 np0005468397 python3.9[135922]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:33 np0005468397 python3.9[136074]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:34 np0005468397 python3.9[136226]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:35 np0005468397 python3.9[136378]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:36 np0005468397 python3.9[136530]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:36 np0005468397 python3.9[136682]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:37 np0005468397 python3.9[136834]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
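These ten copies fan a single issued key pair (tls.crt/tls.key) plus ca.crt out to every location libvirt and QEMU expect: /etc/pki/libvirt and its private/ subdirectory, /etc/pki/CA/cacert.pem, and /etc/pki/qemu. Note the logged modes: the server key is 0600, the client key lands as 0644, and the QEMU copies are 0640 group-readable by qemu. One representative task, parameters exactly as logged:

    - name: Install libvirt server certificate from the issued bundle
      ansible.builtin.copy:
        src: /var/lib/openstack/certs/libvirt/default/tls.crt
        dest: /etc/pki/libvirt/servercert.pem
        remote_src: true   # the source already sits on the managed node
        owner: root
        group: root
        mode: "0644"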
Oct  3 05:23:38 np0005468397 python3.9[136986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:23:38 np0005468397 systemd[1]: Reloading.
Oct  3 05:23:38 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:23:38 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:23:38 np0005468397 systemd[1]: Starting libvirt logging daemon socket...
Oct  3 05:23:38 np0005468397 systemd[1]: Listening on libvirt logging daemon socket.
Oct  3 05:23:38 np0005468397 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  3 05:23:38 np0005468397 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  3 05:23:38 np0005468397 systemd[1]: Starting libvirt logging daemon...
Oct  3 05:23:38 np0005468397 systemd[1]: Started libvirt logging daemon.
Oct  3 05:23:39 np0005468397 python3.9[137179]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:23:39 np0005468397 systemd[1]: Reloading.
Oct  3 05:23:39 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:23:39 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:23:39 np0005468397 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  3 05:23:39 np0005468397 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  3 05:23:39 np0005468397 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  3 05:23:39 np0005468397 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  3 05:23:40 np0005468397 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  3 05:23:40 np0005468397 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  3 05:23:40 np0005468397 systemd[1]: Starting libvirt nodedev daemon...
Oct  3 05:23:40 np0005468397 systemd[1]: Started libvirt nodedev daemon.
Oct  3 05:23:40 np0005468397 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  3 05:23:40 np0005468397 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  3 05:23:40 np0005468397 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  3 05:23:40 np0005468397 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  3 05:23:40 np0005468397 python3.9[137402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:23:40 np0005468397 systemd[1]: Reloading.
Oct  3 05:23:41 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:23:41 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:23:41 np0005468397 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  3 05:23:41 np0005468397 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  3 05:23:41 np0005468397 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  3 05:23:41 np0005468397 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  3 05:23:41 np0005468397 systemd[1]: Starting libvirt proxy daemon...
Oct  3 05:23:41 np0005468397 systemd[1]: Started libvirt proxy daemon.
Oct  3 05:23:41 np0005468397 setroubleshoot[137241]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f14adb83-bd70-43eb-bc2e-ebb2acc062b9
Oct  3 05:23:41 np0005468397 setroubleshoot[137241]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
    *****  Plugin dac_override (91.4 confidence) suggests   **********************
    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do
    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.
    *****  Plugin catchall (9.59 confidence) suggests   **************************
    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
Oct  3 05:23:42 np0005468397 python3.9[137613]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:23:42 np0005468397 systemd[1]: Reloading.
Oct  3 05:23:42 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:23:42 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:23:42 np0005468397 systemd[1]: Listening on libvirt locking daemon socket.
Oct  3 05:23:42 np0005468397 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  3 05:23:42 np0005468397 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  3 05:23:42 np0005468397 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  3 05:23:42 np0005468397 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  3 05:23:42 np0005468397 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  3 05:23:42 np0005468397 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  3 05:23:42 np0005468397 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  3 05:23:42 np0005468397 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  3 05:23:42 np0005468397 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  3 05:23:42 np0005468397 systemd[1]: Starting libvirt QEMU daemon...
Oct  3 05:23:42 np0005468397 systemd[1]: Started libvirt QEMU daemon.
Oct  3 05:23:43 np0005468397 python3.9[137826]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:23:43 np0005468397 systemd[1]: Reloading.
Oct  3 05:23:43 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:23:43 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:23:43 np0005468397 systemd[1]: Starting libvirt secret daemon socket...
Oct  3 05:23:44 np0005468397 systemd[1]: Listening on libvirt secret daemon socket.
Oct  3 05:23:44 np0005468397 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  3 05:23:44 np0005468397 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  3 05:23:44 np0005468397 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  3 05:23:44 np0005468397 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  3 05:23:44 np0005468397 systemd[1]: Starting libvirt secret daemon...
Oct  3 05:23:44 np0005468397 systemd[1]: Started libvirt secret daemon.
Oct  3 05:23:45 np0005468397 python3.9[138035]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:45 np0005468397 python3.9[138187]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 05:23:46 np0005468397 python3.9[138339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:47 np0005468397 podman[138434]: 2025-10-03 09:23:47.23089958 +0000 UTC m=+0.122113740 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  3 05:23:47 np0005468397 python3.9[138482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483426.1360998-1120-27841602504856/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:48 np0005468397 python3.9[138641]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:49 np0005468397 python3.9[138793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:49 np0005468397 python3.9[138871]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:50 np0005468397 python3.9[139023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:50 np0005468397 python3.9[139101]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4dbrjejx recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:51 np0005468397 python3.9[139253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:51 np0005468397 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  3 05:23:51 np0005468397 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  3 05:23:52 np0005468397 python3.9[139332]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:52 np0005468397 python3.9[139484]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:23:53 np0005468397 python3[139637]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  3 05:23:54 np0005468397 python3.9[139789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:55 np0005468397 python3.9[139867]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:55 np0005468397 python3.9[140019]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:56 np0005468397 python3.9[140097]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:57 np0005468397 python3.9[140249]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:57 np0005468397 python3.9[140327]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:58 np0005468397 python3.9[140479]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:23:58 np0005468397 python3.9[140557]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:23:59 np0005468397 python3.9[140709]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:00 np0005468397 python3.9[140834]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483438.9338803-1245-79081750991505/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:00 np0005468397 python3.9[140986]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:01 np0005468397 python3.9[141138]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:24:02 np0005468397 python3.9[141293]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
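The blockinfile payload in the entry above is newline-escaped by syslog (#012). Decoded, the managed block written to /etc/sysconfig/nftables.conf reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s option makes Ansible syntax-check the edited file before moving it into place.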
Oct  3 05:24:02 np0005468397 python3.9[141445]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:24:03 np0005468397 python3.9[141598]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:24:04 np0005468397 python3.9[141752]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:24:05 np0005468397 python3.9[141907]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
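Taken together, the tasks from 05:24:01 to 05:24:05 form a check-then-apply cycle for the EDPM ruleset. A minimal shell sketch of what the log records (file paths taken verbatim from the commands above):

    set -o pipefail
    # dry run: -c parses and validates the concatenated ruleset without loading it
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # load the chain definitions, then flush and reload the rules
    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    # the .changed marker created at 05:24:00 gates the reload; remove it once applied
    rm -f /etc/nftables/edpm-rules.nft.changed

Feeding a single stream to nft -f - means the flush and the new rules are committed as one transaction, so the firewall is never left half-populated.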
Oct  3 05:24:06 np0005468397 python3.9[142060]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:06 np0005468397 python3.9[142183]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483445.535333-1317-169461451433807/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:07 np0005468397 python3.9[142335]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:08 np0005468397 python3.9[142458]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483447.0072358-1332-200995414902942/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:08 np0005468397 python3.9[142610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:09 np0005468397 python3.9[142733]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483448.3485243-1347-165347490904923/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:10 np0005468397 python3.9[142885]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:24:10 np0005468397 systemd[1]: Reloading.
Oct  3 05:24:10 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:24:10 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:24:10 np0005468397 systemd[1]: Reached target edpm_libvirt.target.
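The systemd task at 05:24:10 (daemon_reload=True, enabled=True, state=restarted) is roughly equivalent to running:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target

which accounts for the Reloading. lines and the "Reached target" message directly above.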
Oct  3 05:24:11 np0005468397 python3.9[143077]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  3 05:24:11 np0005468397 systemd[1]: Reloading.
Oct  3 05:24:11 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:24:11 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:24:11 np0005468397 systemd[1]: Reloading.
Oct  3 05:24:11 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:24:11 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:24:12 np0005468397 systemd[1]: session-22.scope: Deactivated successfully.
Oct  3 05:24:12 np0005468397 systemd[1]: session-22.scope: Consumed 3min 26.348s CPU time.
Oct  3 05:24:12 np0005468397 systemd-logind[798]: Session 22 logged out. Waiting for processes to exit.
Oct  3 05:24:12 np0005468397 systemd-logind[798]: Removed session 22.
Oct  3 05:24:17 np0005468397 podman[143176]: 2025-10-03 09:24:17.918926492 +0000 UTC m=+0.167750196 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 05:24:18 np0005468397 systemd-logind[798]: New session 23 of user zuul.
Oct  3 05:24:18 np0005468397 systemd[1]: Started Session 23 of User zuul.
Oct  3 05:24:19 np0005468397 python3.9[143356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:24:20 np0005468397 python3.9[143512]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:24:20 np0005468397 systemd[1]: Reloading.
Oct  3 05:24:21 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:24:21 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:24:22 np0005468397 python3.9[143696]: ansible-ansible.builtin.service_facts Invoked
Oct  3 05:24:22 np0005468397 network[143713]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 05:24:22 np0005468397 network[143714]: 'network-scripts' will be removed from distribution in near future.
Oct  3 05:24:22 np0005468397 network[143715]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 05:24:26 np0005468397 python3.9[143988]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:24:27 np0005468397 python3.9[144141]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:28 np0005468397 python3.9[144293]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:29 np0005468397 python3.9[144445]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
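Decoded from the #012 newline escapes, the shell that this command task executed is:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi

That is, certmonger is stopped and disabled, and masked only when no local unit file already exists at the path that masking would occupy.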
Oct  3 05:24:30 np0005468397 python3.9[144597]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 05:24:30 np0005468397 python3.9[144749]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:24:30 np0005468397 systemd[1]: Reloading.
Oct  3 05:24:31 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:24:31 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:24:31 np0005468397 python3.9[144937]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
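The sequence from 05:24:26 to 05:24:31 is the usual pattern for retiring a systemd-managed service; as a hedged shell sketch of the same steps:

    systemctl disable --now tripleo_ceilometer_agent_compute.service
    rm -f /usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service
    rm -f /etc/systemd/system/tripleo_ceilometer_agent_compute.service
    systemctl daemon-reload
    # clear any lingering failed-unit state now that the unit file is gone
    systemctl reset-failed tripleo_ceilometer_agent_compute.service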
Oct  3 05:24:32 np0005468397 python3.9[145090]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:24:33 np0005468397 python3.9[145240]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:24:34 np0005468397 python3.9[145392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:34 np0005468397 python3.9[145513]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483473.7542033-133-165110457085147/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:24:35 np0005468397 python3.9[145665]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct  3 05:24:37 np0005468397 python3.9[145817]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct  3 05:24:37 np0005468397 python3.9[145970]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  3 05:24:38 np0005468397 python3.9[146128]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
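The getent/group/user trio above pins the ceilometer account to a fixed UID/GID (42405) and adds it to the libvirt group so the compute agent can poll the hypervisor. A hedged command-line equivalent of those tasks:

    groupadd -f libvirt                      # ensure the group exists
    groupadd -g 42405 ceilometer
    useradd -u 42405 -g ceilometer -G libvirt -s /sbin/nologin \
            -c 'ceilometer user' ceilometer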
Oct  3 05:24:40 np0005468397 python3.9[146286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:41 np0005468397 python3.9[146407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759483479.7768815-201-93036632301120/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:41 np0005468397 python3.9[146557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:42 np0005468397 python3.9[146678]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759483481.194514-201-36774772160246/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:42 np0005468397 python3.9[146828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:43 np0005468397 python3.9[146949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759483482.5069983-201-173391517470566/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:44 np0005468397 python3.9[147099]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:24:45 np0005468397 python3.9[147251]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:24:45 np0005468397 python3.9[147403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:46 np0005468397 python3.9[147524]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483485.33799-260-4977040550617/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
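Note mode=420 in this and the following copy tasks: an unquoted YAML mode is parsed as a decimal integer, and 420 decimal is 0644 octal, i.e. rw-r--r--. A quick check:

    printf '%o\n' 420    # prints 644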
Oct  3 05:24:47 np0005468397 python3.9[147674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:47 np0005468397 python3.9[147750]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:48 np0005468397 podman[147874]: 2025-10-03 09:24:48.264562917 +0000 UTC m=+0.111093207 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 05:24:48 np0005468397 python3.9[147917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:49 np0005468397 python3.9[148047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483487.8244143-260-196718337370329/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:49 np0005468397 python3.9[148197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:50 np0005468397 python3.9[148318]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483489.2578971-260-151277576583862/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:51 np0005468397 python3.9[148468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:51 np0005468397 python3.9[148589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483490.538432-260-197267467756164/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:52 np0005468397 python3.9[148739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:53 np0005468397 python3.9[148860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483491.9264688-260-224873777287644/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:53 np0005468397 python3.9[149010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:54 np0005468397 python3.9[149131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483493.269585-260-232253264817657/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:55 np0005468397 python3.9[149281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:55 np0005468397 python3.9[149402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483494.5425396-260-180016530432659/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:56 np0005468397 python3.9[149552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:57 np0005468397 python3.9[149673]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483495.8581088-260-35063788184951/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:57 np0005468397 python3.9[149823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:58 np0005468397 python3.9[149944]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483497.1980956-260-58830543350693/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:24:59 np0005468397 python3.9[150094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:24:59 np0005468397 python3.9[150215]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483498.5541162-260-79310856838901/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:00 np0005468397 python3.9[150365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:25:01 np0005468397 python3.9[150441]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:01 np0005468397 python3.9[150591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:25:02 np0005468397 python3.9[150667]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:03 np0005468397 python3.9[150817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:25:03 np0005468397 python3.9[150893]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:04 np0005468397 python3.9[151045]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:05 np0005468397 python3.9[151197]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:06 np0005468397 python3.9[151349]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:25:06 np0005468397 python3.9[151501]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:25:07 np0005468397 systemd[1]: Reloading.
Oct  3 05:25:07 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:25:07 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:25:07 np0005468397 systemd[1]: Listening on Podman API Socket.
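Enabling podman.socket exposes the Podman API on the root socket, which the podman_exporter configured above appears to rely on (the socket is enabled immediately after its config is laid down). The manual equivalent, with a smoke test assuming the default root socket path:

    systemctl enable --now podman.socket
    systemctl is-active podman.socket    # expect: active
    podman --remote info                 # talks to the API socket instead of forking podman directly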
Oct  3 05:25:08 np0005468397 python3.9[151693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:25:08 np0005468397 python3.9[151816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483507.7189088-482-61541585393435/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:25:09 np0005468397 python3.9[151892]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:25:09 np0005468397 python3.9[152015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483507.7189088-482-61541585393435/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:25:10 np0005468397 python3.9[152167]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Oct  3 05:25:11 np0005468397 python3.9[152319]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 05:25:12 np0005468397 python3[152471]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 05:25:18 np0005468397 podman[152522]: 2025-10-03 09:25:18.837023559 +0000 UTC m=+0.093282458 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  3 05:25:27 np0005468397 podman[152484]: 2025-10-03 09:25:27.273506764 +0000 UTC m=+14.286043125 image pull f679b9c320fc42d5695129dd54be81f43c4c4ec41e2859a2f48785c28a8d8cbc quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Oct  3 05:25:27 np0005468397 podman[152652]: 2025-10-03 09:25:27.483304133 +0000 UTC m=+0.079754315 container create d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 05:25:27 np0005468397 podman[152652]: 2025-10-03 09:25:27.444430318 +0000 UTC m=+0.040880570 image pull f679b9c320fc42d5695129dd54be81f43c4c4ec41e2859a2f48785c28a8d8cbc quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Oct  3 05:25:27 np0005468397 python3[152471]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Oct  3 05:25:28 np0005468397 python3.9[152842]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:25:29 np0005468397 python3.9[152996]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:29 np0005468397 python3.9[153147]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483529.2536058-546-44206113335781/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:30 np0005468397 python3.9[153223]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:25:30 np0005468397 systemd[1]: Reloading.
Oct  3 05:25:30 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:25:30 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:25:31 np0005468397 python3.9[153334]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:25:31 np0005468397 systemd[1]: Reloading.
Oct  3 05:25:31 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:25:31 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:25:32 np0005468397 systemd[1]: Starting ceilometer_agent_compute container...
Oct  3 05:25:32 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:25:32 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:32 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:32 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:32 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:32 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.
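Because the container was created with --healthcheck-command, starting it under systemd arms a transient timer/service pair (the "Started /usr/bin/podman healthcheck run ..." line above) that re-runs the check on its interval. The same probe can be fired by hand:

    podman healthcheck run ceilometer_agent_compute && echo healthy

podman healthcheck run exits 0 when the check passes; until the first success the status reads starting, as in the health_status entry below.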
Oct  3 05:25:32 np0005468397 podman[153375]: 2025-10-03 09:25:32.275959266 +0000 UTC m=+0.178857610 container init d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + sudo -E kolla_set_configs
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: sudo: unable to send audit message: Operation not permitted
Oct  3 05:25:32 np0005468397 podman[153375]: 2025-10-03 09:25:32.311327228 +0000 UTC m=+0.214225552 container start d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 05:25:32 np0005468397 podman[153375]: ceilometer_agent_compute
Oct  3 05:25:32 np0005468397 systemd[1]: Started ceilometer_agent_compute container.
Oct  3 05:25:32 np0005468397 podman[153398]: 2025-10-03 09:25:32.376107513 +0000 UTC m=+0.045031393 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 05:25:32 np0005468397 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-3b2aa11d5ab3d4f8.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:25:32 np0005468397 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-3b2aa11d5ab3d4f8.service: Failed with result 'exit-code'.
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Validating config file
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Copying service configuration files
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: INFO:__main__:Writing out command to execute
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: ++ cat /run_command
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + ARGS=
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + sudo kolla_copy_cacerts
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: sudo: unable to send audit message: Operation not permitted
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + [[ ! -n '' ]]
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + . kolla_extend_start
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + umask 0022
Oct  3 05:25:32 np0005468397 ceilometer_agent_compute[153390]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Oct  3 05:25:33 np0005468397 python3.9[153575]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:25:33 np0005468397 systemd[1]: Stopping ceilometer_agent_compute container...
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.261 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.261 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.261 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.261 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.261 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.262 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.263 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.264 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.265 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.266 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.267 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.268 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.269 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.270 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.271 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.272 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.273 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.274 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.291 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.294 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.295 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.295 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.295 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.295 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.295 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.295 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.295 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.296 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.297 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.298 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.299 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.300 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.301 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.302 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.303 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.304 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.306 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.306 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.306 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.306 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.308 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.310 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.311 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.409 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.409 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.410 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.517 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.529 15 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.529 15 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.529 15 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.642 15 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.642 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.642 15 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.643 15 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.644 15 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.645 15 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.646 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.647 15 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.648 15 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.649 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.650 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.651 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.652 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.653 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.654 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
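The option dump that ends at the row of asterisks above (and repeats after the restart below) is produced by oslo.config: cotyledon's oslo_config_glue asks the parsed ConfigOpts object to log every registered option at DEBUG level, and options registered with secret=True are rendered as "****". A minimal sketch of that mechanism; the option set here is illustrative, only log_opt_values() itself is the real API in play:

    # Minimal sketch of the "Full set of CONF" dump seen above; the options
    # registered here are a tiny illustrative subset, not ceilometer's full set.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.BoolOpt('enable_prometheus_exporter', default=False),
            cfg.ListOpt('prometheus_listen_addresses', default=['127.0.0.1:9101']),
        ],
        group='polling',
    )
    # secret=True is what makes a value print as **** in the dump
    CONF.register_opts([cfg.StrOpt('telemetry_secret', secret=True)],
                       group='publisher')

    CONF([], project='ceilometer')           # parse args / config files
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits the banner-delimited dump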
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.655 15 DEBUG cotyledon._service [-] Run service AgentManager(0) [15] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.655 15 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [15]
Oct  3 05:25:33 np0005468397 ceilometer_agent_compute[153390]: 2025-10-03 09:25:33.662 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Oct  3 05:25:33 np0005468397 virtqemud[137656]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  3 05:25:33 np0005468397 virtqemud[137656]: hostname: compute-0
Oct  3 05:25:33 np0005468397 virtqemud[137656]: End of file while reading data: Input/output error
Oct  3 05:25:33 np0005468397 systemd[1]: libpod-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Deactivated successfully.
Oct  3 05:25:33 np0005468397 systemd[1]: libpod-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Consumed 1.584s CPU time.
Oct  3 05:25:33 np0005468397 podman[153579]: 2025-10-03 09:25:33.871684568 +0000 UTC m=+0.616128472 container died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct  3 05:25:33 np0005468397 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-3b2aa11d5ab3d4f8.timer: Deactivated successfully.
Oct  3 05:25:33 np0005468397 systemd[1]: Stopped /usr/bin/podman healthcheck run d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.
Oct  3 05:25:33 np0005468397 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-userdata-shm.mount: Deactivated successfully.
Oct  3 05:25:33 np0005468397 systemd[1]: var-lib-containers-storage-overlay-6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727-merged.mount: Deactivated successfully.
Oct  3 05:25:37 np0005468397 podman[153579]: 2025-10-03 09:25:37.469671071 +0000 UTC m=+4.214114995 container cleanup d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
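The "container died" and "container cleanup" events above embed the whole edpm container definition in their config_data attribute, printed as a Python dict literal. When auditing these logs, the mounts, healthcheck, and command can be recovered mechanically rather than read by eye; a sketch, where line stands for one of the podman event lines above and the brace matching assumes config_data really is a dict literal as in this log:

    # Sketch: recover the container definition embedded in a podman event line.
    import ast

    def config_data(line: str) -> dict:
        start = line.index("config_data=") + len("config_data=")
        depth, end = 0, start
        for i, ch in enumerate(line[start:], start):  # match the dict's braces
            depth += ch == "{"
            depth -= ch == "}"
            if depth == 0:
                end = i + 1
                break
        return ast.literal_eval(line[start:end])

    # cfg = config_data(line)
    # cfg["healthcheck"]["test"]  -> '/openstack/healthcheck compute'
    # cfg["volumes"][2]           -> '/run/libvirt:/run/libvirt:shared,ro'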
Oct  3 05:25:37 np0005468397 podman[153579]: ceilometer_agent_compute
Oct  3 05:25:37 np0005468397 podman[153621]: ceilometer_agent_compute
Oct  3 05:25:37 np0005468397 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Oct  3 05:25:37 np0005468397 systemd[1]: Stopped ceilometer_agent_compute container.
Oct  3 05:25:37 np0005468397 systemd[1]: Starting ceilometer_agent_compute container...
Oct  3 05:25:37 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:25:37 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:37 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:37 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:37 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:37 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.
Oct  3 05:25:37 np0005468397 podman[153634]: 2025-10-03 09:25:37.724197342 +0000 UTC m=+0.137114952 container init d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + sudo -E kolla_set_configs
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: sudo: unable to send audit message: Operation not permitted
Oct  3 05:25:37 np0005468397 podman[153634]: 2025-10-03 09:25:37.750157984 +0000 UTC m=+0.163075544 container start d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 05:25:37 np0005468397 podman[153634]: ceilometer_agent_compute
Oct  3 05:25:37 np0005468397 systemd[1]: Started ceilometer_agent_compute container.
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Validating config file
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Copying service configuration files
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: INFO:__main__:Writing out command to execute
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: ++ cat /run_command
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + ARGS=
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + sudo kolla_copy_cacerts
Oct  3 05:25:37 np0005468397 podman[153658]: 2025-10-03 09:25:37.826461767 +0000 UTC m=+0.059797466 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: sudo: unable to send audit message: Operation not permitted
Oct  3 05:25:37 np0005468397 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-50705040be7aa45f.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:25:37 np0005468397 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-50705040be7aa45f.service: Failed with result 'exit-code'.
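The failed d1f8…-50705040be7aa45f.service is the transient systemd unit wrapping "podman healthcheck run" for this container; it exits non-zero here because the probe fired while the agent was still initializing (the health_status event above reports health_status=starting, health_failing_streak=1). The same probe can be replayed by hand; a sketch using the exact command systemd runs per the log:

    # Sketch: replay the container healthcheck that the transient unit runs.
    import subprocess

    res = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    # rc 0 = healthy; non-zero while the service is still starting up,
    # matching the status=1/FAILURE recorded here.
    print(res.returncode, res.stdout.strip(), res.stderr.strip())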
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + [[ ! -n '' ]]
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + . kolla_extend_start
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + umask 0022
Oct  3 05:25:37 np0005468397 ceilometer_agent_compute[153651]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
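The INFO and shell-trace lines above show kolla_start's startup contract: kolla_set_configs loads /var/lib/kolla/config_files/config.json, validates it, copies each source into place (strategy COPY_ALWAYS, hence the Deleting/Copying/Setting permission triplets), writes the service command to /run_command, and the wrapper then cats that file and execs it. A minimal sketch of the copy loop, assuming the usual kolla config.json shape ({"command": ..., "config_files": [{"source": ..., "dest": ..., "perm": ...}]}); not kolla's actual implementation:

    # Minimal sketch of the COPY_ALWAYS flow traced above, assuming the
    # standard kolla config.json layout; owner handling omitted.
    import json, os, shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for item in cfg.get("config_files", []):
        dest = item["dest"]
        if os.path.exists(dest):
            os.remove(dest)                   # "Deleting /etc/ceilometer/..."
        shutil.copy(item["source"], dest)     # "Copying ... to ..."
        os.chmod(dest, int(item.get("perm", "0600"), 8))  # "Setting permission"

    with open("/run_command", "w") as f:      # later read back and exec'ed
        f.write(cfg["command"])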
Oct  3 05:25:38 np0005468397 python3.9[153834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.722 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.722 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.722 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.723 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.724 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
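The WARNING just above comes from oslo.config's option-renaming support: identity_name_discovery is declared with tenant_name_discovery as its deprecated alias, so a config file still setting the old [DEFAULT] name is honored but flagged once at startup. Roughly (default and help text illustrative):

    # Sketch of the oslo.config renaming that produces the deprecation
    # warning above; default/help are illustrative.
    from oslo_config import cfg

    opt = cfg.BoolOpt(
        'identity_name_discovery',
        default=False,
        deprecated_name='tenant_name_discovery',  # old name still accepted
        help='Resolve project/user names for polled samples.',
    )
    cfg.CONF.register_opts([opt])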
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.725 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.726 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.727 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.728 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.729 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.730 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.731 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.732 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.733 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.734 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.735 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.736 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.737 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.738 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.758 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.759 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.760 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.760 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.760 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.760 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.760 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.761 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.761 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.761 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.761 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.762 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.762 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.762 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.762 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.763 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.763 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.763 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.763 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.763 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.764 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.764 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.764 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.764 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.764 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.765 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.765 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.765 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.765 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.765 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.766 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.766 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.766 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.766 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.766 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.767 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.767 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.767 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.767 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.767 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.768 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.768 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.768 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.768 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.769 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.769 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.769 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.769 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.769 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.770 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.770 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.770 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.770 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.770 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.771 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.771 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.771 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.771 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.771 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.772 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.772 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.772 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.772 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.772 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.773 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.773 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.773 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.773 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.773 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.774 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.774 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.774 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.774 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.775 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.775 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.775 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.775 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.775 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.776 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.776 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.776 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.776 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.776 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.777 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.777 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.777 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.777 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.778 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.778 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.778 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.778 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.778 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.779 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.779 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.779 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.779 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.779 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.780 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.780 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.780 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.780 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.780 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.781 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.781 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.781 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.781 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.781 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.782 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.782 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.782 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.783 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.783 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.782 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.783 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.783 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.784 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.784 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.784 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.784 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.784 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.785 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.786 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.786 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.787 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.788 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.788 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.788 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.788 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.789 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.789 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.789 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.789 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.789 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.789 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.789 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.790 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.791 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.791 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.791 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.791 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.792 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.793 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.795 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.796 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.910 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.910 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.910 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.910 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.910 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.910 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.910 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.910 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.911 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.912 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.913 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.914 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.915 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.916 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.917 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.918 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.919 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.920 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.921 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.922 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.923 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.924 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.924 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.924 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.924 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.924 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.927 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.947 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.947 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.947 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.949 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
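The DEBUG lines above trace one complete polling cycle on an idle compute node: each pollster is registered against a shared ThreadPoolExecutor, the local_instances discovery returns an empty list, and every pollster is skipped because there is nothing to sample. A minimal Python sketch of that control flow follows; the class and method names are abbreviations of ceilometer/polling/manager.py, and the discovery plumbing is reduced to a cached dict lookup, which is an assumption rather than the real interface.

    # Simplified sketch of ceilometer's per-pollster polling step.
    from concurrent.futures import ThreadPoolExecutor

    class AgentManagerSketch:
        def __init__(self):
            self.executor = ThreadPoolExecutor(max_workers=4)
            self.discovery_cache = {}        # e.g. {'local_instances': []}

        def discover(self, method):
            # Reuse one discovery result for every pollster in the cycle.
            if method not in self.discovery_cache:
                self.discovery_cache[method] = self.run_discovery(method)
            return self.discovery_cache[method]

        def run_discovery(self, method):
            return []                        # no instances on this host

        def poll(self, name, discovery_method):
            resources = self.discover(discovery_method)
            if not resources:
                print(f'Skip pollster {name}, no resources found this cycle')
                return
            self.executor.submit(self.get_samples, name, resources)

        def get_samples(self, name, resources):
            pass                             # query libvirt per resource

    mgr = AgentManagerSketch()
    for meter in ('cpu', 'memory.usage', 'disk.device.read.latency'):
        mgr.poll(meter, 'local_instances')

Caching the discovery result once per cycle appears to be why every registration line above carries the same {'local_instances': []} discovery cache.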
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:25:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:25:39 np0005468397 python3.9[153965]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483537.9911647-578-138540397126311/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:25:40 np0005468397 python3.9[154122]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Oct  3 05:25:40 np0005468397 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  3 05:25:40 np0005468397 python3.9[154275]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 05:25:41 np0005468397 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  3 05:25:41 np0005468397 python3[154428]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 05:25:42 np0005468397 podman[154440]: 2025-10-03 09:25:42.990744752 +0000 UTC m=+1.168873814 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct  3 05:25:43 np0005468397 podman[154536]: 2025-10-03 09:25:43.121966164 +0000 UTC m=+0.042611695 container create 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm)
Oct  3 05:25:43 np0005468397 podman[154536]: 2025-10-03 09:25:43.096382775 +0000 UTC m=+0.017028326 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct  3 05:25:43 np0005468397 python3[154428]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
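The ansible-edpm_container_manage task logs both the declarative config_data dict and the exact podman create command derived from it (the PODMAN-CONTAINER-DEBUG line above). A minimal sketch of that dict-to-CLI translation, trimmed to a few keys; the mapping below is illustrative, not the module's real implementation.

    # Sketch: turn an edpm-style container config dict into podman CLI
    # arguments, mirroring the shape of the logged command.
    config = {
        'image': 'quay.io/prometheus/node-exporter:v1.5.0',
        'user': 'root',
        'privileged': True,
        'net': 'host',
        'ports': ['9100:9100'],
        'environment': {'OS_ENDPOINT_TYPE': 'internal'},
        'volumes': [
            '/var/lib/openstack/config/telemetry/node_exporter.yaml'
            ':/etc/node_exporter/node_exporter.yaml:z',
        ],
        'command': ['--web.disable-exporter-metrics'],
    }

    def build_podman_create(name, cfg):
        # Option order mirrors the logged command: env, network,
        # privileged, ports, user, volumes, then image and arguments.
        args = ['podman', 'create', '--name', name]
        for key, val in cfg.get('environment', {}).items():
            args += ['--env', f'{key}={val}']
        if cfg.get('net'):
            args += ['--network', cfg['net']]
        if cfg.get('privileged'):
            args.append('--privileged=True')   # literal form seen in the log
        for port in cfg.get('ports', []):
            args += ['--publish', port]
        if cfg.get('user'):
            args += ['--user', cfg['user']]
        for vol in cfg.get('volumes', []):
            args += ['--volume', vol]
        return args + [cfg['image']] + cfg.get('command', [])

    print(' '.join(build_podman_create('node_exporter', config)))

Running the sketch reproduces the structure of the logged command: options first, then the image, then the container's own arguments.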
Oct  3 05:25:43 np0005468397 python3.9[154724]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:25:44 np0005468397 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  3 05:25:44 np0005468397 python3.9[154879]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:45 np0005468397 python3.9[155030]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483544.981541-631-105358386380632/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:46 np0005468397 python3.9[155106]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:25:46 np0005468397 systemd[1]: Reloading.
Oct  3 05:25:46 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:25:46 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:25:47 np0005468397 python3.9[155217]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:25:47 np0005468397 systemd[1]: Reloading.
Oct  3 05:25:47 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:25:47 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:25:47 np0005468397 systemd[1]: Starting node_exporter container...
Oct  3 05:25:48 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:25:48 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe36b3145962233b5e5c8c64a6acbf4327cbc0c9eca94a637632715abf8d06/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:48 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe36b3145962233b5e5c8c64a6acbf4327cbc0c9eca94a637632715abf8d06/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:48 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.
Oct  3 05:25:48 np0005468397 podman[155256]: 2025-10-03 09:25:48.113691721 +0000 UTC m=+0.171830493 container init 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.134Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.134Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.134Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.134Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.134Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
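The two "Parsed flag" lines above show the systemd collector's unit filters. The snippet below checks those exact patterns against unit names that appear elsewhere in this log; full-match anchoring is an assumption about how the collector applies the flags.

    # Evaluate the logged include/exclude patterns against unit names
    # seen in this log.
    import re

    include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    exclude = re.compile(r'.+\.(automount|device|mount|scope|slice)')

    units = [
        'edpm_node_exporter.service',   # included: edpm_.*
        'virtnodedevd.service',         # included: virt.*
        'rsyslog.service',              # included: rsyslog
        'sshd.service',                 # not matched by the include list
        'var-lib-containers.mount',     # fails include; exclude would also hit
    ]
    for unit in units:
        keep = bool(include.fullmatch(unit)) and not exclude.fullmatch(unit)
        print(f'{unit}: {"collected" if keep else "ignored"}')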
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=arp
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=bcache
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=bonding
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=cpu
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=edac
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=filefd
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.135Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=netclass
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=netdev
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=netstat
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=nfs
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=nvme
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=softnet
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=systemd
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=xfs
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.136Z caller=node_exporter.go:117 level=info collector=zfs
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.137Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct  3 05:25:48 np0005468397 node_exporter[155271]: ts=2025-10-03T09:25:48.137Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
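With TLS enabled on [::]:9100, a plain-HTTP scrape will be refused; a client must use HTTPS and trust the certificate mounted from /var/lib/openstack/certs/telemetry/default. A minimal probe sketch follows; the CA file name and the hostname are assumptions (the log does not show the certificate's subject), and if node_exporter.yaml also demands a client certificate the probe would need one as well.

    # Probe the exporter's HTTPS endpoint.  The CA file path and hostname
    # below are assumptions inferred from the logged volume mounts; they
    # are not values taken from the log itself.
    import ssl
    import urllib.request

    ctx = ssl.create_default_context(
        cafile='/var/lib/openstack/certs/telemetry/default/ca.crt')  # assumed
    url = 'https://np0005468397:9100/metrics'   # name must match the cert
    with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)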
Oct  3 05:25:48 np0005468397 podman[155256]: 2025-10-03 09:25:48.145403647 +0000 UTC m=+0.203542359 container start 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 05:25:48 np0005468397 podman[155256]: node_exporter
Oct  3 05:25:48 np0005468397 systemd[1]: Started node_exporter container.
Oct  3 05:25:48 np0005468397 podman[155280]: 2025-10-03 09:25:48.224417808 +0000 UTC m=+0.066280814 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
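The health_status=healthy event above is produced by podman's per-container healthcheck, which systemd drives through a transient timer (the 18e60…-2ba59a94d3b75560.timer unit that is deactivated during the restart below) running "/usr/bin/podman healthcheck run <id>". The same check can be triggered by hand; a small sketch using the container ID from the log:

    # Trigger the container healthcheck manually; the ID is copied from
    # the podman events above.  Exit status 0 means healthy, matching the
    # health_status=healthy field in the event.
    import subprocess

    cid = '18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866'
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', cid],
        capture_output=True, text=True,
    )
    print('healthy' if result.returncode == 0 else f'unhealthy: {result.stderr}')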
Oct  3 05:25:49 np0005468397 python3.9[155454]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:25:49 np0005468397 systemd[1]: Stopping node_exporter container...
Oct  3 05:25:49 np0005468397 podman[155456]: 2025-10-03 09:25:49.335053545 +0000 UTC m=+0.173042572 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 05:25:49 np0005468397 systemd[1]: libpod-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope: Deactivated successfully.
Oct  3 05:25:49 np0005468397 podman[155469]: 2025-10-03 09:25:49.349212689 +0000 UTC m=+0.148037192 container died 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 05:25:49 np0005468397 systemd[1]: 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-2ba59a94d3b75560.timer: Deactivated successfully.
Oct  3 05:25:49 np0005468397 systemd[1]: Stopped /usr/bin/podman healthcheck run 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.
Oct  3 05:25:49 np0005468397 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-userdata-shm.mount: Deactivated successfully.
Oct  3 05:25:49 np0005468397 systemd[1]: var-lib-containers-storage-overlay-6fbe36b3145962233b5e5c8c64a6acbf4327cbc0c9eca94a637632715abf8d06-merged.mount: Deactivated successfully.
Oct  3 05:25:49 np0005468397 podman[155469]: 2025-10-03 09:25:49.873417296 +0000 UTC m=+0.672241839 container cleanup 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 05:25:49 np0005468397 podman[155469]: node_exporter
Oct  3 05:25:49 np0005468397 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 05:25:49 np0005468397 podman[155511]: node_exporter
Oct  3 05:25:49 np0005468397 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Oct  3 05:25:49 np0005468397 systemd[1]: Stopped node_exporter container.
Oct  3 05:25:49 np0005468397 systemd[1]: Starting node_exporter container...
Oct  3 05:25:50 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:25:50 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe36b3145962233b5e5c8c64a6acbf4327cbc0c9eca94a637632715abf8d06/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:50 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe36b3145962233b5e5c8c64a6acbf4327cbc0c9eca94a637632715abf8d06/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:25:50 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.
Oct  3 05:25:50 np0005468397 podman[155524]: 2025-10-03 09:25:50.146861353 +0000 UTC m=+0.155367866 container init 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.167Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.167Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.168Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.169Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.169Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.169Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.169Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=arp
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=bcache
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=bonding
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=cpu
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=edac
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=filefd
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.170Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=netclass
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=netdev
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=netstat
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=nfs
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=nvme
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=softnet
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=systemd
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=xfs
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.171Z caller=node_exporter.go:117 level=info collector=zfs
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.172Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct  3 05:25:50 np0005468397 node_exporter[155539]: ts=2025-10-03T09:25:50.173Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Oct  3 05:25:50 np0005468397 podman[155524]: 2025-10-03 09:25:50.191545654 +0000 UTC m=+0.200052137 container start 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 05:25:50 np0005468397 podman[155524]: node_exporter
Oct  3 05:25:50 np0005468397 systemd[1]: Started node_exporter container.
Oct  3 05:25:50 np0005468397 podman[155549]: 2025-10-03 09:25:50.290126331 +0000 UTC m=+0.079770356 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
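
The restart sequence above ends with node_exporter serving TLS on [::]:9100 and the first healthcheck reporting healthy. A minimal sketch of probing that endpoint the way an external check might, assuming localhost reachability and an internal or self-signed certificate (the CA bundle path is not shown in this log, so verification is disabled here):

    #!/usr/bin/env python3
    # Minimal sketch: probe the node_exporter TLS endpoint seen above.
    # Assumes the exporter listens on localhost:9100 with a cert we do
    # not verify (an assumption; the trust chain is not in this log).
    import http.client
    import ssl

    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # assumption: self-signed/internal cert
    ctx.verify_mode = ssl.CERT_NONE

    conn = http.client.HTTPSConnection("localhost", 9100, context=ctx, timeout=5)
    conn.request("GET", "/metrics")
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    print(resp.read(200).decode())  # first bytes of the metrics exposition
    conn.close()
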
Oct  3 05:25:51 np0005468397 python3.9[155723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:25:51 np0005468397 python3.9[155846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483550.4481063-663-248675403552712/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:25:52 np0005468397 python3.9[155998]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct  3 05:25:53 np0005468397 python3.9[156150]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 05:25:54 np0005468397 python3[156302]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 05:25:55 np0005468397 podman[156316]: 2025-10-03 09:25:55.724751903 +0000 UTC m=+1.203699679 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct  3 05:25:55 np0005468397 podman[156414]: 2025-10-03 09:25:55.873131525 +0000 UTC m=+0.050887471 container create b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm)
Oct  3 05:25:55 np0005468397 podman[156414]: 2025-10-03 09:25:55.847001388 +0000 UTC m=+0.024757424 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct  3 05:25:55 np0005468397 python3[156302]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
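
The PODMAN-CONTAINER-DEBUG line above shows both sides of the translation: the config_data dict and the `podman create` command it expands into. A simplified sketch of that mapping, under the assumption that it mirrors only the handful of keys visible in this log (the real ansible-edpm_container_manage module handles many more options):

    #!/usr/bin/env python3
    # Sketch: flatten a config_data dict like the one logged above into
    # `podman create` arguments. Illustrative only; not the module's code.
    def podman_create_argv(name: str, cfg: dict) -> list[str]:
        argv = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        if cfg.get("net"):
            argv += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            argv += ["--privileged=True"]
        for port in cfg.get("ports", []):
            argv += ["--publish", port]
        if cfg.get("user"):
            argv += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        argv += cfg.get("command", [])
        return argv

    cfg = {
        "image": "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
        "net": "host",
        "privileged": True,
        "user": "root",
        "ports": ["9882:9882"],
        "environment": {"CONTAINER_HOST": "unix:///run/podman/podman.sock"},
        "volumes": ["/run/podman/podman.sock:/run/podman/podman.sock:rw,z"],
        "command": ["--web.config.file=/etc/podman_exporter/podman_exporter.yaml"],
    }
    print(" ".join(podman_create_argv("podman_exporter", cfg)))
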
Oct  3 05:25:56 np0005468397 python3.9[156604]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:25:57 np0005468397 python3.9[156758]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:58 np0005468397 python3.9[156909]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483557.7124145-716-210134669028360/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:25:59 np0005468397 python3.9[156985]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:25:59 np0005468397 systemd[1]: Reloading.
Oct  3 05:25:59 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:25:59 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:25:59 np0005468397 python3.9[157097]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:26:00 np0005468397 systemd[1]: Reloading.
Oct  3 05:26:00 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:26:00 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:26:00 np0005468397 systemd[1]: Starting podman_exporter container...
Oct  3 05:26:00 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:26:00 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae78b2a9d0131aa74ea1592b94b2e8358ef5f3ee849098d3fd1d608a7d74365d/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:00 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae78b2a9d0131aa74ea1592b94b2e8358ef5f3ee849098d3fd1d608a7d74365d/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:00 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.
Oct  3 05:26:00 np0005468397 podman[157137]: 2025-10-03 09:26:00.527405556 +0000 UTC m=+0.150799100 container init b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 05:26:00 np0005468397 podman_exporter[157153]: ts=2025-10-03T09:26:00.552Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct  3 05:26:00 np0005468397 podman_exporter[157153]: ts=2025-10-03T09:26:00.552Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct  3 05:26:00 np0005468397 podman_exporter[157153]: ts=2025-10-03T09:26:00.552Z caller=handler.go:94 level=info msg="enabled collectors"
Oct  3 05:26:00 np0005468397 podman_exporter[157153]: ts=2025-10-03T09:26:00.552Z caller=handler.go:105 level=info collector=container
Oct  3 05:26:00 np0005468397 podman[157137]: 2025-10-03 09:26:00.566489001 +0000 UTC m=+0.189882555 container start b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:26:00 np0005468397 podman[157137]: podman_exporter
Oct  3 05:26:00 np0005468397 systemd[1]: Starting Podman API Service...
Oct  3 05:26:00 np0005468397 systemd[1]: Started podman_exporter container.
Oct  3 05:26:00 np0005468397 systemd[1]: Started Podman API Service.
Oct  3 05:26:00 np0005468397 podman[157165]: time="2025-10-03T09:26:00Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct  3 05:26:00 np0005468397 podman[157165]: time="2025-10-03T09:26:00Z" level=info msg="Setting parallel job count to 25"
Oct  3 05:26:00 np0005468397 podman[157165]: time="2025-10-03T09:26:00Z" level=info msg="Using sqlite as database backend"
Oct  3 05:26:00 np0005468397 podman[157165]: time="2025-10-03T09:26:00Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct  3 05:26:00 np0005468397 podman[157165]: time="2025-10-03T09:26:00Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct  3 05:26:00 np0005468397 podman[157165]: time="2025-10-03T09:26:00Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct  3 05:26:00 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:26:00 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct  3 05:26:00 np0005468397 podman[157165]: time="2025-10-03T09:26:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 05:26:00 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:26:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9686 "" "Go-http-client/1.1"
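
The exchange above is podman_exporter reaching the Podman API through the unix socket named in CONTAINER_HOST. The `_ping` request it logs can be reproduced with raw HTTP over AF_UNIX; this sketch assumes read access to /run/podman/podman.sock (normally root):

    #!/usr/bin/env python3
    # Sketch: repeat the libpod _ping request logged above over the
    # unix socket from CONTAINER_HOST. Needs access to the socket.
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/podman/podman.sock")
    sock.sendall(
        b"GET /v4.9.3/libpod/_ping HTTP/1.1\r\n"
        b"Host: d\r\n"
        b"Connection: close\r\n\r\n"
    )
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk
    sock.close()
    print(reply.decode(errors="replace"))  # expect HTTP/1.1 200 OK, body "OK"
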
Oct  3 05:26:00 np0005468397 podman_exporter[157153]: ts=2025-10-03T09:26:00.664Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct  3 05:26:00 np0005468397 podman_exporter[157153]: ts=2025-10-03T09:26:00.665Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct  3 05:26:00 np0005468397 podman_exporter[157153]: ts=2025-10-03T09:26:00.666Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct  3 05:26:00 np0005468397 podman[157163]: 2025-10-03 09:26:00.685345007 +0000 UTC m=+0.103759398 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 05:26:00 np0005468397 systemd[1]: b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-7598d5ab8eb5889d.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:26:00 np0005468397 systemd[1]: b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-7598d5ab8eb5889d.service: Failed with result 'exit-code'.
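
A plausible reading of the two failures above: the first healthcheck fires while podman_exporter is still binding :9882 (health_status=starting, health_failing_streak=1), so the transient healthcheck unit exits 1; the next run, after the restart below, reports healthy. Races like this suggest a wait-for-port pattern; a small sketch, with the port taken from the log and the retry count and delay as illustrative assumptions:

    #!/usr/bin/env python3
    # Sketch: wait for the exporter port (9882, from the log) to accept
    # connections before treating the service as up. Retry parameters
    # are illustrative assumptions.
    import socket
    import time

    def wait_for_port(host: str, port: int,
                      attempts: int = 10, delay: float = 0.5) -> bool:
        for _ in range(attempts):
            try:
                with socket.create_connection((host, port), timeout=1):
                    return True
            except OSError:
                time.sleep(delay)
        return False

    print("up" if wait_for_port("localhost", 9882) else "still down")
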
Oct  3 05:26:01 np0005468397 python3.9[157353]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:26:01 np0005468397 systemd[1]: Stopping podman_exporter container...
Oct  3 05:26:01 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:26:00 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Oct  3 05:26:01 np0005468397 systemd[1]: libpod-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope: Deactivated successfully.
Oct  3 05:26:01 np0005468397 podman[157357]: 2025-10-03 09:26:01.653842366 +0000 UTC m=+0.042106553 container died b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:26:01 np0005468397 systemd[1]: b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-7598d5ab8eb5889d.timer: Deactivated successfully.
Oct  3 05:26:01 np0005468397 systemd[1]: Stopped /usr/bin/podman healthcheck run b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.
Oct  3 05:26:01 np0005468397 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-userdata-shm.mount: Deactivated successfully.
Oct  3 05:26:01 np0005468397 systemd[1]: var-lib-containers-storage-overlay-ae78b2a9d0131aa74ea1592b94b2e8358ef5f3ee849098d3fd1d608a7d74365d-merged.mount: Deactivated successfully.
Oct  3 05:26:01 np0005468397 podman[157357]: 2025-10-03 09:26:01.911890856 +0000 UTC m=+0.300155073 container cleanup b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 05:26:01 np0005468397 podman[157357]: podman_exporter
Oct  3 05:26:01 np0005468397 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 05:26:01 np0005468397 podman[157384]: podman_exporter
Oct  3 05:26:01 np0005468397 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct  3 05:26:01 np0005468397 systemd[1]: Stopped podman_exporter container.
Oct  3 05:26:01 np0005468397 systemd[1]: Starting podman_exporter container...
Oct  3 05:26:02 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:26:02 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae78b2a9d0131aa74ea1592b94b2e8358ef5f3ee849098d3fd1d608a7d74365d/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:02 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae78b2a9d0131aa74ea1592b94b2e8358ef5f3ee849098d3fd1d608a7d74365d/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:02 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.
Oct  3 05:26:02 np0005468397 podman[157397]: 2025-10-03 09:26:02.139836792 +0000 UTC m=+0.132512229 container init b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:26:02 np0005468397 podman_exporter[157413]: ts=2025-10-03T09:26:02.155Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct  3 05:26:02 np0005468397 podman_exporter[157413]: ts=2025-10-03T09:26:02.155Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct  3 05:26:02 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:26:02 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct  3 05:26:02 np0005468397 podman_exporter[157413]: ts=2025-10-03T09:26:02.155Z caller=handler.go:94 level=info msg="enabled collectors"
Oct  3 05:26:02 np0005468397 podman_exporter[157413]: ts=2025-10-03T09:26:02.155Z caller=handler.go:105 level=info collector=container
Oct  3 05:26:02 np0005468397 podman[157165]: time="2025-10-03T09:26:02Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 05:26:02 np0005468397 podman[157397]: 2025-10-03 09:26:02.172397246 +0000 UTC m=+0.165072663 container start b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 05:26:02 np0005468397 podman[157397]: podman_exporter
Oct  3 05:26:02 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:26:02 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9688 "" "Go-http-client/1.1"
Oct  3 05:26:02 np0005468397 podman_exporter[157413]: ts=2025-10-03T09:26:02.188Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct  3 05:26:02 np0005468397 podman_exporter[157413]: ts=2025-10-03T09:26:02.188Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct  3 05:26:02 np0005468397 podman_exporter[157413]: ts=2025-10-03T09:26:02.189Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct  3 05:26:02 np0005468397 systemd[1]: Started podman_exporter container.
Oct  3 05:26:02 np0005468397 podman[157423]: 2025-10-03 09:26:02.244710756 +0000 UTC m=+0.061856703 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 05:26:02 np0005468397 python3.9[157599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:03 np0005468397 python3.9[157722]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483562.396202-748-242217461750261/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:26:04 np0005468397 python3.9[157874]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct  3 05:26:05 np0005468397 python3.9[158026]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 05:26:05 np0005468397 python3[158178]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 05:26:08 np0005468397 podman[158250]: 2025-10-03 09:26:08.154231818 +0000 UTC m=+0.117144952 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct  3 05:26:08 np0005468397 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-50705040be7aa45f.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:26:08 np0005468397 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-50705040be7aa45f.service: Failed with result 'exit-code'.
Oct  3 05:26:08 np0005468397 podman[158191]: 2025-10-03 09:26:08.417667362 +0000 UTC m=+2.407532204 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct  3 05:26:08 np0005468397 podman[158304]: 2025-10-03 09:26:08.580865323 +0000 UTC m=+0.057023916 container create e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct  3 05:26:08 np0005468397 podman[158304]: 2025-10-03 09:26:08.548815936 +0000 UTC m=+0.024974549 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct  3 05:26:08 np0005468397 python3[158178]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct  3 05:26:09 np0005468397 python3.9[158494]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:26:10 np0005468397 python3.9[158648]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:11 np0005468397 python3.9[158799]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483570.3779018-801-197053737790000/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:11 np0005468397 python3.9[158875]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:26:11 np0005468397 systemd[1]: Reloading.
Oct  3 05:26:11 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:26:11 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:26:12 np0005468397 python3.9[158986]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:26:12 np0005468397 systemd[1]: Reloading.
Oct  3 05:26:12 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:26:12 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:26:12 np0005468397 systemd[1]: Starting openstack_network_exporter container...
Oct  3 05:26:13 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:26:13 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c235bd6531d096123bc08afe911a8e62d39966c0eafcc0015bdf22273d2fe/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:13 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c235bd6531d096123bc08afe911a8e62d39966c0eafcc0015bdf22273d2fe/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:13 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c235bd6531d096123bc08afe911a8e62d39966c0eafcc0015bdf22273d2fe/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:13 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.
Oct  3 05:26:13 np0005468397 podman[159026]: 2025-10-03 09:26:13.092179081 +0000 UTC m=+0.161311650 container init e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *bridge.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *coverage.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *datapath.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *iface.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *memory.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *ovnnorthd.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *ovn.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *ovsdbserver.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *pmd_perf.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *pmd_rxq.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: INFO    09:26:13 main.go:48: registering *vswitch.Collector
Oct  3 05:26:13 np0005468397 openstack_network_exporter[159043]: NOTICE  09:26:13 main.go:76: listening on https://:9105/metrics
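
The lines above show openstack-network-exporter registering one Collector per subsystem (bridge, coverage, datapath, and so on) before listening on :9105. The same registration pattern can be sketched with the Python prometheus_client package; this is an assumption for illustration, since the exporter itself is Go and serves TLS:

    #!/usr/bin/env python3
    # Sketch of the collector-registration pattern logged above, using
    # prometheus_client (an assumption; not the exporter's actual code).
    import time
    from prometheus_client import start_http_server
    from prometheus_client.core import GaugeMetricFamily, REGISTRY

    class BridgeCollector:
        """Stand-in for *bridge.Collector: emits one dummy gauge."""
        def collect(self):
            g = GaugeMetricFamily("ovs_bridge_count",
                                  "Number of OVS bridges (dummy value)")
            g.add_metric([], 1.0)
            yield g

    REGISTRY.register(BridgeCollector())
    start_http_server(9105)   # plain HTTP here; the real exporter uses TLS
    print("listening on :9105/metrics")
    time.sleep(60)
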
Oct  3 05:26:13 np0005468397 podman[159026]: 2025-10-03 09:26:13.125072085 +0000 UTC m=+0.194204574 container start e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350)
Oct  3 05:26:13 np0005468397 podman[159026]: openstack_network_exporter
Oct  3 05:26:13 np0005468397 systemd[1]: Started openstack_network_exporter container.
Oct  3 05:26:13 np0005468397 podman[159052]: 2025-10-03 09:26:13.22472491 +0000 UTC m=+0.080047692 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, name=ubi9-minimal, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 05:26:13 np0005468397 python3.9[159227]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:26:14 np0005468397 systemd[1]: Stopping openstack_network_exporter container...
Oct  3 05:26:14 np0005468397 systemd[1]: libpod-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope: Deactivated successfully.
Oct  3 05:26:14 np0005468397 podman[159231]: 2025-10-03 09:26:14.089612876 +0000 UTC m=+0.054522065 container died e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41)
Oct  3 05:26:14 np0005468397 systemd[1]: e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-32d7028a30c19074.timer: Deactivated successfully.
Oct  3 05:26:14 np0005468397 systemd[1]: Stopped /usr/bin/podman healthcheck run e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.
Oct  3 05:26:14 np0005468397 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-userdata-shm.mount: Deactivated successfully.
Oct  3 05:26:14 np0005468397 systemd[1]: var-lib-containers-storage-overlay-402c235bd6531d096123bc08afe911a8e62d39966c0eafcc0015bdf22273d2fe-merged.mount: Deactivated successfully.
Oct  3 05:26:14 np0005468397 podman[159231]: 2025-10-03 09:26:14.778757126 +0000 UTC m=+0.743666345 container cleanup e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct  3 05:26:14 np0005468397 podman[159231]: openstack_network_exporter
Oct  3 05:26:14 np0005468397 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 05:26:14 np0005468397 podman[159258]: openstack_network_exporter
Oct  3 05:26:14 np0005468397 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct  3 05:26:14 np0005468397 systemd[1]: Stopped openstack_network_exporter container.
Oct  3 05:26:14 np0005468397 systemd[1]: Starting openstack_network_exporter container...
Oct  3 05:26:14 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:26:14 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c235bd6531d096123bc08afe911a8e62d39966c0eafcc0015bdf22273d2fe/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:14 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c235bd6531d096123bc08afe911a8e62d39966c0eafcc0015bdf22273d2fe/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:14 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c235bd6531d096123bc08afe911a8e62d39966c0eafcc0015bdf22273d2fe/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:26:15 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.
Oct  3 05:26:15 np0005468397 podman[159271]: 2025-10-03 09:26:15.045757205 +0000 UTC m=+0.152549297 container init e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, version=9.6)
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *bridge.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *coverage.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *datapath.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *iface.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *memory.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *ovnnorthd.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *ovn.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *ovsdbserver.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *pmd_perf.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *pmd_rxq.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: INFO    09:26:15 main.go:48: registering *vswitch.Collector
Oct  3 05:26:15 np0005468397 openstack_network_exporter[159287]: NOTICE  09:26:15 main.go:76: listening on https://:9105/metrics
Oct  3 05:26:15 np0005468397 podman[159271]: 2025-10-03 09:26:15.080567772 +0000 UTC m=+0.187359824 container start e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350)
Oct  3 05:26:15 np0005468397 podman[159271]: openstack_network_exporter
Oct  3 05:26:15 np0005468397 systemd[1]: Started openstack_network_exporter container.
Oct  3 05:26:15 np0005468397 podman[159297]: 2025-10-03 09:26:15.188222315 +0000 UTC m=+0.091123009 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 05:26:15 np0005468397 python3.9[159469]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 05:26:16 np0005468397 python3.9[159621]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct  3 05:26:17 np0005468397 python3.9[159786]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:18 np0005468397 systemd[1]: Started libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope.
Oct  3 05:26:18 np0005468397 rsyslogd[1009]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 05:26:18 np0005468397 podman[159787]: 2025-10-03 09:26:18.033642738 +0000 UTC m=+0.092534435 container exec e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 05:26:18 np0005468397 podman[159787]: 2025-10-03 09:26:18.067013618 +0000 UTC m=+0.125905335 container exec_died e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct  3 05:26:18 np0005468397 systemd[1]: libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope: Deactivated successfully.
Oct  3 05:26:18 np0005468397 python3.9[159971]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:18 np0005468397 systemd[1]: Started libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope.
Oct  3 05:26:18 np0005468397 podman[159972]: 2025-10-03 09:26:18.922966214 +0000 UTC m=+0.080069021 container exec e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 05:26:18 np0005468397 podman[159992]: 2025-10-03 09:26:18.991552974 +0000 UTC m=+0.054708832 container exec_died e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 05:26:18 np0005468397 podman[159972]: 2025-10-03 09:26:18.998740136 +0000 UTC m=+0.155842953 container exec_died e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 05:26:19 np0005468397 systemd[1]: libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope: Deactivated successfully.
Oct  3 05:26:19 np0005468397 podman[160128]: 2025-10-03 09:26:19.516725157 +0000 UTC m=+0.094994754 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 05:26:19 np0005468397 python3.9[160175]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:20 np0005468397 python3.9[160333]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct  3 05:26:20 np0005468397 podman[160394]: 2025-10-03 09:26:20.80737206 +0000 UTC m=+0.066346267 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 05:26:21 np0005468397 python3.9[160522]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:21 np0005468397 systemd[1]: Started libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope.
Oct  3 05:26:21 np0005468397 podman[160523]: 2025-10-03 09:26:21.413498574 +0000 UTC m=+0.116467010 container exec d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, managed_by=edpm_ansible)
Oct  3 05:26:21 np0005468397 podman[160523]: 2025-10-03 09:26:21.44553648 +0000 UTC m=+0.148504916 container exec_died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct  3 05:26:21 np0005468397 systemd[1]: libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Deactivated successfully.
Oct  3 05:26:22 np0005468397 python3.9[160704]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:22 np0005468397 systemd[1]: Started libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope.
Oct  3 05:26:22 np0005468397 podman[160705]: 2025-10-03 09:26:22.395629754 +0000 UTC m=+0.082932764 container exec d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Oct  3 05:26:22 np0005468397 podman[160705]: 2025-10-03 09:26:22.426070659 +0000 UTC m=+0.113373669 container exec_died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct  3 05:26:22 np0005468397 systemd[1]: libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Deactivated successfully.
Oct  3 05:26:23 np0005468397 python3.9[160889]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:23 np0005468397 python3.9[161041]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct  3 05:26:24 np0005468397 python3.9[161206]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:24 np0005468397 systemd[1]: Started libpod-conmon-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope.
Oct  3 05:26:24 np0005468397 podman[161207]: 2025-10-03 09:26:24.992567176 +0000 UTC m=+0.086591783 container exec 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 05:26:25 np0005468397 podman[161207]: 2025-10-03 09:26:25.027872829 +0000 UTC m=+0.121897426 container exec_died 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 05:26:25 np0005468397 systemd[1]: libpod-conmon-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope: Deactivated successfully.
Oct  3 05:26:25 np0005468397 python3.9[161389]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:25 np0005468397 systemd[1]: Started libpod-conmon-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope.
Oct  3 05:26:25 np0005468397 podman[161390]: 2025-10-03 09:26:25.808116066 +0000 UTC m=+0.081183678 container exec 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 05:26:25 np0005468397 podman[161390]: 2025-10-03 09:26:25.841730023 +0000 UTC m=+0.114797635 container exec_died 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 05:26:25 np0005468397 systemd[1]: libpod-conmon-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope: Deactivated successfully.
Oct  3 05:26:26 np0005468397 python3.9[161572]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:27 np0005468397 python3.9[161724]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct  3 05:26:28 np0005468397 python3.9[161889]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:28 np0005468397 systemd[1]: Started libpod-conmon-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope.
Oct  3 05:26:28 np0005468397 podman[161890]: 2025-10-03 09:26:28.188866243 +0000 UTC m=+0.058420693 container exec b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 05:26:28 np0005468397 podman[161890]: 2025-10-03 09:26:28.221610272 +0000 UTC m=+0.091164722 container exec_died b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 05:26:28 np0005468397 systemd[1]: libpod-conmon-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope: Deactivated successfully.
Oct  3 05:26:28 np0005468397 python3.9[162073]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:28 np0005468397 systemd[1]: Started libpod-conmon-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope.
Oct  3 05:26:28 np0005468397 podman[162074]: 2025-10-03 09:26:28.993094416 +0000 UTC m=+0.067078082 container exec b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:26:29 np0005468397 podman[162074]: 2025-10-03 09:26:29.023955834 +0000 UTC m=+0.097939480 container exec_died b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 05:26:29 np0005468397 systemd[1]: libpod-conmon-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope: Deactivated successfully.
Oct  3 05:26:29 np0005468397 python3.9[162257]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:30 np0005468397 python3.9[162409]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct  3 05:26:31 np0005468397 python3.9[162574]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:31 np0005468397 systemd[1]: Started libpod-conmon-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope.
Oct  3 05:26:31 np0005468397 podman[162575]: 2025-10-03 09:26:31.328545077 +0000 UTC m=+0.069676146 container exec e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64)
Oct  3 05:26:31 np0005468397 podman[162575]: 2025-10-03 09:26:31.333490656 +0000 UTC m=+0.074621735 container exec_died e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 05:26:31 np0005468397 systemd[1]: libpod-conmon-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope: Deactivated successfully.
Oct  3 05:26:31 np0005468397 python3.9[162758]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:26:32 np0005468397 systemd[1]: Started libpod-conmon-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope.
Oct  3 05:26:32 np0005468397 podman[162759]: 2025-10-03 09:26:32.087626679 +0000 UTC m=+0.069594953 container exec e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 05:26:32 np0005468397 podman[162759]: 2025-10-03 09:26:32.120384189 +0000 UTC m=+0.102352423 container exec_died e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git)
Oct  3 05:26:32 np0005468397 systemd[1]: libpod-conmon-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope: Deactivated successfully.
Oct  3 05:26:32 np0005468397 podman[162914]: 2025-10-03 09:26:32.72859394 +0000 UTC m=+0.097111943 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:26:32 np0005468397 python3.9[162955]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:33 np0005468397 python3.9[163118]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:34 np0005468397 python3.9[163270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:34 np0005468397 python3.9[163393]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483593.8295722-1016-147464978054928/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:35 np0005468397 python3.9[163545]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:36 np0005468397 python3.9[163697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:37 np0005468397 python3.9[163775]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:37 np0005468397 python3.9[163927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:38 np0005468397 python3.9[164005]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.inofjfhj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:38 np0005468397 podman[164105]: 2025-10-03 09:26:38.801031414 +0000 UTC m=+0.062711710 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 05:26:39 np0005468397 python3.9[164177]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:39 np0005468397 python3.9[164255]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:40 np0005468397 python3.9[164407]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
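The `nft -j list ruleset` invocation above returns the live ruleset as JSON, with a top-level "nftables" array of table/chain/rule objects. A minimal sketch of capturing and filtering that output:

import json
import subprocess

# Capture the ruleset in JSON form, as the task above does.
proc = subprocess.run(["nft", "-j", "list", "ruleset"],
                      check=True, capture_output=True, text=True)
ruleset = json.loads(proc.stdout)

# Each array element wraps one object kind; pick out the table names.
tables = [e["table"]["name"] for e in ruleset.get("nftables", []) if "table" in e]
print(tables)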
Oct  3 05:26:41 np0005468397 python3[164560]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  3 05:26:42 np0005468397 python3.9[164712]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:42 np0005468397 python3.9[164790]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:43 np0005468397 python3.9[164942]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:43 np0005468397 python3.9[165020]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:44 np0005468397 python3.9[165172]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:45 np0005468397 python3.9[165250]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:45 np0005468397 podman[165374]: 2025-10-03 09:26:45.570289106 +0000 UTC m=+0.060481158 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, vcs-type=git)
Oct  3 05:26:45 np0005468397 python3.9[165423]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:46 np0005468397 python3.9[165502]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:47 np0005468397 python3.9[165654]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:26:47 np0005468397 python3.9[165779]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483606.434969-1141-112194611525828/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:48 np0005468397 python3.9[165931]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:26:48 np0005468397 python3.9[166083]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
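The pipeline above concatenates the five generated fragments in load order and feeds them to `nft -c -f -`, which parses and checks the combined ruleset without committing it. The same dry-run check, sketched in Python with the paths and ordering from the log:

import subprocess
from pathlib import Path

FRAGMENTS = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]

combined = "".join(Path(p).read_text() for p in FRAGMENTS)
# -c checks the ruleset without applying it; -f - reads it from stdin.
subprocess.run(["nft", "-c", "-f", "-"], input=combined, text=True, check=True)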
Oct  3 05:26:49 np0005468397 podman[166210]: 2025-10-03 09:26:49.777196823 +0000 UTC m=+0.097869758 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 05:26:49 np0005468397 python3.9[166254]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
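`#012` is the syslog escape for a newline (octal 012), so the managed block the blockinfile task writes into /etc/sysconfig/nftables.conf, given its BEGIN/END marker parameters, reads:

# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s parameter makes Ansible run the edited file through nft's check mode before moving it into place.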
Oct  3 05:26:50 np0005468397 python3.9[166413]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:26:51 np0005468397 podman[166538]: 2025-10-03 09:26:51.137743878 +0000 UTC m=+0.041838035 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 05:26:51 np0005468397 python3.9[166588]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:26:52 np0005468397 python3.9[166742]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:26:52 np0005468397 python3.9[166898]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
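The four steps around edpm-rules.nft.changed form a marker-file handler: the copy task touches the marker when the rules content changes, a stat gates the apply step on its presence, and the marker is deleted once `nft -f -` succeeds, so a run with unchanged rules skips the reload entirely. A compact sketch of the pattern (paths and fragment order from the log):

import subprocess
from pathlib import Path

MARKER = Path("/etc/nftables/edpm-rules.nft.changed")
FRAGMENTS = [
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
]

# Apply the new rules only when the copy step left the marker behind,
# then clear the marker so the next unchanged run is a no-op.
if MARKER.exists():
    combined = "".join(Path(p).read_text() for p in FRAGMENTS)
    subprocess.run(["nft", "-f", "-"], input=combined, text=True, check=True)
    MARKER.unlink()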
Oct  3 05:26:53 np0005468397 systemd[1]: session-23.scope: Deactivated successfully.
Oct  3 05:26:53 np0005468397 systemd[1]: session-23.scope: Consumed 2min 16.539s CPU time.
Oct  3 05:26:53 np0005468397 systemd-logind[798]: Session 23 logged out. Waiting for processes to exit.
Oct  3 05:26:53 np0005468397 systemd-logind[798]: Removed session 23.
Oct  3 05:26:59 np0005468397 systemd-logind[798]: New session 24 of user zuul.
Oct  3 05:26:59 np0005468397 systemd[1]: Started Session 24 of User zuul.
Oct  3 05:26:59 np0005468397 podman[157165]: time="2025-10-03T09:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 05:26:59 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Oct  3 05:26:59 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2126 "" "Go-http-client/1.1"
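These podman[157165] lines are the podman API service logging requests from podman_exporter, which scrapes the libpod REST endpoint over the unix socket named in its CONTAINER_HOST setting above. An equivalent manual query, with the socket path and API version taken from the log (the "d" hostname is a throwaway placeholder curl requires when using --unix-socket):

import json
import subprocess

raw = subprocess.run(
    ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
     "http://d/v4.9.3/libpod/containers/json?all=true"],
    check=True, capture_output=True, text=True,
).stdout

# Each entry describes one container; print its names and current state.
for c in json.loads(raw):
    print(c.get("Names"), c.get("State"))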
Oct  3 05:27:00 np0005468397 python3.9[167081]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:27:00 np0005468397 systemd[1]: Reloading.
Oct  3 05:27:00 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:27:00 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:27:01 np0005468397 python3.9[167266]: ansible-ansible.builtin.service_facts Invoked
Oct  3 05:27:01 np0005468397 network[167283]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 05:27:01 np0005468397 network[167284]: 'network-scripts' will be removed from distribution in near future.
Oct  3 05:27:01 np0005468397 network[167285]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 05:27:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 05:27:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:27:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:27:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 05:27:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
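The appctl errors above mean the exporter found no per-daemon control sockets (typically named <daemon>.<pid>.ctl) in the OVS and OVN run directories it mounts, /var/run/openvswitch and /var/lib/openvswitch/ovn per its config_data earlier in the log; on a compute node that runs neither a local ovsdb-server control socket nor ovn-northd, this is the expected failure mode rather than a broken deployment. A quick host-side check, assuming those mount-source paths:

import glob

# appctl-style tools find daemons via *.ctl control sockets in the rundir;
# empty results here explain the "no control socket files found" errors.
for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
    print(pattern, "->", glob.glob(pattern))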
Oct  3 05:27:02 np0005468397 podman[167323]: 2025-10-03 09:27:02.866587952 +0000 UTC m=+0.070007256 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 05:27:08 np0005468397 python3.9[167585]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:27:09 np0005468397 podman[167710]: 2025-10-03 09:27:09.213142099 +0000 UTC m=+0.068426245 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.build-date=20250930)
Oct  3 05:27:09 np0005468397 python3.9[167755]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:10 np0005468397 python3.9[167910]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:11 np0005468397 python3.9[168062]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
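With the `#012` escapes decoded to newlines, the shell passed to ansible.legacy.command above is:

if systemctl is-active certmonger.service; then
  systemctl disable --now certmonger.service
  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
fi

That is, certmonger is stopped and disabled only if it is currently active, and masked only when no local unit override already exists under /etc/systemd/system.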
Oct  3 05:27:11 np0005468397 python3.9[168214]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 05:27:12 np0005468397 python3.9[168366]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:27:12 np0005468397 systemd[1]: Reloading.
Oct  3 05:27:13 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:27:13 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:27:14 np0005468397 python3.9[168553]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:27:14 np0005468397 python3.9[168706]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:27:15 np0005468397 python3.9[168856]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:27:15 np0005468397 podman[168857]: 2025-10-03 09:27:15.800326638 +0000 UTC m=+0.063050923 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=)
Oct  3 05:27:16 np0005468397 python3.9[169029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:17 np0005468397 python3.9[169150]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483635.965121-125-30959203107508/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:27:18 np0005468397 python3.9[169302]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct  3 05:27:19 np0005468397 python3.9[169453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:19 np0005468397 python3.9[169575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759483638.8751357-171-68175131795848/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:19 np0005468397 podman[169576]: 2025-10-03 09:27:19.915925405 +0000 UTC m=+0.085092383 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 05:27:20 np0005468397 python3.9[169748]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:20 np0005468397 python3.9[169869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759483639.95615-171-245843097887532/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:21 np0005468397 systemd[1]: packagekit.service: Deactivated successfully.
Oct  3 05:27:21 np0005468397 podman[169993]: 2025-10-03 09:27:21.393063064 +0000 UTC m=+0.050725715 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 05:27:21 np0005468397 python3.9[170037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:22 np0005468397 python3.9[170165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759483641.1012273-171-105629529449445/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:22 np0005468397 python3.9[170315]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:27:23 np0005468397 python3.9[170467]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:27:24 np0005468397 python3.9[170619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:24 np0005468397 python3.9[170740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483643.5837064-230-39385502875249/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
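The mode=420 in this and the following copy tasks is the decimal rendering of an unquoted octal mode: 420 equals 0o644, so these JSON and YAML files land with rw-r--r-- permissions. A two-line check:

import stat

assert oct(420) == "0o644"
# Prepend the regular-file type bit to render the symbolic form.
print(stat.filemode(0o100000 | 420))  # -rw-r--r--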
Oct  3 05:27:25 np0005468397 python3.9[170890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:25 np0005468397 python3.9[170966]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:26 np0005468397 python3.9[171116]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:26 np0005468397 python3.9[171237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483645.8874664-230-17467818000766/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:27 np0005468397 python3.9[171387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:28 np0005468397 python3.9[171508]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483647.140004-230-272727602831921/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:28 np0005468397 python3.9[171658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:29 np0005468397 python3.9[171779]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483648.2615323-230-225470317875858/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:29 np0005468397 podman[157165]: time="2025-10-03T09:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 05:27:29 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Oct  3 05:27:29 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2134 "" "Go-http-client/1.1"
Oct  3 05:27:29 np0005468397 python3.9[171929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:30 np0005468397 python3.9[172050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483649.4177856-230-87039943502682/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:31 np0005468397 python3.9[172200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 05:27:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:27:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 05:27:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:27:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
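These openstack_network_exporter errors recur on every scrape: the exporter looks for the ovsdb-server, ovn-northd, and ovs-vswitchd control sockets under the directories mounted into its container (/var/run/openvswitch and /var/lib/openvswitch/ovn, per the config_data recorded later in this log), and none exist on this node. A hedged diagnostic sketch for checking the host side:

    import glob

    # Control sockets are created as *.ctl files next to the daemon pidfiles.
    # These paths mirror the exporter's volume mounts; adjust if yours differ.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control socket found")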
Oct  3 05:27:31 np0005468397 python3.9[172276]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
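Note on the mode values: the copy and file tasks above log mode=420 while the ceilometer_prom_exporter.yaml task logs mode=0644. These are the same permission. An unquoted 0644 in YAML is parsed as the octal integer 420, which Ansible then prints in decimal; quoting the mode ('0644') preserves the octal form. A quick Python check:

    assert oct(420) == '0o644'   # 420 decimal is 0o644 octal
    assert int('644', 8) == 420  # and 0o644 back to decimal is 420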
Oct  3 05:27:32 np0005468397 python3.9[172429]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:32 np0005468397 python3.9[172581]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:33 np0005468397 podman[172582]: 2025-10-03 09:27:33.099194278 +0000 UTC m=+0.075477772 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 05:27:33 np0005468397 python3.9[172758]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:27:34 np0005468397 python3.9[172910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:34 np0005468397 python3.9[173033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483653.8574197-349-27324926429610/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:27:35 np0005468397 python3.9[173109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:36 np0005468397 python3.9[173232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483653.8574197-349-27324926429610/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:27:36 np0005468397 python3.9[173384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:27:37 np0005468397 python3.9[173507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759483656.364967-349-27901031690234/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 05:27:38 np0005468397 python3.9[173659]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
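ansible-container_config_data is pointed at a directory and a filename pattern; here it picks up ceilometer_agent_ipmi.json, the container definition (underscore name), as opposed to ceilometer-agent-ipmi.json, the kolla config file copied earlier. A minimal sketch of the collection step this invocation implies (not the module's actual implementation):

    import glob
    import json
    import os

    config_path = "/var/lib/openstack/config/telemetry-power-monitoring"
    config_pattern = "ceilometer_agent_ipmi.json"

    configs = {}
    for path in glob.glob(os.path.join(config_path, config_pattern)):
        with open(path) as f:
            configs[os.path.basename(path)] = json.load(f)
    print(sorted(configs))  # -> ['ceilometer_agent_ipmi.json']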
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.947 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.948 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.949 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b391520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
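The burst of "Registering pollster" lines above is the agent queuing every pollster from the [pollsters] source onto a single ThreadPoolExecutor worker, which is why it first warned that there are more pollsters than worker threads. A sketch of that pattern (illustrative names, not ceilometer's code):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["disk.device.read.bytes", "network.outgoing.packets.error", "cpu"]

    def run_pollster(name):
        # Discovery and sample collection would happen here.
        return f"{name}: polled"

    # One worker, as in "Processing pollsters for [pollsters] with [1] threads":
    # submissions queue up and execute sequentially on the single thread.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(run_pollster, name) for name in pollsters]
        for future in futures:
            print(future.result())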
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
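Every pollster in this cycle runs the local_instances discovery first, and because no guest instances exist yet on this freshly deployed compute node, each one logs "Skip pollster ..., no resources found this cycle". The control flow the log describes, as a sketch:

    def discover_local_instances():
        # No instances are running on this node yet, so discovery is empty.
        return []

    def internal_pollster_run(pollster_name):
        resources = discover_local_instances()
        if not resources:
            print(f"Skip pollster {pollster_name}, no resources found this cycle")
            return
        # Otherwise each resource would be polled and samples published.

    internal_pollster_run("disk.device.read.bytes")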
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:38 np0005468397 ceilometer_agent_compute[153651]: 2025-10-03 09:27:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 05:27:39 np0005468397 podman[173812]: 2025-10-03 09:27:39.395153277 +0000 UTC m=+0.074183831 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
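The health_status=healthy events come from podman periodically executing the container's configured healthcheck command ('/openstack/healthcheck compute') and journaling the result along with the failing streak. The same check can be triggered on demand; a sketch using the podman CLI from Python (exit status 0 means healthy):

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")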
Oct  3 05:27:39 np0005468397 python3.9[173813]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 05:27:40 np0005468397 python3[173984]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 05:27:46 np0005468397 podman[174072]: 2025-10-03 09:27:46.704402244 +0000 UTC m=+0.222450155 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Oct  3 05:27:46 np0005468397 podman[173996]: 2025-10-03 09:27:46.750686726 +0000 UTC m=+5.948663774 image pull 4e3fcb5b1fba62258ff06f167ae06a1ec1b5619d7c6c0d986039bf8e54f8eb69 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Oct  3 05:27:46 np0005468397 podman[174116]: 2025-10-03 09:27:46.940868242 +0000 UTC m=+0.096976706 container create e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 05:27:46 np0005468397 podman[174116]: 2025-10-03 09:27:46.866739013 +0000 UTC m=+0.022847497 image pull 4e3fcb5b1fba62258ff06f167ae06a1ec1b5619d7c6c0d986039bf8e54f8eb69 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Oct  3 05:27:46 np0005468397 python3[173984]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
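[editor's note] The ansible-edpm_container_manage debug entry above is the literal podman create command that edpm_ansible derives from the config_data label. As a quick cross-check that the created container actually carries that definition, queries along these lines should work (a sketch; the container name is taken from the log, and the Go-template field names follow podman's container-inspect JSON):

    # Labels applied at create time (compare with the --label flags above)
    podman inspect ceilometer_agent_ipmi --format '{{.Config.Labels.config_id}} {{.Config.Labels.container_name}}'
    # Network mode requested via --network host
    podman inspect ceilometer_agent_ipmi --format '{{.HostConfig.NetworkMode}}'
    # Bind mounts, to compare against the 'volumes' list in config_data
    podman inspect ceilometer_agent_ipmi --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'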
Oct  3 05:27:47 np0005468397 python3.9[174304]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:27:48 np0005468397 python3.9[174458]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:49 np0005468397 python3.9[174609]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483668.6353288-427-63058477649473/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:27:50 np0005468397 python3.9[174686]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
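[editor's note] The four ansible tasks above (stat, file, copy, systemd) install the systemd wrapper unit for the container. A rough shell equivalent of what they perform on the host, assuming RENDERED_UNIT points at the rendered unit file whose content the copy task logs only as NOT_LOGGING_PARAMETER:

    # ansible-ansible.builtin.stat: probe for an optional podman drop-in marker
    stat /etc/sysconfig/podman_drop_in
    # ansible-file state=absent: clear any stale .requires directory for the unit
    rm -rf /etc/systemd/system/edpm_ceilometer_agent_ipmi.requires
    # ansible-copy: install the unit file with the logged owner/group/mode
    install -o root -g root -m 0644 "$RENDERED_UNIT" /etc/systemd/system/edpm_ceilometer_agent_ipmi.service
    # ansible-systemd daemon_reload=True: make systemd pick up the new unit
    systemctl daemon-reload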
Oct  3 05:27:50 np0005468397 systemd[1]: Reloading.
Oct  3 05:27:50 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:27:50 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:27:50 np0005468397 podman[174688]: 2025-10-03 09:27:50.448442723 +0000 UTC m=+0.111898125 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 05:27:51 np0005468397 python3.9[174823]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
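[editor's note] The ansible-systemd invocation above (state=restarted, enabled=True) corresponds to roughly the following systemctl calls; the "Reloading." entry that follows is systemd re-reading unit files as part of this step:

    systemctl enable edpm_ceilometer_agent_ipmi.service    # enabled=True
    systemctl restart edpm_ceilometer_agent_ipmi.service   # state=restarted -> "Starting ceilometer_agent_ipmi container..."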
Oct  3 05:27:51 np0005468397 systemd[1]: Reloading.
Oct  3 05:27:51 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:27:51 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:27:51 np0005468397 systemd[1]: Starting ceilometer_agent_ipmi container...
Oct  3 05:27:51 np0005468397 podman[174862]: 2025-10-03 09:27:51.755451063 +0000 UTC m=+0.165464931 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 05:27:51 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:27:51 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:27:51 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:27:51 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct  3 05:27:51 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct  3 05:27:51 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.
Oct  3 05:27:52 np0005468397 podman[174864]: 2025-10-03 09:27:52.042990835 +0000 UTC m=+0.448871120 container init e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001)
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + sudo -E kolla_set_configs
Oct  3 05:27:52 np0005468397 podman[174864]: 2025-10-03 09:27:52.07639868 +0000 UTC m=+0.482279035 container start e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Validating config file
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Copying service configuration files
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: INFO:__main__:Writing out command to execute
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: ++ cat /run_command
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + ARGS=
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + sudo kolla_copy_cacerts
Oct  3 05:27:52 np0005468397 podman[174864]: ceilometer_agent_ipmi
Oct  3 05:27:52 np0005468397 systemd[1]: Started ceilometer_agent_ipmi container.
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + [[ ! -n '' ]]
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + . kolla_extend_start
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + umask 0022
Oct  3 05:27:52 np0005468397 ceilometer_agent_ipmi[174899]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
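[editor's note] The "+" xtrace lines above come from kolla_start running inside the container. Reconstructed from this trace alone (a sketch of the control flow, not the verbatim script shipped in the image), the startup sequence is approximately:

    sudo -E kolla_set_configs       # copy files per /var/lib/kolla/config_files/config.json (strategy COPY_ALWAYS)
    CMD=$(cat /run_command)         # here: /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
    ARGS=
    sudo kolla_copy_cacerts         # install the mounted CA bundle into the container trust store
    if [[ ! -n "$ARGS" ]]; then
        . kolla_extend_start        # image-specific extension hook
    fi
    echo "Running command: '$CMD'"
    umask 0022
    exec $CMD                       # the polling agent becomes the service's main process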
Oct  3 05:27:52 np0005468397 podman[174907]: 2025-10-03 09:27:52.248072091 +0000 UTC m=+0.161674799 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 05:27:52 np0005468397 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-75aaa152195b60f9.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:27:52 np0005468397 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-75aaa152195b60f9.service: Failed with result 'exit-code'.
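[editor's note] The failed transient unit above is the first scheduled health probe firing while the agent is still initializing: the health_status event two entries earlier reports health_status=starting with health_failing_streak=1, so the one-shot healthcheck unit wrapping "podman healthcheck run" exits 1. Once the agent is up, the probe can be repeated by hand (a sketch; podman healthcheck run exits 0 on a passing check):

    podman healthcheck run ceilometer_agent_ipmi && echo healthy
    # Health state as recorded in the container's inspect data
    podman inspect ceilometer_agent_ipmi --format '{{.State.Health.Status}}'   # on older podman the field is .State.Healthcheck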
Oct  3 05:27:53 np0005468397 python3.9[175082]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.098 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.099 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.100 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.101 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.102 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.103 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.104 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.105 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.106 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.107 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.108 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.109 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.110 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.111 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.112 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.113 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.134 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.136 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.137 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
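[editor's note] The three INFO lines above show the agent probing /etc/ceilometer/pollsters.d and finding it empty. Dynamic pollsters are declared there as YAML files; a sketch of a definition roughly matching the shape in the upstream ceilometer documentation (field values are illustrative assumptions), embedded as a string and validated with PyYAML:

    # Sketch: shape of a dynamic pollster definition that would be picked
    # up from /etc/ceilometer/pollsters.d/*.yaml. Field names follow the
    # upstream ceilometer docs; the endpoint and meter are illustrative.
    import yaml

    POLLSTER_YAML = """
    - name: "dynamic.network.services.vpn.connection"
      sample_type: "gauge"
      unit: "ipsec_site_connection"
      value_attribute: "status"
      endpoint_type: "network"
      url_path: "v2.0/vpn/ipsec-site-connections"
    """

    definitions = yaml.safe_load(POLLSTER_YAML)
    assert definitions[0]["sample_type"] == "gauge"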
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.239 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp3zb4oanz/privsep.sock']
Oct  3 05:27:53 np0005468397 python3.9[175242]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 05:27:53 np0005468397 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.983 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.983 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3zb4oanz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.835 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.842 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.845 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct  3 05:27:53 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:53.845 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
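[editor's note] The sequence above is oslo.privsep bootstrapping: the unprivileged agent shells out through sudo and ceilometer-rootwrap to spawn a root helper, then talks to it over the UNIX socket in /tmp; the child's log lines (pid 19, uid/gid 0/0) flush slightly out of timestamp order, which is normal. The privsep context named on the helper command line (ceilometer.privsep.sys_admin_pctxt) is declared in Python. A minimal sketch of such a declaration with the public oslo.privsep API — the capability list is copied from the log line above; the rest is illustrative, not ceilometer's actual module:

    # Sketch: declaring a privsep context like ceilometer.privsep.sys_admin_pctxt.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',                  # prefix used for rootwrap/config lookup
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_protected_file(path):
        # Executes inside the root privsep daemon, not the calling agent.
        with open(path, 'rb') as f:
            return f.read()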
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.096 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.096 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.098 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.098 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.099 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.099 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.099 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.099 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.100 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.100 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.100 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.101 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.101 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
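[editor's note] Two distinct failures are being skipped above: the hardware.ipmi.{current,fan,temperature,voltage} pollsters bail out cleanly because no IPMITool is available on this (virtualized) host, while every hardware.ipmi.node.* pollster dies with the same Python 3 TypeError. That message is what object.__new__ raises when a class overrides __new__ and forwards extra constructor arguments up to object — a common singleton pattern. A minimal repro of the error, deliberately unrelated to ceilometer's actual classes:

    # Minimal repro of "object.__new__() takes exactly one argument
    # (the type to instantiate)". The class is illustrative only.
    class NodeManager:
        _inst = None

        def __new__(cls, *args, **kwargs):
            if cls._inst is None:
                # The bug: because this class overrides __new__,
                # object.__new__ rejects any extra arguments passed up.
                cls._inst = super().__new__(cls, *args, **kwargs)
            return cls._inst

        def __init__(self, retries):
            self.retries = retries

    NodeManager(3)  # TypeError: object.__new__() takes exactly one argument

Dropping the argument forwarding (super().__new__(cls)) removes the error while keeping the singleton behaviour.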
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.106 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.106 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.106 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.107 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.107 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.107 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.107 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.108 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.108 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.108 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.108 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.109 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.109 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.109 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.110 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.110 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.110 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.111 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.111 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.111 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.111 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.112 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.112 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.112 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.112 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.113 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.113 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.113 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.113 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.114 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.114 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.114 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.114 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.115 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.115 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.115 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.116 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.116 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.116 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.116 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.117 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.117 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.117 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.117 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.118 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.118 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.118 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.119 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.119 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.119 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.119 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.120 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.120 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.120 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.120 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.121 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.121 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.121 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.121 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.122 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.122 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.122 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.122 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.123 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.123 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.123 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.123 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.124 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.124 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.124 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.124 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.125 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.126 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.127 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.128 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.129 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.129 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.129 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.130 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.130 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.131 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.131 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.131 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.131 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.132 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.132 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.132 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.132 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.133 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.133 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.133 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.134 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.134 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.134 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.134 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.135 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.135 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.135 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.135 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.136 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.136 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.136 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.137 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.137 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.137 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.137 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.138 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.138 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.138 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.138 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.139 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.139 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.139 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.140 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.140 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.140 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.140 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.141 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.141 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.141 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.141 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.142 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.142 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.142 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.142 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.143 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.143 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.143 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.143 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.143 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.144 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.144 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.144 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.144 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.144 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.144 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.144 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.145 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.145 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.145 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.145 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.145 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.145 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.146 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.146 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.146 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.146 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.146 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.146 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.146 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.147 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.147 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.147 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.147 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.147 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.147 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.147 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.148 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.148 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.148 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.148 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.148 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.148 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.149 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.149 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.149 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.149 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.149 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.149 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.150 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.150 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.150 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.150 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.150 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.150 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.150 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.151 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.151 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.151 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.151 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.151 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.151 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.152 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.152 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.152 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.152 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.152 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.152 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
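The dump above comes from oslo.config: cotyledon's oslo_config_glue calls log_opt_values() at service start whenever debug is enabled, and that method prints every registered option (the cfg.py:2602/2609 suffixes are the emitting source lines). A minimal sketch of the same mechanism, assuming just two of the [oslo_messaging_rabbit] options shown:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    logging.basicConfig(level=logging.DEBUG)

    CONF = cfg.ConfigOpts()
    grp = cfg.OptGroup('oslo_messaging_rabbit')
    CONF.register_group(grp)
    CONF.register_opts([
        cfg.IntOpt('heartbeat_timeout_threshold', default=60),
        cfg.BoolOpt('rabbit_quorum_queue', default=False),
    ], group=grp)

    # Parse the same config file the agent uses, then dump all options
    # the same way the lines above were produced.
    CONF(['--config-file', '/etc/ceilometer/ceilometer.conf'])
    CONF.log_opt_values(LOG, logging.DEBUG)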
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.152 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct  3 05:27:54 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:27:54.155 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
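The "Config file" line shows the polling definition the agent loaded: a single source named 'pollsters' that polls every meter matching hardware.* on a 120-second interval. A sketch of that structure and the fnmatch-style meter selection it implies (ceilometer's real loader performs more validation than this):

    import fnmatch
    import yaml  # PyYAML

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - hardware.*
    """

    cfg = yaml.safe_load(POLLING_YAML)
    candidates = ['hardware.ipmi.node.power', 'cpu', 'hardware.ipmi.temperature']
    for src in cfg['sources']:
        matched = [m for m in candidates
                   if any(fnmatch.fnmatch(m, pat) for pat in src['meters'])]
        print(src['name'], src['interval'], matched)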
Oct  3 05:27:54 np0005468397 python3[175399]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 05:27:59 np0005468397 podman[157165]: time="2025-10-03T09:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 05:27:59 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 15574 "" "Go-http-client/1.1"
Oct  3 05:27:59 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2576 "" "Go-http-client/1.1"
Oct  3 05:28:00 np0005468397 podman[175413]: 2025-10-03 09:28:00.138556679 +0000 UTC m=+5.314405222 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Oct  3 05:28:00 np0005468397 podman[175641]: 2025-10-03 09:28:00.347106777 +0000 UTC m=+0.074499830 container create 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, version=9.4, com.redhat.component=ubi9-container, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-type=git, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9)
Oct  3 05:28:00 np0005468397 podman[175641]: 2025-10-03 09:28:00.310082344 +0000 UTC m=+0.037475397 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Oct  3 05:28:00 np0005468397 python3[175399]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
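The PODMAN-CONTAINER-DEBUG line is the flat argv that ansible-edpm_container_manage derived from the config_data dict logged just above it. A hypothetical sketch of that mapping (the real module also emits the labels, conmon pidfile, log driver, and healthcheck flags visible in the logged command):

    def podman_create_args(name, cd):
        # cd is the config_data dict from the log; values such as
        # 'privileged': 'true' arrive as strings, hence str() below.
        args = ['podman', 'create', '--name', name]
        for key, val in cd.get('environment', {}).items():
            args += ['--env', f'{key}={val}']
        if cd.get('net') == 'host':
            args += ['--network', 'host']
        if str(cd.get('privileged', '')).lower() == 'true':
            args += ['--privileged=True']
        for port in cd.get('ports', []):
            args += ['--publish', port]
        for vol in cd.get('volumes', []):
            args += ['--volume', vol]
        args.append(cd['image'])
        cmd = cd.get('command')
        if cmd:
            args += cmd.split() if isinstance(cmd, str) else list(cmd)
        return args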
Oct  3 05:28:01 np0005468397 python3.9[175832]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:28:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 05:28:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:28:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:28:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 05:28:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
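These exporter errors recur because this node runs neither ovsdb-server nor ovn-northd, so there are no appctl control sockets to connect to and no datapath to query. A quick probe for the conventional socket locations (the paths below are the packaged defaults, an assumption here):

    import glob

    for pattern in ('/var/run/openvswitch/*.ctl', '/var/run/ovn/*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits if hits else 'no control socket files found')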
Oct  3 05:28:02 np0005468397 python3.9[175986]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:02 np0005468397 python3.9[176137]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483682.1836193-489-4067261845747/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:03 np0005468397 python3.9[176213]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 05:28:03 np0005468397 systemd[1]: Reloading.
Oct  3 05:28:03 np0005468397 podman[176215]: 2025-10-03 09:28:03.506394009 +0000 UTC m=+0.057441960 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:28:03 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:28:03 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:28:04 np0005468397 python3.9[176348]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 05:28:04 np0005468397 systemd[1]: Reloading.
Oct  3 05:28:04 np0005468397 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 05:28:04 np0005468397 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 05:28:04 np0005468397 systemd[1]: Starting kepler container...
Oct  3 05:28:04 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:28:04 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.
Oct  3 05:28:04 np0005468397 podman[176388]: 2025-10-03 09:28:04.843008583 +0000 UTC m=+0.134596536 container init 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, release=1214.1726694543, vendor=Red Hat, Inc.)
Oct  3 05:28:04 np0005468397 kepler[176403]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct  3 05:28:04 np0005468397 kepler[176403]: I1003 09:28:04.869875       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct  3 05:28:04 np0005468397 kepler[176403]: I1003 09:28:04.869989       1 config.go:293] using gCgroup ID in the BPF program: true
Oct  3 05:28:04 np0005468397 kepler[176403]: I1003 09:28:04.870011       1 config.go:295] kernel version: 5.14
Oct  3 05:28:04 np0005468397 kepler[176403]: I1003 09:28:04.870593       1 power.go:78] Unable to obtain power, use estimate method
Oct  3 05:28:04 np0005468397 kepler[176403]: I1003 09:28:04.870616       1 redfish.go:169] failed to get redfish credential file path
Oct  3 05:28:04 np0005468397 kepler[176403]: I1003 09:28:04.870930       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct  3 05:28:04 np0005468397 kepler[176403]: I1003 09:28:04.870943       1 power.go:79] using none to obtain power
Oct  3 05:28:04 np0005468397 kepler[176403]: E1003 09:28:04.870955       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct  3 05:28:04 np0005468397 kepler[176403]: E1003 09:28:04.870977       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct  3 05:28:04 np0005468397 kepler[176403]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct  3 05:28:04 np0005468397 kepler[176403]: I1003 09:28:04.872452       1 exporter.go:84] Number of CPUs: 8
Oct  3 05:28:04 np0005468397 podman[176388]: 2025-10-03 09:28:04.877837616 +0000 UTC m=+0.169425519 container start 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, vcs-type=git, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 05:28:04 np0005468397 podman[176388]: kepler
Oct  3 05:28:04 np0005468397 systemd[1]: Started kepler container.
Oct  3 05:28:04 np0005468397 podman[176413]: 2025-10-03 09:28:04.974299192 +0000 UTC m=+0.085903337 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.29.0, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, io.openshift.expose-services=, container_name=kepler, name=ubi9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 05:28:04 np0005468397 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-733796f8c939088c.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:28:04 np0005468397 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-733796f8c939088c.service: Failed with result 'exit-code'.
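The transient unit that just failed is the first podman healthcheck run against the kepler container: at 09:28:04 the health_status is still 'starting' (health_failing_streak=1) because the check fired before the exporter began listening. The same check can be repeated by hand once the container is up:

    import subprocess

    # Container ID taken from the 'container init' line above.
    CID = '02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8'
    result = subprocess.run(['podman', 'healthcheck', 'run', CID])
    print('healthy' if result.returncode == 0
          else f'unhealthy (rc={result.returncode})')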
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.437886       1 watcher.go:83] Using in cluster k8s config
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.438488       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct  3 05:28:05 np0005468397 kepler[176403]: E1003 09:28:05.438624       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
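This watcher failure is expected outside Kubernetes: in-cluster configuration only exists when the kubelet injects KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT into a pod, and on an EDPM bare node they are unset, so kepler disables its APIserver watcher and carries on. The gate amounts to:

    import os

    in_cluster = bool(os.environ.get('KUBERNETES_SERVICE_HOST')
                      and os.environ.get('KUBERNETES_SERVICE_PORT'))
    print('k8s APIserver watcher enabled' if in_cluster
          else 'k8s APIserver watcher was not enabled')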
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.446991       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.447062       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.451311       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.451363       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.461775       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.461846       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.461874       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.473985       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474038       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474048       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474057       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474067       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474088       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
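With no RAPL, ACPI, or Redfish power source available on this VM (the "using none to obtain power" line above), kepler falls back to estimation: regressor models predict node platform and component power, and the Ratio Power Model splits that across processes in proportion to the logged feature, bpf_cpu_time_ms. A toy sketch of that proportional split, not kepler's actual code:

    def ratio_split(node_power_w, cpu_time_ms_by_proc):
        """Split node power across processes by CPU-time share."""
        total = sum(cpu_time_ms_by_proc.values()) or 1
        return {proc: node_power_w * ms / total
                for proc, ms in cpu_time_ms_by_proc.items()}

    print(ratio_split(40.0, {'qemu-kvm': 900.0, 'kepler': 100.0}))
    # -> {'qemu-kvm': 36.0, 'kepler': 4.0}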
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474196       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474287       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474322       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474349       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.474546       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct  3 05:28:05 np0005468397 kepler[176403]: I1003 09:28:05.475210       1 exporter.go:208] Started Kepler in 605.520885ms
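Startup completes in about 600 ms and the exporter is listening on 0.0.0.0:8888 with process, container, VM, and node collectors registered. A quick scrape, assuming the conventional Prometheus /metrics path:

    import urllib.request

    with urllib.request.urlopen('http://127.0.0.1:8888/metrics', timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith('kepler_node_'):
                print(line)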
Oct  3 05:28:05 np0005468397 python3.9[176589]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:28:05 np0005468397 systemd[1]: Stopping ceilometer_agent_ipmi container...
Oct  3 05:28:06 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:28:06.024 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct  3 05:28:06 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:28:06.126 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Oct  3 05:28:06 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:28:06.126 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Oct  3 05:28:06 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:28:06.127 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Oct  3 05:28:06 np0005468397 ceilometer_agent_ipmi[174899]: 2025-10-03 09:28:06.141 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Oct  3 05:28:06 np0005468397 systemd[1]: libpod-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope: Deactivated successfully.
Oct  3 05:28:06 np0005468397 systemd[1]: libpod-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope: Consumed 2.400s CPU time.
Oct  3 05:28:06 np0005468397 podman[176603]: 2025-10-03 09:28:06.321309838 +0000 UTC m=+0.547219235 container died e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Oct  3 05:28:06 np0005468397 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-75aaa152195b60f9.timer: Deactivated successfully.
Oct  3 05:28:06 np0005468397 systemd[1]: Stopped /usr/bin/podman healthcheck run e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.
Oct  3 05:28:06 np0005468397 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-userdata-shm.mount: Deactivated successfully.
Oct  3 05:28:06 np0005468397 systemd[1]: var-lib-containers-storage-overlay-82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c-merged.mount: Deactivated successfully.
Oct  3 05:28:07 np0005468397 podman[176603]: 2025-10-03 09:28:07.462678904 +0000 UTC m=+1.688588271 container cleanup e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 05:28:07 np0005468397 podman[176603]: ceilometer_agent_ipmi
Oct  3 05:28:07 np0005468397 podman[176630]: ceilometer_agent_ipmi
Oct  3 05:28:07 np0005468397 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Oct  3 05:28:07 np0005468397 systemd[1]: Stopped ceilometer_agent_ipmi container.
Oct  3 05:28:07 np0005468397 systemd[1]: Starting ceilometer_agent_ipmi container...
Oct  3 05:28:07 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:28:07 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 05:28:07 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 05:28:07 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct  3 05:28:07 np0005468397 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct  3 05:28:07 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.
Oct  3 05:28:07 np0005468397 podman[176643]: 2025-10-03 09:28:07.752436371 +0000 UTC m=+0.175790586 container init e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + sudo -E kolla_set_configs
Oct  3 05:28:07 np0005468397 podman[176643]: 2025-10-03 09:28:07.787080401 +0000 UTC m=+0.210434596 container start e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 05:28:07 np0005468397 podman[176643]: ceilometer_agent_ipmi
Oct  3 05:28:07 np0005468397 systemd[1]: Started ceilometer_agent_ipmi container.
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Validating config file
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Copying service configuration files
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: INFO:__main__:Writing out command to execute
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: ++ cat /run_command
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + ARGS=
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + sudo kolla_copy_cacerts
Oct  3 05:28:07 np0005468397 podman[176666]: 2025-10-03 09:28:07.879366923 +0000 UTC m=+0.081515370 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 05:28:07 np0005468397 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-6cef6d41888ae8b6.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:28:07 np0005468397 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-6cef6d41888ae8b6.service: Failed with result 'exit-code'.
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + [[ ! -n '' ]]
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + . kolla_extend_start
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + umask 0022
Oct  3 05:28:07 np0005468397 ceilometer_agent_ipmi[176659]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
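The kolla_start trace above is the entire container entrypoint: kolla_set_configs reads /var/lib/kolla/config_files/config.json, copies each listed file into place (the Deleting/Copying/Setting permission lines), writes the service command to /run_command, and the script then execs it. A minimal sketch of that copy loop, assuming the standard kolla config.json shape with 'command' and 'config_files' keys:

    import json
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for item in cfg.get('config_files', []):
        # Real kolla_set_configs also handles globs, directories, and
        # the owner/perm fields; plain file copies only here.
        shutil.copy(item['source'], item['dest'])

    print('command:', cfg['command'])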
Oct  3 05:28:08 np0005468397 python3.9[176839]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 05:28:08 np0005468397 systemd[1]: Stopping kepler container...
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.812 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.812 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.812 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.812 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.812 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.813 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.813 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.813 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.813 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.813 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.813 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.813 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.814 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.814 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.814 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.814 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.814 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.814 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.814 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.814 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.815 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.815 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.815 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.815 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.815 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.815 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.815 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.815 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.816 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.817 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.817 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.817 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.817 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.817 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.817 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.817 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.817 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.818 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.819 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.819 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.819 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.819 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.819 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.819 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.819 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.820 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.820 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.820 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.820 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.820 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.820 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.820 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.820 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.821 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.821 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.821 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.821 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.821 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.821 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.821 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.821 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.822 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.822 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.822 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.822 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.822 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.822 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.822 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.822 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.823 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.823 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.823 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.823 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.823 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.823 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.823 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.823 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.824 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.824 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.824 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.824 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.824 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.824 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.824 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.824 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.825 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.825 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.825 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.825 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.825 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.826 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.826 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.826 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.826 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.826 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.826 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.826 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.826 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.827 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.827 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.827 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.828 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.828 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.828 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.828 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.828 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.828 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.828 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.828 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.829 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.830 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.830 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.830 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.830 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.830 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.830 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.830 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.831 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.831 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.831 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.832 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.833 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
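[editor's note] The block ending at the "****" separator above is oslo.config's standard startup dump: cotyledon's config glue calls log_opt_values() once per worker, printing every registered option at DEBUG level. Lines emitted from cfg.py:2602 are [DEFAULT] options, lines from cfg.py:2609 are "group.option" pairs, and values rendered as **** belong to options registered with secret=True (transport URLs, access keys, passwords), which oslo.config masks automatically. A minimal sketch of the mechanism, using illustrative registrations rather than Ceilometer's real option definitions:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Illustrative options; secret=True is what makes oslo.config
    # print "****" in the dump instead of the real value.
    CONF.register_opts([
        cfg.IntOpt('batch_size', default=50),
        cfg.StrOpt('transport_url', secret=True),
    ])

    CONF([])  # parse no CLI args here; a real service passes its argv
    CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option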
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.852 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.854 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.856 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
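[editor's note] The three INFO lines above show the agent scanning /etc/ceilometer/pollsters.d for dynamic pollsters, YAML files that define new meters without writing Python, and finding the directory empty. A hedged sketch of what one such definition could look like, adapted from the upstream dynamic-pollster documentation (the file name, meter name, and URL are illustrative, not from this host):

    # /etc/ceilometer/pollsters.d/vpn-connections.yaml (illustrative)
    ---
    - name: "dynamic.network.services.vpn.connection"
      sample_type: "gauge"
      unit: "ipsec_site_connection"
      value_attribute: "status"
      endpoint_type: "network"
      url_path: "v2.0/vpn/ipsec-site-connections"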
Oct  3 05:28:08 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:08.872 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpp82wh0d4/privsep.sock']
Oct  3 05:28:08 np0005468397 kepler[176403]: I1003 09:28:08.936386       1 exporter.go:218] Received shutdown signal
Oct  3 05:28:08 np0005468397 kepler[176403]: I1003 09:28:08.936631       1 exporter.go:226] Exiting...
Oct  3 05:28:09 np0005468397 systemd[1]: libpod-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope: Deactivated successfully.
Oct  3 05:28:09 np0005468397 podman[176843]: 2025-10-03 09:28:09.137714827 +0000 UTC m=+0.326415532 container died 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct  3 05:28:09 np0005468397 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-733796f8c939088c.timer: Deactivated successfully.
Oct  3 05:28:09 np0005468397 systemd[1]: Stopped /usr/bin/podman healthcheck run 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.
Oct  3 05:28:09 np0005468397 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-userdata-shm.mount: Deactivated successfully.
Oct  3 05:28:09 np0005468397 systemd[1]: var-lib-containers-storage-overlay-5893b20fbf3101d69c6051d87d970cd31b130343e0eb2911a2fa23bf76bf9532-merged.mount: Deactivated successfully.
Oct  3 05:28:09 np0005468397 podman[176877]: 2025-10-03 09:28:09.506488094 +0000 UTC m=+0.062080474 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.597 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.597 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpp82wh0d4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.466 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.470 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.472 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.472 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
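[editor's note] The oslo.privsep lines above are the two halves of a privilege-separation startup: the unprivileged agent (pid 12) runs the sudo/rootwrap command logged at 09:28:08.872, the helper daemonizes, connects back over the unix socket under /tmp, and reports running as uid/gid 0/0 with the listed effective/permitted capabilities and an empty inheritable set. The pid-19 lines carry earlier timestamps (09:28:09.466-.472) yet appear after the pid-12 line at .597, likely because the daemon relays its log records back through the parent's logger. The context named on the command line, ceilometer.privsep.sys_admin_pctxt, follows the usual oslo.privsep pattern; a sketch of such a definition (the cfg_section value is an assumption, not read from this host):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # Capability list matches the set reported by the daemon above.
    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',  # assumed section name
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_CHOWN,
                      capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_DAC_READ_SEARCH,
                      capabilities.CAP_FOWNER,
                      capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN])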
Oct  3 05:28:09 np0005468397 podman[176843]: 2025-10-03 09:28:09.651182222 +0000 UTC m=+0.839882927 container cleanup 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, version=9.4, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Oct  3 05:28:09 np0005468397 podman[176843]: kepler
Oct  3 05:28:09 np0005468397 podman[176904]: kepler
Oct  3 05:28:09 np0005468397 systemd[1]: edpm_kepler.service: Deactivated successfully.
Oct  3 05:28:09 np0005468397 systemd[1]: Stopped kepler container.
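[editor's note] This is an orderly stop, not a crash: the kepler exporter logs a received shutdown signal and exits, podman reports the container dying and being cleaned up, systemd stops the per-container healthcheck timer (the transient <container-id>-<hash>.timer that drives "podman healthcheck run") and the shm/overlay mounts, and edpm_kepler.service deactivates cleanly. The stop/start pair, with "Starting kepler container..." following moments later, is consistent with an explicit service restart (for example from a redeploy) rather than a crash loop; the container is also marked 'restart': 'always' in the config_data shown in the "container died" event. A hedged sketch of the kind of unit edpm_ansible generates for such containers (unit contents are assumptions, not taken from this host):

    # /etc/systemd/system/edpm_kepler.service (illustrative)
    [Unit]
    Description=kepler container

    [Service]
    Restart=always
    ExecStart=/usr/bin/podman start -a kepler
    ExecStop=/usr/bin/podman stop -t 10 kepler

    [Install]
    WantedBy=multi-user.target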
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.738 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.738 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.739 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.739 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.740 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.740 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.740 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.740 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.740 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.740 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.740 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.740 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.741 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
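[editor's note] The twelve "Skip loading extension" DEBUG lines and the closing WARNING explain why this agent ends up with nothing to poll: the raw-sensor pollsters (hardware.ipmi.current/fan/temperature/voltage) bail out because ipmitool cannot reach a local BMC, the Intel Node Manager pollsters (hardware.ipmi.node.*) fail in their constructor with the object.__new__() TypeError, and with every entry point in the 'ipmi' namespace skipped the agent keeps running with an empty pollster set. Ceilometer loads these entry points through stevedore and converts constructor failures into skips; a minimal sketch of that pattern (the entry-point namespace is an assumption; the callback name is taken from the log):

    import logging
    from stevedore import extension

    LOG = logging.getLogger(__name__)

    def _catch_extension_load_error(mgr, ep, exc):
        # Mirrors the DEBUG lines above: log the failure and skip the
        # pollster instead of aborting agent startup.
        LOG.debug("Skip loading extension for %s: %s", ep.name, exc)

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.ipmi',  # assumed namespace
        invoke_on_load=True,
        on_load_failure_callback=_catch_extension_load_error)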
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.743 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.744 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.744 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.744 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.744 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.744 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.744 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.744 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.745 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.745 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.745 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.745 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.745 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.745 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.745 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.745 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.746 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.747 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.748 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 systemd[1]: Starting kepler container...
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.749 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.750 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.751 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.752 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.753 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.754 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.755 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.756 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.757 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.757 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.757 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.757 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.757 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.757 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.757 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.757 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.758 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.759 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.760 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
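[Note] The block ending at the asterisk line above is oslo.config's standard option dump: at service startup, cotyledon's oslo_config_glue asks the parsed ConfigOpts object to log every registered option at DEBUG, and options registered with secret=True (transport_url, publisher.telemetry_secret, the rgw keys, coordination.backend_url) are masked as '****'. A minimal sketch of the mechanism, with illustrative option names rather than the full ceilometer set:

    # Sketch of how the dump above is produced (option names illustrative).
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml'),
        cfg.StrOpt('transport_url', secret=True),  # secret=True renders as ****
    ])
    CONF([])  # parse the (empty) command line and any config files
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits the per-option lines plus
                                             # the asterisk header/footer rows
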
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.764 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Oct  3 05:28:09 np0005468397 ceilometer_agent_ipmi[176659]: 2025-10-03 09:28:09.767 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
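[Note] The Config file dict logged by ceilometer.agent is the parsed polling definition named by polling.cfg_file above (polling.yaml). As a sketch, the file on disk would look roughly like:

    sources:
      - name: pollsters
        interval: 120
        meters:
          - 'hardware.*'
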
Oct  3 05:28:09 np0005468397 systemd[1]: Started libcrun container.
Oct  3 05:28:09 np0005468397 systemd[1]: Started /usr/bin/podman healthcheck run 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.
Oct  3 05:28:10 np0005468397 podman[176918]: 2025-10-03 09:28:10.05903819 +0000 UTC m=+0.291474472 container init 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 05:28:10 np0005468397 kepler[176935]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.083175       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.083324       1 config.go:293] using gCgroup ID in the BPF program: true
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.083339       1 config.go:295] kernel version: 5.14
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.083932       1 power.go:78] Unable to obtain power, use estimate method
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.083959       1 redfish.go:169] failed to get redfish credential file path
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.084267       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.084277       1 power.go:79] using none to obtain power
Oct  3 05:28:10 np0005468397 kepler[176935]: E1003 09:28:10.084288       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct  3 05:28:10 np0005468397 kepler[176935]: E1003 09:28:10.084311       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct  3 05:28:10 np0005468397 kepler[176935]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.085796       1 exporter.go:84] Number of CPUs: 8
Oct  3 05:28:10 np0005468397 podman[176918]: 2025-10-03 09:28:10.095298783 +0000 UTC m=+0.327735045 container start 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, version=9.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0)
Oct  3 05:28:10 np0005468397 podman[176918]: kepler
Oct  3 05:28:10 np0005468397 systemd[1]: Started kepler container.
Oct  3 05:28:10 np0005468397 podman[176945]: 2025-10-03 09:28:10.181118249 +0000 UTC m=+0.076414688 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 05:28:10 np0005468397 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 05:28:10 np0005468397 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.service: Failed with result 'exit-code'.
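[Note] The two systemd lines above record the transient healthcheck unit for the kepler container exiting 1 while podman still reports health_status=starting (health_failing_streak=1): the first probe ran before the exporter was listening. A way to check podman's current verdict afterwards, as a sketch using the container name from the log (on older podman the template key is .State.Healthcheck rather than .State.Health):

    # Inspect the health state podman records for the kepler container.
    import json, subprocess
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}', 'kepler'],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout))  # Status, FailingStreak, recent probe Log
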
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.581750       1 watcher.go:83] Using in cluster k8s config
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.582224       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct  3 05:28:10 np0005468397 kepler[176935]: E1003 09:28:10.582343       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
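[Note] The watcher messages above are expected outside Kubernetes: client-go's in-cluster config requires the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables that the API server injects into pods, and under podman/systemd they are unset, so Kepler's k8s APIserver watcher stays disabled. The check amounts to (sketch):

    import os
    # In-cluster detection as client-go does it: both variables must be set.
    in_cluster = bool(os.environ.get('KUBERNETES_SERVICE_HOST')) and \
                 bool(os.environ.get('KUBERNETES_SERVICE_PORT'))
    print(in_cluster)  # False under podman/systemd, hence the watcher is skipped
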
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.585677       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.585755       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.588990       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.589022       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.595863       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.595903       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.595917       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604723       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604754       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604759       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604763       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604768       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604778       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604848       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604875       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.604914       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.605130       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.605222       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct  3 05:28:10 np0005468397 kepler[176935]: I1003 09:28:10.605996       1 exporter.go:208] Started Kepler in 523.848773ms
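[Note] Once "starting to listen on 0.0.0.0:8888" appears, the Process/Container/VM/Node collectors registered above are scrapeable directly from the host (the container runs with 'net': 'host' per its config_data). A quick probe, assuming the conventional Prometheus /metrics path, which the log itself does not show:

    import urllib.request
    # Fetch the exporter's metrics page; /metrics is an assumption here.
    with urllib.request.urlopen('http://127.0.0.1:8888/metrics', timeout=5) as resp:
        body = resp.read().decode()
    print('\n'.join(body.splitlines()[:5]))  # first few exported samples
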
Oct  3 05:28:10 np0005468397 python3.9[177130]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 05:28:12 np0005468397 python3.9[177282]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct  3 05:28:13 np0005468397 python3.9[177444]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:13 np0005468397 systemd[1]: Started libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope.
Oct  3 05:28:13 np0005468397 podman[177445]: 2025-10-03 09:28:13.264673935 +0000 UTC m=+0.101660731 container exec e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 05:28:13 np0005468397 podman[177445]: 2025-10-03 09:28:13.274322231 +0000 UTC m=+0.111309047 container exec_died e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 05:28:13 np0005468397 systemd[1]: libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope: Deactivated successfully.
Oct  3 05:28:14 np0005468397 python3.9[177626]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:14 np0005468397 systemd[1]: Started libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope.
Oct  3 05:28:14 np0005468397 podman[177627]: 2025-10-03 09:28:14.300744695 +0000 UTC m=+0.092871252 container exec e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 05:28:14 np0005468397 podman[177627]: 2025-10-03 09:28:14.331608856 +0000 UTC m=+0.123735383 container exec_died e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 05:28:14 np0005468397 systemd[1]: libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope: Deactivated successfully.
Oct  3 05:28:15 np0005468397 python3.9[177806]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
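The lines above are one complete cycle of the health-check ownership pattern: edpm_ansible execs `id -u` and `id -g` inside the ovn_controller container, then recursively sets the host-side healthcheck directory to that UID/GID with mode 0700. A minimal Python sketch of the same sequence, with the container name and path taken from the log (root on the host is required; this illustrates the pattern, not the module's actual implementation):

```python
import os
import subprocess

# Ask the container for the UID/GID its main process runs as
# (here root, so both come back as 0, matching owner=0/group=0 above).
name = "ovn_controller"
uid = int(subprocess.check_output(["podman", "exec", name, "id", "-u"]).strip())
gid = int(subprocess.check_output(["podman", "exec", name, "id", "-g"]).strip())

# Recursively apply ownership and mode 0700, as the ansible.builtin.file
# task with recurse=True does.
path = f"/var/lib/openstack/healthchecks/{name}"
for dirpath, _dirnames, filenames in os.walk(path):
    for entry in [dirpath, *(os.path.join(dirpath, f) for f in filenames)]:
        os.chown(entry, uid, gid)
        os.chmod(entry, 0o700)
```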
Oct  3 05:28:16 np0005468397 python3.9[177958]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct  3 05:28:16 np0005468397 python3.9[178128]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:17 np0005468397 systemd[1]: Started libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope.
Oct  3 05:28:17 np0005468397 podman[178129]: 2025-10-03 09:28:17.323322594 +0000 UTC m=+0.338491926 container exec d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm)
Oct  3 05:28:17 np0005468397 podman[178129]: 2025-10-03 09:28:17.465331706 +0000 UTC m=+0.480501038 container exec_died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 05:28:17 np0005468397 podman[178145]: 2025-10-03 09:28:17.748887605 +0000 UTC m=+0.420159770 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41)
Oct  3 05:28:17 np0005468397 systemd[1]: libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Deactivated successfully.
Oct  3 05:28:18 np0005468397 python3.9[178333]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:18 np0005468397 systemd[1]: Started libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope.
Oct  3 05:28:18 np0005468397 podman[178334]: 2025-10-03 09:28:18.796396869 +0000 UTC m=+0.098459420 container exec d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930)
Oct  3 05:28:18 np0005468397 podman[178334]: 2025-10-03 09:28:18.828748897 +0000 UTC m=+0.130811418 container exec_died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 05:28:18 np0005468397 systemd[1]: libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Deactivated successfully.
Oct  3 05:28:19 np0005468397 python3.9[178518]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:20 np0005468397 python3.9[178671]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct  3 05:28:20 np0005468397 podman[178709]: 2025-10-03 09:28:20.86038669 +0000 UTC m=+0.112490105 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
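The recurring health_status=healthy events in this log are podman running a container's configured healthcheck (here the `/openstack/healthcheck` script mounted into ovn_controller). The same check can be triggered by hand; a small sketch, assuming root access on the host:

```python
import subprocess

# Equivalent to the periodic check behind the health_status events:
# exit code 0 means healthy, nonzero means the healthcheck command failed.
result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_controller"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stdout}{result.stderr}")
```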
Oct  3 05:28:21 np0005468397 python3.9[178863]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:21 np0005468397 systemd[1]: Started libpod-conmon-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope.
Oct  3 05:28:21 np0005468397 podman[178864]: 2025-10-03 09:28:21.452800803 +0000 UTC m=+0.093198482 container exec 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 05:28:21 np0005468397 podman[178864]: 2025-10-03 09:28:21.487839007 +0000 UTC m=+0.128236676 container exec_died 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 05:28:21 np0005468397 systemd[1]: libpod-conmon-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope: Deactivated successfully.
Oct  3 05:28:22 np0005468397 podman[179016]: 2025-10-03 09:28:22.173527804 +0000 UTC m=+0.070203102 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 05:28:22 np0005468397 python3.9[179068]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:22 np0005468397 systemd[1]: Started libpod-conmon-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope.
Oct  3 05:28:22 np0005468397 podman[179069]: 2025-10-03 09:28:22.496757314 +0000 UTC m=+0.091717815 container exec 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 05:28:22 np0005468397 podman[179069]: 2025-10-03 09:28:22.529136083 +0000 UTC m=+0.124096594 container exec_died 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 05:28:22 np0005468397 systemd[1]: libpod-conmon-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope: Deactivated successfully.
Oct  3 05:28:23 np0005468397 python3.9[179250]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
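node_exporter above runs with most collectors disabled and the systemd collector restricted by `--collector.systemd.unit-include`; the collector anchors that regular expression, so only fully matching unit names appear in the exported metrics. A quick way to check which units pass the filter (the unit names below are illustrative):

```python
import re

# The include pattern from the node_exporter command line above;
# the collector anchors it, hence fullmatch rather than search.
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["edpm_ovn_controller.service", "ovs-vswitchd.service",
             "virtqemud.service", "rsyslog.service", "sshd.service"]:
    verdict = "exported" if unit_include.fullmatch(unit) else "skipped"
    print(f"{unit}: {verdict}")
```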
Oct  3 05:28:24 np0005468397 python3.9[179402]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct  3 05:28:25 np0005468397 python3.9[179566]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:25 np0005468397 systemd[1]: Started libpod-conmon-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope.
Oct  3 05:28:25 np0005468397 podman[179567]: 2025-10-03 09:28:25.409191523 +0000 UTC m=+0.193287192 container exec b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 05:28:25 np0005468397 podman[179567]: 2025-10-03 09:28:25.442064777 +0000 UTC m=+0.226160426 container exec_died b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:28:25 np0005468397 systemd[1]: libpod-conmon-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope: Deactivated successfully.
Oct  3 05:28:26 np0005468397 python3.9[179748]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:26 np0005468397 systemd[1]: Started libpod-conmon-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope.
Oct  3 05:28:26 np0005468397 podman[179749]: 2025-10-03 09:28:26.380464664 +0000 UTC m=+0.132789911 container exec b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:28:26 np0005468397 podman[179749]: 2025-10-03 09:28:26.420537378 +0000 UTC m=+0.172862605 container exec_died b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:28:26 np0005468397 systemd[1]: libpod-conmon-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope: Deactivated successfully.
Oct  3 05:28:27 np0005468397 python3.9[179931]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:28 np0005468397 python3.9[180083]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct  3 05:28:29 np0005468397 python3.9[180248]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:29 np0005468397 systemd[1]: Started libpod-conmon-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope.
Oct  3 05:28:29 np0005468397 podman[180249]: 2025-10-03 09:28:29.216699182 +0000 UTC m=+0.110659357 container exec e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Oct  3 05:28:29 np0005468397 podman[180249]: 2025-10-03 09:28:29.250402073 +0000 UTC m=+0.144362228 container exec_died e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Oct  3 05:28:29 np0005468397 systemd[1]: libpod-conmon-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope: Deactivated successfully.
Oct  3 05:28:29 np0005468397 podman[157165]: time="2025-10-03T09:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 05:28:29 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18534 "" "Go-http-client/1.1"
Oct  3 05:28:29 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2983 "" "Go-http-client/1.1"
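The three lines above come from the podman system service (API socket) that podman_exporter scrapes via CONTAINER_HOST=unix:///run/podman/podman.sock: a containers/json listing followed by a stats call. The same libpod endpoint can be queried directly over the unix socket; a stdlib-only sketch (root required; the /v4.9.3 prefix and query string are taken from the logged request):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection over an AF_UNIX socket; libpod has no TCP host."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
containers = json.loads(conn.getresponse().read())
for c in containers:
    print(c["Names"][0], c["State"])
```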
Oct  3 05:28:30 np0005468397 python3.9[180431]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:30 np0005468397 systemd[1]: Started libpod-conmon-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope.
Oct  3 05:28:30 np0005468397 podman[180432]: 2025-10-03 09:28:30.185095652 +0000 UTC m=+0.105993679 container exec e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git)
Oct  3 05:28:30 np0005468397 podman[180432]: 2025-10-03 09:28:30.220325931 +0000 UTC m=+0.141223938 container exec_died e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, version=9.6)
Oct  3 05:28:30 np0005468397 systemd[1]: libpod-conmon-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope: Deactivated successfully.
Oct  3 05:28:31 np0005468397 python3.9[180612]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 05:28:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:28:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:28:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 05:28:31 np0005468397 openstack_network_exporter[159287]: 
Oct  3 05:28:31 np0005468397 openstack_network_exporter[159287]: ERROR   09:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 05:28:31 np0005468397 openstack_network_exporter[159287]: 
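The ERROR burst above means openstack_network_exporter could not find the control sockets its appctl-style calls need: none for ovsdb-server, none for ovn-northd (which normally runs on controller nodes rather than a compute host, so that part is expected), and no userspace datapath for the dpif-netdev queries. A quick host-side check for the socket files, with paths assumed from the container's volume mounts (/var/run/openvswitch on the host, /var/lib/openvswitch/ovn mapped to /run/ovn):

```python
import glob

# Control sockets looked up by the exporter's appctl calls; the PID
# component of each filename varies, hence the wildcards.
patterns = {
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "ovn-northd":   "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
}
for daemon, pattern in patterns.items():
    hits = glob.glob(pattern)
    print(f"{daemon}: {', '.join(hits) if hits else 'no control socket found'}")
```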
Oct  3 05:28:31 np0005468397 python3.9[180764]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Oct  3 05:28:32 np0005468397 python3.9[180929]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:33 np0005468397 systemd[1]: Started libpod-conmon-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope.
Oct  3 05:28:33 np0005468397 podman[180930]: 2025-10-03 09:28:33.059811532 +0000 UTC m=+0.118795566 container exec e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 05:28:33 np0005468397 podman[180930]: 2025-10-03 09:28:33.09782917 +0000 UTC m=+0.156813224 container exec_died e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 05:28:33 np0005468397 systemd[1]: libpod-conmon-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope: Deactivated successfully.
Oct  3 05:28:33 np0005468397 podman[181112]: 2025-10-03 09:28:33.851369723 +0000 UTC m=+0.089747503 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 05:28:33 np0005468397 python3.9[181113]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:34 np0005468397 systemd[1]: Started libpod-conmon-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope.
Oct  3 05:28:34 np0005468397 podman[181138]: 2025-10-03 09:28:34.090721518 +0000 UTC m=+0.112567108 container exec e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  3 05:28:34 np0005468397 podman[181138]: 2025-10-03 09:28:34.125652878 +0000 UTC m=+0.147498478 container exec_died e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 05:28:34 np0005468397 systemd[1]: libpod-conmon-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope: Deactivated successfully.
Oct  3 05:28:34 np0005468397 python3.9[181321]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:35 np0005468397 python3.9[181473]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Oct  3 05:28:36 np0005468397 python3.9[181637]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:36 np0005468397 systemd[1]: Started libpod-conmon-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope.
Oct  3 05:28:36 np0005468397 podman[181638]: 2025-10-03 09:28:36.951901759 +0000 UTC m=+0.124277010 container exec 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=kepler, config_id=edpm, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=)
Oct  3 05:28:36 np0005468397 podman[181638]: 2025-10-03 09:28:36.963643392 +0000 UTC m=+0.136018623 container exec_died 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-type=git, release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, com.redhat.component=ubi9-container)
Oct  3 05:28:37 np0005468397 systemd[1]: libpod-conmon-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope: Deactivated successfully.
Oct  3 05:28:37 np0005468397 python3.9[181821]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 05:28:37 np0005468397 systemd[1]: Started libpod-conmon-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope.
Oct  3 05:28:37 np0005468397 podman[181822]: 2025-10-03 09:28:37.888817099 +0000 UTC m=+0.090772975 container exec 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release-0.7.12=, build-date=2024-09-18T21:23:30)
Oct  3 05:28:37 np0005468397 podman[181822]: 2025-10-03 09:28:37.924590585 +0000 UTC m=+0.126546471 container exec_died 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public)
Oct  3 05:28:37 np0005468397 systemd[1]: libpod-conmon-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope: Deactivated successfully.
Oct  3 05:28:38 np0005468397 podman[181852]: 2025-10-03 09:28:38.116807103 +0000 UTC m=+0.111753552 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 05:28:38 np0005468397 python3.9[182019]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
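The exec/file task pairs above show the healthcheck-ownership pattern this deployment repeats per container: run `id -u` and `id -g` inside the container via podman_container_exec, then recursively chown the host-side healthcheck directory to that uid/gid (42405 for ceilometer_agent_ipmi, 0 for kepler, whose default user is root). A minimal Python sketch of the same flow, assuming podman on PATH and sufficient privileges; the function name is hypothetical:

    import os
    import subprocess

    def own_healthcheck_dir(container: str, path: str) -> None:
        # Ask the container for its default uid/gid, as the exec tasks above do.
        uid = int(subprocess.run(["podman", "exec", container, "id", "-u"],
                                 check=True, capture_output=True, text=True).stdout)
        gid = int(subprocess.run(["podman", "exec", container, "id", "-g"],
                                 check=True, capture_output=True, text=True).stdout)
        # Recreate the file module's recurse=True chown over the directory tree.
        os.makedirs(path, mode=0o700, exist_ok=True)
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                os.chown(os.path.join(root, name), uid, gid)
        os.chown(path, uid, gid)

    # e.g. own_healthcheck_dir("kepler", "/var/lib/openstack/healthchecks/kepler")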
Oct  3 05:28:39 np0005468397 python3.9[182171]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:39 np0005468397 podman[182172]: 2025-10-03 09:28:39.83292439 +0000 UTC m=+0.089799844 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  3 05:28:40 np0005468397 podman[182313]: 2025-10-03 09:28:40.35821153 +0000 UTC m=+0.091170027 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, vcs-type=git, distribution-scope=public, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc.)
Oct  3 05:28:40 np0005468397 python3.9[182361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:41 np0005468397 python3.9[182484]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759483719.9204044-778-173921772819055/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
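The kepler.yaml written here is a firewall rule fragment for the edpm-nftables tooling; its contents are not logged, so the fragment below is a hypothetical illustration only, built from the one fact the log does give (the kepler container publishes port 8888). The sketch assumes PyYAML and an invented rule schema:

    import yaml  # assumes PyYAML is installed

    # Hypothetical fragment; the real kepler.yaml content is not logged.
    KEPLER_RULES = '''
    - rule_name: "100 allow kepler exporter"
      rule:
        proto: tcp
        dport: 8888
    '''

    for entry in yaml.safe_load(KEPLER_RULES):
        rule = entry["rule"]
        print(f'{entry["rule_name"]}: open {rule["proto"]}/{rule["dport"]}')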
Oct  3 05:28:42 np0005468397 python3.9[182636]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:43 np0005468397 python3.9[182788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:43 np0005468397 python3.9[182866]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:44 np0005468397 python3.9[183018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:45 np0005468397 python3.9[183096]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5ey2_jrs recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:46 np0005468397 python3.9[183248]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:46 np0005468397 python3.9[183326]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:47 np0005468397 python3.9[183478]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
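Before regenerating its chains, the play snapshots the live ruleset as JSON with `nft -j list ruleset`. A stdlib-only sketch of consuming that output, assuming nft is installed and the caller is privileged enough to read the ruleset:

    import json
    import subprocess

    # Same command as the task above; -j emits {"nftables": [...]}.
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    for item in json.loads(out).get("nftables", []):
        if "chain" in item:
            chain = item["chain"]
            print(f'{chain["family"]} {chain["table"]} {chain["name"]}')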
Oct  3 05:28:48 np0005468397 podman[183603]: 2025-10-03 09:28:48.359894884 +0000 UTC m=+0.104789401 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.expose-services=, config_id=edpm, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 05:28:48 np0005468397 python3[183647]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  3 05:28:49 np0005468397 python3.9[183803]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:49 np0005468397 python3.9[183882]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:51 np0005468397 python3.9[184034]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:51 np0005468397 podman[184084]: 2025-10-03 09:28:51.46077637 +0000 UTC m=+0.131187149 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 05:28:51 np0005468397 python3.9[184129]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:52 np0005468397 podman[184259]: 2025-10-03 09:28:52.327736687 +0000 UTC m=+0.094884906 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 05:28:52 np0005468397 python3.9[184308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:53 np0005468397 python3.9[184386]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:53 np0005468397 python3.9[184538]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:54 np0005468397 python3.9[184616]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:55 np0005468397 python3.9[184768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:28:56 np0005468397 python3.9[184893]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759483734.8350964-903-58521336423655/.source.nft follow=False _original_basename=ruleset.j2 checksum=195cfcdc3ed4fc7d98b13eed88ef5cb7956fa1b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:57 np0005468397 python3.9[185045]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:28:57 np0005468397 python3.9[185197]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:28:58 np0005468397 python3.9[185352]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
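Decoded (the journal renders embedded newlines as #012), the block that blockinfile manages in /etc/sysconfig/nftables.conf is the include list below; the validate=nft -c -f %s option means the merged file is syntax-checked before being written:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK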
Oct  3 05:28:59 np0005468397 podman[157165]: time="2025-10-03T09:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 05:28:59 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18532 "" "Go-http-client/1.1"
Oct  3 05:28:59 np0005468397 podman[157165]: @ - - [03/Oct/2025:09:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2990 "" "Go-http-client/1.1"
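The two GET lines are a client scraping the libpod REST API over the local socket (the podman_exporter configured elsewhere in this log mounts /run/podman/podman.sock for exactly this). A stdlib-only sketch of the same containers/json query; the class name is hypothetical and read access to the socket is assumed:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over a unix socket, enough to talk to the libpod API.
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])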
Oct  3 05:28:59 np0005468397 python3.9[185504]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:29:00 np0005468397 python3.9[185657]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 05:29:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 05:29:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:29:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 05:29:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 05:29:01 np0005468397 openstack_network_exporter[159287]: ERROR   09:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 05:29:01 np0005468397 python3.9[185811]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 05:29:02 np0005468397 python3.9[185968]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
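This span traces a marker-file idempotence pattern: edpm-rules.nft.changed is touched when the rendered rules differ, the static chains file is always loaded, and the flush/rules/jump-update files are piped into `nft -f -` only while the marker exists, after which it is removed. A condensed sketch under those assumptions, using the same paths:

    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    APPLY_ORDER = ["/etc/nftables/edpm-flushes.nft",
                   "/etc/nftables/edpm-rules.nft",
                   "/etc/nftables/edpm-update-jumps.nft"]

    # Chains are (re)loaded unconditionally, as in the task above.
    subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)

    if os.path.exists(MARKER):
        # Equivalent of: cat <files> | nft -f -
        payload = "".join(open(path).read() for path in APPLY_ORDER)
        subprocess.run(["nft", "-f", "-"], input=payload, text=True, check=True)
        os.remove(MARKER)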
Oct  3 05:29:02 np0005468397 systemd[1]: session-24.scope: Deactivated successfully.
Oct  3 05:29:02 np0005468397 systemd[1]: session-24.scope: Consumed 1min 46.444s CPU time.
Oct  3 05:29:02 np0005468397 systemd-logind[798]: Session 24 logged out. Waiting for processes to exit.
Oct  3 05:29:02 np0005468397 systemd-logind[798]: Removed session 24.
Oct  3 05:29:04 np0005468397 podman[185993]: 2025-10-03 09:29:04.818996522 +0000 UTC m=+0.077293107 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 05:29:08 np0005468397 podman[186017]: 2025-10-03 09:29:08.820440123 +0000 UTC m=+0.078140744 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 05:29:09 np0005468397 systemd-logind[798]: New session 25 of user zuul.
Oct  3 05:29:09 np0005468397 systemd[1]: Started Session 25 of User zuul.
Oct  3 05:29:10 np0005468397 podman[186164]: 2025-10-03 09:29:10.433174655 +0000 UTC m=+0.083064020 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 05:29:10 np0005468397 podman[186209]: 2025-10-03 09:29:10.528884867 +0000 UTC m=+0.068577367 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 05:29:10 np0005468397 python3.9[186203]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 05:29:12 np0005468397 python3.9[186384]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Oct  3 05:29:13 np0005468397 python3.9[186537]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 05:29:14 np0005468397 python3.9[186621]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 05:29:18 np0005468397 podman[186623]: 2025-10-03 09:29:18.85453051 +0000 UTC m=+0.107148557 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, architecture=x86_64, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 05:29:20 np0005468397 systemd[1]: Starting PackageKit Daemon...
Oct  3 05:29:20 np0005468397 systemd[1]: Started PackageKit Daemon.
Oct  3 05:29:21 np0005468397 podman[186776]: 2025-10-03 09:29:21.790959862 +0000 UTC m=+0.119130540 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 05:29:21 np0005468397 python3.9[186822]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:29:22 np0005468397 podman[186923]: 2025-10-03 09:29:22.652877669 +0000 UTC m=+0.091031754 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 05:29:22 np0005468397 python3.9[186966]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759483761.164609-54-169957752661919/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:29:23 np0005468397 python3.9[187125]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 05:29:24 np0005468397 python3.9[187277]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 05:29:25 np0005468397 python3.9[187400]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759483764.011158-77-251458878234511/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:29:26 compute-0 python3.9[187552]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:29:26 compute-0 systemd[1]: Stopping System Logging Service...
Oct  3 09:29:26 compute-0 rsyslogd[1009]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1009" x-info="https://www.rsyslog.com"] exiting on signal 15.
Oct  3 09:29:26 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Oct  3 09:29:26 compute-0 systemd[1]: Stopped System Logging Service.
Oct  3 09:29:26 compute-0 systemd[1]: rsyslog.service: Consumed 1.789s CPU time, 4.9M memory peak, read 0B from disk, written 3.7M to disk.
Oct  3 09:29:26 compute-0 systemd[1]: Starting System Logging Service...
Oct  3 09:29:26 compute-0 rsyslogd[187556]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="187556" x-info="https://www.rsyslog.com"] start
Oct  3 09:29:26 compute-0 systemd[1]: Started System Logging Service.
Oct  3 09:29:26 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 09:29:26 compute-0 rsyslogd[187556]: Warning: Certificate file is not set [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Oct  3 09:29:26 compute-0 rsyslogd[187556]: Warning: Key file is not set [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Oct  3 09:29:26 compute-0 rsyslogd[187556]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2506.0-2.el9]
Oct  3 09:29:26 compute-0 rsyslogd[187556]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2506.0-2.el9]
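The restart brings rsyslog back with the new 10-telemetry.conf and it immediately opens a TLS session to 172.17.0.80; the two warnings mean no client certificate or key is configured, so the connection is server-authenticated only, presumably against the CA installed at /etc/pki/rsyslog/ca-openshift.crt a few tasks earlier (the "no shared curve" line is informational). A minimal stdlib sketch of such a TLS syslog sender; the port 6514 and the message payload are assumptions, since the log records neither:

    import socket
    import ssl

    ctx = ssl.create_default_context(cafile="/etc/pki/rsyslog/ca-openshift.crt")
    ctx.check_hostname = False  # peer verified by CA only, no hostname match

    with socket.create_connection(("172.17.0.80", 6514)) as raw:  # port assumed
        with ctx.wrap_socket(raw, server_hostname="172.17.0.80") as tls:
            # RFC 6587 octet-counted framing; payload is hypothetical.
            msg = b"<134>1 2025-10-03T09:29:26Z compute-0 demo - - - hello"
            tls.sendall(str(len(msg)).encode() + b" " + msg)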
Oct  3 09:29:27 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Oct  3 09:29:27 compute-0 systemd[1]: session-25.scope: Consumed 14.386s CPU time.
Oct  3 09:29:27 compute-0 systemd-logind[798]: Session 25 logged out. Waiting for processes to exit.
Oct  3 09:29:27 compute-0 systemd-logind[798]: Removed session 25.
Oct  3 09:29:29 compute-0 podman[157165]: time="2025-10-03T09:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:29:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18532 "" "Go-http-client/1.1"
Oct  3 09:29:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2988 "" "Go-http-client/1.1"
Oct  3 09:29:31 compute-0 openstack_network_exporter[159287]: ERROR   09:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:29:31 compute-0 openstack_network_exporter[159287]: ERROR   09:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:29:31 compute-0 openstack_network_exporter[159287]: ERROR   09:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:29:31 compute-0 openstack_network_exporter[159287]: ERROR   09:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:29:31 compute-0 openstack_network_exporter[159287]: ERROR   09:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:29:35 compute-0 systemd-logind[798]: New session 26 of user zuul.
Oct  3 09:29:35 compute-0 systemd[1]: Started Session 26 of User zuul.
Oct  3 09:29:35 compute-0 podman[187587]: 2025-10-03 09:29:35.368069093 +0000 UTC m=+0.086977105 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.948 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.948 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
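With a single worker thread for the [pollsters] source, the pollsters run strictly one after another, so a polling cycle takes roughly the sum of the individual pollster run times; that is what the "process to be longer than the expected" warning above is about. A toy illustration of the same executor arrangement (names invented):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    pollsters = [f"pollster-{i}" for i in range(8)]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # 1 thread, as in the log
        list(pool.map(poll, pollsters))
    print(f"{len(pollsters)} pollsters took {time.monotonic() - start:.1f}s")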
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.949 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
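Each "Registering pollster" line wraps one stevedore Extension object loaded from Python entry points and queues it on the shared ThreadPoolExecutor. A minimal sketch of that loading step; the namespace name follows ceilometer's conventions and may vary by release:

    from stevedore import extension

    # Hypothetical namespace: ceilometer loads its compute pollsters
    # from an entry-point namespace of this shape.
    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")
    print(sorted(mgr.names()))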
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:29:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:29:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
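Taken together, the cycle above is: run the local_instances discovery per pollster, skip the pollster when discovery returns nothing (no instances are running on this compute yet), then mark every pollster finished. The control flow, reduced to a sketch with invented names:

    def run_cycle(pollsters, discover):
        for name in pollsters:
            resources = discover("local_instances")
            if not resources:
                print(f"Skip pollster {name}, no resources found this cycle")
                continue
            collect_samples(name, resources)  # hypothetical; not reached here
        for name in pollsters:
            print(f"Finished processing pollster [{name}].")

    # Empty discovery result reproduces the all-skip cycle in the log.
    run_cycle(["cpu", "memory.usage"], lambda method: [])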
Oct  3 09:29:39 compute-0 podman[187971]: 2025-10-03 09:29:39.055783963 +0000 UTC m=+0.120696769 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 09:29:40 compute-0 podman[188246]: 2025-10-03 09:29:40.827048318 +0000 UTC m=+0.095273799 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543)
Oct  3 09:29:40 compute-0 podman[188247]: 2025-10-03 09:29:40.84715354 +0000 UTC m=+0.112005913 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true)
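The three health_status=healthy events above are podman's periodic healthchecks firing for ceilometer_agent_ipmi, kepler, and ceilometer_agent_compute; each runs the configured test script (the /openstack/healthcheck mount in config_data) inside the container and resets the failing streak on success. The same check can be driven by hand:

    import subprocess

    # Container name taken from the log; run as root on the compute host.
    r = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True)
    print("healthy" if r.returncode == 0
          else f"unhealthy: {r.stderr.strip()}")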
Oct  3 09:29:42 compute-0 python3[188407]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:29:44 compute-0 python3[188511]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  3 09:29:45 compute-0 python3[188539]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:29:46 compute-0 python3[188565]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:29:46 compute-0 kernel: loop: module loaded
Oct  3 09:29:46 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
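The reported capacity matches the sparse file just created: 41943040 sectors × 512 bytes = 21474836480 bytes = 20 GiB, i.e. the seek=20G from the dd invocation above.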
Oct  3 09:29:46 compute-0 python3[188600]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:29:46 compute-0 lvm[188603]: PV /dev/loop3 not used.
Oct  3 09:29:46 compute-0 lvm[188605]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  3 09:29:46 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  3 09:29:47 compute-0 lvm[188607]:  0 logical volume(s) in volume group "ceph_vg0" now active
Oct  3 09:29:47 compute-0 lvm[188608]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  3 09:29:47 compute-0 lvm[188608]: VG ceph_vg0 finished
Oct  3 09:29:47 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  3 09:29:47 compute-0 lvm[188616]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  3 09:29:47 compute-0 lvm[188616]: VG ceph_vg0 finished
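The #012 sequences in the two _raw_params fields above are rsyslog's octal escape for embedded newlines; decoded, the task builds a 20 GiB sparse backing file, attaches it to /dev/loop3, and layers LVM on top, which triggers the systemd autoactivation events just logged. The same sequence as a sketch (commands verbatim from the log):

    import subprocess

    for cmd in (
        "dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G",
        "losetup /dev/loop3 /var/lib/ceph-osd-0.img",
        "pvcreate /dev/loop3",
        "vgcreate ceph_vg0 /dev/loop3",
        "lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0",
    ):
        subprocess.run(cmd, shell=True, check=True)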
Oct  3 09:29:47 compute-0 python3[188694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:29:48 compute-0 python3[188767]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483787.3541732-33407-162708345434704/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:29:49 compute-0 python3[188817]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:29:49 compute-0 systemd[1]: Reloading.
Oct  3 09:29:49 compute-0 podman[188819]: 2025-10-03 09:29:49.177213493 +0000 UTC m=+0.099253145 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Oct  3 09:29:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:29:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:29:49 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct  3 09:29:49 compute-0 bash[188880]: /dev/loop3: [64513]:4194940 (/var/lib/ceph-osd-0.img)
Oct  3 09:29:49 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct  3 09:29:49 compute-0 lvm[188882]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  3 09:29:49 compute-0 lvm[188882]: VG ceph_vg0 finished
Oct  3 09:29:50 compute-0 python3[188908]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  3 09:29:51 compute-0 python3[188935]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:29:52 compute-0 podman[188961]: 2025-10-03 09:29:52.066867094 +0000 UTC m=+0.120051750 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 09:29:52 compute-0 python3[188962]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:29:52 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Oct  3 09:29:52 compute-0 python3[189018]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:29:52 compute-0 podman[189021]: 2025-10-03 09:29:52.848835551 +0000 UTC m=+0.107123088 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:29:53 compute-0 lvm[189039]: PV /dev/loop4 has no VG metadata.
Oct  3 09:29:53 compute-0 lvm[189039]: PV /dev/loop4 online, VG unknown.
Oct  3 09:29:53 compute-0 lvm[189039]: VG unknown
Oct  3 09:29:53 compute-0 lvm[189055]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  3 09:29:53 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct  3 09:29:53 compute-0 lvm[189057]:  PVs online not found for VG ceph_vg1, using all devices.
Oct  3 09:29:53 compute-0 lvm[189057]:  1 logical volume(s) in volume group "ceph_vg1" now active
Oct  3 09:29:53 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct  3 09:29:53 compute-0 python3[189135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:29:54 compute-0 python3[189208]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483793.5579822-33434-112087036989183/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:29:55 compute-0 python3[189258]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:29:55 compute-0 systemd[1]: Reloading.
Oct  3 09:29:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:29:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:29:55 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct  3 09:29:55 compute-0 bash[189298]: /dev/loop4: [64513]:4647135 (/var/lib/ceph-osd-1.img)
Oct  3 09:29:55 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct  3 09:29:55 compute-0 lvm[189299]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  3 09:29:55 compute-0 lvm[189299]: VG ceph_vg1 finished
Oct  3 09:29:56 compute-0 python3[189325]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  3 09:29:57 compute-0 python3[189352]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:29:58 compute-0 python3[189378]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:29:58 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Oct  3 09:29:58 compute-0 python3[189410]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:29:58 compute-0 lvm[189413]: PV /dev/loop5 not used.
Oct  3 09:29:58 compute-0 lvm[189415]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  3 09:29:58 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct  3 09:29:58 compute-0 lvm[189424]:  1 logical volume(s) in volume group "ceph_vg2" now active
Oct  3 09:29:58 compute-0 lvm[189426]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  3 09:29:58 compute-0 lvm[189426]: VG ceph_vg2 finished
Oct  3 09:29:58 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct  3 09:29:59 compute-0 python3[189504]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:29:59 compute-0 podman[157165]: time="2025-10-03T09:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:29:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18532 "" "Go-http-client/1.1"
Oct  3 09:29:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2993 "" "Go-http-client/1.1"
Oct  3 09:30:00 compute-0 python3[189577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483799.1668417-33461-83590153784563/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:30:00 compute-0 python3[189627]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:30:00 compute-0 systemd[1]: Reloading.
Oct  3 09:30:00 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:30:00 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:30:01 compute-0 systemd[1]: Starting Ceph OSD losetup...
Oct  3 09:30:01 compute-0 bash[189666]: /dev/loop5: [64513]:4647137 (/var/lib/ceph-osd-2.img)
Oct  3 09:30:01 compute-0 systemd[1]: Finished Ceph OSD losetup.
Oct  3 09:30:01 compute-0 lvm[189668]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  3 09:30:01 compute-0 lvm[189668]: VG ceph_vg2 finished
Oct  3 09:30:01 compute-0 openstack_network_exporter[159287]: ERROR   09:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:30:01 compute-0 openstack_network_exporter[159287]: ERROR   09:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:30:01 compute-0 openstack_network_exporter[159287]: ERROR   09:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:30:01 compute-0 openstack_network_exporter[159287]: ERROR   09:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:30:01 compute-0 openstack_network_exporter[159287]: ERROR   09:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:30:03 compute-0 python3[189692]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:30:05 compute-0 podman[189793]: 2025-10-03 09:30:05.706965714 +0000 UTC m=+0.090268880 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:30:05 compute-0 python3[189794]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  3 09:30:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 09:30:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct  3 09:30:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 09:30:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct  3 09:30:08 compute-0 systemd[1]: run-rb9944b485aad461aba068c2a490ae9c1.service: Deactivated successfully.
Oct  3 09:30:08 compute-0 python3[189945]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:30:09 compute-0 podman[189974]: 2025-10-03 09:30:09.230842489 +0000 UTC m=+0.086250762 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:30:09 compute-0 python3[189975]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:30:10 compute-0 python3[190058]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:30:10 compute-0 python3[190084]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:30:11 compute-0 podman[190163]: 2025-10-03 09:30:11.344611006 +0000 UTC m=+0.072621917 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Oct  3 09:30:11 compute-0 podman[190162]: 2025-10-03 09:30:11.354832492 +0000 UTC m=+0.091234161 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543)
Oct  3 09:30:11 compute-0 python3[190164]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:30:11 compute-0 python3[190269]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483811.0349996-33609-228216081641473/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:30:13 compute-0 python3[190371]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:30:13 compute-0 python3[190444]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759483812.6327648-33627-70677084327389/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:30:13 compute-0 python3[190494]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:30:14 compute-0 python3[190522]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:30:14 compute-0 python3[190550]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:30:15 compute-0 python3[190578]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:30:15 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct  3 09:30:15 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  3 09:30:15 compute-0 systemd-logind[798]: New session 27 of user ceph-admin.
Oct  3 09:30:15 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  3 09:30:15 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct  3 09:30:15 compute-0 systemd[190597]: Queued start job for default target Main User Target.
Oct  3 09:30:15 compute-0 systemd[190597]: Created slice User Application Slice.
Oct  3 09:30:15 compute-0 systemd[190597]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  3 09:30:15 compute-0 systemd[190597]: Started Daily Cleanup of User's Temporary Directories.
Oct  3 09:30:15 compute-0 systemd[190597]: Reached target Paths.
Oct  3 09:30:15 compute-0 systemd[190597]: Reached target Timers.
Oct  3 09:30:15 compute-0 systemd[190597]: Starting D-Bus User Message Bus Socket...
Oct  3 09:30:15 compute-0 systemd[190597]: Starting Create User's Volatile Files and Directories...
Oct  3 09:30:15 compute-0 systemd[190597]: Listening on D-Bus User Message Bus Socket.
Oct  3 09:30:15 compute-0 systemd[190597]: Reached target Sockets.
Oct  3 09:30:15 compute-0 systemd[190597]: Finished Create User's Volatile Files and Directories.
Oct  3 09:30:15 compute-0 systemd[190597]: Reached target Basic System.
Oct  3 09:30:15 compute-0 systemd[190597]: Reached target Main User Target.
Oct  3 09:30:15 compute-0 systemd[190597]: Startup finished in 150ms.
Oct  3 09:30:15 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct  3 09:30:15 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Oct  3 09:30:15 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Oct  3 09:30:15 compute-0 systemd-logind[798]: Session 27 logged out. Waiting for processes to exit.
Oct  3 09:30:15 compute-0 systemd-logind[798]: Removed session 27.
Oct  3 09:30:22 compute-0 podman[190691]: 2025-10-03 09:30:22.254643861 +0000 UTC m=+2.522104703 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Oct  3 09:30:22 compute-0 podman[190714]: 2025-10-03 09:30:22.383791411 +0000 UTC m=+0.104314153 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  3 09:30:23 compute-0 podman[190738]: 2025-10-03 09:30:23.798057181 +0000 UTC m=+0.065134865 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:30:25 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Oct  3 09:30:25 compute-0 systemd[190597]: Activating special unit Exit the Session...
Oct  3 09:30:25 compute-0 systemd[190597]: Stopped target Main User Target.
Oct  3 09:30:25 compute-0 systemd[190597]: Stopped target Basic System.
Oct  3 09:30:25 compute-0 systemd[190597]: Stopped target Paths.
Oct  3 09:30:25 compute-0 systemd[190597]: Stopped target Sockets.
Oct  3 09:30:25 compute-0 systemd[190597]: Stopped target Timers.
Oct  3 09:30:25 compute-0 systemd[190597]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  3 09:30:25 compute-0 systemd[190597]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  3 09:30:25 compute-0 systemd[190597]: Closed D-Bus User Message Bus Socket.
Oct  3 09:30:25 compute-0 systemd[190597]: Stopped Create User's Volatile Files and Directories.
Oct  3 09:30:25 compute-0 systemd[190597]: Removed slice User Application Slice.
Oct  3 09:30:25 compute-0 systemd[190597]: Reached target Shutdown.
Oct  3 09:30:25 compute-0 systemd[190597]: Finished Exit the Session.
Oct  3 09:30:25 compute-0 systemd[190597]: Reached target Exit the Session.
Oct  3 09:30:25 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Oct  3 09:30:25 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Oct  3 09:30:25 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  3 09:30:26 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  3 09:30:26 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  3 09:30:26 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  3 09:30:26 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Oct  3 09:30:29 compute-0 podman[157165]: time="2025-10-03T09:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:30:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18532 "" "Go-http-client/1.1"
Oct  3 09:30:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2992 "" "Go-http-client/1.1"
Oct  3 09:30:31 compute-0 openstack_network_exporter[159287]: ERROR   09:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:30:31 compute-0 openstack_network_exporter[159287]: ERROR   09:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:30:31 compute-0 openstack_network_exporter[159287]: ERROR   09:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:30:31 compute-0 openstack_network_exporter[159287]: ERROR   09:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:30:31 compute-0 openstack_network_exporter[159287]: ERROR   09:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:30:44 compute-0 podman[190798]: 2025-10-03 09:30:44.939544694 +0000 UTC m=+3.189915593 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:30:44 compute-0 podman[190788]: 2025-10-03 09:30:44.946773846 +0000 UTC m=+5.212452650 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 09:30:44 compute-0 podman[190799]: 2025-10-03 09:30:44.946772976 +0000 UTC m=+3.189790308 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:30:44 compute-0 podman[190778]: 2025-10-03 09:30:44.956189189 +0000 UTC m=+8.218683290 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:30:45 compute-0 podman[190650]: 2025-10-03 09:30:45.410885162 +0000 UTC m=+29.413424607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:45 compute-0 podman[190862]: 2025-10-03 09:30:45.496497533 +0000 UTC m=+0.060275567 container create f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057 (image=quay.io/ceph/ceph:v18, name=friendly_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 09:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck927365127-merged.mount: Deactivated successfully.
Oct  3 09:30:45 compute-0 systemd[1]: Started libpod-conmon-f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057.scope.
Oct  3 09:30:45 compute-0 podman[190862]: 2025-10-03 09:30:45.467582294 +0000 UTC m=+0.031360388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:45 compute-0 podman[190862]: 2025-10-03 09:30:45.620790338 +0000 UTC m=+0.184568402 container init f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057 (image=quay.io/ceph/ceph:v18, name=friendly_keller, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:30:45 compute-0 podman[190862]: 2025-10-03 09:30:45.631390768 +0000 UTC m=+0.195168802 container start f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057 (image=quay.io/ceph/ceph:v18, name=friendly_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 09:30:45 compute-0 podman[190862]: 2025-10-03 09:30:45.63612227 +0000 UTC m=+0.199900334 container attach f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057 (image=quay.io/ceph/ceph:v18, name=friendly_keller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:30:45 compute-0 friendly_keller[190877]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  3 09:30:45 compute-0 systemd[1]: libpod-f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057.scope: Deactivated successfully.
Oct  3 09:30:45 compute-0 podman[190862]: 2025-10-03 09:30:45.95166614 +0000 UTC m=+0.515444204 container died f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057 (image=quay.io/ceph/ceph:v18, name=friendly_keller, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca1b27a10c70ec03e9339c66c656adce24ee6ed4f8f08e926503b15187975e6f-merged.mount: Deactivated successfully.
Oct  3 09:30:46 compute-0 podman[190862]: 2025-10-03 09:30:46.893600791 +0000 UTC m=+1.457378865 container remove f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057 (image=quay.io/ceph/ceph:v18, name=friendly_keller, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:30:46 compute-0 systemd[1]: libpod-conmon-f1d7b3b3c0f9fc02bb2cd9a31f6e1a97eccfb6094ad8641209beb9439b5b6057.scope: Deactivated successfully.
Oct  3 09:30:47 compute-0 podman[190894]: 2025-10-03 09:30:47.01367387 +0000 UTC m=+0.088977540 container create 612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12 (image=quay.io/ceph/ceph:v18, name=suspicious_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 09:30:47 compute-0 podman[190894]: 2025-10-03 09:30:46.967182296 +0000 UTC m=+0.042485976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:47 compute-0 systemd[1]: Started libpod-conmon-612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12.scope.
Oct  3 09:30:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:47 compute-0 podman[190894]: 2025-10-03 09:30:47.183489216 +0000 UTC m=+0.258792906 container init 612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12 (image=quay.io/ceph/ceph:v18, name=suspicious_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 09:30:47 compute-0 podman[190894]: 2025-10-03 09:30:47.198892652 +0000 UTC m=+0.274196322 container start 612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12 (image=quay.io/ceph/ceph:v18, name=suspicious_ptolemy, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:30:47 compute-0 suspicious_ptolemy[190910]: 167 167
Oct  3 09:30:47 compute-0 systemd[1]: libpod-612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12.scope: Deactivated successfully.
Oct  3 09:30:47 compute-0 podman[190894]: 2025-10-03 09:30:47.225512697 +0000 UTC m=+0.300816387 container attach 612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12 (image=quay.io/ceph/ceph:v18, name=suspicious_ptolemy, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 09:30:47 compute-0 podman[190894]: 2025-10-03 09:30:47.226991845 +0000 UTC m=+0.302295515 container died 612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12 (image=quay.io/ceph/ceph:v18, name=suspicious_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-71575c415d8293fa47c03e01b89ab1962cc7b6786040bde238c500a956e058ed-merged.mount: Deactivated successfully.
Oct  3 09:30:47 compute-0 podman[190894]: 2025-10-03 09:30:47.572635473 +0000 UTC m=+0.647939183 container remove 612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12 (image=quay.io/ceph/ceph:v18, name=suspicious_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 09:30:47 compute-0 systemd[1]: libpod-conmon-612e1515561878eb5d357ffdbd28a46d15411d73cda57d964a5659c5d1cd4d12.scope: Deactivated successfully.
Oct  3 09:30:47 compute-0 podman[190926]: 2025-10-03 09:30:47.678447574 +0000 UTC m=+0.069942739 container create 7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d (image=quay.io/ceph/ceph:v18, name=cool_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 09:30:47 compute-0 systemd[1]: Started libpod-conmon-7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d.scope.
Oct  3 09:30:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:47 compute-0 podman[190926]: 2025-10-03 09:30:47.650475105 +0000 UTC m=+0.041970320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:47 compute-0 podman[190926]: 2025-10-03 09:30:47.755937034 +0000 UTC m=+0.147432219 container init 7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d (image=quay.io/ceph/ceph:v18, name=cool_lumiere, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:30:47 compute-0 podman[190926]: 2025-10-03 09:30:47.76360117 +0000 UTC m=+0.155096335 container start 7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d (image=quay.io/ceph/ceph:v18, name=cool_lumiere, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 09:30:47 compute-0 podman[190926]: 2025-10-03 09:30:47.768407904 +0000 UTC m=+0.159903079 container attach 7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d (image=quay.io/ceph/ceph:v18, name=cool_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 09:30:47 compute-0 cool_lumiere[190943]: AQDHl99oBZ5hLxAAR2HwRhc1k6CcPZip7Uhnbw==
Oct  3 09:30:47 compute-0 systemd[1]: libpod-7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d.scope: Deactivated successfully.
Oct  3 09:30:47 compute-0 podman[190926]: 2025-10-03 09:30:47.803627466 +0000 UTC m=+0.195122681 container died 7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d (image=quay.io/ceph/ceph:v18, name=cool_lumiere, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-21a64b3ba545434f2fcb8070b2e882c18542fd1593f8bf0e02258e0365f3ff05-merged.mount: Deactivated successfully.
Oct  3 09:30:47 compute-0 podman[190926]: 2025-10-03 09:30:47.860459543 +0000 UTC m=+0.251954728 container remove 7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d (image=quay.io/ceph/ceph:v18, name=cool_lumiere, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:30:47 compute-0 systemd[1]: libpod-conmon-7da92b1cf64657e94ca6089908396f563a5026ca4fb0be41bc1f4670a6e3e40d.scope: Deactivated successfully.
Oct  3 09:30:47 compute-0 podman[190960]: 2025-10-03 09:30:47.935770413 +0000 UTC m=+0.051059962 container create 62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:30:47 compute-0 systemd[1]: Started libpod-conmon-62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065.scope.
Oct  3 09:30:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:48 compute-0 podman[190960]: 2025-10-03 09:30:47.918635402 +0000 UTC m=+0.033924971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:48 compute-0 podman[190960]: 2025-10-03 09:30:48.023940446 +0000 UTC m=+0.139230045 container init 62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:30:48 compute-0 podman[190960]: 2025-10-03 09:30:48.033728331 +0000 UTC m=+0.149017880 container start 62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 09:30:48 compute-0 podman[190960]: 2025-10-03 09:30:48.038660189 +0000 UTC m=+0.153949748 container attach 62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:30:48 compute-0 relaxed_fermat[190975]: AQDIl99oyys6AxAAK+1wwNzslINtl3BqUm6Fmg==
Oct  3 09:30:48 compute-0 systemd[1]: libpod-62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065.scope: Deactivated successfully.
Oct  3 09:30:48 compute-0 podman[190960]: 2025-10-03 09:30:48.062425363 +0000 UTC m=+0.177714922 container died 62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-269403cded793930894d9bcd5908e9699396d602374c4c0f94767096d60c6cce-merged.mount: Deactivated successfully.
Oct  3 09:30:48 compute-0 podman[190960]: 2025-10-03 09:30:48.128458285 +0000 UTC m=+0.243747834 container remove 62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065 (image=quay.io/ceph/ceph:v18, name=relaxed_fermat, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:30:48 compute-0 systemd[1]: libpod-conmon-62847c5960c1ce848dc18b27992d4befa95672db9cd6e2253e1c5f5634edd065.scope: Deactivated successfully.
Oct  3 09:30:48 compute-0 podman[190994]: 2025-10-03 09:30:48.210030137 +0000 UTC m=+0.056127035 container create 72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af (image=quay.io/ceph/ceph:v18, name=brave_fermi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:30:48 compute-0 systemd[1]: Started libpod-conmon-72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af.scope.
Oct  3 09:30:48 compute-0 podman[190994]: 2025-10-03 09:30:48.187503963 +0000 UTC m=+0.033600891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:48 compute-0 podman[190994]: 2025-10-03 09:30:48.306851038 +0000 UTC m=+0.152947956 container init 72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af (image=quay.io/ceph/ceph:v18, name=brave_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 09:30:48 compute-0 podman[190994]: 2025-10-03 09:30:48.31563375 +0000 UTC m=+0.161730668 container start 72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af (image=quay.io/ceph/ceph:v18, name=brave_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct  3 09:30:48 compute-0 podman[190994]: 2025-10-03 09:30:48.322509862 +0000 UTC m=+0.168606770 container attach 72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af (image=quay.io/ceph/ceph:v18, name=brave_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:30:48 compute-0 brave_fermi[191009]: AQDIl99o1Wj8ExAAdvHOOc+vnw9MhPLacGSfnQ==
Oct  3 09:30:48 compute-0 systemd[1]: libpod-72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af.scope: Deactivated successfully.
Oct  3 09:30:48 compute-0 podman[190994]: 2025-10-03 09:30:48.340893982 +0000 UTC m=+0.186990880 container died 72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af (image=quay.io/ceph/ceph:v18, name=brave_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-69841bf6eb51d7fc54eff62e825e9b8a98773e2a2303259873cda88f2646c426-merged.mount: Deactivated successfully.
Oct  3 09:30:48 compute-0 podman[190994]: 2025-10-03 09:30:48.401868221 +0000 UTC m=+0.247965119 container remove 72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af (image=quay.io/ceph/ceph:v18, name=brave_fermi, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 09:30:48 compute-0 systemd[1]: libpod-conmon-72a10dceb88137e9d39f743d0cfb58992e7633647120115c8296786ef3c128af.scope: Deactivated successfully.
Oct  3 09:30:48 compute-0 podman[191026]: 2025-10-03 09:30:48.477645357 +0000 UTC m=+0.050166293 container create 5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4 (image=quay.io/ceph/ceph:v18, name=relaxed_ramanujan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:30:48 compute-0 systemd[1]: Started libpod-conmon-5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4.scope.
Oct  3 09:30:48 compute-0 podman[191026]: 2025-10-03 09:30:48.45687923 +0000 UTC m=+0.029400186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765882b560f75bcb61f76e7a11327269b4cd337980fdd93e0500e7646bc840d2/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:48 compute-0 podman[191026]: 2025-10-03 09:30:48.581313238 +0000 UTC m=+0.153834204 container init 5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4 (image=quay.io/ceph/ceph:v18, name=relaxed_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 09:30:48 compute-0 podman[191026]: 2025-10-03 09:30:48.588532561 +0000 UTC m=+0.161053497 container start 5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4 (image=quay.io/ceph/ceph:v18, name=relaxed_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:30:48 compute-0 podman[191026]: 2025-10-03 09:30:48.594157502 +0000 UTC m=+0.166678438 container attach 5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4 (image=quay.io/ceph/ceph:v18, name=relaxed_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:30:48 compute-0 relaxed_ramanujan[191041]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct  3 09:30:48 compute-0 relaxed_ramanujan[191041]: setting min_mon_release = pacific
Oct  3 09:30:48 compute-0 relaxed_ramanujan[191041]: /usr/bin/monmaptool: set fsid to 9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:30:48 compute-0 relaxed_ramanujan[191041]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct  3 09:30:48 compute-0 systemd[1]: libpod-5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4.scope: Deactivated successfully.
Oct  3 09:30:48 compute-0 podman[191026]: 2025-10-03 09:30:48.624092053 +0000 UTC m=+0.196612989 container died 5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4 (image=quay.io/ceph/ceph:v18, name=relaxed_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 09:30:48 compute-0 podman[191026]: 2025-10-03 09:30:48.668404427 +0000 UTC m=+0.240925363 container remove 5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4 (image=quay.io/ceph/ceph:v18, name=relaxed_ramanujan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 09:30:48 compute-0 systemd[1]: libpod-conmon-5be17099ebb7ca3e63b954bb6f003b93d700414429acab7f76cf9236e60d0cf4.scope: Deactivated successfully.
Oct  3 09:30:48 compute-0 podman[191060]: 2025-10-03 09:30:48.740883236 +0000 UTC m=+0.047247129 container create d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e (image=quay.io/ceph/ceph:v18, name=sad_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:30:48 compute-0 systemd[1]: Started libpod-conmon-d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e.scope.
Oct  3 09:30:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08245abf6edf4d01eefa5d63e4b3def7203364d65d7bfe9167314fe8f894fc23/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08245abf6edf4d01eefa5d63e4b3def7203364d65d7bfe9167314fe8f894fc23/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08245abf6edf4d01eefa5d63e4b3def7203364d65d7bfe9167314fe8f894fc23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08245abf6edf4d01eefa5d63e4b3def7203364d65d7bfe9167314fe8f894fc23/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:48 compute-0 podman[191060]: 2025-10-03 09:30:48.719780499 +0000 UTC m=+0.026144412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:48 compute-0 podman[191060]: 2025-10-03 09:30:48.831730856 +0000 UTC m=+0.138094769 container init d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e (image=quay.io/ceph/ceph:v18, name=sad_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 09:30:48 compute-0 podman[191060]: 2025-10-03 09:30:48.84305169 +0000 UTC m=+0.149415583 container start d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e (image=quay.io/ceph/ceph:v18, name=sad_dijkstra, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:30:48 compute-0 podman[191060]: 2025-10-03 09:30:48.847956458 +0000 UTC m=+0.154320371 container attach d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e (image=quay.io/ceph/ceph:v18, name=sad_dijkstra, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:30:48 compute-0 systemd[1]: libpod-d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e.scope: Deactivated successfully.
Oct  3 09:30:48 compute-0 conmon[191076]: conmon d130f8cbc3f2585cb270 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e.scope/container/memory.events
Oct  3 09:30:48 compute-0 podman[191060]: 2025-10-03 09:30:48.944893032 +0000 UTC m=+0.251256955 container died d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e (image=quay.io/ceph/ceph:v18, name=sad_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-08245abf6edf4d01eefa5d63e4b3def7203364d65d7bfe9167314fe8f894fc23-merged.mount: Deactivated successfully.
Oct  3 09:30:49 compute-0 podman[191060]: 2025-10-03 09:30:49.079488658 +0000 UTC m=+0.385852551 container remove d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e (image=quay.io/ceph/ceph:v18, name=sad_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:30:49 compute-0 systemd[1]: libpod-conmon-d130f8cbc3f2585cb270c723625752c0106433205a25c397eda2a2a832f6e39e.scope: Deactivated successfully.
Oct  3 09:30:49 compute-0 systemd[1]: Reloading.
Oct  3 09:30:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:30:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:30:49 compute-0 systemd[1]: Reloading.
Oct  3 09:30:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:30:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:30:49 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Oct  3 09:30:49 compute-0 systemd[1]: Reloading.
Oct  3 09:30:50 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:30:50 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:30:50 compute-0 systemd[1]: Reached target Ceph cluster 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:30:50 compute-0 systemd[1]: Reloading.
Oct  3 09:30:50 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:30:50 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:30:50 compute-0 systemd[1]: Reloading.
Oct  3 09:30:50 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:30:50 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:30:51 compute-0 systemd[1]: Created slice Slice /system/ceph-9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:30:51 compute-0 systemd[1]: Reached target System Time Set.
Oct  3 09:30:51 compute-0 systemd[1]: Reached target System Time Synchronized.
Oct  3 09:30:51 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:30:51 compute-0 podman[191347]: 2025-10-03 09:30:51.389023488 +0000 UTC m=+0.044501991 container create 0c5bb4045e9c94ad09fa042ae6df24a3be50d7490891210428f59a925ace1030 (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259c88621862f4ba9d8f3bbb54e68b4869864459fac10998468b9bb5e3d0061/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259c88621862f4ba9d8f3bbb54e68b4869864459fac10998468b9bb5e3d0061/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259c88621862f4ba9d8f3bbb54e68b4869864459fac10998468b9bb5e3d0061/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:51 compute-0 podman[191347]: 2025-10-03 09:30:51.373140848 +0000 UTC m=+0.028619361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259c88621862f4ba9d8f3bbb54e68b4869864459fac10998468b9bb5e3d0061/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:51 compute-0 podman[191347]: 2025-10-03 09:30:51.498742694 +0000 UTC m=+0.154221217 container init 0c5bb4045e9c94ad09fa042ae6df24a3be50d7490891210428f59a925ace1030 (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:30:51 compute-0 podman[191347]: 2025-10-03 09:30:51.514744648 +0000 UTC m=+0.170223151 container start 0c5bb4045e9c94ad09fa042ae6df24a3be50d7490891210428f59a925ace1030 (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 09:30:51 compute-0 bash[191347]: 0c5bb4045e9c94ad09fa042ae6df24a3be50d7490891210428f59a925ace1030
Oct  3 09:30:51 compute-0 systemd[1]: Started Ceph mon.compute-0 for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:30:51 compute-0 ceph-mon[191366]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: pidfile_write: ignore empty --pid-file
Oct  3 09:30:51 compute-0 ceph-mon[191366]: load: jerasure load: lrc 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: RocksDB version: 7.9.2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Git sha 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: DB SUMMARY
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: DB Session ID:  W4CLN0B2XN4LAM4CHBIV
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: CURRENT file:  CURRENT
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: IDENTITY file:  IDENTITY
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                         Options.error_if_exists: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                       Options.create_if_missing: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                         Options.paranoid_checks: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                                     Options.env: 0x55d422c7fc40
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                                Options.info_log: 0x55d423558e80
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.max_file_opening_threads: 16
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                              Options.statistics: (nil)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                               Options.use_fsync: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                       Options.max_log_file_size: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                         Options.allow_fallocate: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                        Options.use_direct_reads: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:          Options.create_missing_column_families: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                              Options.db_log_dir: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                                 Options.wal_dir: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                   Options.advise_random_on_open: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                    Options.write_buffer_manager: 0x55d423568b40
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                            Options.rate_limiter: (nil)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.unordered_write: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                               Options.row_cache: None
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                              Options.wal_filter: None
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.allow_ingest_behind: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.two_write_queues: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.manual_wal_flush: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.wal_compression: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.atomic_flush: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                 Options.log_readahead_size: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.allow_data_in_errors: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.db_host_id: __hostname__
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.max_background_jobs: 2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.max_background_compactions: -1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.max_subcompactions: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.max_total_wal_size: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                          Options.max_open_files: -1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                          Options.bytes_per_sync: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:       Options.compaction_readahead_size: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.max_background_flushes: -1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Compression algorithms supported:
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         kZSTD supported: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         kXpressCompression supported: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         kBZip2Compression supported: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         kLZ4Compression supported: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         kZlibCompression supported: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         kLZ4HCCompression supported: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         kSnappyCompression supported: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:           Options.merge_operator: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:        Options.compaction_filter: None
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d423558a80)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   cache_index_and_filter_blocks: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   pin_top_level_index_and_filter: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   index_type: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   data_block_index_type: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   index_shortening: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   checksum: 4
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   no_block_cache: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   block_cache: 0x55d4235511f0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   block_cache_name: BinnedLRUCache
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   block_cache_options:
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:     capacity : 536870912
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:     num_shard_bits : 4
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:     strict_capacity_limit : 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:     high_pri_pool_ratio: 0.000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   block_cache_compressed: (nil)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   persistent_cache: (nil)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   block_size: 4096
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   block_size_deviation: 10
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   block_restart_interval: 16
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   index_block_restart_interval: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   metadata_block_size: 4096
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   partition_filters: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   use_delta_encoding: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   filter_policy: bloomfilter
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   whole_key_filtering: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   verify_compression: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   read_amp_bytes_per_bit: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   format_version: 5
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   enable_index_compression: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   block_align: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   max_auto_readahead_size: 262144
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   prepopulate_block_cache: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   initial_auto_readahead_size: 8192
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   num_file_reads_for_auto_readahead: 2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:        Options.write_buffer_size: 33554432
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:  Options.max_write_buffer_number: 2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:          Options.compression: NoCompression
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.num_levels: 7
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 86380f0c-e5ab-4a78-a709-deb613d5683c
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483851563528, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483851566990, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "W4CLN0B2XN4LAM4CHBIV", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483851567108, "job": 1, "event": "recovery_finished"}
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d42357ae00
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: DB pointer 0x55d423604000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:30:51 compute-0 ceph-mon[191366]: rocksdb: [db/db_impl/db_impl.cc:1111] 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: ** DB Stats **
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Uptime(secs): 0.0 total, 0.0 interval
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 09:30:51 compute-0 ceph-mon[191366]: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: ** Compaction Stats [default] **
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Oct  3 09:30:51 compute-0 ceph-mon[191366]:   L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
Oct  3 09:30:51 compute-0 ceph-mon[191366]:  Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
Oct  3 09:30:51 compute-0 ceph-mon[191366]:  Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: ** Compaction Stats [default] **
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Oct  3 09:30:51 compute-0 ceph-mon[191366]: User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Uptime(secs): 0.0 total, 0.0 interval
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Flush(GB): cumulative 0.000, interval 0.000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: AddFile(GB): cumulative 0.000, interval 0.000
Oct  3 09:30:51 compute-0 ceph-mon[191366]: AddFile(Total Files): cumulative 0, interval 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: AddFile(L0 Files): cumulative 0, interval 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: AddFile(Keys): cumulative 0, interval 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Cumulative compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Interval compaction: 0.00 GB write, 0.06 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Block cache BinnedLRUCache@0x55d4235511f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: ** File Read Latency Histogram By Level [default] **
Oct  3 09:30:51 compute-0 ceph-mon[191366]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@-1(???) e0 preinit fsid 9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(probing) e0 win_standalone_election
Oct  3 09:30:51 compute-0 ceph-mon[191366]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  3 09:30:51 compute-0 ceph-mon[191366]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  3 09:30:51 compute-0 podman[191367]: 2025-10-03 09:30:51.629861388 +0000 UTC m=+0.066169987 container create 7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d (image=quay.io/ceph/ceph:v18, name=pedantic_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  3 09:30:51 compute-0 ceph-mon[191366]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  3 09:30:51 compute-0 ceph-mon[191366]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  3 09:30:51 compute-0 ceph-mon[191366]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-03T09:30:48.880174Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,os=Linux}
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).mds e1 new map
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).mds e1 print_map
Oct  3 09:30:51 compute-0 ceph-mon[191366]: e1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: enable_multiple, ever_enabled_multiple: 1,1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
Oct  3 09:30:51 compute-0 ceph-mon[191366]: legacy client fscid: -1
Oct  3 09:30:51 compute-0 ceph-mon[191366]:  
Oct  3 09:30:51 compute-0 ceph-mon[191366]: No filesystems configured
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  3 09:30:51 compute-0 ceph-mon[191366]: log_channel(cluster) log [DBG] : fsmap 
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mkfs 9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct  3 09:30:51 compute-0 systemd[1]: Started libpod-conmon-7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d.scope.
Oct  3 09:30:51 compute-0 ceph-mon[191366]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  3 09:30:51 compute-0 podman[191367]: 2025-10-03 09:30:51.600787134 +0000 UTC m=+0.037095753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:51 compute-0 ceph-mon[191366]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  3 09:30:51 compute-0 ceph-mon[191366]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  3 09:30:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706d27c5613ead1dc66852a7cbdb05fb00ca8132109789f9f6ae56b05f14ff71/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706d27c5613ead1dc66852a7cbdb05fb00ca8132109789f9f6ae56b05f14ff71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/706d27c5613ead1dc66852a7cbdb05fb00ca8132109789f9f6ae56b05f14ff71/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:51 compute-0 podman[191367]: 2025-10-03 09:30:51.77428979 +0000 UTC m=+0.210598419 container init 7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d (image=quay.io/ceph/ceph:v18, name=pedantic_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:30:51 compute-0 podman[191367]: 2025-10-03 09:30:51.784305421 +0000 UTC m=+0.220614020 container start 7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d (image=quay.io/ceph/ceph:v18, name=pedantic_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 09:30:51 compute-0 podman[191367]: 2025-10-03 09:30:51.795188341 +0000 UTC m=+0.231496960 container attach 7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d (image=quay.io/ceph/ceph:v18, name=pedantic_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 09:30:52 compute-0 ceph-mon[191366]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  3 09:30:52 compute-0 ceph-mon[191366]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589214586' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:  cluster:
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    id:     9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    health: HEALTH_OK
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]: 
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:  services:
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    mon: 1 daemons, quorum compute-0 (age 0.561379s)
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    mgr: no daemons active
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    osd: 0 osds: 0 up, 0 in
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]: 
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:  data:
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    pools:   0 pools, 0 pgs
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    objects: 0 objects, 0 B
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    usage:   0 B used, 0 B / 0 B avail
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]:    pgs:     
Oct  3 09:30:52 compute-0 pedantic_kalam[191421]: 
Oct  3 09:30:52 compute-0 systemd[1]: libpod-7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d.scope: Deactivated successfully.
Oct  3 09:30:52 compute-0 podman[191367]: 2025-10-03 09:30:52.219116315 +0000 UTC m=+0.655424914 container died 7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d (image=quay.io/ceph/ceph:v18, name=pedantic_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-706d27c5613ead1dc66852a7cbdb05fb00ca8132109789f9f6ae56b05f14ff71-merged.mount: Deactivated successfully.
Oct  3 09:30:52 compute-0 podman[191367]: 2025-10-03 09:30:52.365524529 +0000 UTC m=+0.801833128 container remove 7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d (image=quay.io/ceph/ceph:v18, name=pedantic_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:30:52 compute-0 systemd[1]: libpod-conmon-7986f652137833322cc5220625ed29280fae6d8bf63bb224ffbe0086c21ca08d.scope: Deactivated successfully.
Oct  3 09:30:52 compute-0 podman[191458]: 2025-10-03 09:30:52.453331451 +0000 UTC m=+0.136479586 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:30:52 compute-0 podman[191471]: 2025-10-03 09:30:52.519362313 +0000 UTC m=+0.107644280 container create d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5 (image=quay.io/ceph/ceph:v18, name=quirky_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:30:52 compute-0 podman[191471]: 2025-10-03 09:30:52.457544187 +0000 UTC m=+0.045826154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:52 compute-0 systemd[1]: Started libpod-conmon-d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5.scope.
Oct  3 09:30:52 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619024e62fc5b2ec04161d4f61d07638d52dcc861d0d73f1c0722b71292ab14d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619024e62fc5b2ec04161d4f61d07638d52dcc861d0d73f1c0722b71292ab14d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619024e62fc5b2ec04161d4f61d07638d52dcc861d0d73f1c0722b71292ab14d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/619024e62fc5b2ec04161d4f61d07638d52dcc861d0d73f1c0722b71292ab14d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:52 compute-0 ceph-mon[191366]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  3 09:30:52 compute-0 podman[191471]: 2025-10-03 09:30:52.723648298 +0000 UTC m=+0.311930275 container init d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5 (image=quay.io/ceph/ceph:v18, name=quirky_kilby, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:30:52 compute-0 podman[191471]: 2025-10-03 09:30:52.741811872 +0000 UTC m=+0.330093829 container start d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5 (image=quay.io/ceph/ceph:v18, name=quirky_kilby, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:30:52 compute-0 podman[191471]: 2025-10-03 09:30:52.751818633 +0000 UTC m=+0.340100690 container attach d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5 (image=quay.io/ceph/ceph:v18, name=quirky_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 09:30:52 compute-0 podman[191489]: 2025-10-03 09:30:52.752846667 +0000 UTC m=+0.279215024 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:30:53 compute-0 ceph-mon[191366]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  3 09:30:53 compute-0 ceph-mon[191366]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3635610541' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  3 09:30:53 compute-0 ceph-mon[191366]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3635610541' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  3 09:30:53 compute-0 quirky_kilby[191507]: 
Oct  3 09:30:53 compute-0 quirky_kilby[191507]: [global]
Oct  3 09:30:53 compute-0 quirky_kilby[191507]: 	fsid = 9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:30:53 compute-0 quirky_kilby[191507]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  3 09:30:53 compute-0 quirky_kilby[191507]: 	osd_crush_chooseleaf_type = 0
Oct  3 09:30:53 compute-0 systemd[1]: libpod-d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5.scope: Deactivated successfully.
Oct  3 09:30:53 compute-0 podman[191546]: 2025-10-03 09:30:53.253045271 +0000 UTC m=+0.035101088 container died d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5 (image=quay.io/ceph/ceph:v18, name=quirky_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-619024e62fc5b2ec04161d4f61d07638d52dcc861d0d73f1c0722b71292ab14d-merged.mount: Deactivated successfully.
Oct  3 09:30:53 compute-0 podman[191546]: 2025-10-03 09:30:53.324596581 +0000 UTC m=+0.106652388 container remove d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5 (image=quay.io/ceph/ceph:v18, name=quirky_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:30:53 compute-0 systemd[1]: libpod-conmon-d092d8d717c7752bb14eceeda77db456802844f10b2f0a5727b54e7fca26f8b5.scope: Deactivated successfully.
Oct  3 09:30:53 compute-0 podman[191560]: 2025-10-03 09:30:53.41354894 +0000 UTC m=+0.056606751 container create 9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630 (image=quay.io/ceph/ceph:v18, name=blissful_cerf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:30:53 compute-0 systemd[1]: Started libpod-conmon-9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630.scope.
Oct  3 09:30:53 compute-0 podman[191560]: 2025-10-03 09:30:53.386566353 +0000 UTC m=+0.029624194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc51a91b5695545f4ebb13475c52932484fee5f8e50e49f059e175c0b356ea0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc51a91b5695545f4ebb13475c52932484fee5f8e50e49f059e175c0b356ea0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc51a91b5695545f4ebb13475c52932484fee5f8e50e49f059e175c0b356ea0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc51a91b5695545f4ebb13475c52932484fee5f8e50e49f059e175c0b356ea0e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:53 compute-0 podman[191560]: 2025-10-03 09:30:53.52370092 +0000 UTC m=+0.166758761 container init 9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630 (image=quay.io/ceph/ceph:v18, name=blissful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 09:30:53 compute-0 podman[191560]: 2025-10-03 09:30:53.535080415 +0000 UTC m=+0.178138226 container start 9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630 (image=quay.io/ceph/ceph:v18, name=blissful_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 09:30:53 compute-0 podman[191560]: 2025-10-03 09:30:53.54676251 +0000 UTC m=+0.189820321 container attach 9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630 (image=quay.io/ceph/ceph:v18, name=blissful_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:30:53 compute-0 ceph-mon[191366]: from='client.? 192.168.122.100:0/3635610541' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  3 09:30:53 compute-0 ceph-mon[191366]: from='client.? 192.168.122.100:0/3635610541' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  3 09:30:53 compute-0 ceph-mon[191366]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:30:53 compute-0 ceph-mon[191366]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3646435272' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:30:54 compute-0 systemd[1]: libpod-9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630.scope: Deactivated successfully.
Oct  3 09:30:54 compute-0 podman[191560]: 2025-10-03 09:30:54.029910017 +0000 UTC m=+0.672967828 container died 9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630 (image=quay.io/ceph/ceph:v18, name=blissful_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc51a91b5695545f4ebb13475c52932484fee5f8e50e49f059e175c0b356ea0e-merged.mount: Deactivated successfully.
Oct  3 09:30:54 compute-0 podman[191560]: 2025-10-03 09:30:54.099774553 +0000 UTC m=+0.742832364 container remove 9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630 (image=quay.io/ceph/ceph:v18, name=blissful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 09:30:54 compute-0 systemd[1]: libpod-conmon-9031f01009dce0b8620a86b3f6b2202c330c9a06af31aaa905207efce0d17630.scope: Deactivated successfully.
Oct  3 09:30:54 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:30:54 compute-0 podman[191604]: 2025-10-03 09:30:54.156904869 +0000 UTC m=+0.091771221 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:30:54 compute-0 ceph-mon[191366]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  3 09:30:54 compute-0 ceph-mon[191366]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  3 09:30:54 compute-0 ceph-mon[191366]: mon.compute-0@0(leader) e1 shutdown
Oct  3 09:30:54 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0[191362]: 2025-10-03T09:30:54.351+0000 7fd21a2ff640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  3 09:30:54 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0[191362]: 2025-10-03T09:30:54.351+0000 7fd21a2ff640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  3 09:30:54 compute-0 ceph-mon[191366]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  3 09:30:54 compute-0 ceph-mon[191366]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  3 09:30:54 compute-0 podman[191665]: 2025-10-03 09:30:54.520538665 +0000 UTC m=+0.219347281 container died 0c5bb4045e9c94ad09fa042ae6df24a3be50d7490891210428f59a925ace1030 (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:30:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2259c88621862f4ba9d8f3bbb54e68b4869864459fac10998468b9bb5e3d0061-merged.mount: Deactivated successfully.
Oct  3 09:30:54 compute-0 podman[191665]: 2025-10-03 09:30:54.591181784 +0000 UTC m=+0.289990400 container remove 0c5bb4045e9c94ad09fa042ae6df24a3be50d7490891210428f59a925ace1030 (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:30:54 compute-0 bash[191665]: ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0
Oct  3 09:30:54 compute-0 systemd[1]: ceph-9b4e8c9a-5555-5510-a631-4742a1182561@mon.compute-0.service: Deactivated successfully.
Oct  3 09:30:54 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:30:54 compute-0 systemd[1]: ceph-9b4e8c9a-5555-5510-a631-4742a1182561@mon.compute-0.service: Consumed 1.362s CPU time.
Oct  3 09:30:54 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:30:55 compute-0 podman[191764]: 2025-10-03 09:30:55.096960229 +0000 UTC m=+0.053969126 container create 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd38ba9c5b08bd709d889f3d16e50d8bb9c0752107bd3eb3f06ba4f2670eb0c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd38ba9c5b08bd709d889f3d16e50d8bb9c0752107bd3eb3f06ba4f2670eb0c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd38ba9c5b08bd709d889f3d16e50d8bb9c0752107bd3eb3f06ba4f2670eb0c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd38ba9c5b08bd709d889f3d16e50d8bb9c0752107bd3eb3f06ba4f2670eb0c0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:55 compute-0 podman[191764]: 2025-10-03 09:30:55.075802548 +0000 UTC m=+0.032811465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:55 compute-0 podman[191764]: 2025-10-03 09:30:55.219554268 +0000 UTC m=+0.176563185 container init 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:30:55 compute-0 podman[191764]: 2025-10-03 09:30:55.227852375 +0000 UTC m=+0.184861262 container start 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:30:55 compute-0 bash[191764]: 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b
Oct  3 09:30:55 compute-0 systemd[1]: Started Ceph mon.compute-0 for 9b4e8c9a-5555-5510-a631-4742a1182561.
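The unit systemd just reported wraps a podman container; the create/init/start events above show cephadm's naming scheme ceph-<fsid>-<daemon>-<host> (here ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0). A minimal sketch of listing those daemon containers over the podman socket, assuming the third-party podman-py bindings and the default rootful socket path (both assumptions, not shown in this log):

    # Sketch only: assumes podman-py ("pip install podman") and the rootful
    # API socket at /run/podman/podman.sock; neither appears in the log.
    from podman import PodmanClient

    FSID = "9b4e8c9a-5555-5510-a631-4742a1182561"

    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        for ctr in client.containers.list(all=True):
            # cephadm names daemon containers ceph-<fsid>-<daemon>-<host>
            if ctr.name.startswith(f"ceph-{FSID}"):
                print(ctr.name, ctr.status)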
Oct  3 09:30:55 compute-0 ceph-mon[191783]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:30:55 compute-0 ceph-mon[191783]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  3 09:30:55 compute-0 ceph-mon[191783]: pidfile_write: ignore empty --pid-file
Oct  3 09:30:55 compute-0 ceph-mon[191783]: load: jerasure load: lrc 
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: RocksDB version: 7.9.2
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Git sha 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: DB SUMMARY
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: DB Session ID:  FRSIUNOLIZ7G5G8L8D2S
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: CURRENT file:  CURRENT
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: IDENTITY file:  IDENTITY
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54560 ; 
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                         Options.error_if_exists: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                       Options.create_if_missing: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                         Options.paranoid_checks: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                                     Options.env: 0x56005c5f9c40
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                                Options.info_log: 0x56005dde3040
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.max_file_opening_threads: 16
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                              Options.statistics: (nil)
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                               Options.use_fsync: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                       Options.max_log_file_size: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                         Options.allow_fallocate: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                        Options.use_direct_reads: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:          Options.create_missing_column_families: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                              Options.db_log_dir: 
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                                 Options.wal_dir: 
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                   Options.advise_random_on_open: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                    Options.write_buffer_manager: 0x56005ddf2b40
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                            Options.rate_limiter: (nil)
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.unordered_write: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                               Options.row_cache: None
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                              Options.wal_filter: None
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.allow_ingest_behind: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.two_write_queues: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.manual_wal_flush: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.wal_compression: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.atomic_flush: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                 Options.log_readahead_size: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.allow_data_in_errors: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.db_host_id: __hostname__
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.max_background_jobs: 2
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.max_background_compactions: -1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.max_subcompactions: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.max_total_wal_size: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                          Options.max_open_files: -1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                          Options.bytes_per_sync: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:       Options.compaction_readahead_size: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.max_background_flushes: -1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Compression algorithms supported:
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     kZSTD supported: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     kXpressCompression supported: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     kBZip2Compression supported: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     kZSTDNotFinalCompression supported: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     kLZ4Compression supported: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     kZlibCompression supported: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     kLZ4HCCompression supported: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     kSnappyCompression supported: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:           Options.merge_operator: 
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:        Options.compaction_filter: None
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56005dde2c40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56005dddb1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:        Options.write_buffer_size: 33554432
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:  Options.max_write_buffer_number: 2
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:          Options.compression: NoCompression
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.num_levels: 7
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 86380f0c-e5ab-4a78-a709-deb613d5683c
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483855270516, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483855280066, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52691, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50293, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483855, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483855280196, "job": 1, "event": "recovery_finished"}
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56005de04e00
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: DB pointer 0x56005df0c000
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0
 Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 1.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 1.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
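The dump above shows where this monitor keeps its state: a RocksDB instance at /var/lib/ceph/mon/ceph-compute-0/store.db with a 512 MB BinnedLRUCache block cache and compression disabled. A sketch of inspecting such a store offline, assuming the third-party python-rocksdb bindings and a copy of the directory taken while the mon is stopped (the live DB is locked by ceph-mon):

    # Sketch only: assumes python-rocksdb ("pip install python-rocksdb") and a
    # copy of the store; do not open the live DB under a running ceph-mon.
    import rocksdb

    opts = rocksdb.Options(create_if_missing=False)
    db = rocksdb.DB("/tmp/store.db.copy", opts, read_only=True)

    it = db.iterkeys()
    it.seek_to_first()
    for key in it:
        # Mon keys are namespaced by service (paxos, osdmap, auth, ...)
        print(key[:48])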
Oct  3 09:30:55 compute-0 ceph-mon[191783]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(???) e1 preinit fsid 9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(???).mds e1 new map
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  3 09:30:55 compute-0 ceph-mon[191783]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct  3 09:30:55 compute-0 podman[191784]: 2025-10-03 09:30:55.32603135 +0000 UTC m=+0.062607983 container create ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983 (image=quay.io/ceph/ceph:v18, name=awesome_mestorf, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  3 09:30:55 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  3 09:30:55 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  3 09:30:55 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : fsmap 
Oct  3 09:30:55 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  3 09:30:55 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  3 09:30:55 compute-0 systemd[1]: Started libpod-conmon-ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983.scope.
Oct  3 09:30:55 compute-0 podman[191784]: 2025-10-03 09:30:55.298304339 +0000 UTC m=+0.034880962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  3 09:30:55 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962883e15318c1475401fcd9dfca8ace8cc370207f0544a80076b2b42f9c9bd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962883e15318c1475401fcd9dfca8ace8cc370207f0544a80076b2b42f9c9bd4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962883e15318c1475401fcd9dfca8ace8cc370207f0544a80076b2b42f9c9bd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:55 compute-0 podman[191784]: 2025-10-03 09:30:55.472305261 +0000 UTC m=+0.208881884 container init ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983 (image=quay.io/ceph/ceph:v18, name=awesome_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:30:55 compute-0 podman[191784]: 2025-10-03 09:30:55.482686335 +0000 UTC m=+0.219262938 container start ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983 (image=quay.io/ceph/ceph:v18, name=awesome_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:30:55 compute-0 podman[191784]: 2025-10-03 09:30:55.493793422 +0000 UTC m=+0.230370065 container attach ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983 (image=quay.io/ceph/ceph:v18, name=awesome_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 09:30:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct  3 09:30:55 compute-0 systemd[1]: libpod-ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983.scope: Deactivated successfully.
Oct  3 09:30:55 compute-0 podman[191864]: 2025-10-03 09:30:55.992895991 +0000 UTC m=+0.031078360 container died ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983 (image=quay.io/ceph/ceph:v18, name=awesome_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-962883e15318c1475401fcd9dfca8ace8cc370207f0544a80076b2b42f9c9bd4-merged.mount: Deactivated successfully.
Oct  3 09:30:56 compute-0 podman[191864]: 2025-10-03 09:30:56.067684704 +0000 UTC m=+0.105867053 container remove ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983 (image=quay.io/ceph/ceph:v18, name=awesome_mestorf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:30:56 compute-0 systemd[1]: libpod-conmon-ec219ba1bd5ccaf1bb00c3a92b5433ebbdb5f722a8dfb4d788497bfc666c3983.scope: Deactivated successfully.
Oct  3 09:30:56 compute-0 podman[191877]: 2025-10-03 09:30:56.168340149 +0000 UTC m=+0.061975343 container create ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb (image=quay.io/ceph/ceph:v18, name=cool_hopper, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:30:56 compute-0 podman[191877]: 2025-10-03 09:30:56.142661814 +0000 UTC m=+0.036297008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:56 compute-0 systemd[1]: Started libpod-conmon-ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb.scope.
Oct  3 09:30:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf222cf6b15be91451370fca59adc53e56747aff4600616208870b6f595dcf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf222cf6b15be91451370fca59adc53e56747aff4600616208870b6f595dcf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf222cf6b15be91451370fca59adc53e56747aff4600616208870b6f595dcf6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:56 compute-0 podman[191877]: 2025-10-03 09:30:56.30682014 +0000 UTC m=+0.200455364 container init ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb (image=quay.io/ceph/ceph:v18, name=cool_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 09:30:56 compute-0 podman[191877]: 2025-10-03 09:30:56.320128907 +0000 UTC m=+0.213764081 container start ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb (image=quay.io/ceph/ceph:v18, name=cool_hopper, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 09:30:56 compute-0 podman[191877]: 2025-10-03 09:30:56.330374966 +0000 UTC m=+0.224010160 container attach ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb (image=quay.io/ceph/ceph:v18, name=cool_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 09:30:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct  3 09:30:56 compute-0 systemd[1]: libpod-ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb.scope: Deactivated successfully.
Oct  3 09:30:56 compute-0 podman[191919]: 2025-10-03 09:30:56.819753923 +0000 UTC m=+0.041776183 container died ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb (image=quay.io/ceph/ceph:v18, name=cool_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbf222cf6b15be91451370fca59adc53e56747aff4600616208870b6f595dcf6-merged.mount: Deactivated successfully.
Oct  3 09:30:56 compute-0 podman[191919]: 2025-10-03 09:30:56.936120753 +0000 UTC m=+0.158142933 container remove ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb (image=quay.io/ceph/ceph:v18, name=cool_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:30:56 compute-0 systemd[1]: libpod-conmon-ecfcb88b31a487537fe13791b63faf0ab0965a7d003bcffbde5d92ae5b0893eb.scope: Deactivated successfully.
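The two throwaway containers above (awesome_mestorf, cool_hopper) each existed only to run one ceph CLI command; the mon's handle_command lines show them setting public_network and then cluster_network. A sketch of the equivalent calls from the host; the target section ("global") and the CIDR values are placeholders, since neither is visible in the log:

    import subprocess

    # Placeholder section and CIDRs: the log only shows the option names.
    settings = {
        "public_network": "192.168.122.0/24",
        "cluster_network": "192.168.122.0/24",
    }
    for name, value in settings.items():
        # Same effect as each short-lived container: ceph config set ...
        subprocess.run(["ceph", "config", "set", "global", name, value],
                       check=True)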
Oct  3 09:30:57 compute-0 systemd[1]: Reloading.
Oct  3 09:30:57 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:30:57 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:30:57 compute-0 systemd[1]: Reloading.
Oct  3 09:30:57 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:30:57 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:30:58 compute-0 systemd[1]: Starting Ceph mgr.compute-0.vtkhde for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:30:58 compute-0 podman[192052]: 2025-10-03 09:30:58.387468504 +0000 UTC m=+0.037188265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:58 compute-0 podman[192052]: 2025-10-03 09:30:58.531872045 +0000 UTC m=+0.181591786 container create b32ef6df7b9354134cb339b9c89a6b3a129ca5bff8bee72e946514db8f67e278 (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 09:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee92d41f866a3f058476755452c1de6e4b01a86ece34644dc0b43400dea36a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee92d41f866a3f058476755452c1de6e4b01a86ece34644dc0b43400dea36a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee92d41f866a3f058476755452c1de6e4b01a86ece34644dc0b43400dea36a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee92d41f866a3f058476755452c1de6e4b01a86ece34644dc0b43400dea36a96/merged/var/lib/ceph/mgr/ceph-compute-0.vtkhde supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:58 compute-0 podman[192052]: 2025-10-03 09:30:58.761516356 +0000 UTC m=+0.411236127 container init b32ef6df7b9354134cb339b9c89a6b3a129ca5bff8bee72e946514db8f67e278 (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct  3 09:30:58 compute-0 podman[192052]: 2025-10-03 09:30:58.776015942 +0000 UTC m=+0.425735693 container start b32ef6df7b9354134cb339b9c89a6b3a129ca5bff8bee72e946514db8f67e278 (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:30:58 compute-0 ceph-mgr[192071]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:30:58 compute-0 ceph-mgr[192071]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  3 09:30:58 compute-0 ceph-mgr[192071]: pidfile_write: ignore empty --pid-file
Oct  3 09:30:58 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'alerts'
Oct  3 09:30:59 compute-0 bash[192052]: b32ef6df7b9354134cb339b9c89a6b3a129ca5bff8bee72e946514db8f67e278
Oct  3 09:30:59 compute-0 systemd[1]: Started Ceph mgr.compute-0.vtkhde for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:30:59 compute-0 ceph-mgr[192071]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  3 09:30:59 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'balancer'
Oct  3 09:30:59 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:30:59.243+0000 7f447edca140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  3 09:30:59 compute-0 podman[192096]: 2025-10-03 09:30:59.312471091 +0000 UTC m=+0.055624278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:30:59 compute-0 podman[192096]: 2025-10-03 09:30:59.452359937 +0000 UTC m=+0.195513124 container create e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2 (image=quay.io/ceph/ceph:v18, name=thirsty_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 09:30:59 compute-0 ceph-mgr[192071]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  3 09:30:59 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'cephadm'
Oct  3 09:30:59 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:30:59.526+0000 7f447edca140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  3 09:30:59 compute-0 systemd[1]: Started libpod-conmon-e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2.scope.
Oct  3 09:30:59 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6964c2b4558699023ed2794d7ce85a092c1b2b63aafde141cc9659c3db525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6964c2b4558699023ed2794d7ce85a092c1b2b63aafde141cc9659c3db525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30e6964c2b4558699023ed2794d7ce85a092c1b2b63aafde141cc9659c3db525/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:30:59 compute-0 podman[157165]: time="2025-10-03T09:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:30:59 compute-0 podman[192096]: 2025-10-03 09:30:59.756885383 +0000 UTC m=+0.500038570 container init e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2 (image=quay.io/ceph/ceph:v18, name=thirsty_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:30:59 compute-0 podman[192096]: 2025-10-03 09:30:59.771314947 +0000 UTC m=+0.514468134 container start e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2 (image=quay.io/ceph/ceph:v18, name=thirsty_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 09:30:59 compute-0 podman[192096]: 2025-10-03 09:30:59.788718496 +0000 UTC m=+0.531871703 container attach e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2 (image=quay.io/ceph/ceph:v18, name=thirsty_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 09:30:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23491 "" "Go-http-client/1.1"
Oct  3 09:30:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Oct  3 09:31:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  3 09:31:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1748354147' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]: 
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]: {
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "health": {
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "status": "HEALTH_OK",
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "checks": {},
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "mutes": []
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    },
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "election_epoch": 5,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "quorum": [
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        0
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    ],
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "quorum_names": [
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "compute-0"
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    ],
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "quorum_age": 4,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "monmap": {
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "epoch": 1,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "min_mon_release_name": "reef",
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_mons": 1
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    },
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "osdmap": {
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "epoch": 1,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_osds": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_up_osds": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "osd_up_since": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_in_osds": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "osd_in_since": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_remapped_pgs": 0
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    },
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "pgmap": {
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "pgs_by_state": [],
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_pgs": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_pools": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_objects": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "data_bytes": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "bytes_used": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "bytes_avail": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "bytes_total": 0
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    },
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "fsmap": {
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "epoch": 1,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "by_rank": [],
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "up:standby": 0
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    },
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "mgrmap": {
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "available": false,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "num_standbys": 0,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "modules": [
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:            "iostat",
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:            "nfs",
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:            "restful"
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        ],
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "services": {}
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    },
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "servicemap": {
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "epoch": 1,
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "modified": "2025-10-03T09:30:51.647609+0000",
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:        "services": {}
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    },
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]:    "progress_events": {}
Oct  3 09:31:00 compute-0 thirsty_lamarr[192111]: }
Oct  3 09:31:00 compute-0 systemd[1]: libpod-e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2.scope: Deactivated successfully.
Oct  3 09:31:00 compute-0 podman[192096]: 2025-10-03 09:31:00.244156102 +0000 UTC m=+0.987309289 container died e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2 (image=quay.io/ceph/ceph:v18, name=thirsty_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-30e6964c2b4558699023ed2794d7ce85a092c1b2b63aafde141cc9659c3db525-merged.mount: Deactivated successfully.
Oct  3 09:31:00 compute-0 podman[192096]: 2025-10-03 09:31:00.331991385 +0000 UTC m=+1.075144582 container remove e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2 (image=quay.io/ceph/ceph:v18, name=thirsty_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:00 compute-0 systemd[1]: libpod-conmon-e5ca5b73825e244bfee786183ce81bc275978c0b40451bcd26b4982f3aeb22c2.scope: Deactivated successfully.
Oct  3 09:31:01 compute-0 openstack_network_exporter[159287]: ERROR   09:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:31:01 compute-0 openstack_network_exporter[159287]: ERROR   09:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:31:01 compute-0 openstack_network_exporter[159287]: ERROR   09:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:31:01 compute-0 openstack_network_exporter[159287]: ERROR   09:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:31:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:31:01 compute-0 openstack_network_exporter[159287]: ERROR   09:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:31:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:31:01 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'crash'
Oct  3 09:31:02 compute-0 ceph-mgr[192071]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  3 09:31:02 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:02.114+0000 7f447edca140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  3 09:31:02 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'dashboard'
Oct  3 09:31:02 compute-0 podman[192160]: 2025-10-03 09:31:02.43130994 +0000 UTC m=+0.069212425 container create e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5 (image=quay.io/ceph/ceph:v18, name=quizzical_lalande, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:02 compute-0 podman[192160]: 2025-10-03 09:31:02.395648634 +0000 UTC m=+0.033551149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:02 compute-0 systemd[1]: Started libpod-conmon-e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5.scope.
Oct  3 09:31:02 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b31bea366143f82058d9e71d0b02e1a88b0ffee4871885a80f81c85847e3cac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b31bea366143f82058d9e71d0b02e1a88b0ffee4871885a80f81c85847e3cac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b31bea366143f82058d9e71d0b02e1a88b0ffee4871885a80f81c85847e3cac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:02 compute-0 podman[192160]: 2025-10-03 09:31:02.592843831 +0000 UTC m=+0.230746326 container init e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5 (image=quay.io/ceph/ceph:v18, name=quizzical_lalande, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:31:02 compute-0 podman[192160]: 2025-10-03 09:31:02.600829987 +0000 UTC m=+0.238732472 container start e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5 (image=quay.io/ceph/ceph:v18, name=quizzical_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:02 compute-0 podman[192160]: 2025-10-03 09:31:02.626346617 +0000 UTC m=+0.264249102 container attach e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5 (image=quay.io/ceph/ceph:v18, name=quizzical_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:31:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  3 09:31:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/727798182' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]: 
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]: {
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "health": {
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "status": "HEALTH_OK",
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "checks": {},
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "mutes": []
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    },
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "election_epoch": 5,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "quorum": [
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        0
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    ],
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "quorum_names": [
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "compute-0"
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    ],
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "quorum_age": 7,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "monmap": {
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "epoch": 1,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "min_mon_release_name": "reef",
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_mons": 1
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    },
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "osdmap": {
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "epoch": 1,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_osds": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_up_osds": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "osd_up_since": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_in_osds": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "osd_in_since": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_remapped_pgs": 0
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    },
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "pgmap": {
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "pgs_by_state": [],
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_pgs": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_pools": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_objects": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "data_bytes": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "bytes_used": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "bytes_avail": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "bytes_total": 0
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    },
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "fsmap": {
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "epoch": 1,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "by_rank": [],
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "up:standby": 0
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    },
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "mgrmap": {
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "available": false,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "num_standbys": 0,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "modules": [
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:            "iostat",
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:            "nfs",
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:            "restful"
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        ],
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "services": {}
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    },
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "servicemap": {
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "epoch": 1,
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "modified": "2025-10-03T09:30:51.647609+0000",
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:        "services": {}
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    },
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]:    "progress_events": {}
Oct  3 09:31:03 compute-0 quizzical_lalande[192176]: }
Oct  3 09:31:03 compute-0 systemd[1]: libpod-e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5.scope: Deactivated successfully.
Oct  3 09:31:03 compute-0 podman[192160]: 2025-10-03 09:31:03.069662424 +0000 UTC m=+0.707564899 container died e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5 (image=quay.io/ceph/ceph:v18, name=quizzical_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b31bea366143f82058d9e71d0b02e1a88b0ffee4871885a80f81c85847e3cac-merged.mount: Deactivated successfully.
Oct  3 09:31:03 compute-0 podman[192160]: 2025-10-03 09:31:03.181013203 +0000 UTC m=+0.818915688 container remove e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5 (image=quay.io/ceph/ceph:v18, name=quizzical_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:31:03 compute-0 systemd[1]: libpod-conmon-e1db6a35fb4cdf9c9d3123c93ce30243d2997f7d2183d45d4cb6f1735157dca5.scope: Deactivated successfully.
Oct  3 09:31:03 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'devicehealth'
Oct  3 09:31:04 compute-0 ceph-mgr[192071]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  3 09:31:04 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'diskprediction_local'
Oct  3 09:31:04 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:04.281+0000 7f447edca140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  3 09:31:04 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  3 09:31:04 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  3 09:31:04 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]:  from numpy import show_config as show_numpy_config
Oct  3 09:31:04 compute-0 ceph-mgr[192071]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  3 09:31:04 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:04.864+0000 7f447edca140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  3 09:31:04 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'influx'
Oct  3 09:31:05 compute-0 ceph-mgr[192071]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  3 09:31:05 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'insights'
Oct  3 09:31:05 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:05.121+0000 7f447edca140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  3 09:31:05 compute-0 podman[192212]: 2025-10-03 09:31:05.348901501 +0000 UTC m=+0.131188746 container create f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86 (image=quay.io/ceph/ceph:v18, name=practical_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:05 compute-0 podman[192212]: 2025-10-03 09:31:05.259170458 +0000 UTC m=+0.041457723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:05 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'iostat'
Oct  3 09:31:05 compute-0 systemd[1]: Started libpod-conmon-f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86.scope.
Oct  3 09:31:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41430bad0f05b24fc646deccd59a343f1bdf6379aaaafe482b76e9f592d06605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41430bad0f05b24fc646deccd59a343f1bdf6379aaaafe482b76e9f592d06605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41430bad0f05b24fc646deccd59a343f1bdf6379aaaafe482b76e9f592d06605/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:05 compute-0 podman[192212]: 2025-10-03 09:31:05.531165489 +0000 UTC m=+0.313452764 container init f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86 (image=quay.io/ceph/ceph:v18, name=practical_pike, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 09:31:05 compute-0 podman[192212]: 2025-10-03 09:31:05.538742882 +0000 UTC m=+0.321030127 container start f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86 (image=quay.io/ceph/ceph:v18, name=practical_pike, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:05 compute-0 podman[192212]: 2025-10-03 09:31:05.548036081 +0000 UTC m=+0.330323326 container attach f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86 (image=quay.io/ceph/ceph:v18, name=practical_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 09:31:05 compute-0 ceph-mgr[192071]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  3 09:31:05 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'k8sevents'
Oct  3 09:31:05 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:05.686+0000 7f447edca140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  3 09:31:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  3 09:31:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097720251' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  3 09:31:05 compute-0 practical_pike[192228]: 
Oct  3 09:31:05 compute-0 practical_pike[192228]: {
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "health": {
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "status": "HEALTH_OK",
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "checks": {},
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "mutes": []
Oct  3 09:31:05 compute-0 practical_pike[192228]:    },
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "election_epoch": 5,
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "quorum": [
Oct  3 09:31:05 compute-0 practical_pike[192228]:        0
Oct  3 09:31:05 compute-0 practical_pike[192228]:    ],
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "quorum_names": [
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "compute-0"
Oct  3 09:31:05 compute-0 practical_pike[192228]:    ],
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "quorum_age": 10,
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "monmap": {
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "epoch": 1,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "min_mon_release_name": "reef",
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_mons": 1
Oct  3 09:31:05 compute-0 practical_pike[192228]:    },
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "osdmap": {
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "epoch": 1,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_osds": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_up_osds": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "osd_up_since": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_in_osds": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "osd_in_since": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_remapped_pgs": 0
Oct  3 09:31:05 compute-0 practical_pike[192228]:    },
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "pgmap": {
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "pgs_by_state": [],
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_pgs": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_pools": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_objects": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "data_bytes": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "bytes_used": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "bytes_avail": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "bytes_total": 0
Oct  3 09:31:05 compute-0 practical_pike[192228]:    },
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "fsmap": {
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "epoch": 1,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "by_rank": [],
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "up:standby": 0
Oct  3 09:31:05 compute-0 practical_pike[192228]:    },
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "mgrmap": {
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "available": false,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "num_standbys": 0,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "modules": [
Oct  3 09:31:05 compute-0 practical_pike[192228]:            "iostat",
Oct  3 09:31:05 compute-0 practical_pike[192228]:            "nfs",
Oct  3 09:31:05 compute-0 practical_pike[192228]:            "restful"
Oct  3 09:31:05 compute-0 practical_pike[192228]:        ],
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "services": {}
Oct  3 09:31:05 compute-0 practical_pike[192228]:    },
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "servicemap": {
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "epoch": 1,
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "modified": "2025-10-03T09:30:51.647609+0000",
Oct  3 09:31:05 compute-0 practical_pike[192228]:        "services": {}
Oct  3 09:31:05 compute-0 practical_pike[192228]:    },
Oct  3 09:31:05 compute-0 practical_pike[192228]:    "progress_events": {}
Oct  3 09:31:05 compute-0 practical_pike[192228]: }
Oct  3 09:31:05 compute-0 systemd[1]: libpod-f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86.scope: Deactivated successfully.
Oct  3 09:31:05 compute-0 podman[192212]: 2025-10-03 09:31:05.990541472 +0000 UTC m=+0.772828737 container died f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86 (image=quay.io/ceph/ceph:v18, name=practical_pike, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-41430bad0f05b24fc646deccd59a343f1bdf6379aaaafe482b76e9f592d06605-merged.mount: Deactivated successfully.
Oct  3 09:31:06 compute-0 podman[192212]: 2025-10-03 09:31:06.512112862 +0000 UTC m=+1.294400107 container remove f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86 (image=quay.io/ceph/ceph:v18, name=practical_pike, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:06 compute-0 systemd[1]: libpod-conmon-f257b9b2a07cfde7f0475663e146eea471f74147faa9fa38bf138bc0ec98ce86.scope: Deactivated successfully.
Oct  3 09:31:07 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'localpool'
Oct  3 09:31:07 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'mds_autoscaler'
Oct  3 09:31:08 compute-0 podman[192265]: 2025-10-03 09:31:08.600686482 +0000 UTC m=+0.057583191 container create 623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679 (image=quay.io/ceph/ceph:v18, name=competent_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 09:31:08 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'mirroring'
Oct  3 09:31:08 compute-0 systemd[1]: Started libpod-conmon-623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679.scope.
Oct  3 09:31:08 compute-0 podman[192265]: 2025-10-03 09:31:08.572032711 +0000 UTC m=+0.028929450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b513caf6f2da7ff3180c0cb5b90416a94a157614422f1171750f7ed7ceedbae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b513caf6f2da7ff3180c0cb5b90416a94a157614422f1171750f7ed7ceedbae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b513caf6f2da7ff3180c0cb5b90416a94a157614422f1171750f7ed7ceedbae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:08 compute-0 podman[192265]: 2025-10-03 09:31:08.734982628 +0000 UTC m=+0.191879407 container init 623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679 (image=quay.io/ceph/ceph:v18, name=competent_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:08 compute-0 podman[192265]: 2025-10-03 09:31:08.742959064 +0000 UTC m=+0.199855753 container start 623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679 (image=quay.io/ceph/ceph:v18, name=competent_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:08 compute-0 podman[192265]: 2025-10-03 09:31:08.747969496 +0000 UTC m=+0.204866215 container attach 623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679 (image=quay.io/ceph/ceph:v18, name=competent_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:31:08 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'nfs'
Oct  3 09:31:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  3 09:31:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473458993' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]: 
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]: {
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "health": {
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "status": "HEALTH_OK",
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "checks": {},
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "mutes": []
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    },
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "election_epoch": 5,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "quorum": [
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        0
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    ],
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "quorum_names": [
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "compute-0"
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    ],
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "quorum_age": 13,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "monmap": {
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "epoch": 1,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "min_mon_release_name": "reef",
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_mons": 1
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    },
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "osdmap": {
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "epoch": 1,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_osds": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_up_osds": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "osd_up_since": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_in_osds": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "osd_in_since": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_remapped_pgs": 0
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    },
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "pgmap": {
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "pgs_by_state": [],
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_pgs": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_pools": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_objects": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "data_bytes": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "bytes_used": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "bytes_avail": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "bytes_total": 0
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    },
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "fsmap": {
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "epoch": 1,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "by_rank": [],
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "up:standby": 0
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    },
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "mgrmap": {
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "available": false,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "num_standbys": 0,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "modules": [
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:            "iostat",
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:            "nfs",
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:            "restful"
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        ],
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "services": {}
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    },
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "servicemap": {
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "epoch": 1,
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "modified": "2025-10-03T09:30:51.647609+0000",
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:        "services": {}
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    },
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]:    "progress_events": {}
Oct  3 09:31:09 compute-0 competent_sinoussi[192280]: }
Oct  3 09:31:09 compute-0 systemd[1]: libpod-623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679.scope: Deactivated successfully.
Oct  3 09:31:09 compute-0 podman[192265]: 2025-10-03 09:31:09.168407016 +0000 UTC m=+0.625303705 container died 623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679 (image=quay.io/ceph/ceph:v18, name=competent_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 09:31:09 compute-0 ceph-mgr[192071]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  3 09:31:09 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'orchestrator'
Oct  3 09:31:09 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:09.642+0000 7f447edca140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  3 09:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b513caf6f2da7ff3180c0cb5b90416a94a157614422f1171750f7ed7ceedbae-merged.mount: Deactivated successfully.
Oct  3 09:31:09 compute-0 podman[192265]: 2025-10-03 09:31:09.811619268 +0000 UTC m=+1.268515967 container remove 623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679 (image=quay.io/ceph/ceph:v18, name=competent_sinoussi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:09 compute-0 systemd[1]: libpod-conmon-623a1ccff3ec2e258513d134410bb8202899ac4625e0c60bafddf06661e73679.scope: Deactivated successfully.
Oct  3 09:31:10 compute-0 ceph-mgr[192071]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  3 09:31:10 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'osd_perf_query'
Oct  3 09:31:10 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:10.401+0000 7f447edca140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  3 09:31:10 compute-0 ceph-mgr[192071]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  3 09:31:10 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:10.762+0000 7f447edca140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  3 09:31:10 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'osd_support'
Oct  3 09:31:11 compute-0 ceph-mgr[192071]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  3 09:31:11 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:11.009+0000 7f447edca140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  3 09:31:11 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'pg_autoscaler'
Oct  3 09:31:11 compute-0 ceph-mgr[192071]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  3 09:31:11 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:11.318+0000 7f447edca140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  3 09:31:11 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'progress'
Oct  3 09:31:11 compute-0 ceph-mgr[192071]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  3 09:31:11 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:11.563+0000 7f447edca140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  3 09:31:11 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'prometheus'
Oct  3 09:31:11 compute-0 podman[192320]: 2025-10-03 09:31:11.914010671 +0000 UTC m=+0.070011701 container create ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea (image=quay.io/ceph/ceph:v18, name=epic_sanderson, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:11 compute-0 systemd[1]: Started libpod-conmon-ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea.scope.
Oct  3 09:31:11 compute-0 podman[192320]: 2025-10-03 09:31:11.884022597 +0000 UTC m=+0.040023637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f1c042cd519e9363fefc8042d875155bae2b6116b38ae78e6aca7d12597537/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f1c042cd519e9363fefc8042d875155bae2b6116b38ae78e6aca7d12597537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24f1c042cd519e9363fefc8042d875155bae2b6116b38ae78e6aca7d12597537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:12 compute-0 podman[192320]: 2025-10-03 09:31:12.029499912 +0000 UTC m=+0.185500972 container init ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea (image=quay.io/ceph/ceph:v18, name=epic_sanderson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 09:31:12 compute-0 podman[192320]: 2025-10-03 09:31:12.036485217 +0000 UTC m=+0.192486247 container start ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea (image=quay.io/ceph/ceph:v18, name=epic_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:12 compute-0 podman[192320]: 2025-10-03 09:31:12.047701808 +0000 UTC m=+0.203702858 container attach ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea (image=quay.io/ceph/ceph:v18, name=epic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:31:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  3 09:31:12 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1876512436' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  3 09:31:12 compute-0 epic_sanderson[192337]: 
Oct  3 09:31:12 compute-0 epic_sanderson[192337]: {
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "health": {
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "status": "HEALTH_OK",
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "checks": {},
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "mutes": []
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    },
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "election_epoch": 5,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "quorum": [
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        0
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    ],
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "quorum_names": [
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "compute-0"
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    ],
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "quorum_age": 17,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "monmap": {
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "epoch": 1,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "min_mon_release_name": "reef",
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_mons": 1
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    },
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "osdmap": {
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "epoch": 1,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_osds": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_up_osds": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "osd_up_since": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_in_osds": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "osd_in_since": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_remapped_pgs": 0
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    },
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "pgmap": {
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "pgs_by_state": [],
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_pgs": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_pools": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_objects": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "data_bytes": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "bytes_used": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "bytes_avail": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "bytes_total": 0
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    },
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "fsmap": {
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "epoch": 1,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "by_rank": [],
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "up:standby": 0
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    },
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "mgrmap": {
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "available": false,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "num_standbys": 0,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "modules": [
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:            "iostat",
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:            "nfs",
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:            "restful"
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        ],
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "services": {}
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    },
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "servicemap": {
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "epoch": 1,
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "modified": "2025-10-03T09:30:51.647609+0000",
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:        "services": {}
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    },
Oct  3 09:31:12 compute-0 epic_sanderson[192337]:    "progress_events": {}
Oct  3 09:31:12 compute-0 epic_sanderson[192337]: }
Oct  3 09:31:12 compute-0 systemd[1]: libpod-ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea.scope: Deactivated successfully.
Oct  3 09:31:12 compute-0 podman[192320]: 2025-10-03 09:31:12.518540449 +0000 UTC m=+0.674541479 container died ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea (image=quay.io/ceph/ceph:v18, name=epic_sanderson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:31:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-24f1c042cd519e9363fefc8042d875155bae2b6116b38ae78e6aca7d12597537-merged.mount: Deactivated successfully.
Oct  3 09:31:12 compute-0 podman[192320]: 2025-10-03 09:31:12.598019502 +0000 UTC m=+0.754020532 container remove ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea (image=quay.io/ceph/ceph:v18, name=epic_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 09:31:12 compute-0 systemd[1]: libpod-conmon-ff6d2fe8bd4cfc5e035fb43d4f7dcac84ecc4768987d63b6701510223c09e7ea.scope: Deactivated successfully.
Oct  3 09:31:12 compute-0 ceph-mgr[192071]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  3 09:31:12 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:12.698+0000 7f447edca140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  3 09:31:12 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'rbd_support'
Oct  3 09:31:13 compute-0 ceph-mgr[192071]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  3 09:31:13 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'restful'
Oct  3 09:31:13 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:13.021+0000 7f447edca140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  3 09:31:13 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'rgw'
Oct  3 09:31:14 compute-0 ceph-mgr[192071]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  3 09:31:14 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'rook'
Oct  3 09:31:14 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:14.593+0000 7f447edca140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  3 09:31:14 compute-0 podman[192374]: 2025-10-03 09:31:14.718327992 +0000 UTC m=+0.079578109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:14 compute-0 podman[192374]: 2025-10-03 09:31:14.8452289 +0000 UTC m=+0.206479007 container create 289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90 (image=quay.io/ceph/ceph:v18, name=pedantic_mcclintock, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:15 compute-0 systemd[1]: Started libpod-conmon-289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90.scope.
Oct  3 09:31:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61cf22267220c87387ebc61d1aca1ce8551ad5abf7ed4418092ffacba1f8329f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61cf22267220c87387ebc61d1aca1ce8551ad5abf7ed4418092ffacba1f8329f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61cf22267220c87387ebc61d1aca1ce8551ad5abf7ed4418092ffacba1f8329f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:15 compute-0 podman[192374]: 2025-10-03 09:31:15.422107139 +0000 UTC m=+0.783357266 container init 289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90 (image=quay.io/ceph/ceph:v18, name=pedantic_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:31:15 compute-0 podman[192374]: 2025-10-03 09:31:15.435374985 +0000 UTC m=+0.796625102 container start 289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90 (image=quay.io/ceph/ceph:v18, name=pedantic_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:31:15 compute-0 podman[192374]: 2025-10-03 09:31:15.479582476 +0000 UTC m=+0.840832593 container attach 289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90 (image=quay.io/ceph/ceph:v18, name=pedantic_mcclintock, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:15 compute-0 podman[192395]: 2025-10-03 09:31:15.537215548 +0000 UTC m=+0.485606267 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:31:15 compute-0 podman[192393]: 2025-10-03 09:31:15.576342036 +0000 UTC m=+0.529421005 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:31:15 compute-0 podman[192396]: 2025-10-03 09:31:15.606095352 +0000 UTC m=+0.551775394 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  3 09:31:15 compute-0 podman[192392]: 2025-10-03 09:31:15.616291739 +0000 UTC m=+0.567203759 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 09:31:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  3 09:31:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5949709' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]: 
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]: {
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "health": {
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "status": "HEALTH_OK",
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "checks": {},
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "mutes": []
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    },
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "election_epoch": 5,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "quorum": [
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        0
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    ],
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "quorum_names": [
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "compute-0"
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    ],
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "quorum_age": 20,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "monmap": {
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "epoch": 1,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "min_mon_release_name": "reef",
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_mons": 1
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    },
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "osdmap": {
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "epoch": 1,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_osds": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_up_osds": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "osd_up_since": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_in_osds": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "osd_in_since": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_remapped_pgs": 0
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    },
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "pgmap": {
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "pgs_by_state": [],
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_pgs": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_pools": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_objects": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "data_bytes": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "bytes_used": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "bytes_avail": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "bytes_total": 0
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    },
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "fsmap": {
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "epoch": 1,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "by_rank": [],
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "up:standby": 0
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    },
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "mgrmap": {
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "available": false,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "num_standbys": 0,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "modules": [
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:            "iostat",
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:            "nfs",
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:            "restful"
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        ],
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "services": {}
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    },
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "servicemap": {
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "epoch": 1,
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "modified": "2025-10-03T09:30:51.647609+0000",
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:        "services": {}
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    },
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]:    "progress_events": {}
Oct  3 09:31:15 compute-0 pedantic_mcclintock[192390]: }
Oct  3 09:31:15 compute-0 systemd[1]: libpod-289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90.scope: Deactivated successfully.
Oct  3 09:31:15 compute-0 podman[192374]: 2025-10-03 09:31:15.985597877 +0000 UTC m=+1.346848024 container died 289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90 (image=quay.io/ceph/ceph:v18, name=pedantic_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-61cf22267220c87387ebc61d1aca1ce8551ad5abf7ed4418092ffacba1f8329f-merged.mount: Deactivated successfully.
Oct  3 09:31:16 compute-0 podman[192374]: 2025-10-03 09:31:16.094031302 +0000 UTC m=+1.455281409 container remove 289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90 (image=quay.io/ceph/ceph:v18, name=pedantic_mcclintock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:31:16 compute-0 systemd[1]: libpod-conmon-289d17611249b29a12eac2f597963eaaa63634b1c060647a77cbc43b09144f90.scope: Deactivated successfully.
Oct  3 09:31:16 compute-0 ceph-mgr[192071]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  3 09:31:16 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'selftest'
Oct  3 09:31:16 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:16.777+0000 7f447edca140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  3 09:31:17 compute-0 ceph-mgr[192071]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  3 09:31:17 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'snap_schedule'
Oct  3 09:31:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:17.032+0000 7f447edca140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  3 09:31:17 compute-0 ceph-mgr[192071]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  3 09:31:17 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'stats'
Oct  3 09:31:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:17.289+0000 7f447edca140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  3 09:31:17 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'status'
Oct  3 09:31:17 compute-0 ceph-mgr[192071]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  3 09:31:17 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'telegraf'
Oct  3 09:31:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:17.802+0000 7f447edca140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  3 09:31:18 compute-0 ceph-mgr[192071]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  3 09:31:18 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'telemetry'
Oct  3 09:31:18 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:18.042+0000 7f447edca140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  3 09:31:18 compute-0 podman[192505]: 2025-10-03 09:31:18.193784801 +0000 UTC m=+0.063516803 container create cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466 (image=quay.io/ceph/ceph:v18, name=vigilant_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:18 compute-0 systemd[1]: Started libpod-conmon-cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466.scope.
Oct  3 09:31:18 compute-0 podman[192505]: 2025-10-03 09:31:18.165296256 +0000 UTC m=+0.035028278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cab141ad4cc66163eec57d4e408536df4e29d3baf2caaca5fa3b98c756af28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cab141ad4cc66163eec57d4e408536df4e29d3baf2caaca5fa3b98c756af28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94cab141ad4cc66163eec57d4e408536df4e29d3baf2caaca5fa3b98c756af28/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:18 compute-0 podman[192505]: 2025-10-03 09:31:18.310856383 +0000 UTC m=+0.180588405 container init cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466 (image=quay.io/ceph/ceph:v18, name=vigilant_yonath, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:31:18 compute-0 podman[192505]: 2025-10-03 09:31:18.324584004 +0000 UTC m=+0.194316026 container start cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466 (image=quay.io/ceph/ceph:v18, name=vigilant_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:31:18 compute-0 podman[192505]: 2025-10-03 09:31:18.33254957 +0000 UTC m=+0.202281592 container attach cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466 (image=quay.io/ceph/ceph:v18, name=vigilant_yonath, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:18 compute-0 ceph-mgr[192071]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  3 09:31:18 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'test_orchestrator'
Oct  3 09:31:18 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:18.759+0000 7f447edca140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  3 09:31:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  3 09:31:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753456358' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]: 
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]: {
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "health": {
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "status": "HEALTH_OK",
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "checks": {},
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "mutes": []
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    },
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "election_epoch": 5,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "quorum": [
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        0
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    ],
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "quorum_names": [
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "compute-0"
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    ],
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "quorum_age": 23,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "monmap": {
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "epoch": 1,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "min_mon_release_name": "reef",
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_mons": 1
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    },
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "osdmap": {
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "epoch": 1,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_osds": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_up_osds": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "osd_up_since": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_in_osds": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "osd_in_since": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_remapped_pgs": 0
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    },
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "pgmap": {
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "pgs_by_state": [],
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_pgs": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_pools": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_objects": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "data_bytes": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "bytes_used": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "bytes_avail": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "bytes_total": 0
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    },
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "fsmap": {
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "epoch": 1,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "by_rank": [],
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "up:standby": 0
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    },
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "mgrmap": {
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "available": false,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "num_standbys": 0,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "modules": [
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:            "iostat",
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:            "nfs",
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:            "restful"
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        ],
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "services": {}
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    },
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "servicemap": {
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "epoch": 1,
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "modified": "2025-10-03T09:30:51.647609+0000",
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:        "services": {}
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    },
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]:    "progress_events": {}
Oct  3 09:31:18 compute-0 vigilant_yonath[192521]: }
Oct  3 09:31:18 compute-0 systemd[1]: libpod-cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466.scope: Deactivated successfully.
Oct  3 09:31:18 compute-0 podman[192505]: 2025-10-03 09:31:18.877983768 +0000 UTC m=+0.747715790 container died cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466 (image=quay.io/ceph/ceph:v18, name=vigilant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 09:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-94cab141ad4cc66163eec57d4e408536df4e29d3baf2caaca5fa3b98c756af28-merged.mount: Deactivated successfully.
Oct  3 09:31:18 compute-0 podman[192505]: 2025-10-03 09:31:18.939228187 +0000 UTC m=+0.808960189 container remove cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466 (image=quay.io/ceph/ceph:v18, name=vigilant_yonath, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:18 compute-0 systemd[1]: libpod-conmon-cd6f728c10af190aac5ec05a2d5afa4e5451633cdeed5716967cfc1f08e4b466.scope: Deactivated successfully.
Oct  3 09:31:19 compute-0 ceph-mgr[192071]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  3 09:31:19 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'volumes'
Oct  3 09:31:19 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:19.459+0000 7f447edca140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'zabbix'
Oct  3 09:31:20 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:20.199+0000 7f447edca140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  3 09:31:20 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:20.448+0000 7f447edca140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: ms_deliver_dispatch: unhandled message 0x5555f49791e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vtkhde
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.vtkhde(active, starting, since 0.0425685s)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr handle_mgr_map Activating!
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr handle_mgr_map I am now activating
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e1 all = 1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vtkhde", "id": "compute-0.vtkhde"} v 0) v1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vtkhde", "id": "compute-0.vtkhde"}]: dispatch
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: balancer
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: crash
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [balancer INFO root] Starting
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:31:20
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [balancer INFO root] No pools available
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: devicehealth
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Manager daemon compute-0.vtkhde is now available
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: iostat
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: nfs
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: orchestrator
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: pg_autoscaler
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: progress
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Starting
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [progress INFO root] Loading...
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [progress INFO root] No stored events to load
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [progress INFO root] Loaded [] historic events
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [progress INFO root] Loaded OSDMap, ready.
Oct  3 09:31:20 compute-0 ceph-mon[191783]: Activating manager daemon compute-0.vtkhde
Oct  3 09:31:20 compute-0 ceph-mon[191783]: Manager daemon compute-0.vtkhde is now available
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] recovery thread starting
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] starting setup
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: rbd_support
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: restful
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [restful INFO root] server_addr: :: server_port: 8003
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [restful WARNING root] server not running: no certificate configured
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: status
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: telemetry
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/mirror_snapshot_schedule"} v 0) v1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/mirror_snapshot_schedule"}]: dispatch
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] PerfHandler: starting
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TaskHandler: starting
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/trash_purge_schedule"} v 0) v1
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/trash_purge_schedule"}]: dispatch
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:31:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: [rbd_support INFO root] setup complete
Oct  3 09:31:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:20 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: volumes
Oct  3 09:31:21 compute-0 podman[192638]: 2025-10-03 09:31:21.028497996 +0000 UTC m=+0.054805194 container create 147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db (image=quay.io/ceph/ceph:v18, name=elated_jemison, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:21 compute-0 systemd[1]: Started libpod-conmon-147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db.scope.
Oct  3 09:31:21 compute-0 podman[192638]: 2025-10-03 09:31:21.006982214 +0000 UTC m=+0.033289442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2da9f412e668f1b5bb52c7f29a7f66436a92c284fbad68e3dcbdd60f78867e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2da9f412e668f1b5bb52c7f29a7f66436a92c284fbad68e3dcbdd60f78867e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2da9f412e668f1b5bb52c7f29a7f66436a92c284fbad68e3dcbdd60f78867e6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:21 compute-0 podman[192638]: 2025-10-03 09:31:21.160034368 +0000 UTC m=+0.186341596 container init 147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db (image=quay.io/ceph/ceph:v18, name=elated_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 09:31:21 compute-0 podman[192638]: 2025-10-03 09:31:21.169921717 +0000 UTC m=+0.196228915 container start 147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db (image=quay.io/ceph/ceph:v18, name=elated_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct  3 09:31:21 compute-0 podman[192638]: 2025-10-03 09:31:21.175999011 +0000 UTC m=+0.202306209 container attach 147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db (image=quay.io/ceph/ceph:v18, name=elated_jemison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 09:31:21 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.vtkhde(active, since 1.05771s)
Oct  3 09:31:21 compute-0 ceph-mon[191783]: from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/mirror_snapshot_schedule"}]: dispatch
Oct  3 09:31:21 compute-0 ceph-mon[191783]: from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:21 compute-0 ceph-mon[191783]: from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:21 compute-0 ceph-mon[191783]: from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/trash_purge_schedule"}]: dispatch
Oct  3 09:31:21 compute-0 ceph-mon[191783]: from='mgr.14102 192.168.122.100:0/3963427472' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  3 09:31:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164344188' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  3 09:31:21 compute-0 elated_jemison[192654]: 
Oct  3 09:31:21 compute-0 elated_jemison[192654]: {
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "health": {
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "status": "HEALTH_OK",
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "checks": {},
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "mutes": []
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    },
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "election_epoch": 5,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "quorum": [
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        0
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    ],
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "quorum_names": [
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "compute-0"
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    ],
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "quorum_age": 26,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "monmap": {
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "epoch": 1,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "min_mon_release_name": "reef",
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_mons": 1
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    },
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "osdmap": {
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "epoch": 1,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_osds": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_up_osds": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "osd_up_since": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_in_osds": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "osd_in_since": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_remapped_pgs": 0
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    },
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "pgmap": {
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "pgs_by_state": [],
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_pgs": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_pools": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_objects": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "data_bytes": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "bytes_used": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "bytes_avail": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "bytes_total": 0
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    },
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "fsmap": {
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "epoch": 1,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "by_rank": [],
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "up:standby": 0
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    },
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "mgrmap": {
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "available": true,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "num_standbys": 0,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "modules": [
Oct  3 09:31:21 compute-0 elated_jemison[192654]:            "iostat",
Oct  3 09:31:21 compute-0 elated_jemison[192654]:            "nfs",
Oct  3 09:31:21 compute-0 elated_jemison[192654]:            "restful"
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        ],
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "services": {}
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    },
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "servicemap": {
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "epoch": 1,
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "modified": "2025-10-03T09:30:51.647609+0000",
Oct  3 09:31:21 compute-0 elated_jemison[192654]:        "services": {}
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    },
Oct  3 09:31:21 compute-0 elated_jemison[192654]:    "progress_events": {}
Oct  3 09:31:21 compute-0 elated_jemison[192654]: }
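
The block above is the stdout of `ceph status --format json-pretty`, run by cephadm in the short-lived elated_jemison container and captured by journald one line at a time. A minimal sketch of consuming that payload in Python, assuming the JSON has been reassembled into one string; the embedded sample is abbreviated to the fields the summary reads, and every field name is taken verbatim from the output above:

    import json

    # Abbreviated copy of the `ceph status --format json-pretty` payload
    # logged above; only the fields summarize() reads are kept.
    STATUS_JSON = """
    {
      "fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
      "health": {"status": "HEALTH_OK", "checks": {}, "mutes": []},
      "quorum_names": ["compute-0"],
      "monmap": {"epoch": 1, "min_mon_release_name": "reef", "num_mons": 1},
      "osdmap": {"epoch": 1, "num_osds": 0, "num_up_osds": 0, "num_in_osds": 0}
    }
    """

    def summarize(status: dict) -> str:
        """Render the handful of fields an operator checks first."""
        osd = status["osdmap"]
        return (
            f"fsid={status['fsid']} health={status['health']['status']} "
            f"mons={status['monmap']['num_mons']} "
            f"quorum={','.join(status['quorum_names'])} "
            f"osds={osd['num_up_osds']}/{osd['num_osds']} up"
        )

    print(summarize(json.loads(STATUS_JSON)))

Against a live cluster the same payload would come from something like subprocess.run(["ceph", "status", "--format", "json"], capture_output=True); it is inlined here so the sketch runs without a cluster. At this point in the bootstrap the report is HEALTH_OK with a single mon in quorum and no OSDs yet, matching the osdmap block above.
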
Oct  3 09:31:21 compute-0 systemd[1]: libpod-147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db.scope: Deactivated successfully.
Oct  3 09:31:21 compute-0 podman[192638]: 2025-10-03 09:31:21.92282846 +0000 UTC m=+0.949135658 container died 147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db (image=quay.io/ceph/ceph:v18, name=elated_jemison, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 09:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2da9f412e668f1b5bb52c7f29a7f66436a92c284fbad68e3dcbdd60f78867e6-merged.mount: Deactivated successfully.
Oct  3 09:31:21 compute-0 podman[192638]: 2025-10-03 09:31:21.991992385 +0000 UTC m=+1.018299583 container remove 147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db (image=quay.io/ceph/ceph:v18, name=elated_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:22 compute-0 systemd[1]: libpod-conmon-147a2173b819b7e938e878209c6e80874a5cacb863bc549ecf5c351e3dc808db.scope: Deactivated successfully.
Oct  3 09:31:22 compute-0 podman[192692]: 2025-10-03 09:31:22.081763503 +0000 UTC m=+0.058323337 container create 15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539 (image=quay.io/ceph/ceph:v18, name=cool_ramanujan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:31:22 compute-0 systemd[1]: Started libpod-conmon-15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539.scope.
Oct  3 09:31:22 compute-0 podman[192692]: 2025-10-03 09:31:22.060113187 +0000 UTC m=+0.036673051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0371f01e8e47e69ad82a283c14bfc96eaa63bdee4f7478ea282ac917db44146/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0371f01e8e47e69ad82a283c14bfc96eaa63bdee4f7478ea282ac917db44146/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0371f01e8e47e69ad82a283c14bfc96eaa63bdee4f7478ea282ac917db44146/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0371f01e8e47e69ad82a283c14bfc96eaa63bdee4f7478ea282ac917db44146/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
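
The four xfs messages are the kernel noting, as podman bind-mounts these paths into the new container, that the backing filesystem carries 32-bit inode timestamps capped at 0x7fffffff seconds since the epoch. The cutoff the message quotes is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the y2038 limit the
    # kernel reports for xfs filesystems created without bigtime support.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
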
Oct  3 09:31:22 compute-0 podman[192692]: 2025-10-03 09:31:22.189304592 +0000 UTC m=+0.165864436 container init 15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539 (image=quay.io/ceph/ceph:v18, name=cool_ramanujan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:31:22 compute-0 podman[192692]: 2025-10-03 09:31:22.20785544 +0000 UTC m=+0.184415264 container start 15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539 (image=quay.io/ceph/ceph:v18, name=cool_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:22 compute-0 podman[192692]: 2025-10-03 09:31:22.21254428 +0000 UTC m=+0.189104134 container attach 15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539 (image=quay.io/ceph/ceph:v18, name=cool_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:22 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:31:22 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.vtkhde(active, since 2s)
Oct  3 09:31:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  3 09:31:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2193506143' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  3 09:31:22 compute-0 systemd[1]: libpod-15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539.scope: Deactivated successfully.
Oct  3 09:31:22 compute-0 podman[192692]: 2025-10-03 09:31:22.765089067 +0000 UTC m=+0.741648911 container died 15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539 (image=quay.io/ceph/ceph:v18, name=cool_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0371f01e8e47e69ad82a283c14bfc96eaa63bdee4f7478ea282ac917db44146-merged.mount: Deactivated successfully.
Oct  3 09:31:22 compute-0 podman[192692]: 2025-10-03 09:31:22.82767396 +0000 UTC m=+0.804233794 container remove 15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539 (image=quay.io/ceph/ceph:v18, name=cool_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:22 compute-0 systemd[1]: libpod-conmon-15ef53d09dd2c3de79e2ce150f7192fa2b6429cef72640f53aeb7745ed7dd539.scope: Deactivated successfully.
Oct  3 09:31:22 compute-0 podman[192732]: 2025-10-03 09:31:22.872144492 +0000 UTC m=+0.129215839 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Oct  3 09:31:22 compute-0 podman[192769]: 2025-10-03 09:31:22.912601003 +0000 UTC m=+0.061403757 container create 35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41 (image=quay.io/ceph/ceph:v18, name=hungry_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:22 compute-0 podman[192743]: 2025-10-03 09:31:22.941209374 +0000 UTC m=+0.146200655 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
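
The two health_status=healthy events above come from podman's periodic healthcheck timers; the check command and its mount travel inside the config_data label ('healthcheck': {'test': ..., 'mount': ...}). A small sketch for reading the same state back out of podman, assuming the podman CLI is on PATH; the fallback between State.Health and State.Healthcheck is there because the key name differs across podman releases, and the container names are the two from the log:

    import json
    import subprocess

    def health_status(name: str) -> str:
        """Return podman's view of a container's health, e.g. 'healthy'."""
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # Some podman versions expose this as "Healthcheck", others as
        # "Health"; tolerate both rather than pin one schema.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "none")

    for ctr in ("openstack_network_exporter", "ovn_controller"):
        print(ctr, health_status(ctr))
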
Oct  3 09:31:22 compute-0 systemd[1]: Started libpod-conmon-35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41.scope.
Oct  3 09:31:22 compute-0 podman[192769]: 2025-10-03 09:31:22.882152303 +0000 UTC m=+0.030955077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600050cbbddf5a844a6c4127f6075b857168861244117619617a098985b14aa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600050cbbddf5a844a6c4127f6075b857168861244117619617a098985b14aa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/600050cbbddf5a844a6c4127f6075b857168861244117619617a098985b14aa9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:23 compute-0 podman[192769]: 2025-10-03 09:31:23.020112032 +0000 UTC m=+0.168914816 container init 35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41 (image=quay.io/ceph/ceph:v18, name=hungry_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:31:23 compute-0 podman[192769]: 2025-10-03 09:31:23.037089309 +0000 UTC m=+0.185892063 container start 35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41 (image=quay.io/ceph/ceph:v18, name=hungry_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct  3 09:31:23 compute-0 podman[192769]: 2025-10-03 09:31:23.043411201 +0000 UTC m=+0.192213975 container attach 35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41 (image=quay.io/ceph/ceph:v18, name=hungry_bhaskara, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:31:23 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2193506143' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  3 09:31:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct  3 09:31:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2098799541' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:31:24 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2098799541' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  3 09:31:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2098799541' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  1: '-n'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  2: 'mgr.compute-0.vtkhde'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  3: '-f'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  4: '--setuser'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  5: 'ceph'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  6: '--setgroup'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  7: 'ceph'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  8: '--default-log-to-file=false'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  9: '--default-log-to-journald=true'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr respawn  exe_path /proc/self/exe
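
The "mgr respawn" lines are ceph-mgr dumping its own argv and then re-executing itself through /proc/self/exe, which is how it reloads with the freshly enabled cephadm module while keeping its PID and systemd unit. A toy sketch of the same re-exec pattern, Linux-only because of /proc; the RESPAWNED environment guard is an invention of this example so the script stops after one generation instead of re-executing forever:

    import os
    import sys

    def respawn() -> None:
        """Replace this process with a fresh copy of itself.

        /proc/self/exe points at the running executable, so execv swaps
        in the same program with the same argv; the PID is unchanged.
        """
        argv = [sys.executable] + sys.argv
        for i, arg in enumerate(argv):
            print(f"respawn  {i}: {arg!r}")   # mirrors the numbered lines above
        os.environ["RESPAWNED"] = "1"          # guard: re-exec only once
        os.execv("/proc/self/exe", argv)

    if os.environ.get("RESPAWNED"):
        print("respawned as pid", os.getpid())
    else:
        respawn()
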
Oct  3 09:31:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.vtkhde(active, since 4s)
Oct  3 09:31:24 compute-0 systemd[1]: libpod-35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41.scope: Deactivated successfully.
Oct  3 09:31:24 compute-0 podman[192769]: 2025-10-03 09:31:24.655523847 +0000 UTC m=+1.804326621 container died 35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41 (image=quay.io/ceph/ceph:v18, name=hungry_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-600050cbbddf5a844a6c4127f6075b857168861244117619617a098985b14aa9-merged.mount: Deactivated successfully.
Oct  3 09:31:24 compute-0 podman[192769]: 2025-10-03 09:31:24.731883664 +0000 UTC m=+1.880686428 container remove 35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41 (image=quay.io/ceph/ceph:v18, name=hungry_bhaskara, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:24 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: ignoring --setuser ceph since I am not root
Oct  3 09:31:24 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: ignoring --setgroup ceph since I am not root
Oct  3 09:31:24 compute-0 systemd[1]: libpod-conmon-35a8b2e259b79d78fd306f5a0f00c59f5c836bdda3228b0df0fedb8fec8b8f41.scope: Deactivated successfully.
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: pidfile_write: ignore empty --pid-file
Oct  3 09:31:24 compute-0 podman[192831]: 2025-10-03 09:31:24.776038275 +0000 UTC m=+0.083957423 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:31:24 compute-0 podman[192865]: 2025-10-03 09:31:24.819316507 +0000 UTC m=+0.052721597 container create 947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1 (image=quay.io/ceph/ceph:v18, name=compassionate_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:24 compute-0 systemd[1]: Started libpod-conmon-947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1.scope.
Oct  3 09:31:24 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'alerts'
Oct  3 09:31:24 compute-0 podman[192865]: 2025-10-03 09:31:24.797993962 +0000 UTC m=+0.031399092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:24 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0990d586d2d7cadb5a1dba515943862f0c0e273125092a790bc0f0ce01924b38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0990d586d2d7cadb5a1dba515943862f0c0e273125092a790bc0f0ce01924b38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0990d586d2d7cadb5a1dba515943862f0c0e273125092a790bc0f0ce01924b38/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:24 compute-0 podman[192865]: 2025-10-03 09:31:24.921855476 +0000 UTC m=+0.155260576 container init 947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1 (image=quay.io/ceph/ceph:v18, name=compassionate_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:31:24 compute-0 podman[192865]: 2025-10-03 09:31:24.933299074 +0000 UTC m=+0.166704164 container start 947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1 (image=quay.io/ceph/ceph:v18, name=compassionate_ptolemy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 09:31:24 compute-0 podman[192865]: 2025-10-03 09:31:24.93875673 +0000 UTC m=+0.172161820 container attach 947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1 (image=quay.io/ceph/ceph:v18, name=compassionate_ptolemy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:25 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:25.207+0000 7f3280d8c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  3 09:31:25 compute-0 ceph-mgr[192071]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  3 09:31:25 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'balancer'
Oct  3 09:31:25 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:25.474+0000 7f3280d8c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  3 09:31:25 compute-0 ceph-mgr[192071]: mgr[py] Module balancer has missing NOTIFY_TYPES member
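
This pair of warnings, once via the container's per-daemon log name and once via ceph-mgr's own journald stream, repeats for most of the modules loaded below; it is a benign notice that the module predates the optional NOTIFY_TYPES declaration in the mgr module interface. When reading a journal like this one, the noise condenses easily; the sample lines below are copied from this log, and in practice the input would be journalctl output:

    import re

    # Lines copied from the journal above; real input would be e.g.
    # the output of `journalctl -t ceph-mgr` fed into this script.
    SAMPLE = """\
    mgr[py] Module alerts has missing NOTIFY_TYPES member
    mgr[py] Module balancer has missing NOTIFY_TYPES member
    mgr[py] Module balancer has missing NOTIFY_TYPES member
    """

    PATTERN = re.compile(r"Module (\S+) has missing NOTIFY_TYPES member")

    # The set() drops the duplicate that arrives once per logging path.
    modules = sorted(set(PATTERN.findall(SAMPLE)))
    print(f"{len(modules)} modules without NOTIFY_TYPES:", ", ".join(modules))
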
Oct  3 09:31:25 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'cephadm'
Oct  3 09:31:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  3 09:31:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640326193' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  3 09:31:25 compute-0 compassionate_ptolemy[192906]: {
Oct  3 09:31:25 compute-0 compassionate_ptolemy[192906]:    "epoch": 5,
Oct  3 09:31:25 compute-0 compassionate_ptolemy[192906]:    "available": true,
Oct  3 09:31:25 compute-0 compassionate_ptolemy[192906]:    "active_name": "compute-0.vtkhde",
Oct  3 09:31:25 compute-0 compassionate_ptolemy[192906]:    "num_standby": 0
Oct  3 09:31:25 compute-0 compassionate_ptolemy[192906]: }
Oct  3 09:31:25 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2098799541' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  3 09:31:25 compute-0 systemd[1]: libpod-947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1.scope: Deactivated successfully.
Oct  3 09:31:25 compute-0 conmon[192906]: conmon 947a4012bb76621a3675 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1.scope/container/memory.events
Oct  3 09:31:25 compute-0 podman[192932]: 2025-10-03 09:31:25.692757358 +0000 UTC m=+0.043096728 container died 947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1 (image=quay.io/ceph/ceph:v18, name=compassionate_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0990d586d2d7cadb5a1dba515943862f0c0e273125092a790bc0f0ce01924b38-merged.mount: Deactivated successfully.
Oct  3 09:31:25 compute-0 podman[192932]: 2025-10-03 09:31:25.756555381 +0000 UTC m=+0.106894731 container remove 947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1 (image=quay.io/ceph/ceph:v18, name=compassionate_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 09:31:25 compute-0 systemd[1]: libpod-conmon-947a4012bb76621a36753075e55da6beedf6df514c3341e47316a5bd4bc920c1.scope: Deactivated successfully.
Oct  3 09:31:25 compute-0 podman[192946]: 2025-10-03 09:31:25.854195122 +0000 UTC m=+0.055487936 container create 10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b (image=quay.io/ceph/ceph:v18, name=crazy_goodall, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:31:25 compute-0 systemd[1]: Started libpod-conmon-10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b.scope.
Oct  3 09:31:25 compute-0 podman[192946]: 2025-10-03 09:31:25.831585024 +0000 UTC m=+0.032877838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d39cb27286523f0dd3d760a12d1375022af4e74f775b3cc19a05c50f5687f9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d39cb27286523f0dd3d760a12d1375022af4e74f775b3cc19a05c50f5687f9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d39cb27286523f0dd3d760a12d1375022af4e74f775b3cc19a05c50f5687f9c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:25 compute-0 podman[192946]: 2025-10-03 09:31:25.967710135 +0000 UTC m=+0.169002999 container init 10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b (image=quay.io/ceph/ceph:v18, name=crazy_goodall, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:31:25 compute-0 podman[192946]: 2025-10-03 09:31:25.977012483 +0000 UTC m=+0.178305297 container start 10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b (image=quay.io/ceph/ceph:v18, name=crazy_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 09:31:25 compute-0 podman[192946]: 2025-10-03 09:31:25.981371564 +0000 UTC m=+0.182664388 container attach 10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b (image=quay.io/ceph/ceph:v18, name=crazy_goodall, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:31:27 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'crash'
Oct  3 09:31:27 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:27.760+0000 7f3280d8c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  3 09:31:27 compute-0 ceph-mgr[192071]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  3 09:31:27 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'dashboard'
Oct  3 09:31:29 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'devicehealth'
Oct  3 09:31:29 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:29.615+0000 7f3280d8c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  3 09:31:29 compute-0 ceph-mgr[192071]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  3 09:31:29 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'diskprediction_local'
Oct  3 09:31:29 compute-0 podman[157165]: time="2025-10-03T09:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:31:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23482 "" "Go-http-client/1.1"
Oct  3 09:31:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
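
The two GET lines show the podman system service answering its libpod REST API on the host, queried here by a Go client (plausibly the metrics collector seen elsewhere in this journal). The same query can be issued from Python over the service's unix socket; the socket path below is the root-service default, /run/podman/podman.sock, and rootless services listen under $XDG_RUNTIME_DIR/podman instead:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that dials a unix socket, not TCP."""

        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self) -> None:
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    # Same endpoint as the first GET above, minus the pagination knobs.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr["Id"][:12], ctr.get("State"), ctr.get("Names"))
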
Oct  3 09:31:30 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  3 09:31:30 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  3 09:31:30 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]:  from numpy import show_config as show_numpy_config
Oct  3 09:31:30 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:30.209+0000 7f3280d8c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  3 09:31:30 compute-0 ceph-mgr[192071]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  3 09:31:30 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'influx'
Oct  3 09:31:30 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:30.544+0000 7f3280d8c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  3 09:31:30 compute-0 ceph-mgr[192071]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  3 09:31:30 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'insights'
Oct  3 09:31:30 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'iostat'
Oct  3 09:31:31 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:31.084+0000 7f3280d8c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  3 09:31:31 compute-0 ceph-mgr[192071]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  3 09:31:31 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'k8sevents'
Oct  3 09:31:31 compute-0 openstack_network_exporter[159287]: ERROR   09:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:31:31 compute-0 openstack_network_exporter[159287]: ERROR   09:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:31:31 compute-0 openstack_network_exporter[159287]: ERROR   09:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:31:31 compute-0 openstack_network_exporter[159287]: ERROR   09:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:31:31 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:31:31 compute-0 openstack_network_exporter[159287]: ERROR   09:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:31:31 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:31:33 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'localpool'
Oct  3 09:31:33 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'mds_autoscaler'
Oct  3 09:31:33 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'mirroring'
Oct  3 09:31:34 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'nfs'
Oct  3 09:31:34 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:34.975+0000 7f3280d8c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  3 09:31:34 compute-0 ceph-mgr[192071]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  3 09:31:34 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'orchestrator'
Oct  3 09:31:35 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:35.699+0000 7f3280d8c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  3 09:31:35 compute-0 ceph-mgr[192071]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  3 09:31:35 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'osd_perf_query'
Oct  3 09:31:35 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:35.986+0000 7f3280d8c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  3 09:31:35 compute-0 ceph-mgr[192071]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  3 09:31:35 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'osd_support'
Oct  3 09:31:36 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:36.230+0000 7f3280d8c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  3 09:31:36 compute-0 ceph-mgr[192071]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  3 09:31:36 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'pg_autoscaler'
Oct  3 09:31:36 compute-0 ceph-mgr[192071]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  3 09:31:36 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:36.519+0000 7f3280d8c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  3 09:31:36 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'progress'
Oct  3 09:31:36 compute-0 ceph-mgr[192071]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  3 09:31:36 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:36.762+0000 7f3280d8c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  3 09:31:36 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'prometheus'
Oct  3 09:31:37 compute-0 ceph-mgr[192071]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  3 09:31:37 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:37.828+0000 7f3280d8c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  3 09:31:37 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'rbd_support'
Oct  3 09:31:38 compute-0 ceph-mgr[192071]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  3 09:31:38 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:38.149+0000 7f3280d8c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  3 09:31:38 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'restful'
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.949 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.950 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b777920>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
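The run of "Registering pollster" entries above is ceilometer's polling manager binding every stevedore extension from the pollsters source to one shared ThreadPoolExecutor, with empty per-cycle caches. A minimal sketch of that pattern in Python, using only the public stevedore and concurrent.futures APIs (the bookkeeping list and the registration function here are illustrative assumptions, not ceilometer's internals):

    # Sketch: bind each discovered pollster extension to a shared thread pool.
    # Only the stevedore and concurrent.futures calls are standard APIs; the
    # registry list is an illustrative assumption.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    executor = ThreadPoolExecutor(max_workers=4)
    cache, history, discovery_cache = {}, {}, {}
    registrations = []

    def register_pollster_execution(ext):
        # One executor and one set of cycle-scoped caches for all pollsters,
        # matching the identical executor/cache ids repeated in the log.
        registrations.append((ext, executor, cache, history, discovery_cache))

    # 'ceilometer.poll.compute' is the entry-point namespace compute pollsters
    # are published under (an assumption if your install differs).
    for ext in extension.ExtensionManager(namespace='ceilometer.poll.compute'):
        register_pollster_execution(ext)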
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.968 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.968 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.968 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.970 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.970 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.970 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.971 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
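Each "Executing discovery process ... / Skip pollster ..." pair above is one pollster attempt: the manager first runs the local_instances discovery (AgentManager.discover), and because this compute node hosts no instances yet, every meter is skipped before any samples are gathered. A rough sketch of that guard; the discovery call and the pollster's get_samples signature follow ceilometer's shape, but the wrapper function itself is a hypothetical stand-in:

    # Sketch: discover first, skip the meter when discovery returns nothing.
    import logging

    LOG = logging.getLogger(__name__)

    def run_pollster(name, pollster, discover):
        # discover('local_instances') would list the libvirt guests on this host.
        resources = discover('local_instances')
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            return []
        return pollster.get_samples(manager=None, cache={}, resources=resources)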
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:31:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:31:38.976 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
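The burst of "Finished processing pollster [...]" lines then lands within a few milliseconds because every task submitted to the pool returned immediately after its skip. A compact sketch of that fan-out/fan-in step, using only concurrent.futures (the pollster objects with .run and .name attributes are assumptions):

    # Sketch: submit each pollster run and log as the futures complete.
    from concurrent.futures import as_completed

    def execute_polling_task_processing(executor, pollsters):
        futures = {executor.submit(p.run): p.name for p in pollsters}
        for fut in as_completed(futures):
            fut.result()  # re-raise any error from the pollster thread
            print(f"Finished processing pollster [{futures[fut]}].")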
Oct  3 09:31:38 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'rgw'
Oct  3 09:31:39 compute-0 ceph-mgr[192071]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  3 09:31:39 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:39.722+0000 7f3280d8c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  3 09:31:39 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'rook'
Oct  3 09:31:42 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:42.040+0000 7f3280d8c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  3 09:31:42 compute-0 ceph-mgr[192071]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  3 09:31:42 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'selftest'
Oct  3 09:31:42 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:42.306+0000 7f3280d8c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  3 09:31:42 compute-0 ceph-mgr[192071]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  3 09:31:42 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'snap_schedule'
Oct  3 09:31:42 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:42.589+0000 7f3280d8c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  3 09:31:42 compute-0 ceph-mgr[192071]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  3 09:31:42 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'stats'
Oct  3 09:31:42 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'status'
Oct  3 09:31:43 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:43.133+0000 7f3280d8c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  3 09:31:43 compute-0 ceph-mgr[192071]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  3 09:31:43 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'telegraf'
Oct  3 09:31:43 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:43.414+0000 7f3280d8c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  3 09:31:43 compute-0 ceph-mgr[192071]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  3 09:31:43 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'telemetry'
Oct  3 09:31:44 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:44.078+0000 7f3280d8c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  3 09:31:44 compute-0 ceph-mgr[192071]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  3 09:31:44 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'test_orchestrator'
Oct  3 09:31:44 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:44.820+0000 7f3280d8c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  3 09:31:44 compute-0 ceph-mgr[192071]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  3 09:31:44 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'volumes'
Oct  3 09:31:45 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:45.588+0000 7f3280d8c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr[py] Loading python module 'zabbix'
Oct  3 09:31:45 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T09:31:45.851+0000 7f3280d8c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
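The repeated "Module X has missing NOTIFY_TYPES member" warnings are ceph-mgr noting that a Python module never declared which cluster notifications its notify() hook consumes; as the log shows, the modules still load and the mgr goes on to activate. The expected shape, assuming the standard MgrModule interface from ceph's bundled mgr_module.py (this only runs inside the mgr, not as a standalone script):

    # Sketch: a ceph-mgr module declaring the notifications it handles.
    # MgrModule and NotifyType come from ceph's mgr_module.py.
    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("got %s notification", notify_type)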
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Active manager daemon compute-0.vtkhde restarted
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.vtkhde
Oct  3 09:31:45 compute-0 podman[193010]: 2025-10-03 09:31:45.857183122 +0000 UTC m=+0.085779120 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: ms_deliver_dispatch: unhandled message 0x55fd7cadf1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  3 09:31:45 compute-0 podman[192998]: 2025-10-03 09:31:45.872829826 +0000 UTC m=+0.120888660 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, name=ubi9)
Oct  3 09:31:45 compute-0 podman[192999]: 2025-10-03 09:31:45.873821468 +0000 UTC m=+0.116591842 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct  3 09:31:45 compute-0 podman[193000]: 2025-10-03 09:31:45.878210939 +0000 UTC m=+0.116194599 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
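Each podman health_status record above embeds the container's edpm config_data as a Python-literal dict, including the healthcheck stanza (a 'test' command plus the bind-mounted script directory). That literal parses directly with the standard library; a sketch assuming the shape shown in the log, with the record shortened for readability:

    # Sketch: pull the healthcheck command out of an edpm config_data literal.
    import ast

    record = ("{'image': 'quay.rdoproject.org/.../openstack-ceilometer-compute', "
              "'healthcheck': {'test': '/openstack/healthcheck compute', "
              "'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}}")
    config = ast.literal_eval(record)     # config_data is valid Python literal syntax
    print(config['healthcheck']['test'])  # -> /openstack/healthcheck compute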
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr handle_mgr_map Activating!
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr handle_mgr_map I am now activating
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.vtkhde(active, starting, since 0.0283516s)
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.vtkhde", "id": "compute-0.vtkhde"} v 0) v1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mgr metadata", "who": "compute-0.vtkhde", "id": "compute-0.vtkhde"}]: dispatch
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e1 all = 1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: balancer
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Manager daemon compute-0.vtkhde is now available
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Starting
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:31:45
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [balancer INFO root] No pools available
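The balancer lines trace one optimization pass end to end: start, open plan auto_2025-10-03_09:31:45, select upmap mode with a 5% max-misplaced budget, then return early because this fresh cluster has no pools. The control flow is just an early-exit guard; a hedged stand-in (the real module works against the osdmap, not a plain list):

    # Sketch: upmap pass with the no-pools early exit seen in the log.
    def do_upmap(pools, max_misplaced=0.05):
        if not pools:
            print("No pools available")
            return None
        # ... compute pg-upmap entries while keeping the share of
        # misplaced PGs under max_misplaced ...
        return f"optimized {len(pools)} pools"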
Oct  3 09:31:45 compute-0 ceph-mon[191783]: Active manager daemon compute-0.vtkhde restarted
Oct  3 09:31:45 compute-0 ceph-mon[191783]: Activating manager daemon compute-0.vtkhde
Oct  3 09:31:45 compute-0 ceph-mon[191783]: Manager daemon compute-0.vtkhde is now available
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: cephadm
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: crash
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: devicehealth
Oct  3 09:31:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  3 09:31:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Starting
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: iostat
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: nfs
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:45 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: orchestrator
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: pg_autoscaler
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: progress
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [progress INFO root] Loading...
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [progress INFO root] No stored events to load
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [progress INFO root] Loaded [] historic events
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [progress INFO root] Loaded OSDMap, ready.
Oct  3 09:31:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  3 09:31:46 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] recovery thread starting
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] starting setup
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: rbd_support
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: restful
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [restful INFO root] server_addr: :: server_port: 8003
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [restful WARNING root] server not running: no certificate configured
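The restful module reports its bind target (:: port 8003) but declines to serve because no TLS certificate is stored; it gates its server thread on a configured cert, normally created via the module's self-signed-cert command. A sketch of that guard (the 'crt' key name follows the mgr/restful convention but is an assumption here):

    # Sketch: refuse to start the HTTPS endpoint until a cert is configured.
    def maybe_start_server(get_store, addr='::', port=8003):
        cert = get_store('crt')  # assumed key name for the stored certificate
        if not cert:
            print("server not running: no certificate configured")
            return None
        # ... build an SSL context from the stored cert/key and serve ...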
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: status
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: telemetry
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  3 09:31:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/mirror_snapshot_schedule"} v 0) v1
Oct  3 09:31:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/mirror_snapshot_schedule"}]: dispatch
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] PerfHandler: starting
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TaskHandler: starting
Oct  3 09:31:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/trash_purge_schedule"} v 0) v1
Oct  3 09:31:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/trash_purge_schedule"}]: dispatch
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] setup complete
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: mgr load Constructed class from module: volumes
Oct  3 09:31:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct  3 09:31:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct  3 09:31:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct  3 09:31:46 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.vtkhde(active, since 1.04194s)
Oct  3 09:31:46 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct  3 09:31:46 compute-0 crazy_goodall[192962]: {
Oct  3 09:31:46 compute-0 crazy_goodall[192962]:    "mgrmap_epoch": 7,
Oct  3 09:31:46 compute-0 crazy_goodall[192962]:    "initialized": true
Oct  3 09:31:46 compute-0 crazy_goodall[192962]: }
Oct  3 09:31:46 compute-0 systemd[1]: libpod-10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b.scope: Deactivated successfully.
Oct  3 09:31:46 compute-0 podman[192946]: 2025-10-03 09:31:46.945492367 +0000 UTC m=+21.146785201 container died 10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b (image=quay.io/ceph/ceph:v18, name=crazy_goodall, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:31:46 compute-0 ceph-mon[191783]: Found migration_current of "None". Setting to last migration.
Oct  3 09:31:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/mirror_snapshot_schedule"}]: dispatch
Oct  3 09:31:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.vtkhde/trash_purge_schedule"}]: dispatch
Oct  3 09:31:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d39cb27286523f0dd3d760a12d1375022af4e74f775b3cc19a05c50f5687f9c-merged.mount: Deactivated successfully.
Oct  3 09:31:47 compute-0 podman[192946]: 2025-10-03 09:31:47.015606303 +0000 UTC m=+21.216899117 container remove 10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b (image=quay.io/ceph/ceph:v18, name=crazy_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 09:31:47 compute-0 systemd[1]: libpod-conmon-10f2d1c3c890df44a3f2b999efcfca6e59dd5bce801e034c25c6f171e409432b.scope: Deactivated successfully.
Oct  3 09:31:47 compute-0 podman[193199]: 2025-10-03 09:31:47.089739577 +0000 UTC m=+0.048198241 container create f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563 (image=quay.io/ceph/ceph:v18, name=unruffled_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:31:47 compute-0 systemd[1]: Started libpod-conmon-f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563.scope.
Oct  3 09:31:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:47 compute-0 podman[193199]: 2025-10-03 09:31:47.071897303 +0000 UTC m=+0.030356007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b6c6e412f641b7f1947637e8b9cf1f3d54ed145905c961e9e425c10f495963/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b6c6e412f641b7f1947637e8b9cf1f3d54ed145905c961e9e425c10f495963/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28b6c6e412f641b7f1947637e8b9cf1f3d54ed145905c961e9e425c10f495963/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:47 compute-0 podman[193199]: 2025-10-03 09:31:47.192019659 +0000 UTC m=+0.150478413 container init f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563 (image=quay.io/ceph/ceph:v18, name=unruffled_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:31:47 compute-0 podman[193199]: 2025-10-03 09:31:47.209635635 +0000 UTC m=+0.168094309 container start f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563 (image=quay.io/ceph/ceph:v18, name=unruffled_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 09:31:47 compute-0 podman[193199]: 2025-10-03 09:31:47.215064099 +0000 UTC m=+0.173522793 container attach f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563 (image=quay.io/ceph/ceph:v18, name=unruffled_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 09:31:47 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:31:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct  3 09:31:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  3 09:31:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  3 09:31:47 compute-0 systemd[1]: libpod-f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563.scope: Deactivated successfully.
Oct  3 09:31:47 compute-0 podman[193199]: 2025-10-03 09:31:47.836158211 +0000 UTC m=+0.794616885 container died f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563 (image=quay.io/ceph/ceph:v18, name=unruffled_goodall, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:31:47 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:31:47 compute-0 ceph-mgr[192071]: [cephadm INFO cherrypy.error] [03/Oct/2025:09:31:47] ENGINE Bus STARTING
Oct  3 09:31:47 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : [03/Oct/2025:09:31:47] ENGINE Bus STARTING
Oct  3 09:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-28b6c6e412f641b7f1947637e8b9cf1f3d54ed145905c961e9e425c10f495963-merged.mount: Deactivated successfully.
Oct  3 09:31:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: [cephadm INFO cherrypy.error] [03/Oct/2025:09:31:48] ENGINE Serving on http://192.168.122.100:8765
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : [03/Oct/2025:09:31:48] ENGINE Serving on http://192.168.122.100:8765
Oct  3 09:31:48 compute-0 podman[193199]: 2025-10-03 09:31:48.007002078 +0000 UTC m=+0.965460772 container remove f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563 (image=quay.io/ceph/ceph:v18, name=unruffled_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:31:48 compute-0 systemd[1]: libpod-conmon-f64cbeb68924cef43955fe80e1ea8553e8f69f634e681b49fc154f22872b8563.scope: Deactivated successfully.
Oct  3 09:31:48 compute-0 podman[193264]: 2025-10-03 09:31:48.105760206 +0000 UTC m=+0.073561028 container create 92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6 (image=quay.io/ceph/ceph:v18, name=gallant_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: [cephadm INFO cherrypy.error] [03/Oct/2025:09:31:48] ENGINE Serving on https://192.168.122.100:7150
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : [03/Oct/2025:09:31:48] ENGINE Serving on https://192.168.122.100:7150
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: [cephadm INFO cherrypy.error] [03/Oct/2025:09:31:48] ENGINE Bus STARTED
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : [03/Oct/2025:09:31:48] ENGINE Bus STARTED
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: [cephadm INFO cherrypy.error] [03/Oct/2025:09:31:48] ENGINE Client ('192.168.122.100', 56328) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : [03/Oct/2025:09:31:48] ENGINE Client ('192.168.122.100', 56328) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  3 09:31:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  3 09:31:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  3 09:31:48 compute-0 systemd[1]: Started libpod-conmon-92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6.scope.
Oct  3 09:31:48 compute-0 podman[193264]: 2025-10-03 09:31:48.085095121 +0000 UTC m=+0.052895963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead28d8b21d596a83325592df7faa68cfab081a391a99c59143d1eab7610ea87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead28d8b21d596a83325592df7faa68cfab081a391a99c59143d1eab7610ea87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead28d8b21d596a83325592df7faa68cfab081a391a99c59143d1eab7610ea87/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:48 compute-0 podman[193264]: 2025-10-03 09:31:48.27285551 +0000 UTC m=+0.240656332 container init 92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6 (image=quay.io/ceph/ceph:v18, name=gallant_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 09:31:48 compute-0 podman[193264]: 2025-10-03 09:31:48.281391805 +0000 UTC m=+0.249192627 container start 92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6 (image=quay.io/ceph/ceph:v18, name=gallant_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:31:48 compute-0 podman[193264]: 2025-10-03 09:31:48.285862 +0000 UTC m=+0.253662822 container attach 92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6 (image=quay.io/ceph/ceph:v18, name=gallant_snyder, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:48 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.vtkhde(active, since 2s)
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:31:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct  3 09:31:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: [cephadm INFO root] Set ssh ssh_user
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct  3 09:31:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct  3 09:31:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: [cephadm INFO root] Set ssh ssh_config
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct  3 09:31:48 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct  3 09:31:48 compute-0 gallant_snyder[193292]: ssh user set to ceph-admin. sudo will be used
Oct  3 09:31:48 compute-0 systemd[1]: libpod-92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6.scope: Deactivated successfully.
Oct  3 09:31:48 compute-0 podman[193264]: 2025-10-03 09:31:48.933485145 +0000 UTC m=+0.901285997 container died 92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6 (image=quay.io/ceph/ceph:v18, name=gallant_snyder, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ead28d8b21d596a83325592df7faa68cfab081a391a99c59143d1eab7610ea87-merged.mount: Deactivated successfully.
Oct  3 09:31:49 compute-0 ceph-mon[191783]: [03/Oct/2025:09:31:47] ENGINE Bus STARTING
Oct  3 09:31:49 compute-0 ceph-mon[191783]: [03/Oct/2025:09:31:48] ENGINE Serving on http://192.168.122.100:8765
Oct  3 09:31:49 compute-0 ceph-mon[191783]: [03/Oct/2025:09:31:48] ENGINE Serving on https://192.168.122.100:7150
Oct  3 09:31:49 compute-0 ceph-mon[191783]: [03/Oct/2025:09:31:48] ENGINE Bus STARTED
Oct  3 09:31:49 compute-0 ceph-mon[191783]: [03/Oct/2025:09:31:48] ENGINE Client ('192.168.122.100', 56328) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  3 09:31:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:49 compute-0 podman[193264]: 2025-10-03 09:31:48.99950289 +0000 UTC m=+0.967303712 container remove 92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6 (image=quay.io/ceph/ceph:v18, name=gallant_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:31:49 compute-0 systemd[1]: libpod-conmon-92a927711bd6f83f3cb58689bdf1e7968d15cec617677b3480ffdc8e232c66d6.scope: Deactivated successfully.
Oct  3 09:31:49 compute-0 podman[193330]: 2025-10-03 09:31:49.098426123 +0000 UTC m=+0.056591572 container create 634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2 (image=quay.io/ceph/ceph:v18, name=blissful_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:31:49 compute-0 systemd[1]: Started libpod-conmon-634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2.scope.
Oct  3 09:31:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:49 compute-0 podman[193330]: 2025-10-03 09:31:49.07721405 +0000 UTC m=+0.035379519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc18077ccbe4f8250c2a6a354b8bc220ca073b56bed0cb74537377200c56154/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc18077ccbe4f8250c2a6a354b8bc220ca073b56bed0cb74537377200c56154/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc18077ccbe4f8250c2a6a354b8bc220ca073b56bed0cb74537377200c56154/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc18077ccbe4f8250c2a6a354b8bc220ca073b56bed0cb74537377200c56154/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc18077ccbe4f8250c2a6a354b8bc220ca073b56bed0cb74537377200c56154/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:49 compute-0 podman[193330]: 2025-10-03 09:31:49.193666127 +0000 UTC m=+0.151831596 container init 634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2 (image=quay.io/ceph/ceph:v18, name=blissful_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 09:31:49 compute-0 podman[193330]: 2025-10-03 09:31:49.208796933 +0000 UTC m=+0.166962382 container start 634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2 (image=quay.io/ceph/ceph:v18, name=blissful_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:49 compute-0 podman[193330]: 2025-10-03 09:31:49.213645289 +0000 UTC m=+0.171810758 container attach 634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2 (image=quay.io/ceph/ceph:v18, name=blissful_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 09:31:49 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:31:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct  3 09:31:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:49 compute-0 ceph-mgr[192071]: [cephadm INFO root] Set ssh ssh_identity_key
Oct  3 09:31:49 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct  3 09:31:49 compute-0 ceph-mgr[192071]: [cephadm INFO root] Set ssh private key
Oct  3 09:31:49 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Set ssh private key
Oct  3 09:31:49 compute-0 systemd[1]: libpod-634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2.scope: Deactivated successfully.
Oct  3 09:31:49 compute-0 podman[193330]: 2025-10-03 09:31:49.820096491 +0000 UTC m=+0.778261940 container died 634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2 (image=quay.io/ceph/ceph:v18, name=blissful_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdc18077ccbe4f8250c2a6a354b8bc220ca073b56bed0cb74537377200c56154-merged.mount: Deactivated successfully.
Oct  3 09:31:49 compute-0 podman[193330]: 2025-10-03 09:31:49.866881316 +0000 UTC m=+0.825046765 container remove 634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2 (image=quay.io/ceph/ceph:v18, name=blissful_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  3 09:31:49 compute-0 systemd[1]: libpod-conmon-634431c4837d1eceb81072709954fde397b7e6c70af51042b650ba0a1b99e7d2.scope: Deactivated successfully.
Oct  3 09:31:49 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:31:49 compute-0 podman[193384]: 2025-10-03 09:31:49.95281365 +0000 UTC m=+0.060805127 container create 3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75 (image=quay.io/ceph/ceph:v18, name=awesome_mcnulty, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:49 compute-0 systemd[1]: Started libpod-conmon-3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75.scope.
Oct  3 09:31:50 compute-0 ceph-mon[191783]: Set ssh ssh_user
Oct  3 09:31:50 compute-0 ceph-mon[191783]: Set ssh ssh_config
Oct  3 09:31:50 compute-0 ceph-mon[191783]: ssh user set to ceph-admin. sudo will be used
Oct  3 09:31:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:50 compute-0 podman[193384]: 2025-10-03 09:31:49.929218642 +0000 UTC m=+0.037210139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15b2df7512105404e0f7f78279d612b03f42f6d53ea95a1c533ce73c2ace01/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15b2df7512105404e0f7f78279d612b03f42f6d53ea95a1c533ce73c2ace01/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15b2df7512105404e0f7f78279d612b03f42f6d53ea95a1c533ce73c2ace01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15b2df7512105404e0f7f78279d612b03f42f6d53ea95a1c533ce73c2ace01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e15b2df7512105404e0f7f78279d612b03f42f6d53ea95a1c533ce73c2ace01/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:50 compute-0 podman[193384]: 2025-10-03 09:31:50.059135071 +0000 UTC m=+0.167126578 container init 3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75 (image=quay.io/ceph/ceph:v18, name=awesome_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 09:31:50 compute-0 podman[193384]: 2025-10-03 09:31:50.076909043 +0000 UTC m=+0.184900520 container start 3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75 (image=quay.io/ceph/ceph:v18, name=awesome_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:31:50 compute-0 podman[193384]: 2025-10-03 09:31:50.082289366 +0000 UTC m=+0.190280873 container attach 3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75 (image=quay.io/ceph/ceph:v18, name=awesome_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  3 09:31:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920036 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:31:50 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:31:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct  3 09:31:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:50 compute-0 ceph-mgr[192071]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct  3 09:31:50 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct  3 09:31:50 compute-0 systemd[1]: libpod-3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75.scope: Deactivated successfully.
Oct  3 09:31:50 compute-0 podman[193384]: 2025-10-03 09:31:50.678962183 +0000 UTC m=+0.786953700 container died 3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75 (image=quay.io/ceph/ceph:v18, name=awesome_mcnulty, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e15b2df7512105404e0f7f78279d612b03f42f6d53ea95a1c533ce73c2ace01-merged.mount: Deactivated successfully.
Oct  3 09:31:50 compute-0 podman[193384]: 2025-10-03 09:31:50.75440985 +0000 UTC m=+0.862401337 container remove 3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75 (image=quay.io/ceph/ceph:v18, name=awesome_mcnulty, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:50 compute-0 systemd[1]: libpod-conmon-3f89358874030acf444c9df403cb9b2e51107934bac2eae8aea647c2c96e3b75.scope: Deactivated successfully.
Oct  3 09:31:50 compute-0 podman[193440]: 2025-10-03 09:31:50.844116006 +0000 UTC m=+0.064883389 container create dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8 (image=quay.io/ceph/ceph:v18, name=exciting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 09:31:50 compute-0 systemd[1]: Started libpod-conmon-dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8.scope.
Oct  3 09:31:50 compute-0 podman[193440]: 2025-10-03 09:31:50.818475851 +0000 UTC m=+0.039243274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d188ded8f997d71102785aa8bed0f60c80c8120fafd4f6c99ffb8f2a9b8c3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d188ded8f997d71102785aa8bed0f60c80c8120fafd4f6c99ffb8f2a9b8c3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d188ded8f997d71102785aa8bed0f60c80c8120fafd4f6c99ffb8f2a9b8c3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:50 compute-0 podman[193440]: 2025-10-03 09:31:50.992041885 +0000 UTC m=+0.212809288 container init dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8 (image=quay.io/ceph/ceph:v18, name=exciting_wright, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:51 compute-0 podman[193440]: 2025-10-03 09:31:51.002142661 +0000 UTC m=+0.222910044 container start dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8 (image=quay.io/ceph/ceph:v18, name=exciting_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:31:51 compute-0 podman[193440]: 2025-10-03 09:31:51.008013759 +0000 UTC m=+0.228781152 container attach dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8 (image=quay.io/ceph/ceph:v18, name=exciting_wright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:51 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:31:51 compute-0 exciting_wright[193456]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXmKM5pHD/ZJmHPoNhgs6oa7LQzOzpdSD6MNduUG8Y0WbIRGOqypyRZcvvDJBH9VsCn9NTmJMIKsycNb86XHW2jaLfTyx3Ml3w3XFexgDj9Klc03yXZzR0vNxm7QCAnhUSlqJKT7F94dgxHCaRemewCTgxEOHwf+u6JixWw+A/oVM59z8Pd43OZmXy0v8ju2nATIalz5mHzHqzxg1J5K4XiEdZg9i7ndGdp1uMOM813IrBb1rasMHBYDiPcreScWAIIxyBL4cTHNkq0IhcauF3Z1Mp+SXzQbpiQw1mxm+049oJi0Iy14W1icDPwrpJBz3rFkW4dxiiO8VEq5pM47HAogvey1RM3J5g45iSkMpp4eVkBgzbpdDJl8/gmCYuLXWDZ5/mVugdh7nuhmPbMQuhupC9m8STccqsPqX8eyQUi6mW4El5W3XWjD5Tl+n59bNtlLflSNmRD85+w23kunXuCWqV1LErmeIviFekpWdJo7ILnKkEdPokaQRZiDxcNfs= zuul@controller
Oct  3 09:31:51 compute-0 systemd[1]: libpod-dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8.scope: Deactivated successfully.
Oct  3 09:31:51 compute-0 podman[193440]: 2025-10-03 09:31:51.600787621 +0000 UTC m=+0.821555004 container died dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8 (image=quay.io/ceph/ceph:v18, name=exciting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-63d188ded8f997d71102785aa8bed0f60c80c8120fafd4f6c99ffb8f2a9b8c3b-merged.mount: Deactivated successfully.
Oct  3 09:31:51 compute-0 ceph-mon[191783]: Set ssh ssh_identity_key
Oct  3 09:31:51 compute-0 ceph-mon[191783]: Set ssh private key
Oct  3 09:31:51 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:51 compute-0 ceph-mon[191783]: Set ssh ssh_identity_pub
Oct  3 09:31:51 compute-0 podman[193440]: 2025-10-03 09:31:51.658339102 +0000 UTC m=+0.879106485 container remove dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8 (image=quay.io/ceph/ceph:v18, name=exciting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 09:31:51 compute-0 systemd[1]: libpod-conmon-dfa44c2966cb6cbb8a6f760e0d9956104346d0b48421d1c9da029988572975a8.scope: Deactivated successfully.
Oct  3 09:31:51 compute-0 podman[193495]: 2025-10-03 09:31:51.728855641 +0000 UTC m=+0.047653304 container create 138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db (image=quay.io/ceph/ceph:v18, name=eager_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:51 compute-0 systemd[1]: Started libpod-conmon-138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db.scope.
Oct  3 09:31:51 compute-0 podman[193495]: 2025-10-03 09:31:51.71114099 +0000 UTC m=+0.029938673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0dd99a5f1254d4857b1da12a527e7f522d24a85aaef9c192928db03e0b16b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0dd99a5f1254d4857b1da12a527e7f522d24a85aaef9c192928db03e0b16b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0dd99a5f1254d4857b1da12a527e7f522d24a85aaef9c192928db03e0b16b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:51 compute-0 podman[193495]: 2025-10-03 09:31:51.836950678 +0000 UTC m=+0.155748371 container init 138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db (image=quay.io/ceph/ceph:v18, name=eager_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:31:51 compute-0 podman[193495]: 2025-10-03 09:31:51.844899935 +0000 UTC m=+0.163697598 container start 138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db (image=quay.io/ceph/ceph:v18, name=eager_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:51 compute-0 podman[193495]: 2025-10-03 09:31:51.849896835 +0000 UTC m=+0.168694498 container attach 138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db (image=quay.io/ceph/ceph:v18, name=eager_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:31:51 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:31:52 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:31:52 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Oct  3 09:31:52 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  3 09:31:52 compute-0 systemd-logind[798]: New session 29 of user ceph-admin.
Oct  3 09:31:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  3 09:31:52 compute-0 systemd[1]: Starting User Manager for UID 42477...
Oct  3 09:31:52 compute-0 systemd-logind[798]: New session 31 of user ceph-admin.
Oct  3 09:31:52 compute-0 systemd[193541]: Queued start job for default target Main User Target.
Oct  3 09:31:52 compute-0 systemd[193541]: Created slice User Application Slice.
Oct  3 09:31:52 compute-0 systemd[193541]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  3 09:31:52 compute-0 systemd[193541]: Started Daily Cleanup of User's Temporary Directories.
Oct  3 09:31:52 compute-0 systemd[193541]: Reached target Paths.
Oct  3 09:31:52 compute-0 systemd[193541]: Reached target Timers.
Oct  3 09:31:52 compute-0 systemd[193541]: Starting D-Bus User Message Bus Socket...
Oct  3 09:31:52 compute-0 systemd[193541]: Starting Create User's Volatile Files and Directories...
Oct  3 09:31:52 compute-0 systemd[193541]: Listening on D-Bus User Message Bus Socket.
Oct  3 09:31:52 compute-0 systemd[193541]: Reached target Sockets.
Oct  3 09:31:52 compute-0 systemd[193541]: Finished Create User's Volatile Files and Directories.
Oct  3 09:31:52 compute-0 systemd[193541]: Reached target Basic System.
Oct  3 09:31:52 compute-0 systemd[193541]: Reached target Main User Target.
Oct  3 09:31:52 compute-0 systemd[193541]: Startup finished in 142ms.
Oct  3 09:31:52 compute-0 systemd[1]: Started User Manager for UID 42477.
Oct  3 09:31:52 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Oct  3 09:31:52 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Oct  3 09:31:52 compute-0 podman[193557]: 2025-10-03 09:31:52.986996129 +0000 UTC m=+0.073347271 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Oct  3 09:31:53 compute-0 podman[193570]: 2025-10-03 09:31:53.10419991 +0000 UTC m=+0.125306364 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:31:53 compute-0 systemd-logind[798]: New session 32 of user ceph-admin.
Oct  3 09:31:53 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Oct  3 09:31:53 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:31:53 compute-0 systemd-logind[798]: New session 33 of user ceph-admin.
Oct  3 09:31:53 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Oct  3 09:31:54 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct  3 09:31:54 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct  3 09:31:54 compute-0 systemd-logind[798]: New session 34 of user ceph-admin.
Oct  3 09:31:54 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Oct  3 09:31:54 compute-0 ceph-mon[191783]: Deploying cephadm binary to compute-0
Oct  3 09:31:54 compute-0 systemd-logind[798]: New session 35 of user ceph-admin.
Oct  3 09:31:54 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Oct  3 09:31:54 compute-0 podman[193820]: 2025-10-03 09:31:54.940181918 +0000 UTC m=+0.081203864 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:31:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053013 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:31:55 compute-0 systemd-logind[798]: New session 36 of user ceph-admin.
Oct  3 09:31:55 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Oct  3 09:31:55 compute-0 systemd-logind[798]: New session 37 of user ceph-admin.
Oct  3 09:31:55 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Oct  3 09:31:55 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:31:56 compute-0 systemd-logind[798]: New session 38 of user ceph-admin.
Oct  3 09:31:56 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Oct  3 09:31:56 compute-0 systemd-logind[798]: New session 39 of user ceph-admin.
Oct  3 09:31:56 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Oct  3 09:31:57 compute-0 systemd-logind[798]: New session 40 of user ceph-admin.
Oct  3 09:31:57 compute-0 systemd[1]: Started Session 40 of User ceph-admin.
Oct  3 09:31:57 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:31:57 compute-0 systemd-logind[798]: New session 41 of user ceph-admin.
Oct  3 09:31:57 compute-0 systemd[1]: Started Session 41 of User ceph-admin.
Oct  3 09:31:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  3 09:31:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:58 compute-0 ceph-mgr[192071]: [cephadm INFO root] Added host compute-0
Oct  3 09:31:58 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  3 09:31:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  3 09:31:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  3 09:31:58 compute-0 eager_feistel[193511]: Added host 'compute-0' with addr '192.168.122.100'
Oct  3 09:31:58 compute-0 systemd[1]: libpod-138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db.scope: Deactivated successfully.
Oct  3 09:31:58 compute-0 podman[193495]: 2025-10-03 09:31:58.57253781 +0000 UTC m=+6.891335503 container died 138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db (image=quay.io/ceph/ceph:v18, name=eager_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d0dd99a5f1254d4857b1da12a527e7f522d24a85aaef9c192928db03e0b16b6-merged.mount: Deactivated successfully.
Oct  3 09:31:58 compute-0 podman[193495]: 2025-10-03 09:31:58.64711389 +0000 UTC m=+6.965911573 container remove 138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db (image=quay.io/ceph/ceph:v18, name=eager_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:31:58 compute-0 systemd[1]: libpod-conmon-138a7a9c1d5170969a78009eafe25cbff02b3de9cd92dfc468337a14758f16db.scope: Deactivated successfully.
Oct  3 09:31:58 compute-0 podman[194253]: 2025-10-03 09:31:58.711778541 +0000 UTC m=+0.039944856 container create 01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36 (image=quay.io/ceph/ceph:v18, name=priceless_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:31:58 compute-0 systemd[1]: Started libpod-conmon-01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36.scope.
Oct  3 09:31:58 compute-0 podman[194253]: 2025-10-03 09:31:58.694671331 +0000 UTC m=+0.022837666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf960b46c62387de763f0ac9a8c5feebf93d852632b76aec86bd3abcbb497c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf960b46c62387de763f0ac9a8c5feebf93d852632b76aec86bd3abcbb497c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf960b46c62387de763f0ac9a8c5feebf93d852632b76aec86bd3abcbb497c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:58 compute-0 podman[194253]: 2025-10-03 09:31:58.831944197 +0000 UTC m=+0.160110542 container init 01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36 (image=quay.io/ceph/ceph:v18, name=priceless_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:31:58 compute-0 podman[194253]: 2025-10-03 09:31:58.84696557 +0000 UTC m=+0.175131885 container start 01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36 (image=quay.io/ceph/ceph:v18, name=priceless_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 09:31:58 compute-0 podman[194253]: 2025-10-03 09:31:58.852116166 +0000 UTC m=+0.180282491 container attach 01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36 (image=quay.io/ceph/ceph:v18, name=priceless_poincare, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 09:31:59 compute-0 podman[194371]: 2025-10-03 09:31:59.281749398 +0000 UTC m=+0.069052072 container create ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e (image=quay.io/ceph/ceph:v18, name=eager_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:31:59 compute-0 systemd[1]: Started libpod-conmon-ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e.scope.
Oct  3 09:31:59 compute-0 podman[194371]: 2025-10-03 09:31:59.252722074 +0000 UTC m=+0.040024838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:59 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:59 compute-0 podman[194371]: 2025-10-03 09:31:59.371174765 +0000 UTC m=+0.158477479 container init ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e (image=quay.io/ceph/ceph:v18, name=eager_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:31:59 compute-0 podman[194371]: 2025-10-03 09:31:59.382319174 +0000 UTC m=+0.169621858 container start ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e (image=quay.io/ceph/ceph:v18, name=eager_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:59 compute-0 podman[194371]: 2025-10-03 09:31:59.388767361 +0000 UTC m=+0.176070075 container attach ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e (image=quay.io/ceph/ceph:v18, name=eager_shirley, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:31:59 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:31:59 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct  3 09:31:59 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct  3 09:31:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  3 09:31:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:59 compute-0 priceless_poincare[194298]: Scheduled mon update...
Oct  3 09:31:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:59 compute-0 ceph-mon[191783]: Added host compute-0
Oct  3 09:31:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:59 compute-0 systemd[1]: libpod-01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36.scope: Deactivated successfully.
Oct  3 09:31:59 compute-0 podman[194253]: 2025-10-03 09:31:59.557064566 +0000 UTC m=+0.885230881 container died 01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36 (image=quay.io/ceph/ceph:v18, name=priceless_poincare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbf960b46c62387de763f0ac9a8c5feebf93d852632b76aec86bd3abcbb497c2-merged.mount: Deactivated successfully.
Oct  3 09:31:59 compute-0 podman[194253]: 2025-10-03 09:31:59.617277893 +0000 UTC m=+0.945444208 container remove 01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36 (image=quay.io/ceph/ceph:v18, name=priceless_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:31:59 compute-0 systemd[1]: libpod-conmon-01259ce91d54871c72be5dfae9a322a8fc3ee9b0dfaaa5008e00f16899f07d36.scope: Deactivated successfully.
Oct  3 09:31:59 compute-0 podman[194421]: 2025-10-03 09:31:59.688333619 +0000 UTC m=+0.049553555 container create 039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d (image=quay.io/ceph/ceph:v18, name=admiring_einstein, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 09:31:59 compute-0 eager_shirley[194404]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  3 09:31:59 compute-0 systemd[1]: libpod-ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e.scope: Deactivated successfully.
Oct  3 09:31:59 compute-0 podman[194371]: 2025-10-03 09:31:59.721360102 +0000 UTC m=+0.508662796 container died ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e (image=quay.io/ceph/ceph:v18, name=eager_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:31:59 compute-0 systemd[1]: Started libpod-conmon-039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d.scope.
Oct  3 09:31:59 compute-0 podman[157165]: time="2025-10-03T09:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:31:59 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a20f1344a11616f690d7227d9b33b7e9e8863fbdb61b4b6301d2ba422773d56-merged.mount: Deactivated successfully.
Oct  3 09:31:59 compute-0 podman[194421]: 2025-10-03 09:31:59.66504972 +0000 UTC m=+0.026269676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32990e4481807f99b035923351daea6f56aad9890f210b438ac332cea707c9d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32990e4481807f99b035923351daea6f56aad9890f210b438ac332cea707c9d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32990e4481807f99b035923351daea6f56aad9890f210b438ac332cea707c9d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:31:59 compute-0 podman[194421]: 2025-10-03 09:31:59.793304666 +0000 UTC m=+0.154524632 container init 039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d (image=quay.io/ceph/ceph:v18, name=admiring_einstein, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:31:59 compute-0 podman[194371]: 2025-10-03 09:31:59.803726212 +0000 UTC m=+0.591028896 container remove ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e (image=quay.io/ceph/ceph:v18, name=eager_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:31:59 compute-0 podman[194421]: 2025-10-03 09:31:59.805475038 +0000 UTC m=+0.166694974 container start 039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d (image=quay.io/ceph/ceph:v18, name=admiring_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:31:59 compute-0 podman[194421]: 2025-10-03 09:31:59.810170429 +0000 UTC m=+0.171390395 container attach 039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d (image=quay.io/ceph/ceph:v18, name=admiring_einstein, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:31:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 24759 "" "Go-http-client/1.1"
Oct  3 09:31:59 compute-0 systemd[1]: libpod-conmon-ac8c4daafe4cf0eb87227c18aff1785b115ecc6f086dc5a85a36a762a30f918e.scope: Deactivated successfully.
Oct  3 09:31:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Oct  3 09:31:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct  3 09:31:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:31:59 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:32:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:00 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:32:00 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct  3 09:32:00 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct  3 09:32:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  3 09:32:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:00 compute-0 admiring_einstein[194443]: Scheduled mgr update...
Oct  3 09:32:00 compute-0 systemd[1]: libpod-039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d.scope: Deactivated successfully.
Oct  3 09:32:00 compute-0 podman[194421]: 2025-10-03 09:32:00.444313791 +0000 UTC m=+0.805533737 container died 039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d (image=quay.io/ceph/ceph:v18, name=admiring_einstein, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-32990e4481807f99b035923351daea6f56aad9890f210b438ac332cea707c9d2-merged.mount: Deactivated successfully.
Oct  3 09:32:00 compute-0 podman[194421]: 2025-10-03 09:32:00.513571459 +0000 UTC m=+0.874791405 container remove 039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d (image=quay.io/ceph/ceph:v18, name=admiring_einstein, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:00 compute-0 systemd[1]: libpod-conmon-039b2273937a2dc8da569a82fa6533d3ffb2795dd4322d7a345f7cfabe5cb81d.scope: Deactivated successfully.
Oct  3 09:32:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:00 compute-0 ceph-mon[191783]: Saving service mon spec with placement count:5
Oct  3 09:32:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:00 compute-0 podman[194603]: 2025-10-03 09:32:00.606742697 +0000 UTC m=+0.062029167 container create 0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f (image=quay.io/ceph/ceph:v18, name=vibrant_stonebraker, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 09:32:00 compute-0 systemd[1]: Started libpod-conmon-0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f.scope.
Oct  3 09:32:00 compute-0 podman[194603]: 2025-10-03 09:32:00.580203023 +0000 UTC m=+0.035489543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf7c14c3d4d1ffde465386c22ffc4e6df66a896711f6624f2df04ba4ddcb5a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf7c14c3d4d1ffde465386c22ffc4e6df66a896711f6624f2df04ba4ddcb5a7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf7c14c3d4d1ffde465386c22ffc4e6df66a896711f6624f2df04ba4ddcb5a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:00 compute-0 podman[194603]: 2025-10-03 09:32:00.723622227 +0000 UTC m=+0.178908717 container init 0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f (image=quay.io/ceph/ceph:v18, name=vibrant_stonebraker, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 09:32:00 compute-0 podman[194603]: 2025-10-03 09:32:00.736175321 +0000 UTC m=+0.191461801 container start 0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f (image=quay.io/ceph/ceph:v18, name=vibrant_stonebraker, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:00 compute-0 podman[194603]: 2025-10-03 09:32:00.740337405 +0000 UTC m=+0.195623895 container attach 0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f (image=quay.io/ceph/ceph:v18, name=vibrant_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:01 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:32:01 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service crash spec with placement *
Oct  3 09:32:01 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct  3 09:32:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  3 09:32:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:01 compute-0 vibrant_stonebraker[194644]: Scheduled crash update...
Oct  3 09:32:01 compute-0 systemd[1]: libpod-0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f.scope: Deactivated successfully.
Oct  3 09:32:01 compute-0 podman[194603]: 2025-10-03 09:32:01.402896121 +0000 UTC m=+0.858182601 container died 0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f (image=quay.io/ceph/ceph:v18, name=vibrant_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 09:32:01 compute-0 openstack_network_exporter[159287]: ERROR   09:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:32:01 compute-0 openstack_network_exporter[159287]: ERROR   09:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:32:01 compute-0 openstack_network_exporter[159287]: ERROR   09:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:32:01 compute-0 openstack_network_exporter[159287]: ERROR   09:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:32:01 compute-0 openstack_network_exporter[159287]: ERROR   09:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:32:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaf7c14c3d4d1ffde465386c22ffc4e6df66a896711f6624f2df04ba4ddcb5a7-merged.mount: Deactivated successfully.
Oct  3 09:32:01 compute-0 podman[194603]: 2025-10-03 09:32:01.477058867 +0000 UTC m=+0.932345337 container remove 0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f (image=quay.io/ceph/ceph:v18, name=vibrant_stonebraker, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:01 compute-0 systemd[1]: libpod-conmon-0168309f5ce42b5c212d8739f79cc9428e419609e4319e1a7ee9b03d5909414f.scope: Deactivated successfully.
Oct  3 09:32:01 compute-0 podman[194813]: 2025-10-03 09:32:01.535550259 +0000 UTC m=+0.122941266 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  3 09:32:01 compute-0 podman[194840]: 2025-10-03 09:32:01.558941582 +0000 UTC m=+0.056472678 container create d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea (image=quay.io/ceph/ceph:v18, name=hopeful_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:01 compute-0 ceph-mon[191783]: Saving service mgr spec with placement count:2
Oct  3 09:32:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:01 compute-0 systemd[1]: Started libpod-conmon-d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea.scope.
Oct  3 09:32:01 compute-0 podman[194840]: 2025-10-03 09:32:01.531822469 +0000 UTC m=+0.029353595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d6f3cdb4880e7b132ecc8403aa498056ff112d441dbbf7d5031e59bc5cd7d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d6f3cdb4880e7b132ecc8403aa498056ff112d441dbbf7d5031e59bc5cd7d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59d6f3cdb4880e7b132ecc8403aa498056ff112d441dbbf7d5031e59bc5cd7d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:01 compute-0 podman[194840]: 2025-10-03 09:32:01.69875121 +0000 UTC m=+0.196282336 container init d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea (image=quay.io/ceph/ceph:v18, name=hopeful_wilson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct  3 09:32:01 compute-0 podman[194840]: 2025-10-03 09:32:01.708139561 +0000 UTC m=+0.205670657 container start d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea (image=quay.io/ceph/ceph:v18, name=hopeful_wilson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:01 compute-0 podman[194840]: 2025-10-03 09:32:01.712804062 +0000 UTC m=+0.210335188 container attach d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea (image=quay.io/ceph/ceph:v18, name=hopeful_wilson, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:32:01 compute-0 podman[194813]: 2025-10-03 09:32:01.875092073 +0000 UTC m=+0.462483060 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:01 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:32:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct  3 09:32:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/172354149' entity='client.admin' 
Oct  3 09:32:02 compute-0 systemd[1]: libpod-d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea.scope: Deactivated successfully.
Oct  3 09:32:02 compute-0 podman[194840]: 2025-10-03 09:32:02.351607674 +0000 UTC m=+0.849138770 container died d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea (image=quay.io/ceph/ceph:v18, name=hopeful_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-59d6f3cdb4880e7b132ecc8403aa498056ff112d441dbbf7d5031e59bc5cd7d0-merged.mount: Deactivated successfully.
Oct  3 09:32:02 compute-0 podman[194840]: 2025-10-03 09:32:02.416797401 +0000 UTC m=+0.914328497 container remove d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea (image=quay.io/ceph/ceph:v18, name=hopeful_wilson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:02 compute-0 systemd[1]: libpod-conmon-d0e60e5e142e4b8e76e05e847bb50a6b320163bcd13b6dd62919732837a058ea.scope: Deactivated successfully.
Oct  3 09:32:02 compute-0 podman[195030]: 2025-10-03 09:32:02.514592867 +0000 UTC m=+0.071613964 container create 62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f (image=quay.io/ceph/ceph:v18, name=funny_shannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:02 compute-0 ceph-mon[191783]: Saving service crash spec with placement *
Oct  3 09:32:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:02 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/172354149' entity='client.admin' 
Oct  3 09:32:02 compute-0 systemd[1]: Started libpod-conmon-62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f.scope.
Oct  3 09:32:02 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 195055 (sysctl)
Oct  3 09:32:02 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  3 09:32:02 compute-0 podman[195030]: 2025-10-03 09:32:02.48950128 +0000 UTC m=+0.046522467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:02 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  3 09:32:02 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a376a047d9ae25dd2fc957f06eb3e737d6cfcd3047e48dc1503402e9810990/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a376a047d9ae25dd2fc957f06eb3e737d6cfcd3047e48dc1503402e9810990/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a376a047d9ae25dd2fc957f06eb3e737d6cfcd3047e48dc1503402e9810990/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:02 compute-0 podman[195030]: 2025-10-03 09:32:02.647783133 +0000 UTC m=+0.204804250 container init 62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f (image=quay.io/ceph/ceph:v18, name=funny_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 09:32:02 compute-0 podman[195030]: 2025-10-03 09:32:02.65856918 +0000 UTC m=+0.215590287 container start 62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f (image=quay.io/ceph/ceph:v18, name=funny_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:02 compute-0 podman[195030]: 2025-10-03 09:32:02.662758204 +0000 UTC m=+0.219779321 container attach 62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f (image=quay.io/ceph/ceph:v18, name=funny_shannon, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 09:32:03 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:32:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct  3 09:32:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:03 compute-0 systemd[1]: libpod-62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f.scope: Deactivated successfully.
Oct  3 09:32:03 compute-0 podman[195030]: 2025-10-03 09:32:03.300560735 +0000 UTC m=+0.857581852 container died 62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f (image=quay.io/ceph/ceph:v18, name=funny_shannon, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 09:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-51a376a047d9ae25dd2fc957f06eb3e737d6cfcd3047e48dc1503402e9810990-merged.mount: Deactivated successfully.
Oct  3 09:32:03 compute-0 podman[195030]: 2025-10-03 09:32:03.35855057 +0000 UTC m=+0.915571667 container remove 62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f (image=quay.io/ceph/ceph:v18, name=funny_shannon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 09:32:03 compute-0 systemd[1]: libpod-conmon-62a57fb93f2d02245596b88bf0525c3bf888aff9e62d484367d203b70b2d779f.scope: Deactivated successfully.
Oct  3 09:32:03 compute-0 podman[195216]: 2025-10-03 09:32:03.44651577 +0000 UTC m=+0.064692822 container create b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c (image=quay.io/ceph/ceph:v18, name=laughing_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:03 compute-0 systemd[1]: Started libpod-conmon-b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c.scope.
Oct  3 09:32:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:03 compute-0 podman[195216]: 2025-10-03 09:32:03.427209539 +0000 UTC m=+0.045386611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa8a4f8780fc3f07ce4d68077de0a8d8cbb74054e141f0582c8e9e93c8e71e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa8a4f8780fc3f07ce4d68077de0a8d8cbb74054e141f0582c8e9e93c8e71e7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa8a4f8780fc3f07ce4d68077de0a8d8cbb74054e141f0582c8e9e93c8e71e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:03 compute-0 podman[195216]: 2025-10-03 09:32:03.551727535 +0000 UTC m=+0.169904607 container init b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c (image=quay.io/ceph/ceph:v18, name=laughing_ardinghelli, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:03 compute-0 podman[195216]: 2025-10-03 09:32:03.562379978 +0000 UTC m=+0.180557040 container start b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c (image=quay.io/ceph/ceph:v18, name=laughing_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:03 compute-0 podman[195216]: 2025-10-03 09:32:03.567289275 +0000 UTC m=+0.185466357 container attach b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c (image=quay.io/ceph/ceph:v18, name=laughing_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:32:03 compute-0 ceph-mgr[192071]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  3 09:32:04 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:32:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  3 09:32:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:04 compute-0 ceph-mgr[192071]: [cephadm INFO root] Added label _admin to host compute-0
Oct  3 09:32:04 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct  3 09:32:04 compute-0 laughing_ardinghelli[195245]: Added label _admin to host compute-0
Oct  3 09:32:04 compute-0 systemd[1]: libpod-b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c.scope: Deactivated successfully.
Oct  3 09:32:04 compute-0 podman[195216]: 2025-10-03 09:32:04.244634007 +0000 UTC m=+0.862811059 container died b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c (image=quay.io/ceph/ceph:v18, name=laughing_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aa8a4f8780fc3f07ce4d68077de0a8d8cbb74054e141f0582c8e9e93c8e71e7-merged.mount: Deactivated successfully.
Oct  3 09:32:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:04 compute-0 podman[195411]: 2025-10-03 09:32:04.286864096 +0000 UTC m=+0.077693860 container create 3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 09:32:04 compute-0 podman[195216]: 2025-10-03 09:32:04.329117206 +0000 UTC m=+0.947294248 container remove b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c (image=quay.io/ceph/ceph:v18, name=laughing_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:32:04 compute-0 podman[195411]: 2025-10-03 09:32:04.242607152 +0000 UTC m=+0.033436936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:04 compute-0 systemd[1]: libpod-conmon-b1c592ed19d877d709d40ee31c04afeee5429a328dba37fc93023a4caa796a5c.scope: Deactivated successfully.
Oct  3 09:32:04 compute-0 systemd[1]: Started libpod-conmon-3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3.scope.
Oct  3 09:32:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:04 compute-0 podman[195411]: 2025-10-03 09:32:04.399514891 +0000 UTC m=+0.190344685 container init 3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:04 compute-0 podman[195411]: 2025-10-03 09:32:04.408876212 +0000 UTC m=+0.199705976 container start 3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:04 compute-0 podman[195411]: 2025-10-03 09:32:04.412664783 +0000 UTC m=+0.203494577 container attach 3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 09:32:04 compute-0 serene_shtern[195446]: 167 167
Oct  3 09:32:04 compute-0 podman[195444]: 2025-10-03 09:32:04.417334904 +0000 UTC m=+0.056549290 container create 9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3 (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 09:32:04 compute-0 systemd[1]: libpod-3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3.scope: Deactivated successfully.
Oct  3 09:32:04 compute-0 podman[195411]: 2025-10-03 09:32:04.41780957 +0000 UTC m=+0.208639354 container died 3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b2a12a08f37102407bf922eefaf4e421d3bf7ae35a39b5e064d429aa990f3b-merged.mount: Deactivated successfully.
Oct  3 09:32:04 compute-0 systemd[1]: Started libpod-conmon-9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3.scope.
Oct  3 09:32:04 compute-0 podman[195411]: 2025-10-03 09:32:04.482581343 +0000 UTC m=+0.273411107 container remove 3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  3 09:32:04 compute-0 podman[195444]: 2025-10-03 09:32:04.400471102 +0000 UTC m=+0.039685508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:04 compute-0 systemd[1]: libpod-conmon-3dbbfa0f2dcd93648780341c8db691667e4c1392ead5311e867c0aec7df32bb3.scope: Deactivated successfully.
Oct  3 09:32:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d57d77397870d9c395a5688dfde9b69000e503e7471a708b6686eea4a00e57e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d57d77397870d9c395a5688dfde9b69000e503e7471a708b6686eea4a00e57e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d57d77397870d9c395a5688dfde9b69000e503e7471a708b6686eea4a00e57e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:04 compute-0 podman[195444]: 2025-10-03 09:32:04.531087604 +0000 UTC m=+0.170302020 container init 9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3 (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:04 compute-0 podman[195444]: 2025-10-03 09:32:04.543852124 +0000 UTC m=+0.183066510 container start 9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3 (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:04 compute-0 podman[195444]: 2025-10-03 09:32:04.54993998 +0000 UTC m=+0.189154406 container attach 9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3 (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:32:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct  3 09:32:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3011820785' entity='client.admin' 
Oct  3 09:32:05 compute-0 systemd[1]: libpod-9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3.scope: Deactivated successfully.
Oct  3 09:32:05 compute-0 conmon[195473]: conmon 9e4fae5e9c05eb16fc19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3.scope/container/memory.events
Oct  3 09:32:05 compute-0 podman[195444]: 2025-10-03 09:32:05.1796627 +0000 UTC m=+0.818877086 container died 9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3 (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d57d77397870d9c395a5688dfde9b69000e503e7471a708b6686eea4a00e57e-merged.mount: Deactivated successfully.
Oct  3 09:32:05 compute-0 podman[195444]: 2025-10-03 09:32:05.257124603 +0000 UTC m=+0.896338999 container remove 9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3 (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:05 compute-0 systemd[1]: libpod-conmon-9e4fae5e9c05eb16fc1986a388f680d18ed784b4b8a7a575f145bb898b76b0a3.scope: Deactivated successfully.
Oct  3 09:32:05 compute-0 ceph-mon[191783]: Added label _admin to host compute-0
Oct  3 09:32:05 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3011820785' entity='client.admin' 
Oct  3 09:32:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:05 compute-0 podman[195512]: 2025-10-03 09:32:05.342143867 +0000 UTC m=+0.057157599 container create 686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595 (image=quay.io/ceph/ceph:v18, name=keen_jang, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:32:05 compute-0 systemd[1]: Started libpod-conmon-686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595.scope.
Oct  3 09:32:05 compute-0 podman[195512]: 2025-10-03 09:32:05.316799902 +0000 UTC m=+0.031813664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774fc1ad68dccde6325fbe64e1edf03ed4920620d893b426659ff90632452be5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774fc1ad68dccde6325fbe64e1edf03ed4920620d893b426659ff90632452be5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774fc1ad68dccde6325fbe64e1edf03ed4920620d893b426659ff90632452be5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:05 compute-0 podman[195512]: 2025-10-03 09:32:05.454374048 +0000 UTC m=+0.169387810 container init 686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595 (image=quay.io/ceph/ceph:v18, name=keen_jang, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:32:05 compute-0 podman[195512]: 2025-10-03 09:32:05.464950708 +0000 UTC m=+0.179964440 container start 686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595 (image=quay.io/ceph/ceph:v18, name=keen_jang, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 09:32:05 compute-0 podman[195512]: 2025-10-03 09:32:05.469107682 +0000 UTC m=+0.184121434 container attach 686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595 (image=quay.io/ceph/ceph:v18, name=keen_jang, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 09:32:05 compute-0 ceph-mgr[192071]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct  3 09:32:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:05 compute-0 ceph-mon[191783]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  3 09:32:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct  3 09:32:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/920109050' entity='client.admin' 
Oct  3 09:32:06 compute-0 keen_jang[195528]: set mgr/dashboard/cluster/status
Oct  3 09:32:06 compute-0 systemd[1]: libpod-686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595.scope: Deactivated successfully.
Oct  3 09:32:06 compute-0 conmon[195528]: conmon 686f3a051039e56aeaaa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595.scope/container/memory.events
Oct  3 09:32:06 compute-0 podman[195554]: 2025-10-03 09:32:06.24515825 +0000 UTC m=+0.049350619 container died 686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595 (image=quay.io/ceph/ceph:v18, name=keen_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 09:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-774fc1ad68dccde6325fbe64e1edf03ed4920620d893b426659ff90632452be5-merged.mount: Deactivated successfully.
Oct  3 09:32:06 compute-0 ceph-mon[191783]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  3 09:32:06 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/920109050' entity='client.admin' 
Oct  3 09:32:06 compute-0 podman[195554]: 2025-10-03 09:32:06.29612793 +0000 UTC m=+0.100320279 container remove 686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595 (image=quay.io/ceph/ceph:v18, name=keen_jang, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:06 compute-0 systemd[1]: libpod-conmon-686f3a051039e56aeaaa30b87ca0fb5298a6b264f0f48a7dfa36ee92a1917595.scope: Deactivated successfully.
Oct  3 09:32:06 compute-0 podman[195575]: 2025-10-03 09:32:06.551861747 +0000 UTC m=+0.061113837 container create 97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 09:32:06 compute-0 systemd[1]: Started libpod-conmon-97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc.scope.
Oct  3 09:32:06 compute-0 podman[195575]: 2025-10-03 09:32:06.528431174 +0000 UTC m=+0.037683264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:06 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28478303ddaa96734f68228dc5b657e48cb6d6366f20d5ba3cc8f0da3f38e8d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28478303ddaa96734f68228dc5b657e48cb6d6366f20d5ba3cc8f0da3f38e8d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28478303ddaa96734f68228dc5b657e48cb6d6366f20d5ba3cc8f0da3f38e8d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28478303ddaa96734f68228dc5b657e48cb6d6366f20d5ba3cc8f0da3f38e8d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:06 compute-0 podman[195575]: 2025-10-03 09:32:06.6796603 +0000 UTC m=+0.188912380 container init 97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 09:32:06 compute-0 podman[195575]: 2025-10-03 09:32:06.698220546 +0000 UTC m=+0.207472606 container start 97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 09:32:06 compute-0 podman[195575]: 2025-10-03 09:32:06.702723161 +0000 UTC m=+0.211975241 container attach 97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 09:32:06 compute-0 python3[195621]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:06 compute-0 podman[195622]: 2025-10-03 09:32:06.930497759 +0000 UTC m=+0.051552660 container create 55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924 (image=quay.io/ceph/ceph:v18, name=wizardly_dirac, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:06 compute-0 systemd[1]: Started libpod-conmon-55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924.scope.
Oct  3 09:32:07 compute-0 podman[195622]: 2025-10-03 09:32:06.908651087 +0000 UTC m=+0.029705988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d29dd630a8c7bbf440ad153949e18ad7fda15ff4498a44990d16f81a2f44d96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d29dd630a8c7bbf440ad153949e18ad7fda15ff4498a44990d16f81a2f44d96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:07 compute-0 podman[195622]: 2025-10-03 09:32:07.039583719 +0000 UTC m=+0.160638650 container init 55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924 (image=quay.io/ceph/ceph:v18, name=wizardly_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:07 compute-0 podman[195622]: 2025-10-03 09:32:07.049479677 +0000 UTC m=+0.170534558 container start 55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924 (image=quay.io/ceph/ceph:v18, name=wizardly_dirac, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 09:32:07 compute-0 podman[195622]: 2025-10-03 09:32:07.054327994 +0000 UTC m=+0.175382915 container attach 55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924 (image=quay.io/ceph/ceph:v18, name=wizardly_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct  3 09:32:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2559294805' entity='client.admin' 
Oct  3 09:32:07 compute-0 systemd[1]: libpod-55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924.scope: Deactivated successfully.
Oct  3 09:32:07 compute-0 podman[195678]: 2025-10-03 09:32:07.769918036 +0000 UTC m=+0.043526241 container died 55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924 (image=quay.io/ceph/ceph:v18, name=wizardly_dirac, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 09:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d29dd630a8c7bbf440ad153949e18ad7fda15ff4498a44990d16f81a2f44d96-merged.mount: Deactivated successfully.
Oct  3 09:32:07 compute-0 podman[195678]: 2025-10-03 09:32:07.823576472 +0000 UTC m=+0.097184657 container remove 55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924 (image=quay.io/ceph/ceph:v18, name=wizardly_dirac, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 09:32:07 compute-0 systemd[1]: libpod-conmon-55ed9bdef3a2eb0f1bca95458dcb1a5a9ea1c1d720ae74e623dbb6a559ea2924.scope: Deactivated successfully.
Oct  3 09:32:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:08 compute-0 sharp_golick[195592]: [
Oct  3 09:32:08 compute-0 sharp_golick[195592]:    {
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        "available": false,
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        "ceph_device": false,
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        "lsm_data": {},
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        "lvs": [],
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        "path": "/dev/sr0",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        "rejected_reasons": [
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "Insufficient space (<5GB)",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "Has a FileSystem"
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        ],
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        "sys_api": {
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "actuators": null,
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "device_nodes": "sr0",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "devname": "sr0",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "human_readable_size": "482.00 KB",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "id_bus": "ata",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "model": "QEMU DVD-ROM",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "nr_requests": "2",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "parent": "/dev/sr0",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "partitions": {},
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "path": "/dev/sr0",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "removable": "1",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "rev": "2.5+",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "ro": "0",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "rotational": "0",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "sas_address": "",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "sas_device_handle": "",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "scheduler_mode": "mq-deadline",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "sectors": 0,
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "sectorsize": "2048",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "size": 493568.0,
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "support_discard": "2048",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "type": "disk",
Oct  3 09:32:08 compute-0 sharp_golick[195592]:            "vendor": "QEMU"
Oct  3 09:32:08 compute-0 sharp_golick[195592]:        }
Oct  3 09:32:08 compute-0 sharp_golick[195592]:    }
Oct  3 09:32:08 compute-0 sharp_golick[195592]: ]
Oct  3 09:32:08 compute-0 systemd[1]: libpod-97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc.scope: Deactivated successfully.
Oct  3 09:32:08 compute-0 systemd[1]: libpod-97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc.scope: Consumed 1.963s CPU time.
Oct  3 09:32:08 compute-0 podman[195575]: 2025-10-03 09:32:08.593632637 +0000 UTC m=+2.102884707 container died 97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 09:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-28478303ddaa96734f68228dc5b657e48cb6d6366f20d5ba3cc8f0da3f38e8d7-merged.mount: Deactivated successfully.
Oct  3 09:32:08 compute-0 podman[195575]: 2025-10-03 09:32:08.656428887 +0000 UTC m=+2.165680947 container remove 97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:32:08 compute-0 systemd[1]: libpod-conmon-97ddb6bcf170a0341c242aa7a6f28ac2982820a16be6f38efb256ebdea3493dc.scope: Deactivated successfully.
Oct  3 09:32:08 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2559294805' entity='client.admin' 
Oct  3 09:32:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 09:32:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 09:32:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:32:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:32:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:32:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:32:08 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  3 09:32:08 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  3 09:32:08 compute-0 ansible-async_wrapper.py[197805]: Invoked with j812941051402 30 /home/zuul/.ansible/tmp/ansible-tmp-1759483928.202914-33668-180596721288793/AnsiballZ_command.py _
Oct  3 09:32:08 compute-0 ansible-async_wrapper.py[197854]: Starting module and watcher
Oct  3 09:32:08 compute-0 ansible-async_wrapper.py[197854]: Start watching 197856 (30)
Oct  3 09:32:08 compute-0 ansible-async_wrapper.py[197856]: Start module (197856)
Oct  3 09:32:08 compute-0 ansible-async_wrapper.py[197805]: Return async_wrapper task started.
Oct  3 09:32:09 compute-0 python3[197857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:09 compute-0 podman[197907]: 2025-10-03 09:32:09.130062795 +0000 UTC m=+0.044872985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:09 compute-0 podman[197907]: 2025-10-03 09:32:09.259554962 +0000 UTC m=+0.174365132 container create 4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a (image=quay.io/ceph/ceph:v18, name=hungry_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:32:09 compute-0 systemd[1]: Started libpod-conmon-4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a.scope.
Oct  3 09:32:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c975f93ad5d81fed522f146a86396140934437eb334bfdd8bde5883f7ddb67/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c975f93ad5d81fed522f146a86396140934437eb334bfdd8bde5883f7ddb67/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:09 compute-0 podman[197907]: 2025-10-03 09:32:09.381708531 +0000 UTC m=+0.296518731 container init 4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a (image=quay.io/ceph/ceph:v18, name=hungry_swanson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:09 compute-0 podman[197907]: 2025-10-03 09:32:09.395624219 +0000 UTC m=+0.310434379 container start 4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a (image=quay.io/ceph/ceph:v18, name=hungry_swanson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:09 compute-0 podman[197907]: 2025-10-03 09:32:09.401161307 +0000 UTC m=+0.315971487 container attach 4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a (image=quay.io/ceph/ceph:v18, name=hungry_swanson, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 09:32:09 compute-0 auditd[710]: Audit daemon rotating log files
Oct  3 09:32:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 09:32:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:32:09 compute-0 ceph-mon[191783]: Updating compute-0:/etc/ceph/ceph.conf
Oct  3 09:32:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:10 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  3 09:32:10 compute-0 hungry_swanson[197992]: 
Oct  3 09:32:10 compute-0 hungry_swanson[197992]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  3 09:32:10 compute-0 systemd[1]: libpod-4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a.scope: Deactivated successfully.
Oct  3 09:32:10 compute-0 podman[197907]: 2025-10-03 09:32:10.045122355 +0000 UTC m=+0.959932615 container died 4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a (image=quay.io/ceph/ceph:v18, name=hungry_swanson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Oct  3 09:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2c975f93ad5d81fed522f146a86396140934437eb334bfdd8bde5883f7ddb67-merged.mount: Deactivated successfully.
Oct  3 09:32:10 compute-0 podman[197907]: 2025-10-03 09:32:10.103837344 +0000 UTC m=+1.018647514 container remove 4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a (image=quay.io/ceph/ceph:v18, name=hungry_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:10 compute-0 systemd[1]: libpod-conmon-4932d6df42397452a3da1b20140d1e1410f74e6f79e96ab245e3776a41c2b09a.scope: Deactivated successfully.
Oct  3 09:32:10 compute-0 ansible-async_wrapper.py[197856]: Module complete (197856)
Oct  3 09:32:10 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/9b4e8c9a-5555-5510-a631-4742a1182561/config/ceph.conf
Oct  3 09:32:10 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/9b4e8c9a-5555-5510-a631-4742a1182561/config/ceph.conf
Oct  3 09:32:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:10 compute-0 python3[198331]: ansible-ansible.legacy.async_status Invoked with jid=j812941051402.197805 mode=status _async_dir=/root/.ansible_async
Oct  3 09:32:10 compute-0 python3[198485]: ansible-ansible.legacy.async_status Invoked with jid=j812941051402.197805 mode=cleanup _async_dir=/root/.ansible_async
Oct  3 09:32:11 compute-0 python3[198662]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:32:11 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  3 09:32:11 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  3 09:32:11 compute-0 ceph-mon[191783]: Updating compute-0:/var/lib/ceph/9b4e8c9a-5555-5510-a631-4742a1182561/config/ceph.conf
Oct  3 09:32:11 compute-0 python3[198857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:11 compute-0 podman[198904]: 2025-10-03 09:32:11.813998615 +0000 UTC m=+0.056701755 container create 80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d (image=quay.io/ceph/ceph:v18, name=brave_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:11 compute-0 systemd[1]: Started libpod-conmon-80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d.scope.
Oct  3 09:32:11 compute-0 podman[198904]: 2025-10-03 09:32:11.796001986 +0000 UTC m=+0.038705156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70141037686c3cfbab57512291c367044c25a10819e3f1d47e427280e818e61d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70141037686c3cfbab57512291c367044c25a10819e3f1d47e427280e818e61d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70141037686c3cfbab57512291c367044c25a10819e3f1d47e427280e818e61d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:11 compute-0 podman[198904]: 2025-10-03 09:32:11.936768115 +0000 UTC m=+0.179471275 container init 80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d (image=quay.io/ceph/ceph:v18, name=brave_montalcini, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:11 compute-0 podman[198904]: 2025-10-03 09:32:11.94500623 +0000 UTC m=+0.187709380 container start 80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d (image=quay.io/ceph/ceph:v18, name=brave_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:11 compute-0 podman[198904]: 2025-10-03 09:32:11.949428692 +0000 UTC m=+0.192131852 container attach 80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d (image=quay.io/ceph/ceph:v18, name=brave_montalcini, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:32:12 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  3 09:32:12 compute-0 brave_montalcini[198946]: 
Oct  3 09:32:12 compute-0 brave_montalcini[198946]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  3 09:32:12 compute-0 systemd[1]: libpod-80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d.scope: Deactivated successfully.
Oct  3 09:32:12 compute-0 podman[198904]: 2025-10-03 09:32:12.555510532 +0000 UTC m=+0.798213672 container died 80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d (image=quay.io/ceph/ceph:v18, name=brave_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 09:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-70141037686c3cfbab57512291c367044c25a10819e3f1d47e427280e818e61d-merged.mount: Deactivated successfully.
Oct  3 09:32:12 compute-0 podman[198904]: 2025-10-03 09:32:12.605181969 +0000 UTC m=+0.847885109 container remove 80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d (image=quay.io/ceph/ceph:v18, name=brave_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 09:32:12 compute-0 systemd[1]: libpod-conmon-80b0a6d848a53212bb972a30c5a287cfd223c8dd9c686dea9fa3a8ca2152585d.scope: Deactivated successfully.
Oct  3 09:32:12 compute-0 ceph-mon[191783]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  3 09:32:13 compute-0 python3[199308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:13 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/9b4e8c9a-5555-5510-a631-4742a1182561/config/ceph.client.admin.keyring
Oct  3 09:32:13 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/9b4e8c9a-5555-5510-a631-4742a1182561/config/ceph.client.admin.keyring
Oct  3 09:32:13 compute-0 podman[199359]: 2025-10-03 09:32:13.193219518 +0000 UTC m=+0.056437436 container create d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e (image=quay.io/ceph/ceph:v18, name=zen_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:13 compute-0 systemd[1]: Started libpod-conmon-d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e.scope.
Oct  3 09:32:13 compute-0 podman[199359]: 2025-10-03 09:32:13.17181415 +0000 UTC m=+0.035032088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f26cf5a825423d37ce7d0529e2535cb7effb44cdd9c1dff616578df651403c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f26cf5a825423d37ce7d0529e2535cb7effb44cdd9c1dff616578df651403c9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f26cf5a825423d37ce7d0529e2535cb7effb44cdd9c1dff616578df651403c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:13 compute-0 podman[199359]: 2025-10-03 09:32:13.30672673 +0000 UTC m=+0.169944748 container init d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e (image=quay.io/ceph/ceph:v18, name=zen_shtern, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:32:13 compute-0 podman[199359]: 2025-10-03 09:32:13.318514279 +0000 UTC m=+0.181732197 container start d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e (image=quay.io/ceph/ceph:v18, name=zen_shtern, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 09:32:13 compute-0 podman[199359]: 2025-10-03 09:32:13.323041555 +0000 UTC m=+0.186259493 container attach d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e (image=quay.io/ceph/ceph:v18, name=zen_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 09:32:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct  3 09:32:13 compute-0 ansible-async_wrapper.py[197854]: Done in kid B.
Oct  3 09:32:13 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3606502520' entity='client.admin' 
Oct  3 09:32:13 compute-0 systemd[1]: libpod-d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e.scope: Deactivated successfully.
Oct  3 09:32:13 compute-0 podman[199359]: 2025-10-03 09:32:13.951054899 +0000 UTC m=+0.814272817 container died d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e (image=quay.io/ceph/ceph:v18, name=zen_shtern, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:32:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f26cf5a825423d37ce7d0529e2535cb7effb44cdd9c1dff616578df651403c9-merged.mount: Deactivated successfully.
Oct  3 09:32:14 compute-0 podman[199359]: 2025-10-03 09:32:14.008980014 +0000 UTC m=+0.872197932 container remove d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e (image=quay.io/ceph/ceph:v18, name=zen_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 09:32:14 compute-0 systemd[1]: libpod-conmon-d7b2225dc2a0a87be021c5bd1f3bfbe9335e9a773493a155da4f285b464b8a7e.scope: Deactivated successfully.
Oct  3 09:32:14 compute-0 python3[199738]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:14 compute-0 podman[199786]: 2025-10-03 09:32:14.434812323 +0000 UTC m=+0.068386731 container create dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef (image=quay.io/ceph/ceph:v18, name=nervous_wozniak, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:14 compute-0 systemd[1]: Started libpod-conmon-dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef.scope.
Oct  3 09:32:14 compute-0 podman[199786]: 2025-10-03 09:32:14.411656959 +0000 UTC m=+0.045231457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd3c7aa66800ee9e085aed487c0cca66174ade97d2727eadf80adc9d372765/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd3c7aa66800ee9e085aed487c0cca66174ade97d2727eadf80adc9d372765/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd3c7aa66800ee9e085aed487c0cca66174ade97d2727eadf80adc9d372765/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:14 compute-0 podman[199786]: 2025-10-03 09:32:14.533040163 +0000 UTC m=+0.166614581 container init dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef (image=quay.io/ceph/ceph:v18, name=nervous_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 09:32:14 compute-0 podman[199786]: 2025-10-03 09:32:14.546396424 +0000 UTC m=+0.179970832 container start dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef (image=quay.io/ceph/ceph:v18, name=nervous_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:14 compute-0 podman[199786]: 2025-10-03 09:32:14.551768086 +0000 UTC m=+0.185342494 container attach dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef (image=quay.io/ceph/ceph:v18, name=nervous_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:32:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:14 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 04a30a95-f435-4442-b478-a25ecb012bcc (Updating crash deployment (+1 -> 1))
Oct  3 09:32:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct  3 09:32:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  3 09:32:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  3 09:32:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:32:14 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:32:14 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct  3 09:32:14 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct  3 09:32:14 compute-0 ceph-mon[191783]: Updating compute-0:/var/lib/ceph/9b4e8c9a-5555-5510-a631-4742a1182561/config/ceph.client.admin.keyring
Oct  3 09:32:14 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3606502520' entity='client.admin' 
Oct  3 09:32:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  3 09:32:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  3 09:32:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct  3 09:32:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3980431323' entity='client.admin' 
Oct  3 09:32:15 compute-0 systemd[1]: libpod-dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef.scope: Deactivated successfully.
Oct  3 09:32:15 compute-0 podman[200006]: 2025-10-03 09:32:15.272202674 +0000 UTC m=+0.042046254 container died dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef (image=quay.io/ceph/ceph:v18, name=nervous_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 09:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-07cd3c7aa66800ee9e085aed487c0cca66174ade97d2727eadf80adc9d372765-merged.mount: Deactivated successfully.
Oct  3 09:32:15 compute-0 podman[200006]: 2025-10-03 09:32:15.327004138 +0000 UTC m=+0.096847698 container remove dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef (image=quay.io/ceph/ceph:v18, name=nervous_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:15 compute-0 systemd[1]: libpod-conmon-dffe65fe83183c862a626a7ce0e99d1c40edf0d67a9a986ab5d95339084fd4ef.scope: Deactivated successfully.
Oct  3 09:32:15 compute-0 podman[200050]: 2025-10-03 09:32:15.492620226 +0000 UTC m=+0.080710029 container create c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:15 compute-0 systemd[1]: Started libpod-conmon-c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4.scope.
Oct  3 09:32:15 compute-0 podman[200050]: 2025-10-03 09:32:15.450060586 +0000 UTC m=+0.038150469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:15 compute-0 podman[200050]: 2025-10-03 09:32:15.589616896 +0000 UTC m=+0.177706709 container init c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:15 compute-0 podman[200050]: 2025-10-03 09:32:15.598310546 +0000 UTC m=+0.186400329 container start c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:32:15 compute-0 podman[200050]: 2025-10-03 09:32:15.602682436 +0000 UTC m=+0.190772229 container attach c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 09:32:15 compute-0 friendly_rosalind[200090]: 167 167
Oct  3 09:32:15 compute-0 systemd[1]: libpod-c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4.scope: Deactivated successfully.
Oct  3 09:32:15 compute-0 podman[200050]: 2025-10-03 09:32:15.605341852 +0000 UTC m=+0.193431645 container died c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-153b8caee548c1fa1d76992811e0a1deda883ac742753a4a47cb1eb254b07d16-merged.mount: Deactivated successfully.
Oct  3 09:32:15 compute-0 podman[200050]: 2025-10-03 09:32:15.716756337 +0000 UTC m=+0.304846130 container remove c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:32:15 compute-0 systemd[1]: libpod-conmon-c6e5d713607841e5bff88efd7438c14125b9ffd043516f240a8e7061e18688b4.scope: Deactivated successfully.
Oct  3 09:32:15 compute-0 python3[200094]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:15 compute-0 systemd[1]: Reloading.
Oct  3 09:32:15 compute-0 podman[200112]: 2025-10-03 09:32:15.810511503 +0000 UTC m=+0.062446610 container create ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137 (image=quay.io/ceph/ceph:v18, name=jolly_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:32:15 compute-0 ceph-mon[191783]: Deploying daemon crash.compute-0 on compute-0
Oct  3 09:32:15 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3980431323' entity='client.admin' 
Oct  3 09:32:15 compute-0 podman[200112]: 2025-10-03 09:32:15.779894998 +0000 UTC m=+0.031830115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:15 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:15 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:32:16 compute-0 systemd[1]: Started libpod-conmon-ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137.scope.
Oct  3 09:32:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe5fd3c35d2c8be256bd03c824c97b14164213fa860de31695ad2f2207fe53a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe5fd3c35d2c8be256bd03c824c97b14164213fa860de31695ad2f2207fe53a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe5fd3c35d2c8be256bd03c824c97b14164213fa860de31695ad2f2207fe53a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:16 compute-0 systemd[1]: Reloading.
Oct  3 09:32:16 compute-0 podman[200112]: 2025-10-03 09:32:16.336550777 +0000 UTC m=+0.588485894 container init ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137 (image=quay.io/ceph/ceph:v18, name=jolly_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:32:16 compute-0 podman[200165]: 2025-10-03 09:32:16.342821799 +0000 UTC m=+0.145139900 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, version=9.4, container_name=kepler, managed_by=edpm_ansible, vendor=Red Hat, Inc., release-0.7.12=, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 09:32:16 compute-0 podman[200112]: 2025-10-03 09:32:16.352387657 +0000 UTC m=+0.604322744 container start ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137 (image=quay.io/ceph/ceph:v18, name=jolly_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 09:32:16 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:16 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:16 compute-0 podman[200112]: 2025-10-03 09:32:16.468876455 +0000 UTC m=+0.720811572 container attach ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137 (image=quay.io/ceph/ceph:v18, name=jolly_hawking, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:16 compute-0 podman[200167]: 2025-10-03 09:32:16.511673392 +0000 UTC m=+0.312237597 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:32:16 compute-0 podman[200168]: 2025-10-03 09:32:16.550387867 +0000 UTC m=+0.345029282 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:32:16 compute-0 podman[200170]: 2025-10-03 09:32:16.560428 +0000 UTC m=+0.351306333 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:32:16 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4205549034' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  3 09:32:17 compute-0 podman[200347]: 2025-10-03 09:32:17.084605424 +0000 UTC m=+0.090140801 container create 97c57d436adb82f267e8a02b4ba833a6cb35311af26a289115b98ba35ea5505b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:17 compute-0 podman[200347]: 2025-10-03 09:32:17.029016146 +0000 UTC m=+0.034551563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faec5a7d325a07ca5ab74d3df094334a24d5b73c7ee120ea525d2b9be64b34df/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faec5a7d325a07ca5ab74d3df094334a24d5b73c7ee120ea525d2b9be64b34df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faec5a7d325a07ca5ab74d3df094334a24d5b73c7ee120ea525d2b9be64b34df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faec5a7d325a07ca5ab74d3df094334a24d5b73c7ee120ea525d2b9be64b34df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:17 compute-0 podman[200347]: 2025-10-03 09:32:17.258655314 +0000 UTC m=+0.264190711 container init 97c57d436adb82f267e8a02b4ba833a6cb35311af26a289115b98ba35ea5505b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:17 compute-0 podman[200347]: 2025-10-03 09:32:17.266649901 +0000 UTC m=+0.272185288 container start 97c57d436adb82f267e8a02b4ba833a6cb35311af26a289115b98ba35ea5505b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:17 compute-0 bash[200347]: 97c57d436adb82f267e8a02b4ba833a6cb35311af26a289115b98ba35ea5505b
Oct  3 09:32:17 compute-0 systemd[1]: Started Ceph crash.compute-0 for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: 2025-10-03T09:32:17.651+0000 7efded081640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: 2025-10-03T09:32:17.651+0000 7efded081640 -1 AuthRegistry(0x7efde8067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: 2025-10-03T09:32:17.654+0000 7efded081640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: 2025-10-03T09:32:17.654+0000 7efded081640 -1 AuthRegistry(0x7efded080000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: 2025-10-03T09:32:17.656+0000 7efde6d76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: 2025-10-03T09:32:17.657+0000 7efded081640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  3 09:32:17 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-crash-compute-0[200363]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:17 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 04a30a95-f435-4442-b478-a25ecb012bcc (Updating crash deployment (+1 -> 1))
Oct  3 09:32:17 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 04a30a95-f435-4442-b478-a25ecb012bcc (Updating crash deployment (+1 -> 1)) in 3 seconds
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 15f269de-eaed-48cb-9f9c-9f737cbaf63d does not exist
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:17 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 03e92262-96ee-489d-8281-bae29a5095d3 (Updating mgr deployment (+1 -> 2))
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ubiymr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ubiymr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ubiymr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  3 09:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:32:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:32:17 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.ubiymr on compute-0
Oct  3 09:32:17 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.ubiymr on compute-0
Oct  3 09:32:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct  3 09:32:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:32:18 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/4205549034' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  3 09:32:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ubiymr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  3 09:32:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ubiymr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  3 09:32:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4205549034' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  3 09:32:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct  3 09:32:18 compute-0 jolly_hawking[200164]: set require_min_compat_client to mimic
Oct  3 09:32:18 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct  3 09:32:18 compute-0 systemd[1]: libpod-ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137.scope: Deactivated successfully.
Oct  3 09:32:18 compute-0 podman[200112]: 2025-10-03 09:32:18.069912544 +0000 UTC m=+2.321847671 container died ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137 (image=quay.io/ceph/ceph:v18, name=jolly_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fe5fd3c35d2c8be256bd03c824c97b14164213fa860de31695ad2f2207fe53a-merged.mount: Deactivated successfully.
Oct  3 09:32:18 compute-0 podman[200112]: 2025-10-03 09:32:18.166649756 +0000 UTC m=+2.418584843 container remove ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137 (image=quay.io/ceph/ceph:v18, name=jolly_hawking, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:32:18 compute-0 systemd[1]: libpod-conmon-ca0f596264101640abcafdf15f9296752b49c78d3671cb71d996709ff4a70137.scope: Deactivated successfully.
Oct  3 09:32:18 compute-0 podman[200529]: 2025-10-03 09:32:18.666386085 +0000 UTC m=+0.071207283 container create 706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:18 compute-0 systemd[1]: Started libpod-conmon-706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482.scope.
Oct  3 09:32:18 compute-0 podman[200529]: 2025-10-03 09:32:18.630596183 +0000 UTC m=+0.035417411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:18 compute-0 python3[200567]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:18 compute-0 podman[200529]: 2025-10-03 09:32:18.895728773 +0000 UTC m=+0.300550001 container init 706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:32:18 compute-0 podman[200529]: 2025-10-03 09:32:18.91149783 +0000 UTC m=+0.316319048 container start 706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 09:32:18 compute-0 elated_hellman[200570]: 167 167
Oct  3 09:32:18 compute-0 systemd[1]: libpod-706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482.scope: Deactivated successfully.
Oct  3 09:32:18 compute-0 podman[200529]: 2025-10-03 09:32:18.92452033 +0000 UTC m=+0.329341548 container attach 706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:18 compute-0 podman[200529]: 2025-10-03 09:32:18.925128599 +0000 UTC m=+0.329949797 container died 706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:18 compute-0 podman[200573]: 2025-10-03 09:32:18.906592173 +0000 UTC m=+0.049497964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:19 compute-0 ceph-mon[191783]: Deploying daemon mgr.compute-0.ubiymr on compute-0
Oct  3 09:32:19 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/4205549034' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  3 09:32:19 compute-0 podman[200573]: 2025-10-03 09:32:19.117006452 +0000 UTC m=+0.259912263 container create ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc (image=quay.io/ceph/ceph:v18, name=jolly_jemison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:32:19 compute-0 systemd[1]: Started libpod-conmon-ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc.scope.
Oct  3 09:32:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0a72e9983a3f61d2dfb7863cf3bb844afc5b2ffb8dc3bd8f1270c0623737b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0a72e9983a3f61d2dfb7863cf3bb844afc5b2ffb8dc3bd8f1270c0623737b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0a72e9983a3f61d2dfb7863cf3bb844afc5b2ffb8dc3bd8f1270c0623737b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c3dd31dedb4d43e04703d60bce654846aac5481547ba53fd480e43990adc54a-merged.mount: Deactivated successfully.
Oct  3 09:32:19 compute-0 podman[200529]: 2025-10-03 09:32:19.289051248 +0000 UTC m=+0.693872486 container remove 706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:32:19 compute-0 podman[200573]: 2025-10-03 09:32:19.300808185 +0000 UTC m=+0.443714036 container init ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc (image=quay.io/ceph/ceph:v18, name=jolly_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 09:32:19 compute-0 systemd[1]: libpod-conmon-706fb07189b8c865c9be59df049f02ef711d6e1e885a20697818ef2c543e2482.scope: Deactivated successfully.
Oct  3 09:32:19 compute-0 podman[200573]: 2025-10-03 09:32:19.315937622 +0000 UTC m=+0.458843393 container start ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc (image=quay.io/ceph/ceph:v18, name=jolly_jemison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 09:32:19 compute-0 podman[200573]: 2025-10-03 09:32:19.342120815 +0000 UTC m=+0.485026586 container attach ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc (image=quay.io/ceph/ceph:v18, name=jolly_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:19 compute-0 systemd[1]: Reloading.
Oct  3 09:32:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:19 compute-0 systemd[1]: Reloading.
Oct  3 09:32:19 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:32:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:20 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ubiymr for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:20 compute-0 podman[200845]: 2025-10-03 09:32:20.586475949 +0000 UTC m=+0.058126741 container create f92646b9e420a47bd5ebf7806e901dc5de3f297568445f4b369c842e4d681b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-ubiymr, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0398693d3eae1a4cbd5be60d698079a9b62793bf19aff368b3bb4f749b0691c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0398693d3eae1a4cbd5be60d698079a9b62793bf19aff368b3bb4f749b0691c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0398693d3eae1a4cbd5be60d698079a9b62793bf19aff368b3bb4f749b0691c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0398693d3eae1a4cbd5be60d698079a9b62793bf19aff368b3bb4f749b0691c9/merged/var/lib/ceph/mgr/ceph-compute-0.ubiymr supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:20 compute-0 podman[200845]: 2025-10-03 09:32:20.563974505 +0000 UTC m=+0.035625327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:20 compute-0 podman[200845]: 2025-10-03 09:32:20.675885206 +0000 UTC m=+0.147536018 container init f92646b9e420a47bd5ebf7806e901dc5de3f297568445f4b369c842e4d681b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-ubiymr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 09:32:20 compute-0 podman[200845]: 2025-10-03 09:32:20.694846915 +0000 UTC m=+0.166497707 container start f92646b9e420a47bd5ebf7806e901dc5de3f297568445f4b369c842e4d681b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-ubiymr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:20 compute-0 bash[200845]: f92646b9e420a47bd5ebf7806e901dc5de3f297568445f4b369c842e4d681b97
Oct  3 09:32:20 compute-0 systemd[1]: Started Ceph mgr.compute-0.ubiymr for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:32:20 compute-0 ceph-mgr[200877]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:32:20 compute-0 ceph-mgr[200877]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  3 09:32:20 compute-0 ceph-mgr[200877]: pidfile_write: ignore empty --pid-file
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 03e92262-96ee-489d-8281-bae29a5095d3 (Updating mgr deployment (+1 -> 2))
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 03e92262-96ee-489d-8281-bae29a5095d3 (Updating mgr deployment (+1 -> 2)) in 3 seconds
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: [cephadm INFO root] Added host compute-0
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct  3 09:32:20 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct  3 09:32:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct  3 09:32:20 compute-0 ceph-mgr[200877]: mgr[py] Loading python module 'alerts'
Oct  3 09:32:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:20 compute-0 jolly_jemison[200600]: Added host 'compute-0' with addr '192.168.122.100'
Oct  3 09:32:20 compute-0 jolly_jemison[200600]: Scheduled mon update...
Oct  3 09:32:20 compute-0 jolly_jemison[200600]: Scheduled mgr update...
Oct  3 09:32:20 compute-0 jolly_jemison[200600]: Scheduled osd.default_drive_group update...
Oct  3 09:32:20 compute-0 systemd[1]: libpod-ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc.scope: Deactivated successfully.
Oct  3 09:32:20 compute-0 podman[200573]: 2025-10-03 09:32:20.905029468 +0000 UTC m=+2.047935239 container died ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc (image=quay.io/ceph/ceph:v18, name=jolly_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a0a72e9983a3f61d2dfb7863cf3bb844afc5b2ffb8dc3bd8f1270c0623737b5-merged.mount: Deactivated successfully.
Oct  3 09:32:20 compute-0 podman[200573]: 2025-10-03 09:32:20.970484624 +0000 UTC m=+2.113390395 container remove ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc (image=quay.io/ceph/ceph:v18, name=jolly_jemison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 09:32:20 compute-0 systemd[1]: libpod-conmon-ebeea4f37f29b2706fe4bae595cedcf296b0ce3f3117aa121d3cf15e9122b7cc.scope: Deactivated successfully.
Oct  3 09:32:21 compute-0 ceph-mgr[192071]: [progress INFO root] Writing back 2 completed events
Oct  3 09:32:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  3 09:32:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mgr[200877]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  3 09:32:21 compute-0 ceph-mgr[200877]: mgr[py] Loading python module 'balancer'
Oct  3 09:32:21 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-ubiymr[200862]: 2025-10-03T09:32:21.199+0000 7fb492c22140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  3 09:32:21 compute-0 python3[201080]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:21 compute-0 ceph-mgr[200877]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  3 09:32:21 compute-0 ceph-mgr[200877]: mgr[py] Loading python module 'cephadm'
Oct  3 09:32:21 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-ubiymr[200862]: 2025-10-03T09:32:21.488+0000 7fb492c22140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  3 09:32:21 compute-0 podman[201099]: 2025-10-03 09:32:21.497597223 +0000 UTC m=+0.059150214 container create 91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5 (image=quay.io/ceph/ceph:v18, name=tender_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 09:32:21 compute-0 systemd[1]: Started libpod-conmon-91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5.scope.
Oct  3 09:32:21 compute-0 podman[201099]: 2025-10-03 09:32:21.472863806 +0000 UTC m=+0.034416817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3210bfd55f00babdb37dc293be5df4859387e045ba6fc9ab46e8029231e663bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3210bfd55f00babdb37dc293be5df4859387e045ba6fc9ab46e8029231e663bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3210bfd55f00babdb37dc293be5df4859387e045ba6fc9ab46e8029231e663bc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:21 compute-0 podman[201099]: 2025-10-03 09:32:21.599076047 +0000 UTC m=+0.160629058 container init 91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5 (image=quay.io/ceph/ceph:v18, name=tender_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 09:32:21 compute-0 podman[201099]: 2025-10-03 09:32:21.612569382 +0000 UTC m=+0.174122373 container start 91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5 (image=quay.io/ceph/ceph:v18, name=tender_elion, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 09:32:21 compute-0 podman[201099]: 2025-10-03 09:32:21.620177136 +0000 UTC m=+0.181730157 container attach 91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5 (image=quay.io/ceph/ceph:v18, name=tender_elion, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:22 compute-0 podman[201189]: 2025-10-03 09:32:22.016704973 +0000 UTC m=+0.108886884 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:22 compute-0 podman[201189]: 2025-10-03 09:32:22.12631564 +0000 UTC m=+0.218497561 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1426508442' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  3 09:32:22 compute-0 tender_elion[201141]: 
Oct  3 09:32:22 compute-0 tender_elion[201141]: {"fsid":"9b4e8c9a-5555-5510-a631-4742a1182561","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":86,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-03T09:30:51.647609+0000","services":{}},"progress_events":{"03e92262-96ee-489d-8281-bae29a5095d3":{"message":"Updating mgr deployment (+1 -> 2) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct  3 09:32:22 compute-0 systemd[1]: libpod-91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5.scope: Deactivated successfully.
Oct  3 09:32:22 compute-0 podman[201099]: 2025-10-03 09:32:22.358789269 +0000 UTC m=+0.920342260 container died 91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5 (image=quay.io/ceph/ceph:v18, name=tender_elion, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3210bfd55f00babdb37dc293be5df4859387e045ba6fc9ab46e8029231e663bc-merged.mount: Deactivated successfully.
Oct  3 09:32:22 compute-0 podman[201099]: 2025-10-03 09:32:22.423673577 +0000 UTC m=+0.985226568 container remove 91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5 (image=quay.io/ceph/ceph:v18, name=tender_elion, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 09:32:22 compute-0 systemd[1]: libpod-conmon-91e6db9b2f67ed79c40bd5fbb43714495742447f5189e95fa409f0c7c44f14b5.scope: Deactivated successfully.
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6c8d23ef-0b9f-4818-b9c0-43b3246aa613 does not exist
Oct  3 09:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  3 09:32:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev da85b597-2fd2-4d3d-aec7-10798a95eab2 (Updating mgr deployment (-1 -> 1))
Oct  3 09:32:22 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.ubiymr from compute-0 -- ports [8765]
Oct  3 09:32:22 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.ubiymr from compute-0 -- ports [8765]
Oct  3 09:32:22 compute-0 ceph-mon[191783]: Added host compute-0
Oct  3 09:32:22 compute-0 ceph-mon[191783]: Saving service mon spec with placement compute-0
Oct  3 09:32:22 compute-0 ceph-mon[191783]: Saving service mgr spec with placement compute-0
Oct  3 09:32:22 compute-0 ceph-mon[191783]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  3 09:32:22 compute-0 ceph-mon[191783]: Saving service osd.default_drive_group spec with placement compute-0
Oct  3 09:32:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:32:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:23 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.ubiymr for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:32:23 compute-0 podman[201458]: 2025-10-03 09:32:23.357515421 +0000 UTC m=+0.110587359 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 09:32:23 compute-0 podman[201457]: 2025-10-03 09:32:23.370932773 +0000 UTC m=+0.123482404 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  3 09:32:23 compute-0 podman[201523]: 2025-10-03 09:32:23.474882707 +0000 UTC m=+0.065085205 container died f92646b9e420a47bd5ebf7806e901dc5de3f297568445f4b369c842e4d681b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-ubiymr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0398693d3eae1a4cbd5be60d698079a9b62793bf19aff368b3bb4f749b0691c9-merged.mount: Deactivated successfully.
Oct  3 09:32:23 compute-0 podman[201523]: 2025-10-03 09:32:23.520517945 +0000 UTC m=+0.110720443 container remove f92646b9e420a47bd5ebf7806e901dc5de3f297568445f4b369c842e4d681b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-ubiymr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 09:32:23 compute-0 bash[201523]: ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-ubiymr
Oct  3 09:32:23 compute-0 systemd[1]: ceph-9b4e8c9a-5555-5510-a631-4742a1182561@mgr.compute-0.ubiymr.service: Main process exited, code=exited, status=143/n/a
Oct  3 09:32:23 compute-0 systemd[1]: ceph-9b4e8c9a-5555-5510-a631-4742a1182561@mgr.compute-0.ubiymr.service: Failed with result 'exit-code'.
Oct  3 09:32:23 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.ubiymr for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:32:23 compute-0 systemd[1]: ceph-9b4e8c9a-5555-5510-a631-4742a1182561@mgr.compute-0.ubiymr.service: Consumed 3.830s CPU time.
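The unit above is cephadm failing over from the bootstrap manager (mgr.compute-0.ubiymr) to the spec-managed one (mgr.compute-0.vtkhde): the progress events show the deployment going "+1 -> 2" and then "-1 -> 1" as the old daemon is stopped and removed; exit status 143 is 128+15, i.e. a clean SIGTERM from systemd. A hedged sketch for confirming which mgr daemons remain after such a failover, assuming a working admin keyring and that `ceph orch ps --format json` returns records with the daemon_type/daemon_name/status_desc fields used here:

    import json
    import subprocess

    # Ask the orchestrator for every managed daemon on the cluster.
    out = subprocess.run(
        ["ceph", "orch", "ps", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    # Keep only mgr daemons, e.g. mgr.compute-0.vtkhde should show as running.
    for d in json.loads(out):
        if d.get("daemon_type") == "mgr":
            print(d.get("daemon_name"), d.get("status_desc"))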
Oct  3 09:32:23 compute-0 systemd[1]: Reloading.
Oct  3 09:32:23 compute-0 ceph-mon[191783]: Removing daemon mgr.compute-0.ubiymr from compute-0 -- ports [8765]
Oct  3 09:32:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:24 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.ubiymr
Oct  3 09:32:24 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.ubiymr
Oct  3 09:32:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.ubiymr"} v 0) v1
Oct  3 09:32:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.ubiymr"}]: dispatch
Oct  3 09:32:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ubiymr"}]': finished
Oct  3 09:32:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  3 09:32:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:24 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev da85b597-2fd2-4d3d-aec7-10798a95eab2 (Updating mgr deployment (-1 -> 1))
Oct  3 09:32:24 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event da85b597-2fd2-4d3d-aec7-10798a95eab2 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Oct  3 09:32:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  3 09:32:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d848b1f8-b418-4ee1-a3bd-8693cbcbb1d2 does not exist
Oct  3 09:32:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:32:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:32:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:32:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:32:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:32:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:32:24 compute-0 ceph-mon[191783]: Removing key for mgr.compute-0.ubiymr
Oct  3 09:32:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.ubiymr"}]: dispatch
Oct  3 09:32:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ubiymr"}]': finished
Oct  3 09:32:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:32:24 compute-0 podman[201752]: 2025-10-03 09:32:24.982692698 +0000 UTC m=+0.049989340 container create 35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wiles, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 09:32:25 compute-0 systemd[1]: Started libpod-conmon-35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817.scope.
Oct  3 09:32:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:25 compute-0 podman[201752]: 2025-10-03 09:32:24.962633652 +0000 UTC m=+0.029930304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:25 compute-0 podman[201752]: 2025-10-03 09:32:25.074023166 +0000 UTC m=+0.141319818 container init 35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wiles, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:25 compute-0 podman[201752]: 2025-10-03 09:32:25.08408563 +0000 UTC m=+0.151382262 container start 35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wiles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:25 compute-0 podman[201752]: 2025-10-03 09:32:25.088830942 +0000 UTC m=+0.156127594 container attach 35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 09:32:25 compute-0 cranky_wiles[201769]: 167 167
Oct  3 09:32:25 compute-0 systemd[1]: libpod-35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817.scope: Deactivated successfully.
Oct  3 09:32:25 compute-0 podman[201752]: 2025-10-03 09:32:25.091951493 +0000 UTC m=+0.159248125 container died 35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wiles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:25 compute-0 podman[201766]: 2025-10-03 09:32:25.110859861 +0000 UTC m=+0.080356286 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b68359d4e05528792ce883a062c07aaec1abc1d9d18b430998bdfb4bf6c314c-merged.mount: Deactivated successfully.
Oct  3 09:32:25 compute-0 podman[201752]: 2025-10-03 09:32:25.145224327 +0000 UTC m=+0.212520959 container remove 35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wiles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:25 compute-0 systemd[1]: libpod-conmon-35e62d4f6caef7b49db4ae5fd56b4d91e0f5768e75cfa778fbadf101590d0817.scope: Deactivated successfully.
Oct  3 09:32:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:25 compute-0 podman[201812]: 2025-10-03 09:32:25.337507366 +0000 UTC m=+0.057421085 container create 060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ellis, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 09:32:25 compute-0 systemd[1]: Started libpod-conmon-060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078.scope.
Oct  3 09:32:25 compute-0 podman[201812]: 2025-10-03 09:32:25.312980966 +0000 UTC m=+0.032894715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c846e0c3e0833de106b7ded2858748d6d4602d0b8cfa73c9190ee7236e437de0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c846e0c3e0833de106b7ded2858748d6d4602d0b8cfa73c9190ee7236e437de0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c846e0c3e0833de106b7ded2858748d6d4602d0b8cfa73c9190ee7236e437de0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c846e0c3e0833de106b7ded2858748d6d4602d0b8cfa73c9190ee7236e437de0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c846e0c3e0833de106b7ded2858748d6d4602d0b8cfa73c9190ee7236e437de0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:25 compute-0 podman[201812]: 2025-10-03 09:32:25.443394026 +0000 UTC m=+0.163307775 container init 060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 09:32:25 compute-0 podman[201812]: 2025-10-03 09:32:25.46074884 +0000 UTC m=+0.180662569 container start 060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 09:32:25 compute-0 podman[201812]: 2025-10-03 09:32:25.465660778 +0000 UTC m=+0.185574517 container attach 060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ellis, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 09:32:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:26 compute-0 ceph-mgr[192071]: [progress INFO root] Writing back 3 completed events
Oct  3 09:32:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  3 09:32:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:26 compute-0 serene_ellis[201829]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:32:26 compute-0 serene_ellis[201829]: --> relative data size: 1.0
Oct  3 09:32:26 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  3 09:32:26 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 25b10821-47d4-4e0b-9b6d-d16a0463c4d0
Oct  3 09:32:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"} v 0) v1
Oct  3 09:32:27 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1014308121' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"}]: dispatch
Oct  3 09:32:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct  3 09:32:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:32:27 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1014308121' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"}]': finished
Oct  3 09:32:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct  3 09:32:27 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct  3 09:32:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:27 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:27 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
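The "failed to return metadata for osd.0" message is expected at this point: `osd new` has just registered the id in the osdmap (e4: 1 total, 0 up, 1 in), but the daemon has not booted, so the mon has nothing to return and answers ENOENT. The same message recurs below for osd.1 and osd.2 until those daemons start. A sketch that polls until metadata appears, assuming admin access to the cluster:

    import json
    import subprocess
    import time

    def wait_for_osd_metadata(osd_id: int, timeout: float = 120.0) -> dict:
        """Poll `ceph osd metadata <id>` until the OSD has booted and reported in."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            proc = subprocess.run(
                ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
                capture_output=True, text=True,
            )
            if proc.returncode == 0 and proc.stdout.strip():
                return json.loads(proc.stdout)
            time.sleep(2)  # ENOENT until the daemon starts, as in the log above
        raise TimeoutError(f"osd.{osd_id} reported no metadata within {timeout}s")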
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  3 09:32:27 compute-0 lvm[201893]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  3 09:32:27 compute-0 lvm[201893]: VG ceph_vg0 finished
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  3 09:32:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  3 09:32:27 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861899716' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  3 09:32:27 compute-0 serene_ellis[201829]: stderr: got monmap epoch 1
Oct  3 09:32:27 compute-0 serene_ellis[201829]: --> Creating keyring file for osd.0
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  3 09:32:27 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 25b10821-47d4-4e0b-9b6d-d16a0463c4d0 --setuser ceph --setgroup ceph
Oct  3 09:32:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:28 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1014308121' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"}]: dispatch
Oct  3 09:32:28 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1014308121' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"}]': finished
Oct  3 09:32:28 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  3 09:32:28 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  3 09:32:29 compute-0 ceph-mon[191783]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  3 09:32:29 compute-0 ceph-mon[191783]: Cluster is now healthy
Oct  3 09:32:29 compute-0 podman[157165]: time="2025-10-03T09:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:32:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25436 "" "Go-http-client/1.1"
Oct  3 09:32:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4834 "" "Go-http-client/1.1"
Oct  3 09:32:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:30 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:27.882+0000 7f84cdf4b740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:30 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:27.882+0000 7f84cdf4b740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:30 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:27.882+0000 7f84cdf4b740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:30 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:27.883+0000 7f84cdf4b740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
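The four stderr lines above are routine on a freshly created logical volume: `ceph-osd --mkfs` probes the block device for an existing BlueStore label before writing one, and on a blank device the probe fails with "unable to decode label" and "_read_fsid unparsable uuid". Identical runs appear below for osd.1 and osd.2, each followed by "prepare successful". Once mkfs has written the label it should decode cleanly, which can be checked with a sketch like this (device path taken from the log):

    import subprocess

    # After mkfs, ceph-bluestore-tool can print the BlueStore label as JSON.
    subprocess.run(
        ["ceph-bluestore-tool", "show-label", "--dev", "/dev/ceph_vg0/ceph_lv0"],
        check=True,
    )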
Oct  3 09:32:30 compute-0 serene_ellis[201829]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  3 09:32:30 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  3 09:32:30 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  3 09:32:30 compute-0 serene_ellis[201829]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:30 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:30 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  3 09:32:30 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  3 09:32:30 compute-0 serene_ellis[201829]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  3 09:32:30 compute-0 serene_ellis[201829]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
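The command trail from `ceph-authtool --gen-print-key` through `ceph-osd --mkfs` to "lvm activate successful" is the standard `ceph-volume lvm create` flow: allocate an osd id via `osd new`, mount a tmpfs data dir, link the LV as `block`, fetch the monmap, run mkfs, then prime the directory with `ceph-bluestore-tool prime-osd-dir`; the same sequence repeats below for ceph_vg1/ceph_lv1 and ceph_vg2/ceph_lv2. A condensed sketch of those steps, assuming root, an existing VG/LV, and a bootstrap-osd keyring in the usual place; it omits the uuid and keyring bookkeeping that ceph-volume also performs, so it is illustrative rather than a drop-in replacement:

    import subprocess

    VG_LV = "ceph_vg0/ceph_lv0"   # taken from the log; adjust for other OSDs
    OSD_ID = "0"
    DATA = f"/var/lib/ceph/osd/ceph-{OSD_ID}"

    def run(*cmd):
        print("Running command:", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("mount", "-t", "tmpfs", "tmpfs", DATA)                    # tmpfs data dir
    run("ln", "-snf", f"/dev/{VG_LV}", f"{DATA}/block")           # LV as block dev
    run("ceph", "--name", "client.bootstrap-osd",
        "--keyring", "/var/lib/ceph/bootstrap-osd/ceph.keyring",
        "mon", "getmap", "-o", f"{DATA}/activate.monmap")         # current monmap
    run("ceph-osd", "--cluster", "ceph", "--osd-objectstore", "bluestore",
        "--mkfs", "-i", OSD_ID, "--monmap", f"{DATA}/activate.monmap",
        "--osd-data", DATA, "--setuser", "ceph", "--setgroup", "ceph")
    run("ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
        "--dev", f"/dev/{VG_LV}", "--path", DATA, "--no-mon-config")
    run("chown", "-R", "ceph:ceph", DATA)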
Oct  3 09:32:30 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  3 09:32:30 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 16cef594-0067-4499-9298-5d83edf70190
Oct  3 09:32:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "16cef594-0067-4499-9298-5d83edf70190"} v 0) v1
Oct  3 09:32:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2927662005' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "16cef594-0067-4499-9298-5d83edf70190"}]: dispatch
Oct  3 09:32:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct  3 09:32:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:32:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2927662005' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "16cef594-0067-4499-9298-5d83edf70190"}]': finished
Oct  3 09:32:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct  3 09:32:31 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct  3 09:32:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:32:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:32:31 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:31 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:32:31 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  3 09:32:31 compute-0 openstack_network_exporter[159287]: ERROR   09:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:32:31 compute-0 openstack_network_exporter[159287]: ERROR   09:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:32:31 compute-0 openstack_network_exporter[159287]: ERROR   09:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:32:31 compute-0 openstack_network_exporter[159287]: ERROR   09:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:32:31 compute-0 openstack_network_exporter[159287]: ERROR   09:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:32:31 compute-0 lvm[202838]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  3 09:32:31 compute-0 lvm[202838]: VG ceph_vg1 finished
Oct  3 09:32:31 compute-0 serene_ellis[201829]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct  3 09:32:31 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct  3 09:32:31 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  3 09:32:31 compute-0 serene_ellis[201829]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:31 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct  3 09:32:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  3 09:32:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2582969405' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  3 09:32:31 compute-0 serene_ellis[201829]: stderr: got monmap epoch 1
Oct  3 09:32:31 compute-0 serene_ellis[201829]: --> Creating keyring file for osd.1
Oct  3 09:32:32 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct  3 09:32:32 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct  3 09:32:32 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 16cef594-0067-4499-9298-5d83edf70190 --setuser ceph --setgroup ceph
Oct  3 09:32:32 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2927662005' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "16cef594-0067-4499-9298-5d83edf70190"}]: dispatch
Oct  3 09:32:32 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2927662005' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "16cef594-0067-4499-9298-5d83edf70190"}]': finished
Oct  3 09:32:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:34 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:32.115+0000 7f7c7eef1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:34 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:32.115+0000 7f7c7eef1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:34 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:32.116+0000 7f7c7eef1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:34 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:32.116+0000 7f7c7eef1740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct  3 09:32:34 compute-0 serene_ellis[201829]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  3 09:32:35 compute-0 serene_ellis[201829]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  3 09:32:35 compute-0 serene_ellis[201829]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0
Oct  3 09:32:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0"} v 0) v1
Oct  3 09:32:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1910004916' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0"}]: dispatch
Oct  3 09:32:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct  3 09:32:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:32:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1910004916' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0"}]': finished
Oct  3 09:32:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct  3 09:32:35 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct  3 09:32:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:32:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:32:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:32:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:32:35 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:35 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:32:35 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:32:35 compute-0 lvm[203789]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  3 09:32:35 compute-0 lvm[203789]: VG ceph_vg2 finished
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  3 09:32:35 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct  3 09:32:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  3 09:32:36 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3496692914' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  3 09:32:36 compute-0 serene_ellis[201829]: stderr: got monmap epoch 1
Oct  3 09:32:36 compute-0 serene_ellis[201829]: --> Creating keyring file for osd.2
Oct  3 09:32:36 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1910004916' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0"}]: dispatch
Oct  3 09:32:36 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1910004916' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0"}]': finished
Oct  3 09:32:36 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct  3 09:32:36 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct  3 09:32:36 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0 --setuser ceph --setgroup ceph
Oct  3 09:32:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:38 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:36.328+0000 7f0555841740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:38 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:36.328+0000 7f0555841740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:38 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:36.328+0000 7f0555841740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  3 09:32:38 compute-0 serene_ellis[201829]: stderr: 2025-10-03T09:32:36.329+0000 7f0555841740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct  3 09:32:38 compute-0 serene_ellis[201829]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct  3 09:32:38 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  3 09:32:38 compute-0 serene_ellis[201829]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  3 09:32:38 compute-0 serene_ellis[201829]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  3 09:32:39 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  3 09:32:39 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  3 09:32:39 compute-0 serene_ellis[201829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  3 09:32:39 compute-0 serene_ellis[201829]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  3 09:32:39 compute-0 serene_ellis[201829]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct  3 09:32:39 compute-0 systemd[1]: libpod-060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078.scope: Deactivated successfully.
Oct  3 09:32:39 compute-0 systemd[1]: libpod-060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078.scope: Consumed 7.316s CPU time.
Oct  3 09:32:39 compute-0 conmon[201829]: conmon 060967dab85bdae7152f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078.scope/container/memory.events
Oct  3 09:32:39 compute-0 podman[201812]: 2025-10-03 09:32:39.072271482 +0000 UTC m=+13.792185211 container died 060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c846e0c3e0833de106b7ded2858748d6d4602d0b8cfa73c9190ee7236e437de0-merged.mount: Deactivated successfully.
Oct  3 09:32:39 compute-0 podman[201812]: 2025-10-03 09:32:39.161605261 +0000 UTC m=+13.881518980 container remove 060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:32:39 compute-0 systemd[1]: libpod-conmon-060967dab85bdae7152ff2f1c0c74a0c0461b01126bfd28137c83d8a41571078.scope: Deactivated successfully.
Oct  3 09:32:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:39 compute-0 podman[204860]: 2025-10-03 09:32:39.921912174 +0000 UTC m=+0.053766956 container create bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haibt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:39 compute-0 systemd[1]: Started libpod-conmon-bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62.scope.
Oct  3 09:32:39 compute-0 podman[204860]: 2025-10-03 09:32:39.894033691 +0000 UTC m=+0.025888523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:40 compute-0 podman[204860]: 2025-10-03 09:32:40.086960921 +0000 UTC m=+0.218815723 container init bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 09:32:40 compute-0 podman[204860]: 2025-10-03 09:32:40.096152138 +0000 UTC m=+0.228006920 container start bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 09:32:40 compute-0 hopeful_haibt[204875]: 167 167
Oct  3 09:32:40 compute-0 systemd[1]: libpod-bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62.scope: Deactivated successfully.
Oct  3 09:32:40 compute-0 podman[204860]: 2025-10-03 09:32:40.158591575 +0000 UTC m=+0.290446407 container attach bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:40 compute-0 podman[204860]: 2025-10-03 09:32:40.159062959 +0000 UTC m=+0.290917791 container died bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-a63fbe3c6d52156bc7c70545d443204928a584ea0fddab86b4c5d8f90c159902-merged.mount: Deactivated successfully.
Oct  3 09:32:40 compute-0 podman[204860]: 2025-10-03 09:32:40.216226776 +0000 UTC m=+0.348081558 container remove bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haibt, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:40 compute-0 systemd[1]: libpod-conmon-bbf146a4bb500ffb6c8a99c0bff442d63c71785bbee975a4327e6bcf4b023f62.scope: Deactivated successfully.
Oct  3 09:32:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:40 compute-0 podman[204902]: 2025-10-03 09:32:40.435582304 +0000 UTC m=+0.094788105 container create a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 09:32:40 compute-0 podman[204902]: 2025-10-03 09:32:40.373664533 +0000 UTC m=+0.032870324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:40 compute-0 systemd[1]: Started libpod-conmon-a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d.scope.
Oct  3 09:32:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2604cb35f6ab8ac7c67bcdf02e15445d8aa0d5ba29de9198e170a541255a7d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2604cb35f6ab8ac7c67bcdf02e15445d8aa0d5ba29de9198e170a541255a7d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2604cb35f6ab8ac7c67bcdf02e15445d8aa0d5ba29de9198e170a541255a7d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2604cb35f6ab8ac7c67bcdf02e15445d8aa0d5ba29de9198e170a541255a7d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:40 compute-0 podman[204902]: 2025-10-03 09:32:40.670303886 +0000 UTC m=+0.329509687 container init a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 09:32:40 compute-0 podman[204902]: 2025-10-03 09:32:40.67869693 +0000 UTC m=+0.337902701 container start a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:40 compute-0 podman[204902]: 2025-10-03 09:32:40.697733865 +0000 UTC m=+0.356939666 container attach a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]: {
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:    "0": [
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:        {
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "devices": [
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "/dev/loop3"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            ],
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_name": "ceph_lv0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_size": "21470642176",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "name": "ceph_lv0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "tags": {
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cluster_name": "ceph",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.crush_device_class": "",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.encrypted": "0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osd_id": "0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.type": "block",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.vdo": "0"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            },
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "type": "block",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "vg_name": "ceph_vg0"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:        }
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:    ],
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:    "1": [
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:        {
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "devices": [
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "/dev/loop4"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            ],
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_name": "ceph_lv1",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_size": "21470642176",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "name": "ceph_lv1",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "tags": {
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cluster_name": "ceph",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.crush_device_class": "",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.encrypted": "0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osd_id": "1",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.type": "block",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.vdo": "0"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            },
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "type": "block",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "vg_name": "ceph_vg1"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:        }
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:    ],
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:    "2": [
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:        {
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "devices": [
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "/dev/loop5"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            ],
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_name": "ceph_lv2",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_size": "21470642176",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "name": "ceph_lv2",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "tags": {
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.cluster_name": "ceph",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.crush_device_class": "",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.encrypted": "0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osd_id": "2",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.type": "block",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:                "ceph.vdo": "0"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            },
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "type": "block",
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:            "vg_name": "ceph_vg2"
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:        }
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]:    ]
Oct  3 09:32:41 compute-0 amazing_rhodes[204918]: }
Oct  3 09:32:41 compute-0 systemd[1]: libpod-a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d.scope: Deactivated successfully.
Oct  3 09:32:41 compute-0 podman[204902]: 2025-10-03 09:32:41.472508565 +0000 UTC m=+1.131714376 container died a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2604cb35f6ab8ac7c67bcdf02e15445d8aa0d5ba29de9198e170a541255a7d0-merged.mount: Deactivated successfully.
Oct  3 09:32:41 compute-0 podman[204902]: 2025-10-03 09:32:41.743867414 +0000 UTC m=+1.403073185 container remove a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:32:41 compute-0 systemd[1]: libpod-conmon-a42cb713db13f8983445cc1cd884293cca0a2dfaa09506982da745d0a59afb1d.scope: Deactivated successfully.
Oct  3 09:32:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct  3 09:32:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  3 09:32:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:32:41 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:32:41 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct  3 09:32:41 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct  3 09:32:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:42 compute-0 podman[205074]: 2025-10-03 09:32:42.503336331 +0000 UTC m=+0.037562566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:42 compute-0 podman[205074]: 2025-10-03 09:32:42.725005919 +0000 UTC m=+0.259232134 container create 7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  3 09:32:42 compute-0 ceph-mon[191783]: Deploying daemon osd.0 on compute-0
Oct  3 09:32:42 compute-0 systemd[1]: Started libpod-conmon-7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1.scope.
Oct  3 09:32:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:43 compute-0 podman[205074]: 2025-10-03 09:32:43.094124122 +0000 UTC m=+0.628350417 container init 7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:32:43 compute-0 podman[205074]: 2025-10-03 09:32:43.112893429 +0000 UTC m=+0.647119684 container start 7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  3 09:32:43 compute-0 sweet_hertz[205090]: 167 167
Oct  3 09:32:43 compute-0 systemd[1]: libpod-7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1.scope: Deactivated successfully.
Oct  3 09:32:43 compute-0 podman[205074]: 2025-10-03 09:32:43.16585769 +0000 UTC m=+0.700083935 container attach 7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 09:32:43 compute-0 podman[205074]: 2025-10-03 09:32:43.166987613 +0000 UTC m=+0.701213838 container died 7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0573afb8dc2452c169b35b80f2ce9b80d170b88358149c11c813581c717f133-merged.mount: Deactivated successfully.
Oct  3 09:32:43 compute-0 podman[205074]: 2025-10-03 09:32:43.562171633 +0000 UTC m=+1.096397858 container remove 7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:32:43 compute-0 systemd[1]: libpod-conmon-7123c4087b5dd8135bfaf2dadd6f744336f5b0b742e572bce747aab59346efb1.scope: Deactivated successfully.
Oct  3 09:32:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:43 compute-0 podman[205121]: 2025-10-03 09:32:43.931761521 +0000 UTC m=+0.061659204 container create f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct  3 09:32:44 compute-0 podman[205121]: 2025-10-03 09:32:43.908673763 +0000 UTC m=+0.038571446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:44 compute-0 systemd[1]: Started libpod-conmon-f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb.scope.
Oct  3 09:32:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2bc2a6dfeed9665fb988c0ae65e80dceca9d0d97a5f476439f22f2757ca484/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2bc2a6dfeed9665fb988c0ae65e80dceca9d0d97a5f476439f22f2757ca484/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2bc2a6dfeed9665fb988c0ae65e80dceca9d0d97a5f476439f22f2757ca484/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2bc2a6dfeed9665fb988c0ae65e80dceca9d0d97a5f476439f22f2757ca484/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c2bc2a6dfeed9665fb988c0ae65e80dceca9d0d97a5f476439f22f2757ca484/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:44 compute-0 podman[205121]: 2025-10-03 09:32:44.11376492 +0000 UTC m=+0.243662613 container init f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:44 compute-0 podman[205121]: 2025-10-03 09:32:44.128389882 +0000 UTC m=+0.258287565 container start f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 09:32:44 compute-0 podman[205121]: 2025-10-03 09:32:44.175042531 +0000 UTC m=+0.304940214 container attach f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 09:32:44 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test[205137]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  3 09:32:44 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test[205137]:                            [--no-systemd] [--no-tmpfs]
Oct  3 09:32:44 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test[205137]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  3 09:32:44 compute-0 systemd[1]: libpod-f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb.scope: Deactivated successfully.
Oct  3 09:32:44 compute-0 podman[205121]: 2025-10-03 09:32:44.843664323 +0000 UTC m=+0.973561986 container died f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 09:32:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c2bc2a6dfeed9665fb988c0ae65e80dceca9d0d97a5f476439f22f2757ca484-merged.mount: Deactivated successfully.
Oct  3 09:32:45 compute-0 podman[205121]: 2025-10-03 09:32:45.313955363 +0000 UTC m=+1.443853026 container remove f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 09:32:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:45 compute-0 systemd[1]: libpod-conmon-f27a5981c4f06b1b714876417df07885497a6de734fa81406f102387bc8796fb.scope: Deactivated successfully.
Oct  3 09:32:45 compute-0 systemd[1]: Reloading.
Oct  3 09:32:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:32:45
Oct  3 09:32:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:32:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:32:45 compute-0 ceph-mgr[192071]: [balancer INFO root] No pools available
Oct  3 09:32:45 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:45 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:32:46 compute-0 systemd[1]: Reloading.
Oct  3 09:32:46 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:46 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:46 compute-0 systemd[1]: Starting Ceph osd.0 for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:32:46 compute-0 podman[205248]: 2025-10-03 09:32:46.863831923 +0000 UTC m=+0.096869348 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 09:32:46 compute-0 podman[205250]: 2025-10-03 09:32:46.864581066 +0000 UTC m=+0.094714134 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 09:32:46 compute-0 podman[205247]: 2025-10-03 09:32:46.86542455 +0000 UTC m=+0.100445415 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9)
Oct  3 09:32:46 compute-0 podman[205249]: 2025-10-03 09:32:46.895584773 +0000 UTC m=+0.130222907 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 09:32:47 compute-0 podman[205365]: 2025-10-03 09:32:47.059452923 +0000 UTC m=+0.040337110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:47 compute-0 podman[205365]: 2025-10-03 09:32:47.181022207 +0000 UTC m=+0.161906354 container create bdfaba501d00de31219945a1db2a4eaa30a03ff84726a2b2b215d55b36c1f7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:32:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a939488fa774eb2dbea17711fff17b63926b52b1c70c7369a26c2cfc15ba3da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a939488fa774eb2dbea17711fff17b63926b52b1c70c7369a26c2cfc15ba3da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a939488fa774eb2dbea17711fff17b63926b52b1c70c7369a26c2cfc15ba3da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a939488fa774eb2dbea17711fff17b63926b52b1c70c7369a26c2cfc15ba3da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a939488fa774eb2dbea17711fff17b63926b52b1c70c7369a26c2cfc15ba3da/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:47 compute-0 podman[205365]: 2025-10-03 09:32:47.642998067 +0000 UTC m=+0.623882204 container init bdfaba501d00de31219945a1db2a4eaa30a03ff84726a2b2b215d55b36c1f7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:47 compute-0 podman[205365]: 2025-10-03 09:32:47.651827253 +0000 UTC m=+0.632711360 container start bdfaba501d00de31219945a1db2a4eaa30a03ff84726a2b2b215d55b36c1f7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:32:47 compute-0 podman[205365]: 2025-10-03 09:32:47.72320432 +0000 UTC m=+0.704088437 container attach bdfaba501d00de31219945a1db2a4eaa30a03ff84726a2b2b215d55b36c1f7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 09:32:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:48 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate[205381]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  3 09:32:48 compute-0 bash[205365]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  3 09:32:48 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate[205381]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  3 09:32:48 compute-0 bash[205365]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  3 09:32:48 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate[205381]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  3 09:32:48 compute-0 bash[205365]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  3 09:32:48 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate[205381]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  3 09:32:48 compute-0 bash[205365]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  3 09:32:48 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate[205381]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:48 compute-0 bash[205365]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:48 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate[205381]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  3 09:32:48 compute-0 bash[205365]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  3 09:32:48 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate[205381]: --> ceph-volume raw activate successful for osd ID: 0
Oct  3 09:32:48 compute-0 bash[205365]: --> ceph-volume raw activate successful for osd ID: 0
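
The paired "Running command" lines record the exact steps ceph-volume raw activate performs before the OSD can start. A rough replay under the same paths as this log, for illustration only; in practice ceph-volume itself runs these:

    import subprocess

    OSD_DIR = "/var/lib/ceph/osd/ceph-0"           # paths from the log above
    LV = "/dev/mapper/ceph_vg0-ceph_lv0"

    # The sequence ceph-volume logged: own the OSD dir, prime it from the
    # BlueStore device metadata, fix device-node ownership, link the block device.
    for cmd in (
        ["chown", "-R", "ceph:ceph", OSD_DIR],
        ["ceph-bluestore-tool", "prime-osd-dir", "--path", OSD_DIR,
         "--no-mon-config", "--dev", LV],
        ["chown", "-h", "ceph:ceph", LV],
        ["chown", "-R", "ceph:ceph", "/dev/dm-0"],
        ["ln", "-s", LV, OSD_DIR + "/block"],
        ["chown", "-R", "ceph:ceph", OSD_DIR],
    ):
        subprocess.run(cmd, check=True)
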
Oct  3 09:32:48 compute-0 systemd[1]: libpod-bdfaba501d00de31219945a1db2a4eaa30a03ff84726a2b2b215d55b36c1f7b6.scope: Deactivated successfully.
Oct  3 09:32:48 compute-0 podman[205365]: 2025-10-03 09:32:48.843141919 +0000 UTC m=+1.824026026 container died bdfaba501d00de31219945a1db2a4eaa30a03ff84726a2b2b215d55b36c1f7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:48 compute-0 systemd[1]: libpod-bdfaba501d00de31219945a1db2a4eaa30a03ff84726a2b2b215d55b36c1f7b6.scope: Consumed 1.192s CPU time.
Oct  3 09:32:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a939488fa774eb2dbea17711fff17b63926b52b1c70c7369a26c2cfc15ba3da-merged.mount: Deactivated successfully.
Oct  3 09:32:48 compute-0 podman[205365]: 2025-10-03 09:32:48.980141448 +0000 UTC m=+1.961025555 container remove bdfaba501d00de31219945a1db2a4eaa30a03ff84726a2b2b215d55b36c1f7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0-activate, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
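
From 09:32:47.18 (create) to 09:32:48.98 (remove) the activate container runs through the full podman event sequence. A small ordering check, with the event names exactly as they appear in these lines:

    # One-shot container lifecycle as recorded above for bdfaba50...
    EXPECTED = ["create", "init", "start", "attach", "died", "remove"]

    def lifecycle_ok(events):
        """events: podman event names for one container, in journal order."""
        seen = [e for e in events if e in EXPECTED]
        return seen == EXPECTED

    assert lifecycle_ok(["create", "init", "start", "attach", "died", "remove"])
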
Oct  3 09:32:49 compute-0 podman[205564]: 2025-10-03 09:32:49.26051983 +0000 UTC m=+0.057307633 container create afc3a3d8841233f2892d479479bed9c06be82d26de95f639532e23b83a5d3bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada71e0d3064e2c3e57e997fa8df2043264932a899eef0862504de2ec9c6526/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada71e0d3064e2c3e57e997fa8df2043264932a899eef0862504de2ec9c6526/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada71e0d3064e2c3e57e997fa8df2043264932a899eef0862504de2ec9c6526/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada71e0d3064e2c3e57e997fa8df2043264932a899eef0862504de2ec9c6526/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bada71e0d3064e2c3e57e997fa8df2043264932a899eef0862504de2ec9c6526/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:49 compute-0 podman[205564]: 2025-10-03 09:32:49.238055321 +0000 UTC m=+0.034843154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:49 compute-0 podman[205564]: 2025-10-03 09:32:49.342437885 +0000 UTC m=+0.139225708 container init afc3a3d8841233f2892d479479bed9c06be82d26de95f639532e23b83a5d3bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:49 compute-0 podman[205564]: 2025-10-03 09:32:49.354715536 +0000 UTC m=+0.151503339 container start afc3a3d8841233f2892d479479bed9c06be82d26de95f639532e23b83a5d3bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 09:32:49 compute-0 bash[205564]: afc3a3d8841233f2892d479479bed9c06be82d26de95f639532e23b83a5d3bfd
Oct  3 09:32:49 compute-0 systemd[1]: Started Ceph osd.0 for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:32:49 compute-0 ceph-osd[205584]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:32:49 compute-0 ceph-osd[205584]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  3 09:32:49 compute-0 ceph-osd[205584]: pidfile_write: ignore empty --pid-file
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1bc3d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1bc3d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1bc3d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1ca7f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1ca7f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1ca7f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1ca7f800 /var/lib/ceph/osd/ceph-0/block) close
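
The bdev and cache numbers above are internally consistent: 0x4ffc00000 bytes is 20 GiB less a 4 MiB tail, and the _set_cache_sizes ratios partition the 1 GiB cache exactly. A quick check:

    size = 0x4ffc00000                      # open size from the bdev lines
    assert size == 21470642176
    assert size == 20 * 2**30 - 4 * 2**20   # 20 GiB minus 4 MiB
    assert size % 4096 == 0                 # whole 4 KiB blocks

    ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}
    assert abs(sum(ratios.values()) - 1.0) < 1e-9   # ratios sum to 1.0
    assert 1073741824 == 2**30                      # cache_size is 1 GiB
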
Oct  3 09:32:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct  3 09:32:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  3 09:32:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:32:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:32:49 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct  3 09:32:49 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1bc3d800 /var/lib/ceph/osd/ceph-0/block) close
Oct  3 09:32:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:49 compute-0 ceph-osd[205584]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  3 09:32:49 compute-0 ceph-osd[205584]: load: jerasure load: lrc 
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:32:49 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  3 09:32:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:50 compute-0 podman[205748]: 2025-10-03 09:32:50.299756031 +0000 UTC m=+0.033589806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:50 compute-0 podman[205748]: 2025-10-03 09:32:50.425993276 +0000 UTC m=+0.159827041 container create fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shtern, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  3 09:32:50 compute-0 ceph-osd[205584]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
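
The mClock figures line up as well: 157286400 bytes/s is 150 MiB/s per shard, and dividing it by the per-IO cost gives roughly 315 IOPS, consistent with Ceph's default capacity assumption for rotational devices (an inference from these two numbers, not something the log states):

    bandwidth = 157286400.0     # osd_bandwidth_capacity_per_shard (bytes/s)
    cost_per_io = 499321.90     # osd_bandwidth_cost_per_io (bytes/io)

    assert bandwidth == 150 * 2**20           # 150 MiB/s
    print(round(bandwidth / cost_per_io))     # 315 IOPS implied
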
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be06c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be07400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be07400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be07400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs mount
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs mount shared_bdev_used = 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
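
_prepare_db_environment budgets both db and db.slow at 20397110067 bytes, which is exactly 95% of the 21470642176-byte device, floored; the 95% factor is an observation from these two numbers:

    device = 21470642176        # bdev open size from the log
    db_budget = 20397110067     # db / db.slow budget set above

    assert db_budget == device * 95 // 100   # exactly 95%, floored
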
Oct  3 09:32:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  3 09:32:50 compute-0 ceph-mon[191783]: Deploying daemon osd.1 on compute-0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: RocksDB version: 7.9.2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Git sha 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: DB SUMMARY
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: DB Session ID:  EP76ECO123SXP8MNP8F2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: CURRENT file:  CURRENT
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: IDENTITY file:  IDENTITY
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                         Options.error_if_exists: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.create_if_missing: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                         Options.paranoid_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                                     Options.env: 0x561d1cad1d50
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                                Options.info_log: 0x561d1bcc8800
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_file_opening_threads: 16
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                              Options.statistics: (nil)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.use_fsync: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.max_log_file_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                         Options.allow_fallocate: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.use_direct_reads: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.create_missing_column_families: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                              Options.db_log_dir: 
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                                 Options.wal_dir: db.wal
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.advise_random_on_open: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.write_buffer_manager: 0x561d1cbba460
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                            Options.rate_limiter: (nil)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.unordered_write: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.row_cache: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                              Options.wal_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.allow_ingest_behind: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.two_write_queues: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.manual_wal_flush: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.wal_compression: 0
Oct  3 09:32:50 compute-0 systemd[1]: Started libpod-conmon-fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa.scope.
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.atomic_flush: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.log_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.allow_data_in_errors: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.db_host_id: __hostname__
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.max_background_jobs: 4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.max_background_compactions: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.max_subcompactions: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.max_open_files: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.bytes_per_sync: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.max_background_flushes: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Compression algorithms supported:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: 	kZSTD supported: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: 	kXpressCompression supported: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: 	kBZip2Compression supported: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: 	kLZ4Compression supported: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: 	kZlibCompression supported: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: 	kSnappyCompression supported: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e80)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   cache_index_and_filter_blocks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   pin_top_level_index_and_filter: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   index_type: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   data_block_index_type: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   index_shortening: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   checksum: 4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   no_block_cache: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_cache: 0x561d1bcb0dd0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_cache_name: BinnedLRUCache
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_cache_options:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     capacity : 483183820
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     num_shard_bits : 4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     strict_capacity_limit : 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     high_pri_pool_ratio: 0.000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_cache_compressed: (nil)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   persistent_cache: (nil)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_size: 4096
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_size_deviation: 10
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_restart_interval: 16
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   index_block_restart_interval: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   metadata_block_size: 4096
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   partition_filters: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   use_delta_encoding: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   filter_policy: bloomfilter
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   whole_key_filtering: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   verify_compression: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   read_amp_bytes_per_bit: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   format_version: 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   enable_index_compression: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_align: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   max_auto_readahead_size: 262144
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   prepopulate_block_cache: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   initial_auto_readahead_size: 8192
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
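
With max_bytes_for_level_base = 1 GiB, multiplier 8, and seven levels (dynamic level bytes disabled), the per-level targets implied by the [default] column-family options above grow geometrically; tabulated:

    base, mult, levels = 2**30, 8, 7
    for lvl in range(1, levels):
        print(f"L{lvl}: {base * mult ** (lvl - 1) // 2**30} GiB")
    # L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 GiB
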
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e80)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   cache_index_and_filter_blocks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   pin_top_level_index_and_filter: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   index_type: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   data_block_index_type: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   index_shortening: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   checksum: 4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   no_block_cache: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_cache: 0x561d1bcb0dd0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_cache_name: BinnedLRUCache
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_cache_options:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     capacity : 483183820
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     num_shard_bits : 4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     strict_capacity_limit : 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     high_pri_pool_ratio: 0.000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_cache_compressed: (nil)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   persistent_cache: (nil)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_size: 4096
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_size_deviation: 10
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_restart_interval: 16
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   index_block_restart_interval: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   metadata_block_size: 4096
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   partition_filters: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   use_delta_encoding: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   filter_policy: bloomfilter
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   whole_key_filtering: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   verify_compression: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   read_amp_bytes_per_bit: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   format_version: 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   enable_index_compression: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   block_align: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   max_auto_readahead_size: 262144
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   prepopulate_block_cache: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   initial_auto_readahead_size: 8192
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8e60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:635]   (skipping printing options)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:635]   (skipping printing options)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d6b03e0d-ec9c-4190-aaae-ed44233e7d9f
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483970560723, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483970561157, "job": 1, "event": "recovery_finished"}
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: freelist init
Oct  3 09:32:50 compute-0 ceph-osd[205584]: freelist _read_cfg
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs umount
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be07400 /var/lib/ceph/osd/ceph-0/block) close
Oct  3 09:32:50 compute-0 podman[205748]: 2025-10-03 09:32:50.59263577 +0000 UTC m=+0.326469545 container init fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:50 compute-0 podman[205748]: 2025-10-03 09:32:50.604290322 +0000 UTC m=+0.338124077 container start fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:32:50 compute-0 podman[205748]: 2025-10-03 09:32:50.609047685 +0000 UTC m=+0.342881460 container attach fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:32:50 compute-0 frosty_shtern[205776]: 167 167
Oct  3 09:32:50 compute-0 systemd[1]: libpod-fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa.scope: Deactivated successfully.
Oct  3 09:32:50 compute-0 podman[205748]: 2025-10-03 09:32:50.61184653 +0000 UTC m=+0.345680285 container died fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  3 09:32:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-889a4fa6dcd5a7573c8a405020ddf3c2bd766a98c2a823a05c15effa50ddba1f-merged.mount: Deactivated successfully.
Oct  3 09:32:50 compute-0 podman[205748]: 2025-10-03 09:32:50.726468963 +0000 UTC m=+0.460302718 container remove fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shtern, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:50 compute-0 systemd[1]: libpod-conmon-fc90613a4e6e052a6056ecf40825a3a31d3f2749fb7e1420569b13a25e4e18fa.scope: Deactivated successfully.
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be07400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be07400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bdev(0x561d1be07400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs mount
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluefs mount shared_bdev_used = 4718592
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: RocksDB version: 7.9.2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Git sha 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: DB SUMMARY
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: DB Session ID:  EP76ECO123SXP8MNP8F3
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: CURRENT file:  CURRENT
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: IDENTITY file:  IDENTITY
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                         Options.error_if_exists: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.create_if_missing: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                         Options.paranoid_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                                     Options.env: 0x561d1cc6eb60
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                                Options.info_log: 0x561d1bcc91c0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_file_opening_threads: 16
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                              Options.statistics: (nil)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.use_fsync: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.max_log_file_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                         Options.allow_fallocate: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.use_direct_reads: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.create_missing_column_families: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                              Options.db_log_dir: 
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                                 Options.wal_dir: db.wal
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.advise_random_on_open: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.write_buffer_manager: 0x561d1cbbaa00
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                            Options.rate_limiter: (nil)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.unordered_write: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.row_cache: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                              Options.wal_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.allow_ingest_behind: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.two_write_queues: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.manual_wal_flush: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.wal_compression: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.atomic_flush: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.log_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.allow_data_in_errors: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.db_host_id: __hostname__
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.max_background_jobs: 4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.max_background_compactions: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.max_subcompactions: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.max_open_files: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.bytes_per_sync: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.max_background_flushes: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Compression algorithms supported:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   kZSTD supported: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   kXpressCompression supported: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   kBZip2Compression supported: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   kZSTDNotFinalCompression supported: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   kLZ4Compression supported: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   kZlibCompression supported: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   kLZ4HCCompression supported: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   kSnappyCompression supported: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc89c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
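Each column family that follows repeats essentially this profile: 16 MiB memtables, up to 64 of them with 6 merged per flush, LZ4 compression on every level, and a 7-level leveled-compaction tree. A minimal sketch of the equivalent stock rocksdb::ColumnFamilyOptions (Ceph's ".T:int64_array.b:bitwise_xor" merge operator is internal to Ceph and omitted here):

    // Sketch only: the per-column-family knobs logged for [default],
    // expressed as plain rocksdb::ColumnFamilyOptions.
    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions MakeOsdLikeCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;            // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;  // bottommost: disabled
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;        // 67108864
      cf.max_bytes_for_level_base = 1ull << 30;   // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.ttl = 2592000;                           // 30 days
      return cf;
    }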
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc89c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
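The manifest recovery at the top of this dump walks each of these column families in turn (default, then the m-* and p-* shards). Opening such a database offline requires naming every column family explicitly; a minimal read-only sketch with a placeholder path (on a live OSD the supported route is ceph-kvstore-tool / ceph-bluestore-tool, not a direct open):

    // Sketch only: list and open all column families of a RocksDB instance
    // read-only. The path is a placeholder, not a real BlueStore DB.
    #include <rocksdb/db.h>
    #include <string>
    #include <vector>

    int main() {
      rocksdb::DBOptions db_opts;
      std::vector<std::string> names;
      rocksdb::Status s =
          rocksdb::DB::ListColumnFamilies(db_opts, "/tmp/osd-kv-sketch", &names);
      if (!s.ok()) return 1;

      std::vector<rocksdb::ColumnFamilyDescriptor> descs;
      for (const std::string& n : names) {
        descs.emplace_back(n, rocksdb::ColumnFamilyOptions());
      }
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      s = rocksdb::DB::OpenForReadOnly(db_opts, "/tmp/osd-kv-sketch", descs,
                                       &handles, &db);
      if (!s.ok()) return 1;
      for (rocksdb::ColumnFamilyHandle* h : handles) {
        db->DestroyColumnFamilyHandle(h);
      }
      delete db;
      return 0;
    }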
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc89c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc89c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
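With level_compaction_dynamic_level_bytes switched off, the logged base and multiplier fix the per-level capacity targets directly: L1 holds max_bytes_for_level_base = 1 GiB and each deeper level is 8 times larger, so L2 targets 8 GiB, L3 64 GiB, and so on through num_levels = 7. A minimal sketch of that arithmetic:

    // Sketch only: level capacity targets implied by the values logged above
    // (non-dynamic leveling: the base applies to L1, then multiply by 8).
    #include <cstdint>
    #include <cstdio>

    int main() {
      std::uint64_t target = 1ull << 30;         // max_bytes_for_level_base
      for (int level = 1; level < 7; ++level) {  // num_levels = 7, L1..L6
        std::printf("L%d target: %llu bytes\n", level,
                    static_cast<unsigned long long>(target));
        target *= 8;  // max_bytes_for_level_multiplier
      }
      return 0;
    }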
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc89c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
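All of the blob (integrated BlobDB) settings above are disabled or at their defaults; they correspond one-to-one to AdvancedColumnFamilyOptions fields. A sketch restating them through the stock API (function name illustrative):

    #include <rocksdb/options.h>

    // Blob settings as logged; enable_blob_files is false, so the
    // remaining values are inert defaults.
    void ApplyLoggedBlobOpts(rocksdb::ColumnFamilyOptions& cf) {
      cf.enable_blob_files = false;
      cf.min_blob_size = 0;
      cf.blob_file_size = 256ULL << 20;                 // 268435456
      cf.blob_compression_type = rocksdb::kNoCompression;
      cf.enable_blob_garbage_collection = false;
      cf.blob_garbage_collection_age_cutoff = 0.25;
      cf.blob_garbage_collection_force_threshold = 1.0;
      cf.blob_compaction_readahead_size = 0;
      cf.blob_file_starting_level = 0;
    }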
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc89c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
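The block-based table dump above shows 4 KiB blocks, format_version 5, a whole-key bloom filter, and index/filter blocks cached alongside data in a ~461 MiB, 16-shard cache. BinnedLRUCache is Ceph's own cache implementation; the sketch below substitutes stock RocksDB's NewLRUCache, and the bloom bits-per-key (10) is an assumption since the log only says "bloomfilter":

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Rebuild the logged BlockBasedTable configuration.
    void ApplyLoggedTableOpts(rocksdb::ColumnFamilyOptions& cf) {
      rocksdb::BlockBasedTableOptions t;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.block_size = 4096;
      t.metadata_block_size = 4096;
      t.format_version = 5;
      t.whole_key_filtering = true;
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // assumed bits/key
      t.block_cache = rocksdb::NewLRUCache(/*capacity=*/483183820,
                                           /*num_shard_bits=*/4);
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
    }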
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
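The write path above uses 16 MiB memtables, allows up to 64 of them in memory, and merges six per flush, so each L0 file is written from roughly 96 MiB of memtable data. As option fields (function name illustrative):

    #include <rocksdb/options.h>

    // Memtable sizing as logged: 16 MiB buffers, up to 64 resident,
    // six merged per flush (~96 MiB per flushed L0 file).
    void ApplyLoggedMemtableOpts(rocksdb::ColumnFamilyOptions& cf) {
      cf.write_buffer_size = 16ULL << 20;   // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
    }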
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
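Compression above is LZ4 on every level with the bottommost-level override disabled; level 32767 is RocksDB's sentinel for "use the compression library's default" (CompressionOptions::kDefaultCompressionLevel). A sketch (function name illustrative):

    #include <rocksdb/options.h>

    // Compression as logged: LZ4 everywhere, no bottommost override.
    void ApplyLoggedCompression(rocksdb::ColumnFamilyOptions& cf) {
      cf.compression = rocksdb::kLZ4Compression;
      cf.bottommost_compression = rocksdb::kDisableCompressionOption;
      cf.compression_opts.window_bits = -14;
      cf.compression_opts.max_dict_bytes = 0;
      cf.compression_opts.parallel_threads = 1;
    }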
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
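The level-compaction geometry above: eight L0 files trigger a compaction, writes are slowed at 20 L0 files and stalled at 36; target SST size is 64 MiB, L1 holds up to 1 GiB, and each deeper level is 8x larger (so L2 tops out near 8 GiB, L3 near 64 GiB, with dynamic level sizing off). As option fields:

    #include <rocksdb/options.h>

    // Level-style compaction geometry as logged.
    void ApplyLoggedLevelCompaction(rocksdb::ColumnFamilyOptions& cf) {
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64ULL << 20;    // 67108864
      cf.max_bytes_for_level_base = 1ULL << 30;  // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.level_compaction_dynamic_level_bytes = false;
    }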
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc89c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561d1bcc8f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561d1bcb0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d6b03e0d-ec9c-4190-aaae-ed44233e7d9f
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483970808064, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483970812989, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483970, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d6b03e0d-ec9c-4190-aaae-ed44233e7d9f", "db_session_id": "EP76ECO123SXP8MNP8F3", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483970816804, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483970, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d6b03e0d-ec9c-4190-aaae-ed44233e7d9f", "db_session_id": "EP76ECO123SXP8MNP8F3", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483970820666, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483970, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d6b03e0d-ec9c-4190-aaae-ed44233e7d9f", "db_session_id": "EP76ECO123SXP8MNP8F3", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483970822918, "job": 1, "event": "recovery_finished"}
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561d1cc8c000
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: DB pointer 0x561d1bceba00
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
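    [annotation, not part of the captured log] The bluestore line above carries the effective RocksDB option string as a single comma-separated key=value list. A minimal Python sketch (variable names are ours) for splitting that string into a dict when auditing an OSD's settings:

        # Option string copied verbatim from the _open_db log record above.
        opts_str = (
            "compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
        )
        # Each token is key=value; split on the first '=' only, since no value
        # in this string contains '='.
        options = dict(token.split("=", 1) for token in opts_str.split(","))
        print(options["write_buffer_size"])  # -> 16777216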
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  3 09:32:50 compute-0 ceph-osd[205584]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d1bcb0dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d1bcb0dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Oct  3 09:32:50 compute-0 ceph-osd[205584]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  3 09:32:50 compute-0 ceph-osd[205584]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  3 09:32:50 compute-0 ceph-osd[205584]: _get_class not permitted to load lua
Oct  3 09:32:50 compute-0 ceph-osd[205584]: _get_class not permitted to load sdk
Oct  3 09:32:50 compute-0 ceph-osd[205584]: _get_class not permitted to load test_remote_reads
Oct  3 09:32:50 compute-0 ceph-osd[205584]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  3 09:32:50 compute-0 ceph-osd[205584]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  3 09:32:50 compute-0 ceph-osd[205584]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  3 09:32:50 compute-0 ceph-osd[205584]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  3 09:32:50 compute-0 ceph-osd[205584]: osd.0 0 load_pgs
Oct  3 09:32:50 compute-0 ceph-osd[205584]: osd.0 0 load_pgs opened 0 pgs
Oct  3 09:32:50 compute-0 ceph-osd[205584]: osd.0 0 log_to_monitors true
Oct  3 09:32:50 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0[205580]: 2025-10-03T09:32:50.967+0000 7f0cfdeab740 -1 osd.0 0 log_to_monitors true
Oct  3 09:32:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct  3 09:32:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  3 09:32:51 compute-0 podman[206202]: 2025-10-03 09:32:51.131489102 +0000 UTC m=+0.040793705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:51 compute-0 podman[206202]: 2025-10-03 09:32:51.247827326 +0000 UTC m=+0.157131919 container create 22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:51 compute-0 systemd[1]: Started libpod-conmon-22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493.scope.
Oct  3 09:32:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3a011c7b720aeba73d8e5dbf369b7579f3dbba69bd0b791fc23608b677ed8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3a011c7b720aeba73d8e5dbf369b7579f3dbba69bd0b791fc23608b677ed8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3a011c7b720aeba73d8e5dbf369b7579f3dbba69bd0b791fc23608b677ed8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3a011c7b720aeba73d8e5dbf369b7579f3dbba69bd0b791fc23608b677ed8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d3a011c7b720aeba73d8e5dbf369b7579f3dbba69bd0b791fc23608b677ed8/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:51 compute-0 podman[206202]: 2025-10-03 09:32:51.434809126 +0000 UTC m=+0.344113739 container init 22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:51 compute-0 podman[206202]: 2025-10-03 09:32:51.459976786 +0000 UTC m=+0.369281379 container start 22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:32:51 compute-0 podman[206202]: 2025-10-03 09:32:51.514918347 +0000 UTC m=+0.424222930 container attach 22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct  3 09:32:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:32:51 compute-0 ceph-mon[191783]: from='osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  3 09:32:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  3 09:32:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct  3 09:32:51 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct  3 09:32:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  3 09:32:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  3 09:32:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  3 09:32:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:51 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:32:51 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:32:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:32:51 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:32:51 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:32:51 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:32:51 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:52 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  3 09:32:52 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  3 09:32:52 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test[206218]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  3 09:32:52 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test[206218]:                            [--no-systemd] [--no-tmpfs]
Oct  3 09:32:52 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test[206218]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  3 09:32:52 compute-0 systemd[1]: libpod-22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493.scope: Deactivated successfully.
Oct  3 09:32:52 compute-0 podman[206202]: 2025-10-03 09:32:52.187626222 +0000 UTC m=+1.096930805 container died 22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-92d3a011c7b720aeba73d8e5dbf369b7579f3dbba69bd0b791fc23608b677ed8-merged.mount: Deactivated successfully.
Oct  3 09:32:52 compute-0 podman[206202]: 2025-10-03 09:32:52.325146177 +0000 UTC m=+1.234450760 container remove 22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate-test, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:52 compute-0 systemd[1]: libpod-conmon-22d86dcf4af8b73acd38a93ed2777111f0de28f7add2e9b60ae61d14ef7d7493.scope: Deactivated successfully.
Oct  3 09:32:52 compute-0 systemd[1]: Reloading.
Oct  3 09:32:52 compute-0 python3[206271]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:32:52 compute-0 podman[206283]: 2025-10-03 09:32:52.783572849 +0000 UTC m=+0.053169938 container create 035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea (image=quay.io/ceph/ceph:v18, name=wonderful_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:32:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct  3 09:32:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:32:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  3 09:32:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct  3 09:32:52 compute-0 ceph-osd[205584]: osd.0 0 done with init, starting boot process
Oct  3 09:32:52 compute-0 ceph-osd[205584]: osd.0 0 start_boot
Oct  3 09:32:52 compute-0 ceph-osd[205584]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  3 09:32:52 compute-0 ceph-osd[205584]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  3 09:32:52 compute-0 ceph-osd[205584]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  3 09:32:52 compute-0 ceph-osd[205584]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  3 09:32:52 compute-0 ceph-osd[205584]: osd.0 0  bench count 12288000 bsize 4 KiB
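    [annotation, not part of the captured log] The boot-time bench record above reports a total byte count and a block size. A quick Python check of the implied number of writes (a sketch, assuming count is in bytes as the bsize unit suggests):

        count_bytes = 12288000     # "bench count" from the log record above
        bsize_bytes = 4 * 1024     # "bsize 4 KiB"
        print(count_bytes // bsize_bytes)  # -> 3000 individual 4 KiB writes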
Oct  3 09:32:52 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct  3 09:32:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:32:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:32:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:32:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:32:52 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:52 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:32:52 compute-0 ceph-mon[191783]: from='osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  3 09:32:52 compute-0 ceph-mon[191783]: from='osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  3 09:32:52 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:32:52 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3346435816; not ready for session (expect reconnect)
Oct  3 09:32:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:52 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:52 compute-0 podman[206283]: 2025-10-03 09:32:52.76145604 +0000 UTC m=+0.031053159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:32:52 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:52 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:53 compute-0 systemd[1]: Started libpod-conmon-035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea.scope.
Oct  3 09:32:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5c07b520c0e98c70b6a8b93c474c8a82e8aca0ad30361b44fa7c1fc55c65ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5c07b520c0e98c70b6a8b93c474c8a82e8aca0ad30361b44fa7c1fc55c65ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5c07b520c0e98c70b6a8b93c474c8a82e8aca0ad30361b44fa7c1fc55c65ef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:53 compute-0 systemd[1]: Reloading.
Oct  3 09:32:53 compute-0 podman[206283]: 2025-10-03 09:32:53.243303679 +0000 UTC m=+0.512900788 container init 035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea (image=quay.io/ceph/ceph:v18, name=wonderful_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 09:32:53 compute-0 podman[206283]: 2025-10-03 09:32:53.253073355 +0000 UTC m=+0.522670444 container start 035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea (image=quay.io/ceph/ceph:v18, name=wonderful_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:32:53 compute-0 podman[206283]: 2025-10-03 09:32:53.266597564 +0000 UTC m=+0.536194653 container attach 035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea (image=quay.io/ceph/ceph:v18, name=wonderful_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 09:32:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:53 compute-0 systemd[1]: Starting Ceph osd.1 for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:32:53 compute-0 podman[206397]: 2025-10-03 09:32:53.795504814 +0000 UTC m=+0.083944307 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6)
Oct  3 09:32:53 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3346435816; not ready for session (expect reconnect)
Oct  3 09:32:53 compute-0 podman[206395]: 2025-10-03 09:32:53.828997446 +0000 UTC m=+0.123860423 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible)
Oct  3 09:32:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:53 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:53 compute-0 ceph-mon[191783]: from='osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  3 09:32:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  3 09:32:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782255210' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  3 09:32:53 compute-0 wonderful_pasteur[206333]: 
Oct  3 09:32:53 compute-0 wonderful_pasteur[206333]: {"fsid":"9b4e8c9a-5555-5510-a631-4742a1182561","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":118,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1759483955,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-03T09:32:47.907561+0000","services":{}},"progress_events":{}}
Oct  3 09:32:53 compute-0 systemd[1]: libpod-035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea.scope: Deactivated successfully.
Oct  3 09:32:53 compute-0 podman[206283]: 2025-10-03 09:32:53.99259997 +0000 UTC m=+1.262197059 container died 035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea (image=quay.io/ceph/ceph:v18, name=wonderful_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:32:54 compute-0 podman[206480]: 2025-10-03 09:32:54.069866514 +0000 UTC m=+0.110507170 container create e46b6a04c72be561cb5cf9dbaa21ec2c3d6da71e276f8560546a0b6d935029af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:54 compute-0 podman[206480]: 2025-10-03 09:32:53.996098045 +0000 UTC m=+0.036738721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c5c07b520c0e98c70b6a8b93c474c8a82e8aca0ad30361b44fa7c1fc55c65ef-merged.mount: Deactivated successfully.
Oct  3 09:32:54 compute-0 podman[206283]: 2025-10-03 09:32:54.188448417 +0000 UTC m=+1.458045506 container remove 035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea (image=quay.io/ceph/ceph:v18, name=wonderful_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:32:54 compute-0 systemd[1]: libpod-conmon-035e83e1ed081299e47cc7e7709c480b8ed95856f6cdcc5b295cc9403a18e1ea.scope: Deactivated successfully.
Oct  3 09:32:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48624cb511985873b673d7c3d3e846b79676f693aae68ec587b526f1c1094d1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48624cb511985873b673d7c3d3e846b79676f693aae68ec587b526f1c1094d1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48624cb511985873b673d7c3d3e846b79676f693aae68ec587b526f1c1094d1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48624cb511985873b673d7c3d3e846b79676f693aae68ec587b526f1c1094d1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48624cb511985873b673d7c3d3e846b79676f693aae68ec587b526f1c1094d1d/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:54 compute-0 podman[206480]: 2025-10-03 09:32:54.328382665 +0000 UTC m=+0.369023321 container init e46b6a04c72be561cb5cf9dbaa21ec2c3d6da71e276f8560546a0b6d935029af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:32:54 compute-0 podman[206480]: 2025-10-03 09:32:54.342951335 +0000 UTC m=+0.383591991 container start e46b6a04c72be561cb5cf9dbaa21ec2c3d6da71e276f8560546a0b6d935029af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:54 compute-0 podman[206480]: 2025-10-03 09:32:54.359213207 +0000 UTC m=+0.399853893 container attach e46b6a04c72be561cb5cf9dbaa21ec2c3d6da71e276f8560546a0b6d935029af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:54 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3346435816; not ready for session (expect reconnect)
Oct  3 09:32:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:54 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:32:55 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate[206506]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  3 09:32:55 compute-0 bash[206480]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  3 09:32:55 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate[206506]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  3 09:32:55 compute-0 bash[206480]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  3 09:32:55 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate[206506]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  3 09:32:55 compute-0 bash[206480]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  3 09:32:55 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate[206506]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  3 09:32:55 compute-0 bash[206480]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  3 09:32:55 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate[206506]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:55 compute-0 bash[206480]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:55 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate[206506]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  3 09:32:55 compute-0 bash[206480]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  3 09:32:55 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate[206506]: --> ceph-volume raw activate successful for osd ID: 1
Oct  3 09:32:55 compute-0 bash[206480]: --> ceph-volume raw activate successful for osd ID: 1
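[note] The one-shot -activate container above replays ceph-volume's raw activation step by step; collected from the log lines, the sequence reduces to the following sketch (OSD id, paths, and the ceph_vg1/ceph_lv1 device are exactly as logged on this host, nothing added):
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
    ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
    chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
    chown -R ceph:ceph /dev/dm-1
    ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
prime-osd-dir repopulates the OSD data directory from the BlueStore device label, so once the block symlink is in place the activate container can exit and the long-lived osd.1 container started below takes over.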
Oct  3 09:32:55 compute-0 systemd[1]: libpod-e46b6a04c72be561cb5cf9dbaa21ec2c3d6da71e276f8560546a0b6d935029af.scope: Deactivated successfully.
Oct  3 09:32:55 compute-0 podman[206480]: 2025-10-03 09:32:55.463692478 +0000 UTC m=+1.504333134 container died e46b6a04c72be561cb5cf9dbaa21ec2c3d6da71e276f8560546a0b6d935029af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  3 09:32:55 compute-0 systemd[1]: libpod-e46b6a04c72be561cb5cf9dbaa21ec2c3d6da71e276f8560546a0b6d935029af.scope: Consumed 1.134s CPU time.
Oct  3 09:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-48624cb511985873b673d7c3d3e846b79676f693aae68ec587b526f1c1094d1d-merged.mount: Deactivated successfully.
Oct  3 09:32:55 compute-0 podman[206480]: 2025-10-03 09:32:55.579990873 +0000 UTC m=+1.620631529 container remove e46b6a04c72be561cb5cf9dbaa21ec2c3d6da71e276f8560546a0b6d935029af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 09:32:55 compute-0 podman[206640]: 2025-10-03 09:32:55.624867368 +0000 UTC m=+0.128801313 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:32:55 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3346435816; not ready for session (expect reconnect)
Oct  3 09:32:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:55 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:55 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:55 compute-0 podman[206715]: 2025-10-03 09:32:55.867051536 +0000 UTC m=+0.076081200 container create 04521b7b64d41e64c3dd60b460a5d86914b644b57558e40ef5a9235b1a612191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:32:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  3 09:32:55 compute-0 podman[206715]: 2025-10-03 09:32:55.822107198 +0000 UTC m=+0.031136882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1430aa9384ebbe018a0275eabcdecc8d2b2e4178c091ba65ee72988fe945825f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1430aa9384ebbe018a0275eabcdecc8d2b2e4178c091ba65ee72988fe945825f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1430aa9384ebbe018a0275eabcdecc8d2b2e4178c091ba65ee72988fe945825f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1430aa9384ebbe018a0275eabcdecc8d2b2e4178c091ba65ee72988fe945825f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1430aa9384ebbe018a0275eabcdecc8d2b2e4178c091ba65ee72988fe945825f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:55 compute-0 podman[206715]: 2025-10-03 09:32:55.967093778 +0000 UTC m=+0.176123462 container init 04521b7b64d41e64c3dd60b460a5d86914b644b57558e40ef5a9235b1a612191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:32:55 compute-0 podman[206715]: 2025-10-03 09:32:55.985216656 +0000 UTC m=+0.194246320 container start 04521b7b64d41e64c3dd60b460a5d86914b644b57558e40ef5a9235b1a612191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:55 compute-0 bash[206715]: 04521b7b64d41e64c3dd60b460a5d86914b644b57558e40ef5a9235b1a612191
Oct  3 09:32:56 compute-0 systemd[1]: Started Ceph osd.1 for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:32:56 compute-0 ceph-osd[206733]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:32:56 compute-0 ceph-osd[206733]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  3 09:32:56 compute-0 ceph-osd[206733]: pidfile_write: ignore empty --pid-file
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565066373800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565066373800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565066373800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x5650671b7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x5650671b7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x5650671b7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x5650671b7800 /var/lib/ceph/osd/ceph-1/block) close
Oct  3 09:32:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:32:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:32:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct  3 09:32:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  3 09:32:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:32:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:32:56 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct  3 09:32:56 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565066373800 /var/lib/ceph/osd/ceph-1/block) close
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.361 iops: 6748.514 elapsed_sec: 0.445
Oct  3 09:32:56 compute-0 ceph-osd[205584]: log_channel(cluster) log [WRN] : OSD bench result of 6748.514116 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
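[note] The bench figures on the previous line are self-consistent: 6748.514 IOPS at the 4 KiB bench block size is 6748.514 * 4096 bytes/s, which is the 26.361 MiB/s reported. Because the result falls outside the 50-500 IOPS sanity window, the mClock capacity stays at the 315 IOPS default. If a trusted external measurement (e.g. from fio, as the warning suggests) were available, the override it refers to would be applied with something like the following; the value shown is illustrative, not taken from this log:
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 315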
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 0 waiting for initial osdmap
Oct  3 09:32:56 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0[205580]: 2025-10-03T09:32:56.522+0000 7f0cfa642640 -1 osd.0 0 waiting for initial osdmap
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 8 set_numa_affinity not setting numa affinity
Oct  3 09:32:56 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-0[205580]: 2025-10-03T09:32:56.549+0000 7f0cf5453640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  3 09:32:56 compute-0 ceph-osd[205584]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct  3 09:32:56 compute-0 ceph-osd[206733]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct  3 09:32:56 compute-0 ceph-osd[206733]: load: jerasure load: lrc 
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  3 09:32:56 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3346435816; not ready for session (expect reconnect)
Oct  3 09:32:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:56 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  3 09:32:56 compute-0 podman[206890]: 2025-10-03 09:32:56.854645666 +0000 UTC m=+0.055050634 container create ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:32:56 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) close
Oct  3 09:32:56 compute-0 systemd[1]: Started libpod-conmon-ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3.scope.
Oct  3 09:32:56 compute-0 podman[206890]: 2025-10-03 09:32:56.837531539 +0000 UTC m=+0.037936537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:56 compute-0 podman[206890]: 2025-10-03 09:32:56.965114804 +0000 UTC m=+0.165519792 container init ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 09:32:56 compute-0 podman[206890]: 2025-10-03 09:32:56.976017913 +0000 UTC m=+0.176422881 container start ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:56 compute-0 podman[206890]: 2025-10-03 09:32:56.981043085 +0000 UTC m=+0.181448053 container attach ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 09:32:56 compute-0 gracious_feistel[206910]: 167 167
Oct  3 09:32:56 compute-0 systemd[1]: libpod-ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3.scope: Deactivated successfully.
Oct  3 09:32:56 compute-0 podman[206890]: 2025-10-03 09:32:56.983375085 +0000 UTC m=+0.183780053 container died ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-964e3325f2d72b8b2d4aed11f3bbc8697d77c9f069afa4f983f9f5213c2f71fb-merged.mount: Deactivated successfully.
Oct  3 09:32:57 compute-0 podman[206890]: 2025-10-03 09:32:57.04639946 +0000 UTC m=+0.246804458 container remove ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:57 compute-0 systemd[1]: libpod-conmon-ddfc90a4ef6dc874f0f8e65c2adad7546b76f74cb84cb2730061e83da02349d3.scope: Deactivated successfully.
Oct  3 09:32:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:32:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  3 09:32:57 compute-0 ceph-mon[191783]: Deploying daemon osd.2 on compute-0
Oct  3 09:32:57 compute-0 ceph-mon[191783]: OSD bench result of 6748.514116 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  3 09:32:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct  3 09:32:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:32:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Oct  3 09:32:57 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816] boot
Oct  3 09:32:57 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Oct  3 09:32:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  3 09:32:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  3 09:32:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:32:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:32:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:32:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:32:57 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:32:57 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:32:57 compute-0 ceph-osd[205584]: osd.0 9 state: booting -> active
Oct  3 09:32:57 compute-0 ceph-osd[206733]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  3 09:32:57 compute-0 ceph-osd[206733]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
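[note] The two mClock figures above follow from the retained defaults: osd_bandwidth_capacity_per_shard of 157286400 bytes/s is 150 MiB/s, and dividing it by the unchanged 315 IOPS capacity (the same figure rejected in the osd.0 bench warning earlier) gives 157286400 / 315 = 499321.90 bytes/io, exactly the osd_bandwidth_cost_per_io logged.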
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067236c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067237400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067237400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067237400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs mount
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs mount shared_bdev_used = 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
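[note] The db_paths figure appears to be the shared device minus headroom: 21470642176 bytes * 0.95 = 20397110067, i.e. RocksDB (db and db.slow) is capped at 95% of the 20 GiB block device. The 95% factor is inferred from the numbers here, not stated in the log.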
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: RocksDB version: 7.9.2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Git sha 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: DB SUMMARY
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: DB Session ID:  6HLC05QN8FXGKXF8U3NH
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: CURRENT file:  CURRENT
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: IDENTITY file:  IDENTITY
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                         Options.error_if_exists: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.create_if_missing: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                         Options.paranoid_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                                     Options.env: 0x565067209c70
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                                Options.info_log: 0x5650663fa8a0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_file_opening_threads: 16
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                              Options.statistics: (nil)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.use_fsync: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.max_log_file_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                         Options.allow_fallocate: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.use_direct_reads: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.create_missing_column_families: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                              Options.db_log_dir: 
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                                 Options.wal_dir: db.wal
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.advise_random_on_open: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.write_buffer_manager: 0x56506730c460
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                            Options.rate_limiter: (nil)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.unordered_write: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.row_cache: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                              Options.wal_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.allow_ingest_behind: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.two_write_queues: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.manual_wal_flush: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.wal_compression: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.atomic_flush: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.log_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.allow_data_in_errors: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.db_host_id: __hostname__
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.max_background_jobs: 4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.max_background_compactions: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.max_subcompactions: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.max_open_files: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.bytes_per_sync: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.max_background_flushes: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Compression algorithms supported:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         kZSTD supported: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         kXpressCompression supported: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         kBZip2Compression supported: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         kLZ4Compression supported: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         kZlibCompression supported: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         kLZ4HCCompression supported: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         kSnappyCompression supported: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa2c0)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   cache_index_and_filter_blocks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   pin_top_level_index_and_filter: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   index_type: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   data_block_index_type: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   index_shortening: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   checksum: 4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   no_block_cache: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_cache: 0x5650663e71f0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_cache_name: BinnedLRUCache
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_cache_options:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     capacity : 483183820
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     num_shard_bits : 4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     strict_capacity_limit : 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     high_pri_pool_ratio: 0.000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_cache_compressed: (nil)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   persistent_cache: (nil)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_size: 4096
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_size_deviation: 10
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_restart_interval: 16
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   index_block_restart_interval: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   metadata_block_size: 4096
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   partition_filters: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   use_delta_encoding: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   filter_policy: bloomfilter
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   whole_key_filtering: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   verify_compression: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   read_amp_bytes_per_bit: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   format_version: 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   enable_index_compression: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_align: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   max_auto_readahead_size: 262144
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   prepopulate_block_cache: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   initial_auto_readahead_size: 8192
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
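[note] Two of the figures in the [default] column family dump above cross-check against earlier lines: the BinnedLRUCache capacity of 483183820 bytes is the kv share of the BlueStore cache (0.45 * 1073741824 from the _set_cache_sizes line, whose ratios 0.45 + 0.45 + 0.04 + 0.06 sum to 1.00), and the memtable budget is write_buffer_size 16777216 * max_write_buffer_number 64 = 1 GiB, flushed in batches of min_write_buffer_number_to_merge 6, about 96 MiB per flush.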
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa2c0)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   cache_index_and_filter_blocks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   cache_index_and_filter_blocks_with_high_priority: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   pin_l0_filter_and_index_blocks_in_cache: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   pin_top_level_index_and_filter: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   index_type: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   data_block_index_type: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   index_shortening: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   data_block_hash_table_util_ratio: 0.750000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   checksum: 4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   no_block_cache: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_cache: 0x5650663e71f0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_cache_name: BinnedLRUCache
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_cache_options:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     capacity : 483183820
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     num_shard_bits : 4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     strict_capacity_limit : 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     high_pri_pool_ratio: 0.000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_cache_compressed: (nil)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   persistent_cache: (nil)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_size: 4096
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_size_deviation: 10
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_restart_interval: 16
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   index_block_restart_interval: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   metadata_block_size: 4096
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   partition_filters: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   use_delta_encoding: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   filter_policy: bloomfilter
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   whole_key_filtering: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   verify_compression: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   read_amp_bytes_per_bit: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   format_version: 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   enable_index_compression: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   block_align: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   max_auto_readahead_size: 262144
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   prepopulate_block_cache: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   initial_auto_readahead_size: 8192
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
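
The table_factory dump above maps directly onto RocksDB's public BlockBasedTableOptions. Below is a minimal C++ sketch reproducing the logged values through the stock API; note two assumptions: Ceph actually plugs in its own BinnedLRUCache, so rocksdb::NewLRUCache stands in here as the closest public analogue, and the bloom filter's bits-per-key (which the dump does not print) is assumed to be 10.

#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

// Sketch only: the block-based table settings printed in the log,
// set through the public RocksDB API.
rocksdb::Options MakeLoggedTableOptions() {
  rocksdb::BlockBasedTableOptions t;
  t.block_size = 4096;                                     // block_size: 4096
  t.metadata_block_size = 4096;                            // metadata_block_size: 4096
  t.cache_index_and_filter_blocks = true;                  // cache_index_and_filter_blocks: 1
  t.cache_index_and_filter_blocks_with_high_priority = false;  // ..._with_high_priority: 0
  t.pin_top_level_index_and_filter = true;                 // pin_top_level_index_and_filter: 1
  t.format_version = 5;                                    // format_version: 5
  t.whole_key_filtering = true;                            // whole_key_filtering: 1
  // filter_policy: bloomfilter (bits/key is not in the dump; 10 is assumed)
  t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  // capacity 483183820 bytes, num_shard_bits 4, strict_capacity_limit 0;
  // stock LRU cache substituted for Ceph's BinnedLRUCache.
  t.block_cache = rocksdb::NewLRUCache(483183820, 4, false);

  rocksdb::Options opts;
  opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
  return opts;
}
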
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
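
The leveled-compaction knobs repeated in each dump imply the level capacities directly: with max_bytes_for_level_base = 1073741824 (1 GiB), max_bytes_for_level_multiplier = 8, and dynamic level bytes off, L1 targets 1 GiB, L2 8 GiB, L3 64 GiB, and so on through num_levels = 7, while SST files stay at a flat 64 MiB (target_file_size_multiplier = 1). A sketch of the same settings through rocksdb::ColumnFamilyOptions, values copied from the log:

#include <rocksdb/options.h>

// Sketch: the leveled-compaction and compression knobs as printed above.
rocksdb::ColumnFamilyOptions MakeLoggedCompactionOptions() {
  rocksdb::ColumnFamilyOptions cf;
  cf.compaction_style = rocksdb::kCompactionStyleLevel;
  cf.compaction_pri = rocksdb::kMinOverlappingRatio;
  cf.num_levels = 7;
  cf.level0_file_num_compaction_trigger = 8;   // start compacting L0 at 8 files
  cf.level0_slowdown_writes_trigger = 20;      // throttle writes at 20 L0 files
  cf.level0_stop_writes_trigger = 36;          // stall writes at 36 L0 files
  cf.target_file_size_base = 64ull << 20;      // 67108864: 64 MiB SSTs
  cf.target_file_size_multiplier = 1;          // same size on every level
  cf.max_bytes_for_level_base = 1ull << 30;    // 1073741824: L1 = 1 GiB
  cf.max_bytes_for_level_multiplier = 8.0;     // L2 = 8 GiB, L3 = 64 GiB, ...
  cf.level_compaction_dynamic_level_bytes = false;
  cf.max_compaction_bytes = 1677721600;        // 25 * target_file_size_base
  cf.soft_pending_compaction_bytes_limit = 64ull << 30;   // 68719476736
  cf.hard_pending_compaction_bytes_limit = 256ull << 30;  // 274877906944
  cf.compression = rocksdb::kLZ4Compression;               // Options.compression: LZ4
  cf.bottommost_compression = rocksdb::kDisableCompressionOption;  // Disabled
  return cf;
}
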
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
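
The memtable settings are likewise identical across shards: each column family buffers writes in 16 MiB memtables (write_buffer_size), merges six of them per flush (min_write_buffer_number_to_merge, so a flush writes roughly 6 x 16 MiB = 96 MiB at once), and allows at most 64 in memory (max_write_buffer_number, a 1 GiB ceiling per column family). SkipListFactory is RocksDB's default memtable implementation, so only these three knobs are non-default; a sketch:

#include <rocksdb/options.h>

// Sketch: the memtable knobs printed for each column family above.
rocksdb::ColumnFamilyOptions MakeLoggedMemtableOptions() {
  rocksdb::ColumnFamilyOptions cf;
  cf.write_buffer_size = 16 << 20;          // 16777216: 16 MiB per memtable
  cf.max_write_buffer_number = 64;          // up to 64 memtables held in memory
  cf.min_write_buffer_number_to_merge = 6;  // merge 6 memtables per flush,
                                            // so each flush writes ~96 MiB
  return cf;
}
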
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
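
The table_properties_collectors line shows the one collector registered here: CompactOnDeletionCollector, which flags an SST file for compaction when, within a sliding window of 32768 consecutive entries, at least 16384 are deletions (the ratio trigger is disabled at 0). The factory is public RocksDB API; a sketch with the logged parameters:

#include <rocksdb/options.h>
#include <rocksdb/utilities/table_properties_collectors.h>

// Sketch: register the deletion-triggered compaction collector with the
// parameters from the log (window 32768, trigger 16384, ratio 0).
void AddDeletionCollector(rocksdb::ColumnFamilyOptions& cf) {
  cf.table_properties_collector_factories.emplace_back(
      rocksdb::NewCompactOnDeletionCollectorFactory(
          /*sliding_window_size=*/32768,
          /*deletion_trigger=*/16384,
          /*deletion_ratio=*/0.0));
}
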
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
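
The m-* and p-* names are BlueStore's sharding of its keyspace across RocksDB column families (controlled on the Ceph side by the bluestore_rocksdb_cfs option), and each shard receives the per-CF options dumped above. A generic sketch of how any RocksDB application opens a database with such a set of column families follows; the path and the reuse of a single options object for every shard are illustrative assumptions, not Ceph's actual wiring:

#include <rocksdb/db.h>
#include <rocksdb/options.h>

#include <vector>

// Sketch: open a database with several column families, each carrying
// its own ColumnFamilyOptions, which is what produces one options dump
// per CF in the log above.
rocksdb::DB* OpenShardedDb(const rocksdb::ColumnFamilyOptions& cf_opts) {
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
  for (const char* name : {"default", "m-0", "m-1", "m-2",
                           "p-0", "p-1", "p-2"}) {
    cfs.emplace_back(name, cf_opts);
  }

  rocksdb::DBOptions db_opts;
  db_opts.create_if_missing = true;
  db_opts.create_missing_column_families = true;

  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(db_opts, "/tmp/osd-kv-sketch",
                                        cfs, &handles, &db);
  // A real caller must delete the handles before closing the DB; this
  // sketch leaks them for brevity.
  return s.ok() ? db : nullptr;
}
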
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
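
The leveled-compaction sizing printed above (max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8, num_levels = 7, all addtl multipliers = 1, and static sizing since level_compaction_dynamic_level_bytes is 0) implies the usual RocksDB per-level targets: L1 equals the base, and each deeper level is 8x the previous. A minimal Python sketch of that arithmetic, using only values from this dump:

    # Per-level capacity targets implied by the options above (static
    # leveled compaction; the addtl multipliers are all 1 in this dump).
    base = 1073741824   # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8      # Options.max_bytes_for_level_multiplier
    num_levels = 7      # Options.num_levels
    for level in range(1, num_levels):
        target = base * multiplier ** (level - 1)
        print(f"L{level}: {target / 2**30:.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
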
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
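
Per column family, the memtable settings above bound memory and shape flush sizes: 64 write buffers of 16 MiB each cap the memtables at 1 GiB, and merging 6 buffers per flush yields L0 files of roughly 96 MiB before compression. A back-of-the-envelope sketch, assuming those two rules of thumb:

    # Rough memtable budget per column family from the options above.
    write_buffer_size = 16 * 2**20   # Options.write_buffer_size
    max_buffers = 64                 # Options.max_write_buffer_number
    min_merge = 6                    # Options.min_write_buffer_number_to_merge
    print(max_buffers * write_buffer_size // 2**30, "GiB memtable ceiling")
    print(min_merge * write_buffer_size // 2**20, "MiB per flushed L0 file (approx.)")
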
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
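
Combining the flush-size estimate with the L0 triggers above (compaction at 8 files, write slowdown at 20, hard stop at 36) gives a feel for how much data can pile up in L0. This is only an estimate; actual L0 file sizes depend on compression and key overhead:

    # Approximate L0 occupancy at each trigger, assuming ~96 MiB per L0 file
    # (min_write_buffer_number_to_merge * write_buffer_size, uncompressed).
    l0_file_mib = 6 * 16
    for name, files in [("compaction trigger", 8),
                        ("slowdown trigger", 20),
                        ("stop trigger", 36)]:
        print(f"{name}: {files} files ~ {files * l0_file_mib / 1024:.1f} GiB")
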
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 449e19a3-6906-46fb-98cb-3c374ab5a8c8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483977200429, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483977201400, "job": 1, "event": "recovery_finished"}
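
The EVENT_LOG_v1 lines above carry a JSON payload after a fixed prefix, which makes them easy to extract from the journal. A hypothetical helper (not part of Ceph or RocksDB) for pulling those events out of syslog lines:

    import json
    import re

    # Hypothetical extractor for "rocksdb: EVENT_LOG_v1 {...}" journal lines.
    def parse_event_log(line):
        m = re.search(r"EVENT_LOG_v1 (\{.*\})", line)
        return json.loads(m.group(1)) if m else None

    line = ('Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: EVENT_LOG_v1 '
            '{"time_micros": 1759483977200429, "job": 1, '
            '"event": "recovery_started", "wal_files": [31]}')
    print(parse_event_log(line)["event"])   # recovery_started
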
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
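
The _open_db line above echoes the effective bluestore rocksdb options string as comma-separated key=value pairs. A small illustrative parser for that format (the splitting logic is an assumption; note it leaves unit-suffixed values such as compaction_readahead_size=2MB as plain strings):

    # Illustrative parser for the comma-separated options string above.
    def parse_rocksdb_options(opts):
        return dict(kv.split("=", 1) for kv in opts.split(","))

    opts = parse_rocksdb_options(
        "compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,write_buffer_size=16777216")
    print(opts["compression"])   # kLZ4Compression
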
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: freelist init
Oct  3 09:32:57 compute-0 ceph-osd[206733]: freelist _read_cfg
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
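
The _init_alloc line reports capacity and free space as hex byte counts; converting them confirms the 20 GiB figure and shows how little is allocated at this point:

    # Converting the hex figures printed by _init_alloc above.
    capacity = 0x4ffc00000          # 21470642176 bytes
    free = 0x4ffbfd000
    print(f"{capacity / 2**30:.2f} GiB capacity")   # 20.00 GiB
    print(f"{capacity - free} bytes in use")        # 12288 bytes (12 KiB)
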
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs umount
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067237400 /var/lib/ceph/osd/ceph-1/block) close
Oct  3 09:32:57 compute-0 podman[207136]: 2025-10-03 09:32:57.378213755 +0000 UTC m=+0.080762401 container create f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067237400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067237400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bdev(0x565067237400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs mount
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluefs mount shared_bdev_used = 4718592
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: RocksDB version: 7.9.2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Git sha 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: DB SUMMARY
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: DB Session ID:  6HLC05QN8FXGKXF8U3NG
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: CURRENT file:  CURRENT
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: IDENTITY file:  IDENTITY
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                         Options.error_if_exists: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.create_if_missing: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                         Options.paranoid_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                                     Options.env: 0x5650673b8460
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                                Options.info_log: 0x5650663fa620
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_file_opening_threads: 16
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                              Options.statistics: (nil)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.use_fsync: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.max_log_file_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                         Options.allow_fallocate: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.use_direct_reads: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.create_missing_column_families: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                              Options.db_log_dir: 
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                                 Options.wal_dir: db.wal
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.advise_random_on_open: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.write_buffer_manager: 0x56506730c460
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                            Options.rate_limiter: (nil)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.unordered_write: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.row_cache: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                              Options.wal_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.allow_ingest_behind: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.two_write_queues: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.manual_wal_flush: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.wal_compression: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.atomic_flush: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.log_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.allow_data_in_errors: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.db_host_id: __hostname__
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.max_background_jobs: 4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.max_background_compactions: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.max_subcompactions: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.max_open_files: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.bytes_per_sync: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.max_background_flushes: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Compression algorithms supported:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     kZSTD supported: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     kXpressCompression supported: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     kBZip2Compression supported: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     kZSTDNotFinalCompression supported: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     kLZ4Compression supported: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     kZlibCompression supported: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     kLZ4HCCompression supported: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     kSnappyCompression supported: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 podman[207136]: 2025-10-03 09:32:57.354173969 +0000 UTC m=+0.056722605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
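[annotation] The level sizing printed in the dump above is easiest to read as arithmetic: with level_compaction_dynamic_level_bytes at 0, RocksDB's standard static sizing gives L1 the max_bytes_for_level_base and multiplies each deeper level by max_bytes_for_level_multiplier (the addtl factors are all 1, so they drop out; Options.num_levels is 7 in every dump). A minimal Python sketch, not part of the log, using the values printed above:

    # Static level targets under the options above (standard RocksDB sizing
    # with dynamic level bytes off; illustrative sketch, not RocksDB code).
    MAX_BYTES_FOR_LEVEL_BASE = 1073741824   # 1 GiB (Options.max_bytes_for_level_base)
    MAX_BYTES_FOR_LEVEL_MULTIPLIER = 8.0    # Options.max_bytes_for_level_multiplier
    NUM_LEVELS = 7                          # Options.num_levels

    def level_target_bytes(level: int) -> float:
        # L0 is file-count driven; size targets apply from L1 down.
        return MAX_BYTES_FOR_LEVEL_BASE * MAX_BYTES_FOR_LEVEL_MULTIPLIER ** (level - 1)

    for lvl in range(1, NUM_LEVELS):
        print(f"L{lvl}: {level_target_bytes(lvl) / 2**30:.0f} GiB")
    # L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 (GiB)

    # Related arithmetic: target_file_size_base is 64 MiB SSTs, and
    # max_compaction_bytes is exactly 25 such files.
    print(1677721600 // 67108864)           # 25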
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
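[annotation] The write-buffer settings in the [p-1] dump above bound memtable memory per column family. A quick arithmetic check (values copied from the dump; the interpretation of the knobs follows standard RocksDB semantics, so treat this as a sketch):

    WRITE_BUFFER_SIZE = 16777216            # Options.write_buffer_size, 16 MiB
    MAX_WRITE_BUFFER_NUMBER = 64            # Options.max_write_buffer_number
    MIN_WRITE_BUFFER_NUMBER_TO_MERGE = 6    # Options.min_write_buffer_number_to_merge

    # Worst case: every allowed memtable is resident before writes stall.
    print(f"{WRITE_BUFFER_SIZE * MAX_WRITE_BUFFER_NUMBER / 2**30:.0f} GiB")  # 1 GiB
    # Typical flush: six 16 MiB memtables merged into one write.
    print(f"{WRITE_BUFFER_SIZE * MIN_WRITE_BUFFER_NUMBER_TO_MERGE / 2**20:.0f} MiB")  # 96 MiB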
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
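[annotation] Several of the large integers repeated in each dump are easier to sanity-check in human units. A small conversion sketch (plain arithmetic on the printed values; the write-stall meaning of the level0 triggers is RocksDB's documented behavior, noted only in comments):

    SOFT_PENDING = 68719476736    # soft_pending_compaction_bytes_limit
    HARD_PENDING = 274877906944   # hard_pending_compaction_bytes_limit
    TTL = 2592000                 # Options.ttl, in seconds

    print(f"{SOFT_PENDING / 2**30:.0f} GiB soft / {HARD_PENDING / 2**30:.0f} GiB hard")  # 64 / 256
    print(f"ttl = {TTL / 86400:.0f} days")                                               # 30
    # L0 pacing from the dump: compaction starts at 8 files, writes are
    # slowed at 20 files and stopped at 36.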
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 systemd[1]: Started libpod-conmon-f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc.scope.
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
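[annotation] Two distinct BinnedLRUCache instances appear across the dumps: the p-* column families share a cache at 0x5650663e71f0 with capacity 483183820, while the O-* families share one at 0x5650663e7090 with capacity 536870912. A back-of-the-envelope sketch of their shard layout (assumption: a RocksDB-style sharded cache splits its capacity into 2**num_shard_bits equal shards):

    NUM_SHARD_BITS = 4   # from block_cache_options in both dumps

    for name, capacity in [("p-* cache", 483183820), ("O-* cache", 536870912)]:
        shards = 2 ** NUM_SHARD_BITS
        print(f"{name}: {shards} shards x {capacity / shards / 2**20:.1f} MiB "
              f"= {capacity / 2**20:.1f} MiB")
    # p-* cache: 16 shards x 28.8 MiB = 460.8 MiB
    # O-* cache: 16 shards x 32.0 MiB = 512.0 MiB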
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
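[annotation] Each dump registers a CompactOnDeletionCollector with sliding window 32768 and deletion trigger 16384, i.e. an SST is flagged for compaction when at least half of any 32768-entry window is deletion tombstones. The sketch below is a simplified model of that sliding-window rule, not RocksDB's implementation:

    from collections import deque

    WINDOW = 32768    # "Sliding window size" from the dump
    TRIGGER = 16384   # "Deletion trigger" from the dump

    def needs_compaction(entries_are_deletes) -> bool:
        # Flag the file if any WINDOW consecutive entries contain
        # at least TRIGGER deletion tombstones.
        window = deque()
        deletes = 0
        for is_delete in entries_are_deletes:
            window.append(is_delete)
            deletes += is_delete
            if len(window) > WINDOW:
                deletes -= window.popleft()
            if deletes >= TRIGGER:
                return True
        return False

    # Half of one window being tombstones trips the trigger:
    print(needs_compaction([True] * 16384 + [False] * 16384))  # True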
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:           Options.merge_operator: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5650663fa380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5650663e7090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.compression: LZ4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.num_levels: 7
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 449e19a3-6906-46fb-98cb-3c374ab5a8c8
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483977471550, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483977476801, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "449e19a3-6906-46fb-98cb-3c374ab5a8c8", "db_session_id": "6HLC05QN8FXGKXF8U3NG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483977481265, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "449e19a3-6906-46fb-98cb-3c374ab5a8c8", "db_session_id": "6HLC05QN8FXGKXF8U3NG", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:32:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1747dc06fa1e1953c15e91e480b86f66f615e94aa3e9e517fcd544432b5b7ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483977487139, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "449e19a3-6906-46fb-98cb-3c374ab5a8c8", "db_session_id": "6HLC05QN8FXGKXF8U3NG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1747dc06fa1e1953c15e91e480b86f66f615e94aa3e9e517fcd544432b5b7ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1747dc06fa1e1953c15e91e480b86f66f615e94aa3e9e517fcd544432b5b7ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1747dc06fa1e1953c15e91e480b86f66f615e94aa3e9e517fcd544432b5b7ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1747dc06fa1e1953c15e91e480b86f66f615e94aa3e9e517fcd544432b5b7ee/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483977489812, "job": 1, "event": "recovery_finished"}
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  3 09:32:57 compute-0 podman[207136]: 2025-10-03 09:32:57.514108412 +0000 UTC m=+0.216657038 container init f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:57 compute-0 podman[207136]: 2025-10-03 09:32:57.52630897 +0000 UTC m=+0.228857586 container start f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:32:57 compute-0 podman[207136]: 2025-10-03 09:32:57.530841048 +0000 UTC m=+0.233389724 container attach f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56506642e000
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: DB pointer 0x5650672f1a00
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct  3 09:32:57 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Oct  3 09:32:57 compute-0 ceph-osd[206733]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  3 09:32:57 compute-0 ceph-osd[206733]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  3 09:32:57 compute-0 ceph-osd[206733]: _get_class not permitted to load lua
Oct  3 09:32:57 compute-0 ceph-osd[206733]: _get_class not permitted to load sdk
Oct  3 09:32:57 compute-0 ceph-osd[206733]: _get_class not permitted to load test_remote_reads
Oct  3 09:32:57 compute-0 ceph-osd[206733]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  3 09:32:57 compute-0 ceph-osd[206733]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  3 09:32:57 compute-0 ceph-osd[206733]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  3 09:32:57 compute-0 ceph-osd[206733]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  3 09:32:57 compute-0 ceph-osd[206733]: osd.1 0 load_pgs
Oct  3 09:32:57 compute-0 ceph-osd[206733]: osd.1 0 load_pgs opened 0 pgs
Oct  3 09:32:57 compute-0 ceph-osd[206733]: osd.1 0 log_to_monitors true
Oct  3 09:32:57 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1[206729]: 2025-10-03T09:32:57.543+0000 7f5c670d9740 -1 osd.1 0 log_to_monitors true
Oct  3 09:32:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct  3 09:32:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  3 09:32:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  3 09:32:58 compute-0 ceph-mgr[192071]: [devicehealth INFO root] creating mgr pool
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct  3 09:32:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  3 09:32:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  3 09:32:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Oct  3 09:32:58 compute-0 ceph-mon[191783]: osd.0 [v2:192.168.122.100:6802/3346435816,v1:192.168.122.100:6803/3346435816] boot
Oct  3 09:32:58 compute-0 ceph-mon[191783]: from='osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  3 09:32:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Oct  3 09:32:58 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  3 09:32:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:32:58 compute-0 ceph-osd[205584]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  3 09:32:58 compute-0 ceph-osd[205584]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  3 09:32:58 compute-0 ceph-osd[205584]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  3 09:32:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:32:58 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:32:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:32:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:32:58 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct  3 09:32:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  3 09:32:58 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test[207251]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  3 09:32:58 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test[207251]:                            [--no-systemd] [--no-tmpfs]
Oct  3 09:32:58 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test[207251]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  3 09:32:58 compute-0 systemd[1]: libpod-f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc.scope: Deactivated successfully.
Oct  3 09:32:58 compute-0 podman[207136]: 2025-10-03 09:32:58.266492025 +0000 UTC m=+0.969040681 container died f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1747dc06fa1e1953c15e91e480b86f66f615e94aa3e9e517fcd544432b5b7ee-merged.mount: Deactivated successfully.
Oct  3 09:32:58 compute-0 podman[207136]: 2025-10-03 09:32:58.368314492 +0000 UTC m=+1.070863108 container remove f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:58 compute-0 systemd[1]: libpod-conmon-f1c9038588a6e876c575110347ef5c3564f07ed72e6bf57e4f017770637488dc.scope: Deactivated successfully.
Oct  3 09:32:58 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  3 09:32:58 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  3 09:32:58 compute-0 systemd[1]: Reloading.
Oct  3 09:32:58 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:58 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:59 compute-0 systemd[1]: Reloading.
Oct  3 09:32:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct  3 09:32:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  3 09:32:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  3 09:32:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct  3 09:32:59 compute-0 ceph-osd[206733]: osd.1 0 done with init, starting boot process
Oct  3 09:32:59 compute-0 ceph-osd[206733]: osd.1 0 start_boot
Oct  3 09:32:59 compute-0 ceph-osd[206733]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  3 09:32:59 compute-0 ceph-osd[206733]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  3 09:32:59 compute-0 ceph-osd[206733]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  3 09:32:59 compute-0 ceph-osd[206733]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  3 09:32:59 compute-0 ceph-osd[206733]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct  3 09:32:59 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct  3 09:32:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:32:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:32:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:32:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:32:59 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:32:59 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:32:59 compute-0 ceph-mon[191783]: from='osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  3 09:32:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:32:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=11) [] r=-1 lpr=11 pi=[10,11)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:32:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  3 09:32:59 compute-0 ceph-mon[191783]: from='osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  3 09:32:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  3 09:32:59 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/998995106; not ready for session (expect reconnect)
Oct  3 09:32:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:32:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:32:59 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:32:59 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:32:59 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:32:59 compute-0 systemd[1]: Starting Ceph osd.2 for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:32:59 compute-0 podman[157165]: time="2025-10-03T09:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:32:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27361 "" "Go-http-client/1.1"
Oct  3 09:32:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5330 "" "Go-http-client/1.1"
Oct  3 09:32:59 compute-0 podman[207525]: 2025-10-03 09:32:59.841634788 +0000 UTC m=+0.055036754 container create d4b6c9850f46fbe7731f06249af49ea678727d7349bf475209ded87ac7faf638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:32:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  3 09:32:59 compute-0 podman[207525]: 2025-10-03 09:32:59.821978314 +0000 UTC m=+0.035380300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:32:59 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199b501378b7539bbfd6f98d137322c53c9ba8e005767f70ef8b0103c1d13326/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199b501378b7539bbfd6f98d137322c53c9ba8e005767f70ef8b0103c1d13326/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199b501378b7539bbfd6f98d137322c53c9ba8e005767f70ef8b0103c1d13326/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199b501378b7539bbfd6f98d137322c53c9ba8e005767f70ef8b0103c1d13326/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/199b501378b7539bbfd6f98d137322c53c9ba8e005767f70ef8b0103c1d13326/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  3 09:32:59 compute-0 podman[207525]: 2025-10-03 09:32:59.953087775 +0000 UTC m=+0.166489761 container init d4b6c9850f46fbe7731f06249af49ea678727d7349bf475209ded87ac7faf638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:32:59 compute-0 podman[207525]: 2025-10-03 09:32:59.96514443 +0000 UTC m=+0.178546396 container start d4b6c9850f46fbe7731f06249af49ea678727d7349bf475209ded87ac7faf638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:32:59 compute-0 podman[207525]: 2025-10-03 09:32:59.993666292 +0000 UTC m=+0.207068258 container attach d4b6c9850f46fbe7731f06249af49ea678727d7349bf475209ded87ac7faf638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:00 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/998995106; not ready for session (expect reconnect)
Oct  3 09:33:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:33:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:33:00 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:33:00 compute-0 ceph-mon[191783]: from='osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  3 09:33:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  3 09:33:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:00 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate[207539]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  3 09:33:00 compute-0 bash[207525]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  3 09:33:01 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate[207539]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  3 09:33:01 compute-0 bash[207525]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  3 09:33:01 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate[207539]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  3 09:33:01 compute-0 bash[207525]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  3 09:33:01 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate[207539]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  3 09:33:01 compute-0 bash[207525]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  3 09:33:01 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate[207539]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:01 compute-0 bash[207525]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:01 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate[207539]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  3 09:33:01 compute-0 bash[207525]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  3 09:33:01 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate[207539]: --> ceph-volume raw activate successful for osd ID: 2
Oct  3 09:33:01 compute-0 bash[207525]: --> ceph-volume raw activate successful for osd ID: 2
Oct  3 09:33:01 compute-0 systemd[1]: libpod-d4b6c9850f46fbe7731f06249af49ea678727d7349bf475209ded87ac7faf638.scope: Deactivated successfully.
Oct  3 09:33:01 compute-0 systemd[1]: libpod-d4b6c9850f46fbe7731f06249af49ea678727d7349bf475209ded87ac7faf638.scope: Consumed 1.145s CPU time.
Oct  3 09:33:01 compute-0 podman[207525]: 2025-10-03 09:33:01.104755593 +0000 UTC m=+1.318157559 container died d4b6c9850f46fbe7731f06249af49ea678727d7349bf475209ded87ac7faf638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:01 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/998995106; not ready for session (expect reconnect)
Oct  3 09:33:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:33:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:33:01 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-199b501378b7539bbfd6f98d137322c53c9ba8e005767f70ef8b0103c1d13326-merged.mount: Deactivated successfully.
Oct  3 09:33:01 compute-0 podman[207525]: 2025-10-03 09:33:01.267289554 +0000 UTC m=+1.480691520 container remove d4b6c9850f46fbe7731f06249af49ea678727d7349bf475209ded87ac7faf638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2-activate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:33:01 compute-0 openstack_network_exporter[159287]: ERROR   09:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:33:01 compute-0 openstack_network_exporter[159287]: ERROR   09:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:33:01 compute-0 openstack_network_exporter[159287]: ERROR   09:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:33:01 compute-0 openstack_network_exporter[159287]: ERROR   09:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:33:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:33:01 compute-0 openstack_network_exporter[159287]: ERROR   09:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
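These exporter errors are expected noise on a compute node: openstack_network_exporter probes ovn-northd and ovsdb-server through their appctl control sockets, but neither daemon runs here (compute hosts run ovn-controller, not the OVN central services), so no socket files exist, and the dpif-netdev calls find no userspace datapath. A sketch of the same socket check, assuming the conventional runtime paths (both paths are assumptions to verify on the host):

    import glob

    # Control sockets that appctl-style tools search for. ovn-northd runs only
    # on OVN central nodes, so on a compute node the first pattern matches
    # nothing -- the same condition the exporter reports above.
    PATTERNS = {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }

    for daemon, pattern in PATTERNS.items():
        hits = glob.glob(pattern)
        print(daemon, "->", hits[0] if hits else "no control socket files found")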
Oct  3 09:33:01 compute-0 podman[207723]: 2025-10-03 09:33:01.638608683 +0000 UTC m=+0.116945364 container create 166adbb3547d1c913e4c3e02fa62dc58c3ee9c533dbab673a9091ce11043e3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:01 compute-0 podman[207723]: 2025-10-03 09:33:01.555339627 +0000 UTC m=+0.033676328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b890b0fe35cef158dd5eb90ff1e3122f05726e9b296c7469fbb24e5376999263/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b890b0fe35cef158dd5eb90ff1e3122f05726e9b296c7469fbb24e5376999263/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b890b0fe35cef158dd5eb90ff1e3122f05726e9b296c7469fbb24e5376999263/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b890b0fe35cef158dd5eb90ff1e3122f05726e9b296c7469fbb24e5376999263/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b890b0fe35cef158dd5eb90ff1e3122f05726e9b296c7469fbb24e5376999263/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
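The kernel prints one of these warnings per bind mount into the new OSD container: the backing xfs was created without big timestamps, so its inode times top out at 0x7fffffff seconds past the epoch. Decoding that limit confirms the 2038 figure in the messages:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF   # timestamp ceiling from the kernel warning
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- the classic y2038 boundary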
Oct  3 09:33:01 compute-0 podman[207723]: 2025-10-03 09:33:01.753362991 +0000 UTC m=+0.231699742 container init 166adbb3547d1c913e4c3e02fa62dc58c3ee9c533dbab673a9091ce11043e3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:01 compute-0 podman[207723]: 2025-10-03 09:33:01.767486777 +0000 UTC m=+0.245823458 container start 166adbb3547d1c913e4c3e02fa62dc58c3ee9c533dbab673a9091ce11043e3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:33:01 compute-0 bash[207723]: 166adbb3547d1c913e4c3e02fa62dc58c3ee9c533dbab673a9091ce11043e3dc
Oct  3 09:33:01 compute-0 systemd[1]: Started Ceph osd.2 for 9b4e8c9a-5555-5510-a631-4742a1182561.
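From here osd.2 runs as a cephadm-managed service: systemd starts a templated unit which in turn runs the podman container created above, and the bash[207723] line is the container ID echoed by the unit's start script. A hedged cross-check of both views, assuming cephadm's usual ceph-<fsid>@osd.<id> unit naming:

    import subprocess

    FSID = "9b4e8c9a-5555-5510-a631-4742a1182561"
    unit = f"ceph-{FSID}@osd.2.service"   # assumed cephadm unit template

    # systemd's view of the OSD service ...
    subprocess.run(["systemctl", "status", "--no-pager", unit], check=False)
    # ... and podman's view of the container it wraps.
    subprocess.run(["podman", "ps", "--filter", f"name=ceph-{FSID}-osd-2"],
                   check=False)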
Oct  3 09:33:01 compute-0 ceph-osd[207741]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:33:01 compute-0 ceph-osd[207741]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  3 09:33:01 compute-0 ceph-osd[207741]: pidfile_write: ignore empty --pid-file
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bdev(0x56167ac59800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bdev(0x56167ac59800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bdev(0x56167ac59800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bdev(0x56167ba9b800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bdev(0x56167ba9b800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bdev(0x56167ba9b800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  3 09:33:01 compute-0 ceph-osd[207741]: bdev(0x56167ba9b800 /var/lib/ceph/osd/ceph-2/block) close
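The failed ioctl is cosmetic: F_SET_FILE_RW_HINT returns EINVAL when the underlying block device (a KVM virtual disk in this deployment) does not accept write-lifetime hints, and BlueStore carries on without them. The size figures on the open lines are self-consistent, which a few lines confirm:

    size = 0x4FFC00000             # hex size from the bdev open line
    assert size == 21_470_642_176  # decimal size from the same line
    print(size / 2**30)            # ~19.996 GiB, logged as "20 GiB"
    print(size // 4096)            # count of 4 KiB blocks on the device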
Oct  3 09:33:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
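The pgmap digest shows the single placement group still unknown, which is normal in the first seconds after OSD restarts, before the daemons report back in. To watch the states converge, the same digest is available as JSON, assuming the ceph CLI (field names below follow the usual status schema; verify against your release):

    import json
    import subprocess

    out = subprocess.run(["ceph", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    pgmap = json.loads(out)["pgmap"]
    for entry in pgmap.get("pgs_by_state", []):   # e.g. unknown / active+clean
        print(entry["state_name"], entry["count"])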
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167ac59800 /var/lib/ceph/osd/ceph-2/block) close
Oct  3 09:33:02 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/998995106; not ready for session (expect reconnect)
Oct  3 09:33:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:33:02 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:33:02 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:33:02 compute-0 ceph-osd[207741]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct  3 09:33:02 compute-0 ceph-osd[207741]: load: jerasure load: lrc 
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) close
Oct  3 09:33:02 compute-0 podman[207897]: 2025-10-03 09:33:02.575346177 +0000 UTC m=+0.056749806 container create 061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) close
Oct  3 09:33:02 compute-0 podman[207897]: 2025-10-03 09:33:02.549648211 +0000 UTC m=+0.031051880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:02 compute-0 systemd[1]: Started libpod-conmon-061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8.scope.
Oct  3 09:33:02 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:02 compute-0 podman[207897]: 2025-10-03 09:33:02.714637576 +0000 UTC m=+0.196041235 container init 061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:02 compute-0 podman[207897]: 2025-10-03 09:33:02.724864304 +0000 UTC m=+0.206267943 container start 061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:02 compute-0 busy_hawking[207918]: 167 167
Oct  3 09:33:02 compute-0 systemd[1]: libpod-061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8.scope: Deactivated successfully.
Oct  3 09:33:02 compute-0 podman[207897]: 2025-10-03 09:33:02.731437803 +0000 UTC m=+0.212841432 container attach 061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 09:33:02 compute-0 podman[207897]: 2025-10-03 09:33:02.733145695 +0000 UTC m=+0.214549344 container died 061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1b4973ca5b0cd3a4c6df2755f1608566675080a6a725090f3687442034639e3-merged.mount: Deactivated successfully.
Oct  3 09:33:02 compute-0 podman[207897]: 2025-10-03 09:33:02.810055539 +0000 UTC m=+0.291459168 container remove 061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:02 compute-0 systemd[1]: libpod-conmon-061e9bc3027a000d2a8f7c038e2cdf7789c2c57d9d9fa3eceaeb854303462dd8.scope: Deactivated successfully.
Oct  3 09:33:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:02 compute-0 ceph-osd[207741]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  3 09:33:02 compute-0 ceph-osd[207741]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
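The two mClock figures follow from the rotational-device defaults rather than from measurement: 150 MiB/s of sequential bandwidth is exactly 157286400 bytes/s, and dividing by the default 315 IOPS for HDDs yields the logged per-IO cost. The arithmetic, with the defaults named as assumptions (osd_mclock_max_sequential_bandwidth_hdd, osd_mclock_max_capacity_iops_hdd):

    seq_bw = 150 * 1024 * 1024      # 150 MiB/s -> 157286400 bytes/second
    iops = 315                      # assumed HDD IOPS default
    print(seq_bw)                   # matches osd_bandwidth_capacity_per_shard
    print(round(seq_bw / iops, 2))  # 499321.9 -> the logged bytes/io cost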
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bdev(0x56167bb2f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bluefs mount
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bluefs mount shared_bdev_used = 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: RocksDB version: 7.9.2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Git sha 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: DB SUMMARY
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: DB Session ID:  HJI6GXJPTID2QOVWWDAZ
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: CURRENT file:  CURRENT
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: IDENTITY file:  IDENTITY
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
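The DB SUMMARY describes a small, freshly compacted store: one SST file, a 1007-byte MANIFEST, and a ~5 KiB write-ahead log in the dedicated db.wal directory, all hosted on BlueFS inside the OSD's single shared block device (the bluefs add_block_device/mount lines above). With osd.2 stopped, the same RocksDB can be examined offline; a sketch using ceph-kvstore-tool's bluestore-kv backend (treat the exact subcommand set as an assumption for your build, and never run it against a live OSD):

    import subprocess

    OSD_PATH = "/var/lib/ceph/osd/ceph-2"

    # Offline statistics for the BlueFS-hosted RocksDB; requires the OSD to
    # be stopped, since the tool opens the store directly.
    subprocess.run(["ceph-kvstore-tool", "bluestore-kv", OSD_PATH, "stats"],
                   check=False)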
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                         Options.error_if_exists: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.create_if_missing: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                         Options.paranoid_checks: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                                     Options.env: 0x56167baedce0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                                Options.info_log: 0x56167ace4b40
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.max_file_opening_threads: 16
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                              Options.statistics: (nil)
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                               Options.use_fsync: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.max_log_file_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                         Options.allow_fallocate: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.use_direct_reads: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.create_missing_column_families: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                              Options.db_log_dir: 
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                                 Options.wal_dir: db.wal
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.advise_random_on_open: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.write_buffer_manager: 0x56167ad16460
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                            Options.rate_limiter: (nil)
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.unordered_write: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                               Options.row_cache: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                              Options.wal_filter: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.allow_ingest_behind: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.two_write_queues: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.manual_wal_flush: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.wal_compression: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.atomic_flush: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.log_readahead_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.allow_data_in_errors: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.db_host_id: __hostname__
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.max_background_jobs: 4
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.max_background_compactions: -1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.max_subcompactions: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.max_open_files: -1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.bytes_per_sync: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.max_background_flushes: -1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Compression algorithms supported:
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         kZSTD supported: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         kXpressCompression supported: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         kBZip2Compression supported: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         kLZ4Compression supported: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         kZlibCompression supported: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         kLZ4HCCompression supported: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         kSnappyCompression supported: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace5200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace5200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace5200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace5200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace5200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:02 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace5200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace5200)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace51c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167accc430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
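[annotation] The [O-0] dump above pins down the LSM-tree shape for this column family. As a cross-check, a minimal Python sketch (values copied from the lines above; not Ceph or RocksDB code) reproduces the per-level size targets and the max_compaction_bytes relation:

    target_file_size_base          = 67108864      # 64 MiB per SST file
    max_bytes_for_level_base       = 1073741824    # 1 GiB total for L1
    max_bytes_for_level_multiplier = 8.0
    num_levels                     = 7

    # With level_compaction_dynamic_level_bytes = 0, level N's target size
    # is base * multiplier**(N - 1) for N >= 1.
    for level in range(1, num_levels):
        size = max_bytes_for_level_base * max_bytes_for_level_multiplier ** (level - 1)
        print(f"L{level}: {size / 2**30:,.0f} GiB")   # L1: 1 ... L6: 32,768

    # max_compaction_bytes above (1677721600) is exactly 25 files' worth,
    # matching RocksDB's documented default of 25 * target_file_size_base.
    assert 25 * target_file_size_base == 1677721600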
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace51c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56167accc430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
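[annotation] Each column family also registers a CompactOnDeletionCollector (sliding window 32768, deletion trigger 16384, deletion ratio 0). The sketch below is a plain-Python reconstruction of the rule those parameters imply, namely flag a file once any window of 32768 consecutive entries holds 16384 or more tombstones; treat it as an illustration of the collector's contract, not Ceph or RocksDB source:

    from collections import deque

    def needs_compaction(entries, window=32768, trigger=16384):
        """entries: iterable of booleans, True = tombstone/deletion."""
        win, deletions = deque(), 0
        for is_delete in entries:
            win.append(is_delete)
            deletions += is_delete
            if len(win) > window:
                deletions -= win.popleft()
            if deletions >= trigger:
                return True
        return False

    # Example: a file whose tail is one long run of tombstones gets flagged.
    print(needs_compaction([False] * 50000 + [True] * 16384))  # True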
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace51c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56167accc430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 podman[207952]: 2025-10-03 09:33:03.00868805 +0000 UTC m=+0.064641554 container create 13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 514a9060-e94a-40c2-aaee-993ff3f8a917
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483983021123, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483983021505, "job": 1, "event": "recovery_finished"}
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
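[annotation] The option string in the _open_db line is a flat comma-separated key=value list (this is the form bluestore_rocksdb_options takes). A quick plain-Python way to eyeball it, with the string copied verbatim from the line above:

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"])   # 16777216, matching the dumps above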
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: freelist init
Oct  3 09:33:03 compute-0 ceph-osd[207741]: freelist _read_cfg
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
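[annotation] The allocator line reports its sizes in hex; converting them back (plain Python, values copied from the line above) confirms the 20 GiB capacity and shows only three 4 KiB blocks in use at this point:

    capacity   = 0x4ffc00000   # 21470642176 bytes
    free       = 0x4ffbfd000
    block_size = 0x1000        # 4 KiB

    print(f"{capacity / 2**30:.2f} GiB")     # 20.00 GiB
    print((capacity - free) // block_size)   # 3 blocks = 12 KiB allocated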
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluefs umount
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bdev(0x56167bb2f400 /var/lib/ceph/osd/ceph-2/block) close
Oct  3 09:33:03 compute-0 systemd[1]: Started libpod-conmon-13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299.scope.
Oct  3 09:33:03 compute-0 podman[207952]: 2025-10-03 09:33:02.983499599 +0000 UTC m=+0.039453113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ba13454159fb5e7ede61c2b3cbc77a890e1de65fe334fb9c5a258752dec21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ba13454159fb5e7ede61c2b3cbc77a890e1de65fe334fb9c5a258752dec21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ba13454159fb5e7ede61c2b3cbc77a890e1de65fe334fb9c5a258752dec21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734ba13454159fb5e7ede61c2b3cbc77a890e1de65fe334fb9c5a258752dec21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
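[annotation] The repeated xfs warnings are the 32-bit time_t horizon: 0x7fffffff seconds after the Unix epoch. One line of Python confirms the 2038 date the kernel is referring to:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00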
Oct  3 09:33:03 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/998995106; not ready for session (expect reconnect)
Oct  3 09:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:33:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.717 iops: 6071.667 elapsed_sec: 0.494
Oct  3 09:33:03 compute-0 ceph-osd[206733]: log_channel(cluster) log [WRN] : OSD bench result of 6071.667068 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
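[annotation] The warning spells out its own rule: a bench result is adopted as the mClock IOPS capacity only if it falls inside the [50, 500] threshold band; otherwise the configured value (here 315) is kept and the operator is pointed at osd_mclock_max_capacity_iops_[hdd|ssd]. A minimal reconstruction of that check from the message text (not Ceph source):

    def maybe_override_iops(measured, current, lo=50.0, hi=500.0):
        if lo <= measured <= hi:
            return measured      # adopt the bench result as the new capacity
        return current           # out of range: keep the configured capacity

    print(maybe_override_iops(6071.667068, 315.0))   # 315.0, as logged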
Oct  3 09:33:03 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 0 waiting for initial osdmap
Oct  3 09:33:03 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1[206729]: 2025-10-03T09:33:03.138+0000 7f5c63870640 -1 osd.1 0 waiting for initial osdmap
Oct  3 09:33:03 compute-0 podman[207952]: 2025-10-03 09:33:03.150824885 +0000 UTC m=+0.206778409 container init 13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Oct  3 09:33:03 compute-0 podman[207952]: 2025-10-03 09:33:03.159776286 +0000 UTC m=+0.215729780 container start 13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 09:33:03 compute-0 podman[207952]: 2025-10-03 09:33:03.164831808 +0000 UTC m=+0.220785332 container attach 13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
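[annotation] The four podman lines for container 13209c05f523... trace a normal lifecycle: create (09:33:03.008), init (.150), start (.159), attach (.164). A small filter like the sketch below (log file name hypothetical) pulls those events out of a journal dump:

    import re

    # Matches e.g. "podman[207952]: 2025-10-03 09:33:03.008... +0000 UTC
    # m=+0.064... container create 13209c05f523..."
    pattern = re.compile(
        r"podman\[\d+\]: (\S+ \S+) \S+ \S+ \S+ container (\w+) ([0-9a-f]{12})")

    with open("messages.log") as fh:   # hypothetical log file path
        for line in fh:
            m = pattern.search(line)
            if m:
                timestamp, event, container_id = m.groups()
                print(timestamp, event, container_id)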
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 11 set_numa_affinity not setting numa affinity
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct  3 09:33:03 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-1[206729]: 2025-10-03T09:33:03.166+0000 7f5c5e681640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bdev(0x56167bb2f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bdev(0x56167bb2f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bdev(0x56167bb2f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluefs mount
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluefs mount shared_bdev_used = 4718592
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
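[annotation] The two db_paths sizes equal 95% of the 21470642176-byte block device opened a few lines earlier; the 95% split is inferred from the arithmetic below, not quoted from BlueStore source:

    bdev_size = 21470642176        # from the bdev open line above
    print(bdev_size * 95 // 100)   # 20397110067, matching db and db.slow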
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: RocksDB version: 7.9.2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Git sha 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: DB SUMMARY
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: DB Session ID:  HJI6GXJPTID2QOVWWDAY
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: CURRENT file:  CURRENT
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: IDENTITY file:  IDENTITY
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                         Options.error_if_exists: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.create_if_missing: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                         Options.paranoid_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                                     Options.env: 0x56167bc8a3f0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                                Options.info_log: 0x56167ace4900
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_file_opening_threads: 16
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                              Options.statistics: (nil)
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.use_fsync: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.max_log_file_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                         Options.allow_fallocate: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.use_direct_reads: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.create_missing_column_families: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                              Options.db_log_dir: 
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                                 Options.wal_dir: db.wal
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.advise_random_on_open: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.write_buffer_manager: 0x56167ad16460
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                            Options.rate_limiter: (nil)
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.unordered_write: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.row_cache: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                              Options.wal_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.allow_ingest_behind: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.two_write_queues: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.manual_wal_flush: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.wal_compression: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.atomic_flush: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.log_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.allow_data_in_errors: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.db_host_id: __hostname__
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.max_background_jobs: 4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.max_background_compactions: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.max_subcompactions: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.max_open_files: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.bytes_per_sync: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.max_background_flushes: -1
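[annotation] Options.wal_recovery_mode is dumped numerically. Mode 2 corresponds to kPointInTimeRecovery in RocksDB's WALRecoveryMode enum (ordering below as in RocksDB's public headers), consistent with the earlier "Recovering log #31 mode 2" line:

    WAL_RECOVERY_MODES = [
        "kTolerateCorruptedTailRecords",  # 0
        "kAbsoluteConsistency",           # 1
        "kPointInTimeRecovery",           # 2
        "kSkipAnyCorruptedRecords",       # 3
    ]
    print(WAL_RECOVERY_MODES[2])          # kPointInTimeRecovery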
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Compression algorithms supported:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: #011kZSTD supported: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: #011kXpressCompression supported: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: #011kBZip2Compression supported: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: #011kLZ4Compression supported: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: #011kZlibCompression supported: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: #011kSnappyCompression supported: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace4d20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56167acccdd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
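Everything in the [default] block above is RocksDB echoing back the options BlueStore handed it (Ceph assembles them from its bluestore_rocksdb_options setting). As an illustrative sketch only, assuming the stock RocksDB C++ API rather than Ceph's actual wiring, the key logged values map onto ColumnFamilyOptions and BlockBasedTableOptions like this:

    // Illustrative only: reconstruct the logged [default] options with the
    // upstream RocksDB API. Numeric values are copied from the log above.
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeLoggedOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;            // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64ull << 20;     // 67108864
      cf.max_bytes_for_level_base = 1ull << 30;   // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.ttl = 2592000;                           // 30 days

      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.format_version = 5;
      t.whole_key_filtering = true;
      // The log only says "bloomfilter"; 10 bits/key is an assumed value.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      // The logged BinnedLRUCache is Ceph-internal; the stock sharded LRU
      // cache stands in here, with the logged capacity and shard bits.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }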
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace4d20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
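From [m-0] onward the option blocks are identical to [default] except that no merge operator is registered; the m-* and p-* names appear to be BlueStore's sharded RocksDB column families (the sharding layout is governed by Ceph's bluestore_rocksdb_cfs option), so only the family name changes from block to block. As a hedged sketch with a placeholder path, the families a given OSD's DB actually contains can be enumerated offline with the stock API:

    // Hedged sketch: list the column families present in a closed RocksDB
    // directory, e.g. a BlueFS export made with ceph-bluestore-tool.
    // The path below is a placeholder, not taken from this log.
    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> families;
      rocksdb::Status s = rocksdb::DB::ListColumnFamilies(
          rocksdb::DBOptions(), "/tmp/osd-db-export", &families);
      if (!s.ok()) {
        std::cerr << "ListColumnFamilies failed: " << s.ToString() << "\n";
        return 1;
      }
      for (const std::string& name : families) {
        std::cout << name << "\n";  // expect default, m-0..m-2, p-0.., ...
      }
      return 0;
    }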
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace4d20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace4d20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace4d20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace4d20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
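The table_factory block for the p-* families reports a BinnedLRUCache at 0x56167acccdd0 with capacity 483183820 bytes and num_shard_bits 4. A quick worked check of what that sharding implies (plain arithmetic on values copied from the dump; the observation that 483183820 is almost exactly 45 % of 1 GiB, which would fit a ratio-carved cache budget, is an inference the log itself does not state):

    capacity = 483_183_820       # block_cache_options capacity for the p-* families
    num_shard_bits = 4           # the cache is split into 2**num_shard_bits shards
    shards = 2 ** num_shard_bits
    print(shards)                                # 16
    print(round(capacity / 2**20, 1))            # ~460.8 MiB total
    print(round(capacity / shards / 2**20, 1))   # ~28.8 MiB per shard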
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace4d20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167acccdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
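With level_compaction_dynamic_level_bytes at 0, the target capacity of each level follows max_bytes_for_level_base scaled by max_bytes_for_level_multiplier per level, and the addtl[] factors are all 1 here, so they drop out. A worked sketch of the resulting ceilings, assuming RocksDB's standard static formula:

    base = 1_073_741_824   # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0             # Options.max_bytes_for_level_multiplier
    for n in range(1, 7):  # Options.num_levels is 7, so L1..L6
        print(f"L{n}: {base * mult ** (n - 1) / 2**30:g} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB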
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace52c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167accc430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
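The memtable settings repeated in every family (write_buffer_size 16777216, min_write_buffer_number_to_merge 6, max_write_buffer_number 64) imply flushes of roughly 96 MiB of memtable data and a worst-case memtable footprint of 1 GiB per column family. The arithmetic, for the record:

    write_buffer_size = 16 * 2**20   # Options.write_buffer_size (16 MiB)
    min_merge = 6                    # Options.min_write_buffer_number_to_merge
    max_buffers = 64                 # Options.max_write_buffer_number
    print((write_buffer_size * min_merge) // 2**20, "MiB merged per flush")   # 96
    print((write_buffer_size * max_buffers) // 2**30, "GiB worst case")       # 1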
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace52c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167accc430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
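Apart from their block cache, the p-* and O-* families print identical settings: the p-* dumps point at cache 0x56167acccdd0 (483183820 bytes) while the O-* dumps share 0x56167accc430 (536870912 bytes, exactly 512 MiB). That kind of claim is easy to verify mechanically over dicts such as parse_options above would produce (diff_options is an illustrative helper, shown here on trimmed-down inputs):

    def diff_options(a, b):
        """Keys whose values differ between two option dicts (illustrative)."""
        return {k: (a.get(k), b.get(k)) for k in set(a) | set(b) if a.get(k) != b.get(k)}

    p_fam = {"write_buffer_size": "16777216", "block_cache": "0x56167acccdd0"}
    o_fam = {"write_buffer_size": "16777216", "block_cache": "0x56167accc430"}
    print(diff_options(p_fam, o_fam))
    # {'block_cache': ('0x56167acccdd0', '0x56167accc430')}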
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:           Options.merge_operator: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.compaction_filter_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.sst_partitioner_factory: None
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56167ace52c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56167accc430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.write_buffer_size: 16777216
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.max_write_buffer_number: 64
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.compression: LZ4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.num_levels: 7
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.level: 32767
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.compression_opts.strategy: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                  Options.compression_opts.enabled: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.arena_block_size: 1048576
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.disable_auto_compactions: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.inplace_update_support: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.bloom_locality: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                    Options.max_successive_merges: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.paranoid_file_checks: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.force_consistency_checks: 1
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.report_bg_io_stats: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                               Options.ttl: 2592000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                       Options.enable_blob_files: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                           Options.min_blob_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                          Options.blob_file_size: 268435456
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb:                Options.blob_file_starting_level: 0
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
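The compaction back-pressure settings shared by every family read as: begin compacting at 8 L0 files, throttle foreground writes at 20, stop them at 36, with the same slowdown/stop behavior applied to estimated compaction debt at the soft and hard pending-bytes limits. Converting those two limits, just to make the magnitudes visible:

    soft = 68_719_476_736    # Options.soft_pending_compaction_bytes_limit
    hard = 274_877_906_944   # Options.hard_pending_compaction_bytes_limit
    print(soft // 2**30, "GiB: writes slow down")   # 64
    print(hard // 2**30, "GiB: writes stop")        # 256
    # L0 file counts: compact at 8, slow down at 20, stop at 36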
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 514a9060-e94a-40c2-aaee-993ff3f8a917
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483983226839, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483983234667, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483983, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "514a9060-e94a-40c2-aaee-993ff3f8a917", "db_session_id": "HJI6GXJPTID2QOVWWDAY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483983239415, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483983, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "514a9060-e94a-40c2-aaee-993ff3f8a917", "db_session_id": "HJI6GXJPTID2QOVWWDAY", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483983244334, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483983, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "514a9060-e94a-40c2-aaee-993ff3f8a917", "db_session_id": "HJI6GXJPTID2QOVWWDAY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759483983246606, "job": 1, "event": "recovery_finished"}
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56167bc96000
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: DB pointer 0x56167ad01a00
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
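
The option string in the _open_db line above is the tuning BlueStore hands to RocksDB when it opens the DB; in upstream Ceph it is driven by the bluestore_rocksdb_options configuration option. A minimal sketch, assuming a ceph CLI and admin keyring are available on this host (neither is shown in this log), of reading the effective value back for osd.2:

import subprocess

# Ask the mon config database for the effective value of one option for one
# daemon. 'ceph config get <who> <option>' is the stock ceph CLI form; the
# daemon name and option below match the _open_db log line above.
def get_option(who: str, option: str) -> str:
    result = subprocess.run(
        ["ceph", "config", "get", who, option],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # Expected to print a string like compression=kLZ4Compression,...
    print(get_option("osd.2", "bluestore_rocksdb_options"))

Note the printed value can come from a compiled-in default rather than an explicit override, so it may differ from the exact string logged at open time.
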
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct  3 09:33:03 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Bloc
Oct  3 09:33:03 compute-0 ceph-osd[207741]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  3 09:33:03 compute-0 ceph-osd[207741]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  3 09:33:03 compute-0 ceph-osd[207741]: _get_class not permitted to load lua
Oct  3 09:33:03 compute-0 ceph-osd[207741]: _get_class not permitted to load sdk
Oct  3 09:33:03 compute-0 ceph-osd[207741]: _get_class not permitted to load test_remote_reads
Oct  3 09:33:03 compute-0 ceph-osd[207741]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  3 09:33:03 compute-0 ceph-osd[207741]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  3 09:33:03 compute-0 ceph-osd[207741]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  3 09:33:03 compute-0 ceph-osd[207741]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  3 09:33:03 compute-0 ceph-osd[207741]: osd.2 0 load_pgs
Oct  3 09:33:03 compute-0 ceph-osd[207741]: osd.2 0 load_pgs opened 0 pgs
Oct  3 09:33:03 compute-0 ceph-osd[207741]: osd.2 0 log_to_monitors true
Oct  3 09:33:03 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2[207737]: 2025-10-03T09:33:03.309+0000 7fe4da602740 -1 osd.2 0 log_to_monitors true
Oct  3 09:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct  3 09:33:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  3 09:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct  3 09:33:03 compute-0 ceph-mon[191783]: OSD bench result of 6071.667068 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
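
The warning above is the mclock capacity probe rejecting an implausible self-benchmark (6071 IOPS on a device classed as hdd) and keeping the 315 IOPS default. Following the log's own recommendation, a minimal sketch, assuming the ceph CLI is available and that a fio run produced a trustworthy figure (the 450 below is a hypothetical placeholder, not taken from this log):

import subprocess

# Persist a measured IOPS capacity for osd.1 in the mon config database so the
# mclock scheduler stops falling back to its default. The option name matches
# the osd_mclock_max_capacity_iops_[hdd|ssd] hint in the warning above.
MEASURED_IOPS = "450"  # hypothetical fio result

subprocess.run(
    ["ceph", "config", "set", "osd.1",
     "osd_mclock_max_capacity_iops_hdd", MEASURED_IOPS],
    check=True,
)

The same warning recurs below for osd.2 (5610 IOPS measured), so on this host every OSD would need its own override.
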
Oct  3 09:33:03 compute-0 ceph-mon[191783]: from='osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  3 09:33:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  3 09:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Oct  3 09:33:03 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106] boot
Oct  3 09:33:03 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Oct  3 09:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  3 09:33:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  3 09:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  3 09:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  3 09:33:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  3 09:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:03 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:33:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  3 09:33:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [1], acting [] -> [1], acting_primary ? -> 1, up_primary ? -> 1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=-1 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 12 state: booting -> active
Oct  3 09:33:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:04 compute-0 nice_almeida[208151]: {
Oct  3 09:33:04 compute-0 nice_almeida[208151]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "osd_id": 1,
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "type": "bluestore"
Oct  3 09:33:04 compute-0 nice_almeida[208151]:    },
Oct  3 09:33:04 compute-0 nice_almeida[208151]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "osd_id": 2,
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "type": "bluestore"
Oct  3 09:33:04 compute-0 nice_almeida[208151]:    },
Oct  3 09:33:04 compute-0 nice_almeida[208151]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "osd_id": 0,
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:33:04 compute-0 nice_almeida[208151]:        "type": "bluestore"
Oct  3 09:33:04 compute-0 nice_almeida[208151]:    }
Oct  3 09:33:04 compute-0 nice_almeida[208151]: }
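
The JSON block emitted by the nice_almeida container above, keyed by OSD uuid with ceph_fsid/device/osd_id/type fields, matches the shape of ceph-volume raw list output, which cephadm gathers by running ceph-volume in a short-lived container (hence the container teardown just below). A minimal parsing sketch, assuming the block has been saved to raw_list.json (hypothetical file name):

import json

# Hypothetical capture of the JSON block logged above.
with open("raw_list.json") as f:
    osds = json.load(f)

# Map each OSD id to its backing LV, e.g. osd.1 -> /dev/mapper/ceph_vg1-ceph_lv1.
for meta in sorted(osds.values(), key=lambda m: m["osd_id"]):
    print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")
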
Oct  3 09:33:04 compute-0 systemd[1]: libpod-13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299.scope: Deactivated successfully.
Oct  3 09:33:04 compute-0 systemd[1]: libpod-13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299.scope: Consumed 1.051s CPU time.
Oct  3 09:33:04 compute-0 podman[207952]: 2025-10-03 09:33:04.211853434 +0000 UTC m=+1.267806938 container died 13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-734ba13454159fb5e7ede61c2b3cbc77a890e1de65fe334fb9c5a258752dec21-merged.mount: Deactivated successfully.
Oct  3 09:33:04 compute-0 podman[207952]: 2025-10-03 09:33:04.277803237 +0000 UTC m=+1.333756731 container remove 13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:04 compute-0 systemd[1]: libpod-conmon-13209c05f5231190d4cf02782b03235a550287b66de291a26ef897a7df694299.scope: Deactivated successfully.
Oct  3 09:33:04 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  3 09:33:04 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  3 09:33:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct  3 09:33:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  3 09:33:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct  3 09:33:04 compute-0 ceph-osd[207741]: osd.2 0 done with init, starting boot process
Oct  3 09:33:04 compute-0 ceph-osd[207741]: osd.2 0 start_boot
Oct  3 09:33:04 compute-0 ceph-osd[207741]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  3 09:33:04 compute-0 ceph-osd[207741]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  3 09:33:04 compute-0 ceph-osd[207741]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  3 09:33:04 compute-0 ceph-osd[207741]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  3 09:33:04 compute-0 ceph-osd[207741]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct  3 09:33:04 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct  3 09:33:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:04 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:33:04 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:04 compute-0 ceph-mon[191783]: from='osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  3 09:33:04 compute-0 ceph-mon[191783]: osd.1 [v2:192.168.122.100:6806/998995106,v1:192.168.122.100:6807/998995106] boot
Oct  3 09:33:04 compute-0 ceph-mon[191783]: from='osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  3 09:33:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:04 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4207822563; not ready for session (expect reconnect)
Oct  3 09:33:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:04 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:33:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] creating main.db for devicehealth
Oct  3 09:33:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 09:33:05 compute-0 ceph-mgr[192071]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct  3 09:33:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  3 09:33:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  3 09:33:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  3 09:33:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  3 09:33:05 compute-0 podman[208639]: 2025-10-03 09:33:05.525381642 +0000 UTC m=+0.129594697 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:05 compute-0 podman[208639]: 2025-10-03 09:33:05.619521506 +0000 UTC m=+0.223734571 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct  3 09:33:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  3 09:33:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct  3 09:33:05 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct  3 09:33:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:05 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:33:05 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4207822563; not ready for session (expect reconnect)
Oct  3 09:33:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:05 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:33:05 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.vtkhde(active, since 80s)
Oct  3 09:33:05 compute-0 ceph-mon[191783]: from='osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  3 09:33:05 compute-0 ceph-mon[191783]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  3 09:33:05 compute-0 ceph-mon[191783]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  3 09:33:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:06 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4207822563; not ready for session (expect reconnect)
Oct  3 09:33:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:06 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:33:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  3 09:33:07 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4207822563; not ready for session (expect reconnect)
Oct  3 09:33:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:07 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:33:08 compute-0 podman[209022]: 2025-10-03 09:33:08.070114231 +0000 UTC m=+0.082262957 container create 2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_taussig, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct  3 09:33:08 compute-0 podman[209022]: 2025-10-03 09:33:08.021591745 +0000 UTC m=+0.033740481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:08 compute-0 systemd[1]: Started libpod-conmon-2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671.scope.
Oct  3 09:33:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:08 compute-0 podman[209022]: 2025-10-03 09:33:08.167834714 +0000 UTC m=+0.179983460 container init 2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_taussig, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:33:08 compute-0 podman[209022]: 2025-10-03 09:33:08.179353322 +0000 UTC m=+0.191502048 container start 2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_taussig, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:08 compute-0 podman[209022]: 2025-10-03 09:33:08.187819548 +0000 UTC m=+0.199968304 container attach 2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:33:08 compute-0 pensive_taussig[209038]: 167 167
Oct  3 09:33:08 compute-0 systemd[1]: libpod-2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671.scope: Deactivated successfully.
Oct  3 09:33:08 compute-0 podman[209022]: 2025-10-03 09:33:08.190768767 +0000 UTC m=+0.202917493 container died 2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_taussig, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dce798d21ec13c69737f7fec458525399f0596598af71c84b91790b158bc364e-merged.mount: Deactivated successfully.
Oct  3 09:33:08 compute-0 podman[209022]: 2025-10-03 09:33:08.277798666 +0000 UTC m=+0.289947392 container remove 2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 09:33:08 compute-0 systemd[1]: libpod-conmon-2e88d5c94fafa226dfe9b0503349452e98bc68a404bdbe3271ead6aeaf535671.scope: Deactivated successfully.
Oct  3 09:33:08 compute-0 podman[209059]: 2025-10-03 09:33:08.484148701 +0000 UTC m=+0.062684584 container create 1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 21.917 iops: 5610.688 elapsed_sec: 0.535
Oct  3 09:33:08 compute-0 ceph-osd[207741]: log_channel(cluster) log [WRN] : OSD bench result of 5610.688089 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 0 waiting for initial osdmap
Oct  3 09:33:08 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2[207737]: 2025-10-03T09:33:08.532+0000 7fe4d6d99640 -1 osd.2 0 waiting for initial osdmap
Oct  3 09:33:08 compute-0 systemd[1]: Started libpod-conmon-1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292.scope.
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Oct  3 09:33:08 compute-0 podman[209059]: 2025-10-03 09:33:08.45993216 +0000 UTC m=+0.038468033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:08 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-osd-2[207737]: 2025-10-03T09:33:08.560+0000 7fe4d1baa640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 14 set_numa_affinity not setting numa affinity
Oct  3 09:33:08 compute-0 ceph-osd[207741]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct  3 09:33:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c588df5fb57335494d73bd049661904bd5b55d35e0712e95cf76bdb93bc8ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c588df5fb57335494d73bd049661904bd5b55d35e0712e95cf76bdb93bc8ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c588df5fb57335494d73bd049661904bd5b55d35e0712e95cf76bdb93bc8ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c588df5fb57335494d73bd049661904bd5b55d35e0712e95cf76bdb93bc8ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:08 compute-0 podman[209059]: 2025-10-03 09:33:08.603824017 +0000 UTC m=+0.182359850 container init 1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:33:08 compute-0 podman[209059]: 2025-10-03 09:33:08.626064989 +0000 UTC m=+0.204600812 container start 1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wright, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:33:08 compute-0 podman[209059]: 2025-10-03 09:33:08.630850824 +0000 UTC m=+0.209386647 container attach 1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:33:08 compute-0 ceph-mgr[192071]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4207822563; not ready for session (expect reconnect)
Oct  3 09:33:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:08 compute-0 ceph-mgr[192071]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  3 09:33:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct  3 09:33:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Oct  3 09:33:09 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563] boot
Oct  3 09:33:09 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Oct  3 09:33:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  3 09:33:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  3 09:33:09 compute-0 ceph-osd[207741]: osd.2 15 state: booting -> active
Oct  3 09:33:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct  3 09:33:10 compute-0 ceph-mon[191783]: OSD bench result of 5610.688089 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  3 09:33:10 compute-0 ceph-mon[191783]: osd.2 [v2:192.168.122.100:6810/4207822563,v1:192.168.122.100:6811/4207822563] boot
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:10 compute-0 priceless_wright[209075]: [
Oct  3 09:33:10 compute-0 priceless_wright[209075]:    {
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        "available": false,
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        "ceph_device": false,
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        "lsm_data": {},
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        "lvs": [],
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        "path": "/dev/sr0",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        "rejected_reasons": [
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "Has a FileSystem",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "Insufficient space (<5GB)"
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        ],
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        "sys_api": {
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "actuators": null,
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "device_nodes": "sr0",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "devname": "sr0",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "human_readable_size": "482.00 KB",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "id_bus": "ata",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "model": "QEMU DVD-ROM",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "nr_requests": "2",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "parent": "/dev/sr0",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "partitions": {},
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "path": "/dev/sr0",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "removable": "1",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "rev": "2.5+",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "ro": "0",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "rotational": "0",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "sas_address": "",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "sas_device_handle": "",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "scheduler_mode": "mq-deadline",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "sectors": 0,
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "sectorsize": "2048",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "size": 493568.0,
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "support_discard": "2048",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "type": "disk",
Oct  3 09:33:10 compute-0 priceless_wright[209075]:            "vendor": "QEMU"
Oct  3 09:33:10 compute-0 priceless_wright[209075]:        }
Oct  3 09:33:10 compute-0 priceless_wright[209075]:    }
Oct  3 09:33:10 compute-0 priceless_wright[209075]: ]
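[annotation] The JSON block above is a cephadm device scan (its shape matches ceph-volume inventory --format json), run in a short-lived container with a podman-assigned random name (priceless_wright); /dev/sr0 is rejected because it carries a filesystem and is under 5 GB. A small sketch that condenses such a report, assuming the JSON has been captured to inventory.json (the filename is illustrative):

    import json

    with open("inventory.json") as f:
        devices = json.load(f)

    # Split the scan into usable devices and rejects, with reasons.
    for dev in devices:
        if dev["available"]:
            print("usable:", dev["path"])
        else:
            print("rejected:", dev["path"], "->",
                  "; ".join(dev["rejected_reasons"]))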
Oct  3 09:33:10 compute-0 systemd[1]: libpod-1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292.scope: Deactivated successfully.
Oct  3 09:33:10 compute-0 systemd[1]: libpod-1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292.scope: Consumed 2.120s CPU time.
Oct  3 09:33:10 compute-0 podman[211325]: 2025-10-03 09:33:10.688295469 +0000 UTC m=+0.029450871 container died 1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0c588df5fb57335494d73bd049661904bd5b55d35e0712e95cf76bdb93bc8ba-merged.mount: Deactivated successfully.
Oct  3 09:33:10 compute-0 podman[211325]: 2025-10-03 09:33:10.751583571 +0000 UTC m=+0.092738973 container remove 1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:10 compute-0 systemd[1]: libpod-conmon-1b51d146ec1bff2765b26902b841c9448bb92033d524f2625298467021c35292.scope: Deactivated successfully.
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  3 09:33:10 compute-0 ceph-mgr[192071]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43639k
Oct  3 09:33:10 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43639k
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct  3 09:33:10 compute-0 ceph-mgr[192071]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
Oct  3 09:33:10 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
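[annotation] The failure above is pure arithmetic: the autotuner proposes 44686677 bytes for this memory-starved VM (logged in abbreviated form as 43639k, since 44686677 // 1024 = 43639 KiB), but osd_memory_target enforces a floor of 939524096 bytes (exactly 896 MiB), so the set is refused and the OSDs keep their previous value. Checking the numbers:

    proposed = 44686677
    print(proposed // 1024)        # 43639 -> the "43639k" in the log
    minimum = 939524096
    print(minimum / 2**20)         # 896.0 -> MiB floor on osd_memory_target
    print(proposed < minimum)      # True -> the "below minimum" warning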
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 94487931-6f44-4ab9-9a6b-604dd54a5af2 does not exist
Oct  3 09:33:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e9e0114c-ec8d-40a5-8d8a-0fe545f234b4 does not exist
Oct  3 09:33:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d60bb1a8-9bc5-47c2-8150-9fc3800cde2a does not exist
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:33:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:33:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:33:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  3 09:33:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  3 09:33:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  3 09:33:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:33:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:33:11 compute-0 podman[211475]: 2025-10-03 09:33:11.604375179 +0000 UTC m=+0.049710344 container create 0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:33:11 compute-0 systemd[1]: Started libpod-conmon-0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c.scope.
Oct  3 09:33:11 compute-0 podman[211475]: 2025-10-03 09:33:11.584854919 +0000 UTC m=+0.030190094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:11 compute-0 podman[211475]: 2025-10-03 09:33:11.71827458 +0000 UTC m=+0.163609785 container init 0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:11 compute-0 podman[211475]: 2025-10-03 09:33:11.731493369 +0000 UTC m=+0.176828544 container start 0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:11 compute-0 podman[211475]: 2025-10-03 09:33:11.737212572 +0000 UTC m=+0.182547817 container attach 0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:11 compute-0 great_joliot[211491]: 167 167
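[annotation] Several throwaway containers in this log (great_joliot here, thirsty_sinoussi and gracious_robinson below) emit only "167 167": the uid and gid of the ceph user inside the image, which cephadm uses to chown daemon directories on the host. The exact probe command is not shown in this log; one plausible equivalent (the image tag and stat target are assumptions):

    import subprocess

    # Ask the image for the ceph user's uid/gid so host-side directories
    # can be owned to match. 167:167 is the fixed ceph uid/gid in
    # Red Hat-family packaging.
    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:reef",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True)
    uid, gid = map(int, out.stdout.split())
    print(uid, gid)   # expected: 167 167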
Oct  3 09:33:11 compute-0 systemd[1]: libpod-0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c.scope: Deactivated successfully.
Oct  3 09:33:11 compute-0 conmon[211491]: conmon 0cc3bc0ec7ddb2153056 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c.scope/container/memory.events
Oct  3 09:33:11 compute-0 podman[211475]: 2025-10-03 09:33:11.740566184 +0000 UTC m=+0.185901349 container died 0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  3 09:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-598c2a90a6ca7e0e3459b03b6d7140b054f39c4dad0bbe0f31b6451c3236ae6f-merged.mount: Deactivated successfully.
Oct  3 09:33:11 compute-0 podman[211475]: 2025-10-03 09:33:11.787337127 +0000 UTC m=+0.232672282 container remove 0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_joliot, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 09:33:11 compute-0 systemd[1]: libpod-conmon-0cc3bc0ec7ddb2153056250440ae2f12cfe30b1fa0f16d5dddee350c335c7d1c.scope: Deactivated successfully.
Oct  3 09:33:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct  3 09:33:11 compute-0 podman[211515]: 2025-10-03 09:33:11.976132821 +0000 UTC m=+0.050995522 container create 85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cohen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:12 compute-0 systemd[1]: Started libpod-conmon-85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639.scope.
Oct  3 09:33:12 compute-0 podman[211515]: 2025-10-03 09:33:11.955756476 +0000 UTC m=+0.030619207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335549db584882e89090fe0b7db749abaac942223d5808fbb7648e5ffb5cb79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335549db584882e89090fe0b7db749abaac942223d5808fbb7648e5ffb5cb79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335549db584882e89090fe0b7db749abaac942223d5808fbb7648e5ffb5cb79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335549db584882e89090fe0b7db749abaac942223d5808fbb7648e5ffb5cb79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335549db584882e89090fe0b7db749abaac942223d5808fbb7648e5ffb5cb79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
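[annotation] The repeated xfs remount notices are informational: 0x7fffffff is the largest 32-bit signed time_t, so these bind mounts can only represent timestamps up to 2038-01-19, most likely because the filesystem was created without the xfs bigtime feature (an inference; the log does not say why). Verifying the cutoff:

    from datetime import datetime, timezone

    limit = 0x7fffffff                          # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00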
Oct  3 09:33:12 compute-0 podman[211515]: 2025-10-03 09:33:12.086951309 +0000 UTC m=+0.161814030 container init 85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:12 compute-0 podman[211515]: 2025-10-03 09:33:12.096608942 +0000 UTC m=+0.171471643 container start 85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cohen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:12 compute-0 podman[211515]: 2025-10-03 09:33:12.101102537 +0000 UTC m=+0.175965268 container attach 85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cohen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 09:33:12 compute-0 ceph-mon[191783]: Adjusting osd_memory_target on compute-0 to 43639k
Oct  3 09:33:12 compute-0 ceph-mon[191783]: Unable to set osd_memory_target on compute-0 to 44686677: error parsing value: Value '44686677' is below minimum 939524096
Oct  3 09:33:13 compute-0 elated_cohen[211531]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:33:13 compute-0 elated_cohen[211531]: --> relative data size: 1.0
Oct  3 09:33:13 compute-0 elated_cohen[211531]: --> All data devices are unavailable
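[annotation] These three "-->" lines are ceph-volume lvm batch output: the drive group resolved to three LVM logical volumes rather than raw disks ("0 physical, 3 LVM"), and all three are unavailable because they already carry the osd.0-2 that just booted, so nothing new is created. A sketch of the dry-run form of that call (the exact argument list cephadm used is not in this log; the LV paths are the ones listed further below):

    import subprocess

    # --report: show what 'lvm batch' would do with these LVs without
    # touching them.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True)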
Oct  3 09:33:13 compute-0 systemd[1]: libpod-85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639.scope: Deactivated successfully.
Oct  3 09:33:13 compute-0 systemd[1]: libpod-85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639.scope: Consumed 1.049s CPU time.
Oct  3 09:33:13 compute-0 podman[211515]: 2025-10-03 09:33:13.200202727 +0000 UTC m=+1.275065438 container died 85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cohen, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4335549db584882e89090fe0b7db749abaac942223d5808fbb7648e5ffb5cb79-merged.mount: Deactivated successfully.
Oct  3 09:33:13 compute-0 podman[211515]: 2025-10-03 09:33:13.32010309 +0000 UTC m=+1.394965831 container remove 85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_cohen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:13 compute-0 systemd[1]: libpod-conmon-85d6881613128ac4820a0f4bb64bbd9d09ec5b43df400f43e16e6093f32a6639.scope: Deactivated successfully.
Oct  3 09:33:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct  3 09:33:14 compute-0 podman[211711]: 2025-10-03 09:33:14.232397414 +0000 UTC m=+0.059111397 container create 1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:14 compute-0 systemd[1]: Started libpod-conmon-1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b.scope.
Oct  3 09:33:14 compute-0 podman[211711]: 2025-10-03 09:33:14.207020518 +0000 UTC m=+0.033734531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:14 compute-0 podman[211711]: 2025-10-03 09:33:14.340662975 +0000 UTC m=+0.167376978 container init 1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:14 compute-0 podman[211711]: 2025-10-03 09:33:14.359140244 +0000 UTC m=+0.185854257 container start 1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:14 compute-0 thirsty_sinoussi[211727]: 167 167
Oct  3 09:33:14 compute-0 systemd[1]: libpod-1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b.scope: Deactivated successfully.
Oct  3 09:33:14 compute-0 podman[211711]: 2025-10-03 09:33:14.366139945 +0000 UTC m=+0.192853928 container attach 1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 09:33:14 compute-0 podman[211711]: 2025-10-03 09:33:14.366471905 +0000 UTC m=+0.193185888 container died 1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f699654498b176ab91974f168b987f16b872b25fe45a5cb687bfe2ea903a78c2-merged.mount: Deactivated successfully.
Oct  3 09:33:14 compute-0 podman[211711]: 2025-10-03 09:33:14.419057194 +0000 UTC m=+0.245771167 container remove 1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_sinoussi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:14 compute-0 systemd[1]: libpod-conmon-1d98498a3cd05106ab5b46541b27680281395287788982af5f19b072974b012b.scope: Deactivated successfully.
Oct  3 09:33:14 compute-0 podman[211750]: 2025-10-03 09:33:14.617697186 +0000 UTC m=+0.054745915 container create 00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:14 compute-0 systemd[1]: Started libpod-conmon-00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363.scope.
Oct  3 09:33:14 compute-0 podman[211750]: 2025-10-03 09:33:14.600651041 +0000 UTC m=+0.037699800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f566d895bf27e3d1074714ce871ef185b1ede768a7922d4d452d9a3f8658caec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f566d895bf27e3d1074714ce871ef185b1ede768a7922d4d452d9a3f8658caec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f566d895bf27e3d1074714ce871ef185b1ede768a7922d4d452d9a3f8658caec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f566d895bf27e3d1074714ce871ef185b1ede768a7922d4d452d9a3f8658caec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:14 compute-0 podman[211750]: 2025-10-03 09:33:14.730732462 +0000 UTC m=+0.167781191 container init 00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:14 compute-0 podman[211750]: 2025-10-03 09:33:14.749792717 +0000 UTC m=+0.186841446 container start 00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:14 compute-0 podman[211750]: 2025-10-03 09:33:14.753707666 +0000 UTC m=+0.190756395 container attach 00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:33:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]: {
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:    "0": [
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:        {
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "devices": [
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "/dev/loop3"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            ],
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_name": "ceph_lv0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_size": "21470642176",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "name": "ceph_lv0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "tags": {
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cluster_name": "ceph",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.crush_device_class": "",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.encrypted": "0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osd_id": "0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.type": "block",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.vdo": "0"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            },
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "type": "block",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "vg_name": "ceph_vg0"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:        }
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:    ],
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:    "1": [
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:        {
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "devices": [
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "/dev/loop4"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            ],
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_name": "ceph_lv1",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_size": "21470642176",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "name": "ceph_lv1",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "tags": {
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cluster_name": "ceph",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.crush_device_class": "",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.encrypted": "0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osd_id": "1",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.type": "block",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.vdo": "0"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            },
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "type": "block",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "vg_name": "ceph_vg1"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:        }
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:    ],
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:    "2": [
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:        {
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "devices": [
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "/dev/loop5"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            ],
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_name": "ceph_lv2",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_size": "21470642176",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "name": "ceph_lv2",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "tags": {
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.cluster_name": "ceph",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.crush_device_class": "",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.encrypted": "0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osd_id": "2",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.type": "block",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:                "ceph.vdo": "0"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            },
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "type": "block",
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:            "vg_name": "ceph_vg2"
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:        }
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]:    ]
Oct  3 09:33:15 compute-0 admiring_nightingale[211766]: }
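[annotation] The JSON block above is ceph-volume lvm list --format json output: a map from OSD id ("0", "1", "2") to its logical volumes, with the ceph.* LV tags carrying the cluster fsid, osd fsid, and drive-group affinity. A sketch that condenses such a dump into one line per OSD, assuming it has been saved as lvm_list.json (the filename is illustrative):

    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    # One summary line per OSD: LV path, backing device(s), osd fsid.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']})")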
Oct  3 09:33:15 compute-0 systemd[1]: libpod-00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363.scope: Deactivated successfully.
Oct  3 09:33:15 compute-0 podman[211750]: 2025-10-03 09:33:15.578126135 +0000 UTC m=+1.015174894 container died 00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f566d895bf27e3d1074714ce871ef185b1ede768a7922d4d452d9a3f8658caec-merged.mount: Deactivated successfully.
Oct  3 09:33:15 compute-0 podman[211750]: 2025-10-03 09:33:15.655124252 +0000 UTC m=+1.092172981 container remove 00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:15 compute-0 systemd[1]: libpod-conmon-00b90e458a291cc23b0cf1ab3398b078403ecd335fc39a94a04434ea8b14d363.scope: Deactivated successfully.
Oct  3 09:33:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:33:16 compute-0 podman[211923]: 2025-10-03 09:33:16.571017216 +0000 UTC m=+0.066127270 container create 948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 09:33:16 compute-0 podman[211923]: 2025-10-03 09:33:16.539164253 +0000 UTC m=+0.034274317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:16 compute-0 systemd[1]: Started libpod-conmon-948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd.scope.
Oct  3 09:33:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:16 compute-0 podman[211923]: 2025-10-03 09:33:16.707493169 +0000 UTC m=+0.202603283 container init 948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:16 compute-0 podman[211923]: 2025-10-03 09:33:16.719940025 +0000 UTC m=+0.215050059 container start 948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 09:33:16 compute-0 podman[211923]: 2025-10-03 09:33:16.725129823 +0000 UTC m=+0.220239847 container attach 948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:16 compute-0 gracious_robinson[211939]: 167 167
Oct  3 09:33:16 compute-0 systemd[1]: libpod-948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd.scope: Deactivated successfully.
Oct  3 09:33:16 compute-0 podman[211923]: 2025-10-03 09:33:16.730818514 +0000 UTC m=+0.225928558 container died 948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 09:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-720554eee8b04bc617fa1f98b2ae9e7eef4507b71cd94fe71a1289f7a125f0fc-merged.mount: Deactivated successfully.
Oct  3 09:33:16 compute-0 podman[211923]: 2025-10-03 09:33:16.790894689 +0000 UTC m=+0.286004723 container remove 948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:33:16 compute-0 systemd[1]: libpod-conmon-948b84e2d1c6853984aa2747387bf9d0fc9e5a023cd7611864056b115acda7bd.scope: Deactivated successfully.
Oct  3 09:33:17 compute-0 podman[211961]: 2025-10-03 09:33:17.019910759 +0000 UTC m=+0.065248582 container create d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 09:33:17 compute-0 podman[211961]: 2025-10-03 09:33:16.990315815 +0000 UTC m=+0.035653668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:17 compute-0 systemd[1]: Started libpod-conmon-d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351.scope.
Oct  3 09:33:17 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469869f2bfe3155e8b0afe766a66bf38745328f589f593e2b86cd4e6f25a5cd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469869f2bfe3155e8b0afe766a66bf38745328f589f593e2b86cd4e6f25a5cd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469869f2bfe3155e8b0afe766a66bf38745328f589f593e2b86cd4e6f25a5cd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/469869f2bfe3155e8b0afe766a66bf38745328f589f593e2b86cd4e6f25a5cd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:17 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 09:33:17 compute-0 podman[211961]: 2025-10-03 09:33:17.153462015 +0000 UTC m=+0.198799828 container init d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:17 compute-0 podman[211961]: 2025-10-03 09:33:17.168830729 +0000 UTC m=+0.214168542 container start d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:33:17 compute-0 podman[211961]: 2025-10-03 09:33:17.173536871 +0000 UTC m=+0.218874684 container attach d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_tharp, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:17 compute-0 podman[211980]: 2025-10-03 09:33:17.19930235 +0000 UTC m=+0.095505048 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Oct  3 09:33:17 compute-0 podman[211976]: 2025-10-03 09:33:17.209086446 +0000 UTC m=+0.116997737 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 09:33:17 compute-0 podman[211975]: 2025-10-03 09:33:17.237090342 +0000 UTC m=+0.142201039 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=kepler, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Oct  3 09:33:17 compute-0 podman[211977]: 2025-10-03 09:33:17.248629851 +0000 UTC m=+0.138056613 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 09:33:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:18 compute-0 jovial_tharp[211991]: {
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "osd_id": 1,
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "type": "bluestore"
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:    },
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "osd_id": 2,
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "type": "bluestore"
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:    },
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "osd_id": 0,
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:        "type": "bluestore"
Oct  3 09:33:18 compute-0 jovial_tharp[211991]:    }
Oct  3 09:33:18 compute-0 jovial_tharp[211991]: }
Oct  3 09:33:18 compute-0 systemd[1]: libpod-d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351.scope: Deactivated successfully.
Oct  3 09:33:18 compute-0 systemd[1]: libpod-d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351.scope: Consumed 1.097s CPU time.
Oct  3 09:33:18 compute-0 podman[212087]: 2025-10-03 09:33:18.36133512 +0000 UTC m=+0.057291522 container died d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-469869f2bfe3155e8b0afe766a66bf38745328f589f593e2b86cd4e6f25a5cd4-merged.mount: Deactivated successfully.
Oct  3 09:33:19 compute-0 podman[212087]: 2025-10-03 09:33:19.26191054 +0000 UTC m=+0.957866862 container remove d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_tharp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 09:33:19 compute-0 systemd[1]: libpod-conmon-d022d05dde8a666f265e7eb12a15af6026e97bd9a2ba38f1a16d641b55381351.scope: Deactivated successfully.
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:19 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct  3 09:33:19 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  3 09:33:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:33:19 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:33:19 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  3 09:33:19 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  3 09:33:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:20 compute-0 podman[212268]: 2025-10-03 09:33:20.354452511 +0000 UTC m=+0.062943472 container create b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:20 compute-0 systemd[1]: Started libpod-conmon-b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6.scope.
Oct  3 09:33:20 compute-0 podman[212268]: 2025-10-03 09:33:20.330494308 +0000 UTC m=+0.038985299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:20 compute-0 podman[212268]: 2025-10-03 09:33:20.542201484 +0000 UTC m=+0.250692545 container init b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:33:20 compute-0 podman[212268]: 2025-10-03 09:33:20.564434075 +0000 UTC m=+0.272925076 container start b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:20 compute-0 confident_hamilton[212284]: 167 167
Oct  3 09:33:20 compute-0 systemd[1]: libpod-b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6.scope: Deactivated successfully.
Oct  3 09:33:20 compute-0 podman[212268]: 2025-10-03 09:33:20.661194559 +0000 UTC m=+0.369685520 container attach b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:33:20 compute-0 podman[212268]: 2025-10-03 09:33:20.662506528 +0000 UTC m=+0.370997509 container died b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hamilton, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:20 compute-0 ceph-mon[191783]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct  3 09:33:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  3 09:33:20 compute-0 ceph-mon[191783]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  3 09:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1fd4d268c65ffdb607cebb4fa07a57b3b805eeb5f30db9a830bdd90da708053-merged.mount: Deactivated successfully.
Oct  3 09:33:21 compute-0 podman[212268]: 2025-10-03 09:33:21.037676094 +0000 UTC m=+0.746167085 container remove b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 09:33:21 compute-0 systemd[1]: libpod-conmon-b9bb0cc5f3ab1ac8fc2b607afe7409eb2602bee5fc420507c162b30c320820d6.scope: Deactivated successfully.
Oct  3 09:33:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:21 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.vtkhde (unknown last config time)...
Oct  3 09:33:21 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.vtkhde (unknown last config time)...
Oct  3 09:33:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.vtkhde", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  3 09:33:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vtkhde", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  3 09:33:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  3 09:33:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  3 09:33:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:33:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:33:21 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.vtkhde on compute-0
Oct  3 09:33:21 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.vtkhde on compute-0
Oct  3 09:33:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.vtkhde", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  3 09:33:21 compute-0 podman[212419]: 2025-10-03 09:33:21.836776529 +0000 UTC m=+0.083314949 container create fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:33:21 compute-0 podman[212419]: 2025-10-03 09:33:21.791531072 +0000 UTC m=+0.038069502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:21 compute-0 systemd[1]: Started libpod-conmon-fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453.scope.
Oct  3 09:33:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:21 compute-0 podman[212419]: 2025-10-03 09:33:21.96290389 +0000 UTC m=+0.209442310 container init fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 09:33:21 compute-0 podman[212419]: 2025-10-03 09:33:21.971841279 +0000 UTC m=+0.218379699 container start fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:21 compute-0 podman[212419]: 2025-10-03 09:33:21.976639995 +0000 UTC m=+0.223178435 container attach fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:21 compute-0 elated_gould[212435]: 167 167
Oct  3 09:33:21 compute-0 systemd[1]: libpod-fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453.scope: Deactivated successfully.
Oct  3 09:33:21 compute-0 conmon[212435]: conmon fa06a020db4a7c0caca4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453.scope/container/memory.events
Oct  3 09:33:21 compute-0 podman[212419]: 2025-10-03 09:33:21.979036987 +0000 UTC m=+0.225575407 container died fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 09:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fb87b4ff5b42dc828481a2be2faf438f0499057abe888d09462e9f217593212-merged.mount: Deactivated successfully.
Oct  3 09:33:22 compute-0 podman[212419]: 2025-10-03 09:33:22.042948708 +0000 UTC m=+0.289487128 container remove fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:22 compute-0 systemd[1]: libpod-conmon-fa06a020db4a7c0caca4650218b8330ad43fe92c5e537169cc4f3516923f4453.scope: Deactivated successfully.
Oct  3 09:33:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:22 compute-0 ceph-mon[191783]: Reconfiguring mgr.compute-0.vtkhde (unknown last config time)...
Oct  3 09:33:22 compute-0 ceph-mon[191783]: Reconfiguring daemon mgr.compute-0.vtkhde on compute-0
Oct  3 09:33:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:23 compute-0 podman[212622]: 2025-10-03 09:33:23.062425792 +0000 UTC m=+0.081726360 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:23 compute-0 podman[212622]: 2025-10-03 09:33:23.161080173 +0000 UTC m=+0.180380741 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:33:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:33:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:33:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8c073d03-db8b-4691-a31c-6b749220b154 does not exist
Oct  3 09:33:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d0d2b711-7e58-421c-8dfb-b08baf86711f does not exist
Oct  3 09:33:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 65ebeb6e-01e1-44f5-a3c6-4d1a4cdb072f does not exist
Oct  3 09:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:33:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:33:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:33:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:33:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:33:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:33:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:24 compute-0 podman[212769]: 2025-10-03 09:33:24.011831448 +0000 UTC m=+0.079499744 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Oct  3 09:33:24 compute-0 podman[212768]: 2025-10-03 09:33:24.049498385 +0000 UTC m=+0.116850771 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  3 09:33:24 compute-0 python3[212932]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:24 compute-0 podman[212952]: 2025-10-03 09:33:24.5202853 +0000 UTC m=+0.030118580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:24 compute-0 podman[212952]: 2025-10-03 09:33:24.663219589 +0000 UTC m=+0.173052849 container create 82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goodall, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:33:24 compute-0 systemd[1]: Started libpod-conmon-82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2.scope.
Oct  3 09:33:24 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:24 compute-0 podman[212952]: 2025-10-03 09:33:24.765301764 +0000 UTC m=+0.275135074 container init 82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:24 compute-0 podman[212952]: 2025-10-03 09:33:24.775622835 +0000 UTC m=+0.285456095 container start 82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goodall, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:24 compute-0 unruffled_goodall[212981]: 167 167
Oct  3 09:33:24 compute-0 systemd[1]: libpod-82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2.scope: Deactivated successfully.
Oct  3 09:33:24 compute-0 podman[212965]: 2025-10-03 09:33:24.78438756 +0000 UTC m=+0.260686017 container create bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694 (image=quay.io/ceph/ceph:v18, name=flamboyant_ishizaka, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:24 compute-0 podman[212965]: 2025-10-03 09:33:24.741670159 +0000 UTC m=+0.217968606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:24 compute-0 podman[212952]: 2025-10-03 09:33:24.879737921 +0000 UTC m=+0.389571191 container attach 82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 09:33:24 compute-0 podman[212952]: 2025-10-03 09:33:24.880165945 +0000 UTC m=+0.389999205 container died 82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:33:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe549c27a99e048f071920bdc974f44896c3567c68352540c261af283e8f054d-merged.mount: Deactivated successfully.
Oct  3 09:33:25 compute-0 podman[212952]: 2025-10-03 09:33:25.047562062 +0000 UTC m=+0.557395312 container remove 82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:33:25 compute-0 systemd[1]: libpod-conmon-82cad7587e44c36622e2ff9bf827f116a543d3d6ae9897ccbe9c256edbaaf8c2.scope: Deactivated successfully.
Oct  3 09:33:25 compute-0 systemd[1]: Started libpod-conmon-bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694.scope.
Oct  3 09:33:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783fac40122d845b9b9797e22739094007cc1efbc8f9946e635c05c7fb9523fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783fac40122d845b9b9797e22739094007cc1efbc8f9946e635c05c7fb9523fa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783fac40122d845b9b9797e22739094007cc1efbc8f9946e635c05c7fb9523fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:25 compute-0 podman[212965]: 2025-10-03 09:33:25.230131389 +0000 UTC m=+0.706429826 container init bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694 (image=quay.io/ceph/ceph:v18, name=flamboyant_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:33:25 compute-0 podman[212965]: 2025-10-03 09:33:25.243772081 +0000 UTC m=+0.720070548 container start bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694 (image=quay.io/ceph/ceph:v18, name=flamboyant_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:33:25 compute-0 podman[212965]: 2025-10-03 09:33:25.249985538 +0000 UTC m=+0.726283985 container attach bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694 (image=quay.io/ceph/ceph:v18, name=flamboyant_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:25 compute-0 podman[213012]: 2025-10-03 09:33:25.277635184 +0000 UTC m=+0.073809871 container create 2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 09:33:25 compute-0 podman[213012]: 2025-10-03 09:33:25.249190494 +0000 UTC m=+0.045365191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:25 compute-0 systemd[1]: Started libpod-conmon-2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83.scope.
Oct  3 09:33:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683b38df328662c149722a7383b3d178e76acf530d62b2fce4861ed1682bb306/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683b38df328662c149722a7383b3d178e76acf530d62b2fce4861ed1682bb306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683b38df328662c149722a7383b3d178e76acf530d62b2fce4861ed1682bb306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683b38df328662c149722a7383b3d178e76acf530d62b2fce4861ed1682bb306/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683b38df328662c149722a7383b3d178e76acf530d62b2fce4861ed1682bb306/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:25 compute-0 podman[213012]: 2025-10-03 09:33:25.437678559 +0000 UTC m=+0.233853226 container init 2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:25 compute-0 podman[213012]: 2025-10-03 09:33:25.454822227 +0000 UTC m=+0.250996874 container start 2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:25 compute-0 podman[213012]: 2025-10-03 09:33:25.460133457 +0000 UTC m=+0.256308114 container attach 2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:25 compute-0 podman[213052]: 2025-10-03 09:33:25.814353401 +0000 UTC m=+0.066491490 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
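The health_status=healthy event above is podman executing the container's configured test ('/openstack/healthcheck node_exporter' in the config_data). A minimal sketch of checking the same state by hand on compute-0, assuming only the podman CLI and the container name shown in the log:

    # run the configured healthcheck once; exit status 0 means healthy
    podman healthcheck run node_exporter
    # read the last recorded health state (the field name can differ across podman versions)
    podman inspect --format '{{.State.Health.Status}}' node_exporter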
Oct  3 09:33:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  3 09:33:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521891290' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  3 09:33:25 compute-0 flamboyant_ishizaka[213004]: 
Oct  3 09:33:25 compute-0 flamboyant_ishizaka[213004]: {"fsid":"9b4e8c9a-5555-5510-a631-4742a1182561","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":150,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1759483989,"num_in_osds":3,"osd_in_since":1759483955,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502738944,"bytes_avail":63909187584,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-03T09:32:47.907561+0000","services":{}},"progress_events":{}}
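The JSON above is the response to the status mon_command dispatched two lines earlier. A sketch of issuing the same query from this host, reusing the containerized-client pattern the ansible tasks in this log use (the jq filter is illustrative and assumes jq is installed):

    podman run --rm --net=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      status --format json | jq '.health.status, .osdmap.num_up_osds'
    # expected against the payload above: "HEALTH_OK" and 3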
Oct  3 09:33:25 compute-0 systemd[1]: libpod-bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694.scope: Deactivated successfully.
Oct  3 09:33:25 compute-0 podman[212965]: 2025-10-03 09:33:25.947836694 +0000 UTC m=+1.424135161 container died bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694 (image=quay.io/ceph/ceph:v18, name=flamboyant_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-783fac40122d845b9b9797e22739094007cc1efbc8f9946e635c05c7fb9523fa-merged.mount: Deactivated successfully.
Oct  3 09:33:26 compute-0 podman[212965]: 2025-10-03 09:33:26.121921454 +0000 UTC m=+1.598219881 container remove bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694 (image=quay.io/ceph/ceph:v18, name=flamboyant_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:33:26 compute-0 systemd[1]: libpod-conmon-bcc06c048046cf9bc1eb4b76bb8de49db7ca60a322da1d49ca417b7e3c46c694.scope: Deactivated successfully.
Oct  3 09:33:26 compute-0 elastic_jemison[213028]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:33:26 compute-0 elastic_jemison[213028]: --> relative data size: 1.0
Oct  3 09:33:26 compute-0 elastic_jemison[213028]: --> All data devices are unavailable
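The three elastic_jemison lines read like a ceph-volume lvm batch report: three LVM data devices were passed in and all are reported unavailable, i.e. already consumed by the previously prepared OSDs. A sketch of the report form of that call, using the LV paths from the stoic_swartz listing further below; the exact flags cephadm passed are not visible in this log:

    ceph-volume lvm batch --report --format json \
      /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2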
Oct  3 09:33:26 compute-0 python3[213132]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
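Reflowing the _raw_params of the ansible task above into a readable shell form; every flag, path and ID is copied from that line, and the same pattern repeats below for the volumes pool. Because pg_num is omitted, the bare replicated_rule positional lands in the erasure_code_profile slot of the resulting mon_command, as the handle_command entry further down shows:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd pool create vms replicated_rule --autoscale-mode on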
Oct  3 09:33:26 compute-0 systemd[1]: libpod-2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83.scope: Deactivated successfully.
Oct  3 09:33:26 compute-0 podman[213012]: 2025-10-03 09:33:26.629597103 +0000 UTC m=+1.425771750 container died 2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:33:26 compute-0 systemd[1]: libpod-2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83.scope: Consumed 1.100s CPU time.
Oct  3 09:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-683b38df328662c149722a7383b3d178e76acf530d62b2fce4861ed1682bb306-merged.mount: Deactivated successfully.
Oct  3 09:33:26 compute-0 podman[213141]: 2025-10-03 09:33:26.688860333 +0000 UTC m=+0.080055529 container create b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46 (image=quay.io/ceph/ceph:v18, name=blissful_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:26 compute-0 podman[213012]: 2025-10-03 09:33:26.697920677 +0000 UTC m=+1.494095314 container remove 2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:26 compute-0 systemd[1]: libpod-conmon-2aec8c2722f31f7cd01cc7ba30a0a18dd3a8fa0db3bb93eaf4b85ce354a15a83.scope: Deactivated successfully.
Oct  3 09:33:26 compute-0 systemd[1]: Started libpod-conmon-b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46.scope.
Oct  3 09:33:26 compute-0 podman[213141]: 2025-10-03 09:33:26.652365812 +0000 UTC m=+0.043561028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd8e909fe827cd2c7ff42b4befc3069cf433ed8261e4e2730979f403af52e11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd8e909fe827cd2c7ff42b4befc3069cf433ed8261e4e2730979f403af52e11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:26 compute-0 podman[213141]: 2025-10-03 09:33:26.811383376 +0000 UTC m=+0.202578592 container init b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46 (image=quay.io/ceph/ceph:v18, name=blissful_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 09:33:26 compute-0 podman[213141]: 2025-10-03 09:33:26.821547263 +0000 UTC m=+0.212742459 container start b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46 (image=quay.io/ceph/ceph:v18, name=blissful_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 09:33:26 compute-0 podman[213141]: 2025-10-03 09:33:26.825795132 +0000 UTC m=+0.216990338 container attach b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46 (image=quay.io/ceph/ceph:v18, name=blissful_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  3 09:33:27 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1043797629' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:27 compute-0 podman[213331]: 2025-10-03 09:33:27.465863881 +0000 UTC m=+0.058153548 container create f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_snyder, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:33:27 compute-0 systemd[1]: Started libpod-conmon-f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff.scope.
Oct  3 09:33:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:27 compute-0 podman[213331]: 2025-10-03 09:33:27.442091482 +0000 UTC m=+0.034381199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:27 compute-0 podman[213331]: 2025-10-03 09:33:27.550066105 +0000 UTC m=+0.142355802 container init f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_snyder, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:27 compute-0 podman[213331]: 2025-10-03 09:33:27.557101167 +0000 UTC m=+0.149390844 container start f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 09:33:27 compute-0 podman[213331]: 2025-10-03 09:33:27.561180781 +0000 UTC m=+0.153470478 container attach f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:33:27 compute-0 pensive_snyder[213351]: 167 167
Oct  3 09:33:27 compute-0 systemd[1]: libpod-f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff.scope: Deactivated successfully.
Oct  3 09:33:27 compute-0 podman[213356]: 2025-10-03 09:33:27.635170826 +0000 UTC m=+0.047148325 container died f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_snyder, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  3 09:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-206c3ab5c37d47023412bfd483da36a4aa7a8634990343ad4566e04dfd726bc1-merged.mount: Deactivated successfully.
Oct  3 09:33:27 compute-0 podman[213356]: 2025-10-03 09:33:27.685565129 +0000 UTC m=+0.097542598 container remove f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 09:33:27 compute-0 systemd[1]: libpod-conmon-f41062da6061201fb4050654a899d47f2a5a5b8aa2f6e67bd213bb833bde0dff.scope: Deactivated successfully.
Oct  3 09:33:27 compute-0 podman[213377]: 2025-10-03 09:33:27.886400408 +0000 UTC m=+0.046921110 container create 683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swartz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:27 compute-0 systemd[1]: Started libpod-conmon-683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a.scope.
Oct  3 09:33:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct  3 09:33:27 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1043797629' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:27 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1043797629' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct  3 09:33:27 compute-0 blissful_khorana[213168]: pool 'vms' created
Oct  3 09:33:27 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct  3 09:33:27 compute-0 podman[213377]: 2025-10-03 09:33:27.868716753 +0000 UTC m=+0.029237475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d58e885484b573b5afabb74a9ea252b27e27037f766a42054c0a57823767ae7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d58e885484b573b5afabb74a9ea252b27e27037f766a42054c0a57823767ae7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d58e885484b573b5afabb74a9ea252b27e27037f766a42054c0a57823767ae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d58e885484b573b5afabb74a9ea252b27e27037f766a42054c0a57823767ae7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:27 compute-0 podman[213377]: 2025-10-03 09:33:27.99704002 +0000 UTC m=+0.157560752 container init 683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:27 compute-0 systemd[1]: libpod-b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46.scope: Deactivated successfully.
Oct  3 09:33:28 compute-0 conmon[213168]: conmon b3c828876d0264235018 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46.scope/container/memory.events
Oct  3 09:33:28 compute-0 podman[213141]: 2025-10-03 09:33:28.008388273 +0000 UTC m=+1.399583469 container died b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46 (image=quay.io/ceph/ceph:v18, name=blissful_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:28 compute-0 podman[213377]: 2025-10-03 09:33:28.010877668 +0000 UTC m=+0.171398370 container start 683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swartz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:28 compute-0 podman[213377]: 2025-10-03 09:33:28.026521912 +0000 UTC m=+0.187042624 container attach 683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swartz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 09:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfd8e909fe827cd2c7ff42b4befc3069cf433ed8261e4e2730979f403af52e11-merged.mount: Deactivated successfully.
Oct  3 09:33:28 compute-0 podman[213141]: 2025-10-03 09:33:28.0648636 +0000 UTC m=+1.456058796 container remove b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46 (image=quay.io/ceph/ceph:v18, name=blissful_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:28 compute-0 systemd[1]: libpod-conmon-b3c828876d0264235018835055f7552d8caa716a20622a291e3b2226bceb0e46.scope: Deactivated successfully.
Oct  3 09:33:28 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:28 compute-0 python3[213434]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:28 compute-0 podman[213435]: 2025-10-03 09:33:28.543147391 +0000 UTC m=+0.069539512 container create 7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1 (image=quay.io/ceph/ceph:v18, name=boring_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:33:28 compute-0 systemd[1]: Started libpod-conmon-7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1.scope.
Oct  3 09:33:28 compute-0 podman[213435]: 2025-10-03 09:33:28.512182585 +0000 UTC m=+0.038574736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae27f3243ddfad0b2bf45ff3095220b4714da6bf7144a8141c674d4ae7c89618/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae27f3243ddfad0b2bf45ff3095220b4714da6bf7144a8141c674d4ae7c89618/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:28 compute-0 podman[213435]: 2025-10-03 09:33:28.654706211 +0000 UTC m=+0.181098372 container init 7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1 (image=quay.io/ceph/ceph:v18, name=boring_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  3 09:33:28 compute-0 podman[213435]: 2025-10-03 09:33:28.663444696 +0000 UTC m=+0.189836837 container start 7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1 (image=quay.io/ceph/ceph:v18, name=boring_swartz, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:28 compute-0 podman[213435]: 2025-10-03 09:33:28.668849029 +0000 UTC m=+0.195241190 container attach 7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1 (image=quay.io/ceph/ceph:v18, name=boring_swartz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 09:33:28 compute-0 stoic_swartz[213392]: {
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:    "0": [
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:        {
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "devices": [
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "/dev/loop3"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            ],
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_name": "ceph_lv0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_size": "21470642176",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "name": "ceph_lv0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "tags": {
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cluster_name": "ceph",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.crush_device_class": "",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.encrypted": "0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osd_id": "0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.type": "block",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.vdo": "0"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            },
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "type": "block",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "vg_name": "ceph_vg0"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:        }
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:    ],
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:    "1": [
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:        {
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "devices": [
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "/dev/loop4"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            ],
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_name": "ceph_lv1",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_size": "21470642176",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "name": "ceph_lv1",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "tags": {
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cluster_name": "ceph",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.crush_device_class": "",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.encrypted": "0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osd_id": "1",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.type": "block",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.vdo": "0"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            },
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "type": "block",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "vg_name": "ceph_vg1"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:        }
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:    ],
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:    "2": [
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:        {
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "devices": [
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "/dev/loop5"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            ],
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_name": "ceph_lv2",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_size": "21470642176",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "name": "ceph_lv2",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "tags": {
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.cluster_name": "ceph",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.crush_device_class": "",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.encrypted": "0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osd_id": "2",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.type": "block",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:                "ceph.vdo": "0"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            },
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "type": "block",
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:            "vg_name": "ceph_vg2"
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:        }
Oct  3 09:33:28 compute-0 stoic_swartz[213392]:    ]
Oct  3 09:33:28 compute-0 stoic_swartz[213392]: }
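The per-OSD JSON printed by stoic_swartz matches the shape of ceph-volume lvm list --format json output: OSD ids 0-2 on ceph_lv0..ceph_lv2, backed by /dev/loop3../dev/loop5. A sketch of reproducing the listing with the same container image; the privileged flag and mounts are assumptions (ceph-volume needs to see the host's LVM and device state), not something this log records:

    podman run --rm --privileged \
      --volume /dev:/dev \
      --volume /var/lib/ceph:/var/lib/ceph:z \
      --entrypoint ceph-volume quay.io/ceph/ceph:v18 \
      lvm list --format json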
Oct  3 09:33:28 compute-0 systemd[1]: libpod-683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a.scope: Deactivated successfully.
Oct  3 09:33:28 compute-0 podman[213377]: 2025-10-03 09:33:28.906427807 +0000 UTC m=+1.066948509 container died 683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d58e885484b573b5afabb74a9ea252b27e27037f766a42054c0a57823767ae7-merged.mount: Deactivated successfully.
Oct  3 09:33:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct  3 09:33:28 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1043797629' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:28 compute-0 podman[213377]: 2025-10-03 09:33:28.978016441 +0000 UTC m=+1.138537143 container remove 683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swartz, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 09:33:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct  3 09:33:28 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct  3 09:33:28 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:29 compute-0 systemd[1]: libpod-conmon-683df6de25bfba4161398a13c301eb2f50178e42682e8ab4a8a8be90e2746e7a.scope: Deactivated successfully.
Oct  3 09:33:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  3 09:33:29 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/432027714' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:29 compute-0 podman[157165]: time="2025-10-03T09:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:33:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30683 "" "Go-http-client/1.1"
Oct  3 09:33:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6232 "" "Go-http-client/1.1"
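The two GET lines above are the podman REST service (podman system service) answering a Go client over its unix socket. Assuming the default root socket path, the same container listing could be fetched by hand with something like:

    curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true'

(The socket path and the abbreviated query string are assumptions for illustration; the API version and endpoint are as logged.)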
Oct  3 09:33:29 compute-0 podman[213629]: 2025-10-03 09:33:29.837593453 +0000 UTC m=+0.045646991 container create eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:29 compute-0 systemd[1]: Started libpod-conmon-eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f.scope.
Oct  3 09:33:29 compute-0 podman[213629]: 2025-10-03 09:33:29.816582257 +0000 UTC m=+0.024635795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v61: 2 pgs: 1 active+clean, 1 unknown; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:29 compute-0 podman[213629]: 2025-10-03 09:33:29.951646655 +0000 UTC m=+0.159700213 container init eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:29 compute-0 podman[213629]: 2025-10-03 09:33:29.960029555 +0000 UTC m=+0.168083093 container start eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 09:33:29 compute-0 podman[213629]: 2025-10-03 09:33:29.964699504 +0000 UTC m=+0.172753052 container attach eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct  3 09:33:29 compute-0 musing_faraday[213645]: 167 167
Oct  3 09:33:29 compute-0 podman[213629]: 2025-10-03 09:33:29.968786746 +0000 UTC m=+0.176840284 container died eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 09:33:29 compute-0 systemd[1]: libpod-eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f.scope: Deactivated successfully.
Oct  3 09:33:29 compute-0 ceph-mon[191783]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct  3 09:33:29 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/432027714' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:29 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/432027714' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct  3 09:33:30 compute-0 boring_swartz[213451]: pool 'volumes' created
Oct  3 09:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-29afb21eeb3d02dc0ac557ce1b3cbbc2de9224620ff83af5b98e3f30501a1f08-merged.mount: Deactivated successfully.
Oct  3 09:33:30 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct  3 09:33:30 compute-0 podman[213629]: 2025-10-03 09:33:30.022107788 +0000 UTC m=+0.230161326 container remove eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:30 compute-0 systemd[1]: libpod-7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1.scope: Deactivated successfully.
Oct  3 09:33:30 compute-0 conmon[213451]: conmon 7b6bdebc8c0fa82af7e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1.scope/container/memory.events
Oct  3 09:33:30 compute-0 podman[213435]: 2025-10-03 09:33:30.041273194 +0000 UTC m=+1.567665325 container died 7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1 (image=quay.io/ceph/ceph:v18, name=boring_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:30 compute-0 systemd[1]: libpod-conmon-eba8f2b0ffa9896eba4be0def8202dc8d55156e5715894996b4b3dfc3e6ac86f.scope: Deactivated successfully.
Oct  3 09:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae27f3243ddfad0b2bf45ff3095220b4714da6bf7144a8141c674d4ae7c89618-merged.mount: Deactivated successfully.
Oct  3 09:33:30 compute-0 podman[213435]: 2025-10-03 09:33:30.103039128 +0000 UTC m=+1.629431249 container remove 7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1 (image=quay.io/ceph/ceph:v18, name=boring_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 09:33:30 compute-0 systemd[1]: libpod-conmon-7b6bdebc8c0fa82af7e7d5f4bbda83d48f7eb729625b45f6bcf4c191f252bbc1.scope: Deactivated successfully.
Oct  3 09:33:30 compute-0 podman[213680]: 2025-10-03 09:33:30.201728347 +0000 UTC m=+0.046823385 container create 55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 09:33:30 compute-0 systemd[1]: Started libpod-conmon-55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938.scope.
Oct  3 09:33:30 compute-0 podman[213680]: 2025-10-03 09:33:30.182812849 +0000 UTC m=+0.027907907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a1415e426d5c45a0e5b91d8994ea32caf78fe29e014df475dd091e5e3c9e56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a1415e426d5c45a0e5b91d8994ea32caf78fe29e014df475dd091e5e3c9e56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a1415e426d5c45a0e5b91d8994ea32caf78fe29e014df475dd091e5e3c9e56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a1415e426d5c45a0e5b91d8994ea32caf78fe29e014df475dd091e5e3c9e56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:30 compute-0 podman[213680]: 2025-10-03 09:33:30.313994983 +0000 UTC m=+0.159090071 container init 55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:30 compute-0 podman[213680]: 2025-10-03 09:33:30.327968642 +0000 UTC m=+0.173063680 container start 55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:30 compute-0 podman[213680]: 2025-10-03 09:33:30.3322829 +0000 UTC m=+0.177377958 container attach 55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:30 compute-0 python3[213724]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
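The ansible task above is a single long command string; re-wrapped for readability, with every flag, path, and the fsid copied verbatim from the log line (nothing added), the invocation is:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create backups replicated_rule --autoscale-mode on

The same pattern repeats below for the images, cephfs.cephfs.meta, and cephfs.cephfs.data pools; only the pool name changes.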
Oct  3 09:33:30 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:30 compute-0 podman[213727]: 2025-10-03 09:33:30.558478214 +0000 UTC m=+0.073040017 container create f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48 (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:33:30 compute-0 systemd[1]: Started libpod-conmon-f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48.scope.
Oct  3 09:33:30 compute-0 podman[213727]: 2025-10-03 09:33:30.533042827 +0000 UTC m=+0.047604620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e8c654cd8923f201d2d932b1fbb775cda67021f900141a882d7eb1db9b8bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678e8c654cd8923f201d2d932b1fbb775cda67021f900141a882d7eb1db9b8bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:30 compute-0 podman[213727]: 2025-10-03 09:33:30.688202691 +0000 UTC m=+0.202764474 container init f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48 (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:30 compute-0 podman[213727]: 2025-10-03 09:33:30.705430964 +0000 UTC m=+0.219992727 container start f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48 (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 09:33:30 compute-0 podman[213727]: 2025-10-03 09:33:30.711013623 +0000 UTC m=+0.225575416 container attach f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48 (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct  3 09:33:31 compute-0 ceph-mon[191783]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:31 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/432027714' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct  3 09:33:31 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct  3 09:33:31 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  3 09:33:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2463299500' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]: {
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "osd_id": 1,
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "type": "bluestore"
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:    },
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "osd_id": 2,
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "type": "bluestore"
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:    },
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "osd_id": 0,
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:        "type": "bluestore"
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]:    }
Oct  3 09:33:31 compute-0 compassionate_pascal[213719]: }
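The JSON block printed by compassionate_pascal maps each OSD uuid to its backing LVM device (osd.0 on ceph_vg0-ceph_lv0, osd.1 on ceph_vg1-ceph_lv1, osd.2 on ceph_vg2-ceph_lv2), all BlueStore, all in cluster 9b4e8c9a-5555-5510-a631-4742a1182561. The shape matches ceph-volume raw list output, so the container presumably ran an inventory subcommand; a hand-run equivalent might look like this (the mounts and flags here are assumptions, not taken from the log):

    podman run --rm --privileged --net=host \
        -v /dev:/dev -v /run/udev:/run/udev -v /etc/ceph:/etc/ceph:z \
        --entrypoint ceph-volume quay.io/ceph/ceph:v18 raw list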
Oct  3 09:33:31 compute-0 systemd[1]: libpod-55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938.scope: Deactivated successfully.
Oct  3 09:33:31 compute-0 systemd[1]: libpod-55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938.scope: Consumed 1.079s CPU time.
Oct  3 09:33:31 compute-0 podman[213680]: 2025-10-03 09:33:31.415272331 +0000 UTC m=+1.260367369 container died 55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:31 compute-0 openstack_network_exporter[159287]: ERROR   09:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:33:31 compute-0 openstack_network_exporter[159287]: ERROR   09:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:33:31 compute-0 openstack_network_exporter[159287]: ERROR   09:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:33:31 compute-0 openstack_network_exporter[159287]: ERROR   09:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:33:31 compute-0 openstack_network_exporter[159287]: ERROR   09:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a1415e426d5c45a0e5b91d8994ea32caf78fe29e014df475dd091e5e3c9e56-merged.mount: Deactivated successfully.
Oct  3 09:33:31 compute-0 podman[213680]: 2025-10-03 09:33:31.589784776 +0000 UTC m=+1.434879814 container remove 55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:33:31 compute-0 systemd[1]: libpod-conmon-55b5552a1a38f1485b54ebe219916207fba09087f2be4a67a8dbc1ee5991e938.scope: Deactivated successfully.
Oct  3 09:33:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v64: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct  3 09:33:32 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2463299500' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2463299500' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct  3 09:33:32 compute-0 dreamy_ptolemy[213742]: pool 'backups' created
Oct  3 09:33:32 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct  3 09:33:32 compute-0 systemd[1]: libpod-f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48.scope: Deactivated successfully.
Oct  3 09:33:32 compute-0 podman[213727]: 2025-10-03 09:33:32.085633821 +0000 UTC m=+1.600195584 container died f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48 (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-678e8c654cd8923f201d2d932b1fbb775cda67021f900141a882d7eb1db9b8bb-merged.mount: Deactivated successfully.
Oct  3 09:33:32 compute-0 podman[213727]: 2025-10-03 09:33:32.163777831 +0000 UTC m=+1.678339594 container remove f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48 (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:33:32 compute-0 systemd[1]: libpod-conmon-f37a250e353f69fee56acd3cfc57f92550d19df6723fd5e77fd8838e8d7c5a48.scope: Deactivated successfully.
Oct  3 09:33:32 compute-0 python3[213899]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:32 compute-0 podman[213900]: 2025-10-03 09:33:32.533168034 +0000 UTC m=+0.058111927 container create 628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca (image=quay.io/ceph/ceph:v18, name=great_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:33:32 compute-0 systemd[1]: Started libpod-conmon-628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca.scope.
Oct  3 09:33:32 compute-0 podman[213900]: 2025-10-03 09:33:32.509562236 +0000 UTC m=+0.034506139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6859335d2dc0da78bba4eeeed0b84f5af63029879f9942f66f48bf75ab9f786e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6859335d2dc0da78bba4eeeed0b84f5af63029879f9942f66f48bf75ab9f786e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:32 compute-0 podman[213900]: 2025-10-03 09:33:32.642727293 +0000 UTC m=+0.167671186 container init 628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca (image=quay.io/ceph/ceph:v18, name=great_newton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 09:33:32 compute-0 podman[213900]: 2025-10-03 09:33:32.650541744 +0000 UTC m=+0.175485617 container start 628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca (image=quay.io/ceph/ceph:v18, name=great_newton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 09:33:32 compute-0 podman[213900]: 2025-10-03 09:33:32.654838371 +0000 UTC m=+0.179782304 container attach 628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca (image=quay.io/ceph/ceph:v18, name=great_newton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:33:33 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:33 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2463299500' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  3 09:33:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3946800615' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct  3 09:33:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3946800615' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct  3 09:33:33 compute-0 great_newton[213915]: pool 'images' created
Oct  3 09:33:33 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct  3 09:33:33 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:33 compute-0 systemd[1]: libpod-628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca.scope: Deactivated successfully.
Oct  3 09:33:33 compute-0 podman[213900]: 2025-10-03 09:33:33.719430202 +0000 UTC m=+1.244374075 container died 628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca (image=quay.io/ceph/ceph:v18, name=great_newton, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 09:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6859335d2dc0da78bba4eeeed0b84f5af63029879f9942f66f48bf75ab9f786e-merged.mount: Deactivated successfully.
Oct  3 09:33:33 compute-0 podman[213900]: 2025-10-03 09:33:33.787716506 +0000 UTC m=+1.312660379 container remove 628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca (image=quay.io/ceph/ceph:v18, name=great_newton, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:33 compute-0 systemd[1]: libpod-conmon-628936bea3f6a310f1ca9c0e06521b74b20b8d2efb08ddaa471d128d1b7be5ca.scope: Deactivated successfully.
Oct  3 09:33:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v67: 5 pgs: 3 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:34 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3946800615' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:34 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3946800615' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:34 compute-0 python3[213981]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:34 compute-0 podman[213982]: 2025-10-03 09:33:34.220328359 +0000 UTC m=+0.062476167 container create 540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:33:34 compute-0 systemd[1]: Started libpod-conmon-540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4.scope.
Oct  3 09:33:34 compute-0 podman[213982]: 2025-10-03 09:33:34.198711325 +0000 UTC m=+0.040859163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:34 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778297370df64597ba66d6a0866b6fcb1c11080d9302dc0a8543abec4def2a9d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/778297370df64597ba66d6a0866b6fcb1c11080d9302dc0a8543abec4def2a9d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:34 compute-0 podman[213982]: 2025-10-03 09:33:34.329623649 +0000 UTC m=+0.171771497 container init 540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:33:34 compute-0 podman[213982]: 2025-10-03 09:33:34.343845196 +0000 UTC m=+0.185992984 container start 540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:34 compute-0 podman[213982]: 2025-10-03 09:33:34.348294099 +0000 UTC m=+0.190441917 container attach 540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:34 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct  3 09:33:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct  3 09:33:34 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct  3 09:33:34 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  3 09:33:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/598556269' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:35 compute-0 ceph-mon[191783]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
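The recurring POOL_APP_NOT_ENABLED health checks are expected at this stage: each freshly created pool carries no application tag yet, so the count grows as pools are added (1 pool at 09:33:29 above, 4 pools here). Deployments normally clear this after pool creation with commands of the form below; the pool-to-application pairings shown are the conventional OpenStack ones and do not appear in this log:

    ceph osd pool application enable vms rbd
    ceph osd pool application enable volumes rbd
    ceph osd pool application enable images rbd
    ceph osd pool application enable backups rbd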
Oct  3 09:33:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:35 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/598556269' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:35 compute-0 ceph-mon[191783]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct  3 09:33:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/598556269' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct  3 09:33:35 compute-0 hardcore_montalcini[213997]: pool 'cephfs.cephfs.meta' created
Oct  3 09:33:35 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct  3 09:33:35 compute-0 systemd[1]: libpod-540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4.scope: Deactivated successfully.
Oct  3 09:33:35 compute-0 podman[213982]: 2025-10-03 09:33:35.794929739 +0000 UTC m=+1.637077557 container died 540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-778297370df64597ba66d6a0866b6fcb1c11080d9302dc0a8543abec4def2a9d-merged.mount: Deactivated successfully.
Oct  3 09:33:35 compute-0 podman[213982]: 2025-10-03 09:33:35.856765405 +0000 UTC m=+1.698913203 container remove 540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4 (image=quay.io/ceph/ceph:v18, name=hardcore_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 09:33:35 compute-0 systemd[1]: libpod-conmon-540d2e7ce26c28cba7cb695aeba2e44bdce41386bd7f134213d80c339bc940d4.scope: Deactivated successfully.
Oct  3 09:33:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v70: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:36 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:36 compute-0 python3[214063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:36 compute-0 podman[214064]: 2025-10-03 09:33:36.29965844 +0000 UTC m=+0.045855554 container create d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af (image=quay.io/ceph/ceph:v18, name=objective_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:36 compute-0 systemd[1]: Started libpod-conmon-d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af.scope.
Oct  3 09:33:36 compute-0 podman[214064]: 2025-10-03 09:33:36.281463475 +0000 UTC m=+0.027660619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683badd7b5d733adf6cd08b115881f2760748c1f0ea88b9629a75ae9214ed8da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683badd7b5d733adf6cd08b115881f2760748c1f0ea88b9629a75ae9214ed8da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:36 compute-0 podman[214064]: 2025-10-03 09:33:36.401159589 +0000 UTC m=+0.147356743 container init d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af (image=quay.io/ceph/ceph:v18, name=objective_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:36 compute-0 podman[214064]: 2025-10-03 09:33:36.411661027 +0000 UTC m=+0.157858151 container start d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af (image=quay.io/ceph/ceph:v18, name=objective_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:33:36 compute-0 podman[214064]: 2025-10-03 09:33:36.416512622 +0000 UTC m=+0.162709766 container attach d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af (image=quay.io/ceph/ceph:v18, name=objective_mclean, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct  3 09:33:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct  3 09:33:36 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/598556269' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:36 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct  3 09:33:36 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
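The two osd.0 entries above (epochs 24 and 25) trace the normal peering sequence for a freshly created PG: the acting primary transitions Start -> Primary, then reports active once AllReplicasActivated fires. The same state machine can be inspected on a live cluster with the standard CLI; the PG id 6.0 is taken from the log:

    ceph pg 6.0 query | head -n 20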
Oct  3 09:33:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  3 09:33:36 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2188024101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct  3 09:33:37 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2188024101' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  3 09:33:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v72: 6 pgs: 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:37 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2188024101' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct  3 09:33:37 compute-0 objective_mclean[214079]: pool 'cephfs.cephfs.data' created
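Once the container prints the confirmation above, the new pool is part of the osdmap (epoch e26 a few lines earlier). Two standard commands to verify the pool and the autoscale mode the playbook requested, run from any node holding an admin keyring:

    ceph osd pool ls detail | grep cephfs.cephfs.data
    ceph osd pool autoscale-status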
Oct  3 09:33:38 compute-0 systemd[1]: libpod-d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af.scope: Deactivated successfully.
Oct  3 09:33:38 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct  3 09:33:38 compute-0 podman[214106]: 2025-10-03 09:33:38.084373988 +0000 UTC m=+0.056006500 container died d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af (image=quay.io/ceph/ceph:v18, name=objective_mclean, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 09:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-683badd7b5d733adf6cd08b115881f2760748c1f0ea88b9629a75ae9214ed8da-merged.mount: Deactivated successfully.
Oct  3 09:33:38 compute-0 podman[214106]: 2025-10-03 09:33:38.344962177 +0000 UTC m=+0.316594659 container remove d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af (image=quay.io/ceph/ceph:v18, name=objective_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:33:38 compute-0 systemd[1]: libpod-conmon-d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af.scope: Deactivated successfully.
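The died -> remove -> scope-deactivated triplet above is the expected teardown for a podman run --rm container: conmon reports the exit, podman removes the container, and systemd retires both the libpod and libpod-conmon transient scopes. The same lifecycle can be replayed afterwards from podman's event journal (standard flags; the container id and the time window are taken from the log):

    podman events --since "2025-10-03 09:33:36" --until "2025-10-03 09:33:39" \
        --filter container=d831feef5122e9161e6cc5c97d15a720201d3210fefb4dba55dd4d40fd6791af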
Oct  3 09:33:38 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:38 compute-0 python3[214145]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:38 compute-0 podman[214146]: 2025-10-03 09:33:38.835977746 +0000 UTC m=+0.061508296 container create 9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85 (image=quay.io/ceph/ceph:v18, name=wizardly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 09:33:38 compute-0 systemd[1]: Started libpod-conmon-9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85.scope.
Oct  3 09:33:38 compute-0 podman[214146]: 2025-10-03 09:33:38.816207991 +0000 UTC m=+0.041738561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1947d0835db76560a6d97263b6b55da9af6f53e79e500c45d98ec9706869d0a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1947d0835db76560a6d97263b6b55da9af6f53e79e500c45d98ec9706869d0a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:38 compute-0 podman[214146]: 2025-10-03 09:33:38.945887826 +0000 UTC m=+0.171418446 container init 9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85 (image=quay.io/ceph/ceph:v18, name=wizardly_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.949 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.950 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
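The two manager.py messages above mean the [pollsters] source defines more pollsters than the polling manager has worker threads (a single thread here), so each polling cycle executes the pollsters serially. The pool size is configurable in recent Ceilometer releases; a minimal sketch, assuming the threads_to_process_pollsters option in the [polling] section (verify the option name against the installed release before relying on it):

    cat >> /etc/ceilometer/ceilometer.conf <<'EOF'
    [polling]
    threads_to_process_pollsters = 4
    EOF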
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.954 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 podman[214146]: 2025-10-03 09:33:38.956077043 +0000 UTC m=+0.181607583 container start 9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85 (image=quay.io/ceph/ceph:v18, name=wizardly_einstein, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b615940>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 podman[214146]: 2025-10-03 09:33:38.961034402 +0000 UTC m=+0.186564962 container attach 9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85 (image=quay.io/ceph/ceph:v18, name=wizardly_einstein, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.960 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:33:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:33:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct  3 09:33:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct  3 09:33:38 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct  3 09:33:38 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2188024101' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  3 09:33:38 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [1] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct  3 09:33:39 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2818079975' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  3 09:33:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct  3 09:33:39 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2818079975' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  3 09:33:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct  3 09:33:39 compute-0 wizardly_einstein[214160]: enabled application 'rbd' on pool 'vms'
Oct  3 09:33:39 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct  3 09:33:40 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2818079975' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  3 09:33:40 compute-0 systemd[1]: libpod-9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85.scope: Deactivated successfully.
Oct  3 09:33:40 compute-0 podman[214146]: 2025-10-03 09:33:40.028225657 +0000 UTC m=+1.253756227 container died 9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85 (image=quay.io/ceph/ceph:v18, name=wizardly_einstein, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 09:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1947d0835db76560a6d97263b6b55da9af6f53e79e500c45d98ec9706869d0a5-merged.mount: Deactivated successfully.
Oct  3 09:33:40 compute-0 podman[214146]: 2025-10-03 09:33:40.106570243 +0000 UTC m=+1.332100793 container remove 9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85 (image=quay.io/ceph/ceph:v18, name=wizardly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 09:33:40 compute-0 systemd[1]: libpod-conmon-9de7292555598d814c7a03865834dbc9eb48be5d646dde7ae3deb229a6a37b85.scope: Deactivated successfully.
Oct  3 09:33:40 compute-0 ceph-mon[191783]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:40 compute-0 python3[214224]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
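The Ansible task above (and the near-identical ones that follow for the backups, images, and cephfs.* pools) drives a throwaway ceph client container rather than a locally installed CLI. A hedged reconstruction of the same invocation, with every flag, path, and the fsid copied verbatim from the log line:

    # Hedged sketch of the one-shot ceph client the Ansible task runs;
    # all arguments are copied from the logged podman command line.
    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "9b4e8c9a-5555-5510-a631-4742a1182561",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "osd", "pool", "application", "enable", "volumes", "rbd",
    ]
    subprocess.run(cmd, check=True)  # container prints: enabled application 'rbd' on pool 'volumes'

The --rm flag accounts for the container create/start/died/remove churn that brackets every pool operation: each command gets a fresh, randomly named container (wizardly_einstein, practical_lamarr, distracted_cartwright, ...) that is destroyed as soon as the ceph client exits.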
Oct  3 09:33:40 compute-0 podman[214225]: 2025-10-03 09:33:40.548901259 +0000 UTC m=+0.053070565 container create f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d (image=quay.io/ceph/ceph:v18, name=practical_lamarr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:33:40 compute-0 systemd[1]: Started libpod-conmon-f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d.scope.
Oct  3 09:33:40 compute-0 podman[214225]: 2025-10-03 09:33:40.522390127 +0000 UTC m=+0.026559443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a5b052188c9c8a63b1fae213834d4213296d1248817cbc6c4024facd372b6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a5b052188c9c8a63b1fae213834d4213296d1248817cbc6c4024facd372b6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:40 compute-0 podman[214225]: 2025-10-03 09:33:40.657976822 +0000 UTC m=+0.162146128 container init f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d (image=quay.io/ceph/ceph:v18, name=practical_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 09:33:40 compute-0 podman[214225]: 2025-10-03 09:33:40.668394526 +0000 UTC m=+0.172563812 container start f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d (image=quay.io/ceph/ceph:v18, name=practical_lamarr, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:33:40 compute-0 podman[214225]: 2025-10-03 09:33:40.67288576 +0000 UTC m=+0.177055076 container attach f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d (image=quay.io/ceph/ceph:v18, name=practical_lamarr, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:41 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2818079975' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  3 09:33:41 compute-0 ceph-mon[191783]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct  3 09:33:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3377876037' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  3 09:33:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct  3 09:33:42 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3377876037' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  3 09:33:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3377876037' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  3 09:33:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct  3 09:33:42 compute-0 practical_lamarr[214240]: enabled application 'rbd' on pool 'volumes'
Oct  3 09:33:42 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct  3 09:33:42 compute-0 systemd[1]: libpod-f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d.scope: Deactivated successfully.
Oct  3 09:33:42 compute-0 podman[214225]: 2025-10-03 09:33:42.072368447 +0000 UTC m=+1.576537753 container died f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d (image=quay.io/ceph/ceph:v18, name=practical_lamarr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2a5b052188c9c8a63b1fae213834d4213296d1248817cbc6c4024facd372b6b-merged.mount: Deactivated successfully.
Oct  3 09:33:42 compute-0 podman[214225]: 2025-10-03 09:33:42.130842585 +0000 UTC m=+1.635011871 container remove f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d (image=quay.io/ceph/ceph:v18, name=practical_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 09:33:42 compute-0 systemd[1]: libpod-conmon-f671f52ff5692bcf8c370c5ff9d68c20c81b6c1432550dd73f5df01b1b2c037d.scope: Deactivated successfully.
Oct  3 09:33:42 compute-0 python3[214300]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:42 compute-0 podman[214301]: 2025-10-03 09:33:42.573484991 +0000 UTC m=+0.054452980 container create ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6 (image=quay.io/ceph/ceph:v18, name=distracted_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:33:42 compute-0 systemd[1]: Started libpod-conmon-ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6.scope.
Oct  3 09:33:42 compute-0 podman[214301]: 2025-10-03 09:33:42.551624878 +0000 UTC m=+0.032592877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaee651e2fa64a1ef33e4a8630658cbb7cd46b70f0cfe334d3c8204ed82c1d3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaee651e2fa64a1ef33e4a8630658cbb7cd46b70f0cfe334d3c8204ed82c1d3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:42 compute-0 podman[214301]: 2025-10-03 09:33:42.678434831 +0000 UTC m=+0.159402850 container init ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6 (image=quay.io/ceph/ceph:v18, name=distracted_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 09:33:42 compute-0 podman[214301]: 2025-10-03 09:33:42.689822967 +0000 UTC m=+0.170790966 container start ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6 (image=quay.io/ceph/ceph:v18, name=distracted_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:42 compute-0 podman[214301]: 2025-10-03 09:33:42.695633524 +0000 UTC m=+0.176601533 container attach ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6 (image=quay.io/ceph/ceph:v18, name=distracted_cartwright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:33:43 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3377876037' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  3 09:33:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct  3 09:33:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2705144407' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  3 09:33:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct  3 09:33:44 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2705144407' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  3 09:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2705144407' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  3 09:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct  3 09:33:44 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct  3 09:33:44 compute-0 distracted_cartwright[214316]: enabled application 'rbd' on pool 'backups'
Oct  3 09:33:44 compute-0 systemd[1]: libpod-ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6.scope: Deactivated successfully.
Oct  3 09:33:44 compute-0 podman[214301]: 2025-10-03 09:33:44.093421846 +0000 UTC m=+1.574389855 container died ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6 (image=quay.io/ceph/ceph:v18, name=distracted_cartwright, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-beaee651e2fa64a1ef33e4a8630658cbb7cd46b70f0cfe334d3c8204ed82c1d3-merged.mount: Deactivated successfully.
Oct  3 09:33:44 compute-0 podman[214301]: 2025-10-03 09:33:44.148500374 +0000 UTC m=+1.629468353 container remove ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6 (image=quay.io/ceph/ceph:v18, name=distracted_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:33:44 compute-0 systemd[1]: libpod-conmon-ad0d5092502804784dce9eca5f307db6af219f0f7bf34c4dbf9289efdd12d0a6.scope: Deactivated successfully.
Oct  3 09:33:44 compute-0 python3[214376]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:44 compute-0 podman[214377]: 2025-10-03 09:33:44.587526374 +0000 UTC m=+0.041459562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:44 compute-0 podman[214377]: 2025-10-03 09:33:44.727051455 +0000 UTC m=+0.180984593 container create 3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316 (image=quay.io/ceph/ceph:v18, name=focused_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 09:33:44 compute-0 systemd[1]: Started libpod-conmon-3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316.scope.
Oct  3 09:33:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1e793207ec56bc67beff41ef14c54cd546f139a980b251c210782670c693ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1e793207ec56bc67beff41ef14c54cd546f139a980b251c210782670c693ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:45 compute-0 podman[214377]: 2025-10-03 09:33:45.071924071 +0000 UTC m=+0.525857259 container init 3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316 (image=quay.io/ceph/ceph:v18, name=focused_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:33:45 compute-0 podman[214377]: 2025-10-03 09:33:45.086069836 +0000 UTC m=+0.540002984 container start 3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316 (image=quay.io/ceph/ceph:v18, name=focused_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 09:33:45 compute-0 podman[214377]: 2025-10-03 09:33:45.290834042 +0000 UTC m=+0.744767210 container attach 3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316 (image=quay.io/ceph/ceph:v18, name=focused_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct  3 09:33:45 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2705144407' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  3 09:33:45 compute-0 ceph-mon[191783]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct  3 09:33:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1606318045' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  3 09:33:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:33:45
Oct  3 09:33:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:33:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:33:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.mgr', 'volumes', 'vms', 'images', 'cephfs.cephfs.data']
Oct  3 09:33:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:33:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
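The pg_autoscaler lines above follow a simple proportional rule. A worked check of the '.mgr' entry, assuming the stock mon_target_pg_per_osd of 100 and the 3 OSDs reported in the osdmap (the trailing 64411926528 in each effective_target_ratio line is the subtree capacity in bytes, i.e. the ~60 GiB shown in the pgmap):

    # Hedged check of the autoscaler arithmetic; mon_target_pg_per_osd=100
    # is an assumed default, the other numbers come straight from the log.
    usage_ratio = 7.185749983720779e-06   # '.mgr' share of raw space
    osd_count = 3                         # "3 total, 3 up, 3 in"
    target_pg_per_osd = 100               # assumed Ceph default

    pg_target = usage_ratio * osd_count * target_pg_per_osd
    print(pg_target)                      # 0.0021557249951162337, exactly as logged

The target is then quantized to a power of two, and in these lines even the empty pools land on a floor of 32, which is why every data pool is scheduled to grow from 1 PG to 32. The mgr applies each growth through two mon commands, visible below: "pg_num" records the new target and "pg_num_actual" carries out the actual split.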
Oct  3 09:33:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:33:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:33:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct  3 09:33:46 compute-0 ceph-mon[191783]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:46 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1606318045' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  3 09:33:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1606318045' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  3 09:33:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct  3 09:33:46 compute-0 focused_beaver[214391]: enabled application 'rbd' on pool 'images'
Oct  3 09:33:46 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct  3 09:33:46 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 69ba619a-47e9-4bae-9ace-4b21883628e6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  3 09:33:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:33:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:46 compute-0 systemd[1]: libpod-3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316.scope: Deactivated successfully.
Oct  3 09:33:46 compute-0 podman[214377]: 2025-10-03 09:33:46.364211584 +0000 UTC m=+1.818144752 container died 3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316 (image=quay.io/ceph/ceph:v18, name=focused_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 09:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d1e793207ec56bc67beff41ef14c54cd546f139a980b251c210782670c693ce-merged.mount: Deactivated successfully.
Oct  3 09:33:46 compute-0 podman[214377]: 2025-10-03 09:33:46.43537211 +0000 UTC m=+1.889305248 container remove 3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316 (image=quay.io/ceph/ceph:v18, name=focused_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:33:46 compute-0 systemd[1]: libpod-conmon-3bfdd9df58f863be1c1c33b27e16be862d38613845d4b56abcfebb74880ab316.scope: Deactivated successfully.
Oct  3 09:33:46 compute-0 python3[214452]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:46 compute-0 podman[214453]: 2025-10-03 09:33:46.857496857 +0000 UTC m=+0.069664109 container create 71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc (image=quay.io/ceph/ceph:v18, name=zen_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 09:33:46 compute-0 systemd[1]: Started libpod-conmon-71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc.scope.
Oct  3 09:33:46 compute-0 podman[214453]: 2025-10-03 09:33:46.83549652 +0000 UTC m=+0.047663742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/442d5909d917cace7e9bda76c5aed0b3e929a44ea6934a81b4098f5c76318843/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/442d5909d917cace7e9bda76c5aed0b3e929a44ea6934a81b4098f5c76318843/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:47 compute-0 podman[214453]: 2025-10-03 09:33:47.019218511 +0000 UTC m=+0.231385743 container init 71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:33:47 compute-0 podman[214453]: 2025-10-03 09:33:47.033047195 +0000 UTC m=+0.245214407 container start 71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc (image=quay.io/ceph/ceph:v18, name=zen_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:33:47 compute-0 podman[214453]: 2025-10-03 09:33:47.037817518 +0000 UTC m=+0.249984750 container attach 71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:33:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct  3 09:33:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct  3 09:33:47 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct  3 09:33:47 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 309d8e83-40c9-47b8-94c9-93a09d15c2e1 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  3 09:33:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:33:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:47 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1606318045' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  3 09:33:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct  3 09:33:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3601011276' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  3 09:33:47 compute-0 podman[214494]: 2025-10-03 09:33:47.853433093 +0000 UTC m=+0.097019098 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:33:47 compute-0 podman[214495]: 2025-10-03 09:33:47.866690698 +0000 UTC m=+0.103673000 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 09:33:47 compute-0 podman[214496]: 2025-10-03 09:33:47.871306956 +0000 UTC m=+0.100843689 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:33:47 compute-0 podman[214493]: 2025-10-03 09:33:47.887557938 +0000 UTC m=+0.130961707 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
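The four health_status=healthy events above are podman periodically executing each container's configured healthcheck ('test': '/openstack/healthcheck <service>', with the healthcheck directory bind-mounted read-only into the container). A hedged sketch of running the same checks by hand; the container names come from the log, and `podman healthcheck run` exits 0 when the check passes:

    # Hedged sketch: trigger the configured healthchecks manually.
    # Exit code 0 means healthy; anything else feeds the failing streak.
    import subprocess

    for name in ("podman_exporter", "ceilometer_agent_compute",
                 "ceilometer_agent_ipmi", "kepler"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")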
Oct  3 09:33:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct  3 09:33:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3601011276' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  3 09:33:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct  3 09:33:48 compute-0 zen_thompson[214469]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct  3 09:33:48 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct  3 09:33:48 compute-0 systemd[1]: libpod-71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc.scope: Deactivated successfully.
Oct  3 09:33:48 compute-0 podman[214453]: 2025-10-03 09:33:48.392040661 +0000 UTC m=+1.604207943 container died 71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:48 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev f4ec5d93-920a-4498-b7a7-3c3b6b2869df (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  3 09:33:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:33:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:48 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3601011276' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  3 09:33:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:48 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=12.568485260s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active pruub 57.691001892s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:48 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=12.568485260s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown pruub 57.691001892s@ mbc={}] state<Start>: transitioning to Primary
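The two osd.2 lines above are a direct consequence of the pg_num changes landing in osdmap e33: resizing a pool opens a new peering interval, and each affected PG re-runs the peering state machine (start_peering_interval, then Start -> Primary, since osd.2 keeps rank 0). A hedged gloss of the packed fields; the meanings below are inferred from common Ceph log conventions, not stated anywhere in this log:

    # Hedged field gloss for "pg[2.0( ... sis=33) [2] r=0 lpr=33 ...";
    # every interpretation here is an assumption based on Ceph conventions.
    pg_line = {
        "pg": "2.0",      # pool 2, placement group 0
        "ec": 17,         # epoch the PG was created in
        "sis": 33,        # same_interval_since: e33 (the pg_num change) opened this interval
        "acting": [2],    # acting set holds only osd.2 in this log
        "r": 0,           # this OSD's rank in the acting set; 0 = primary
        "lpr": 33,        # last_peering_reset epoch
        "n": 0,           # object count -- the pool is still empty
    }
    print(pg_line)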
Oct  3 09:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-442d5909d917cace7e9bda76c5aed0b3e929a44ea6934a81b4098f5c76318843-merged.mount: Deactivated successfully.
Oct  3 09:33:48 compute-0 podman[214453]: 2025-10-03 09:33:48.483057864 +0000 UTC m=+1.695225076 container remove 71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc (image=quay.io/ceph/ceph:v18, name=zen_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:33:48 compute-0 systemd[1]: libpod-conmon-71630ce51bb732662b99025b2de6fe9385499f1ffebb9a18d59c9193354f55bc.scope: Deactivated successfully.
Oct  3 09:33:48 compute-0 python3[214608]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:48 compute-0 podman[214609]: 2025-10-03 09:33:48.98049835 +0000 UTC m=+0.097781111 container create a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  3 09:33:49 compute-0 podman[214609]: 2025-10-03 09:33:48.931578908 +0000 UTC m=+0.048861719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:49 compute-0 systemd[1]: Started libpod-conmon-a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5.scope.
Oct  3 09:33:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847dce9671fd0fccac9218197153e5f694ba7a4fefc26fbab1d0e7e1b6e437b4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847dce9671fd0fccac9218197153e5f694ba7a4fefc26fbab1d0e7e1b6e437b4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:49 compute-0 podman[214609]: 2025-10-03 09:33:49.134075262 +0000 UTC m=+0.251358083 container init a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct  3 09:33:49 compute-0 podman[214609]: 2025-10-03 09:33:49.155003075 +0000 UTC m=+0.272285836 container start a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:33:49 compute-0 podman[214609]: 2025-10-03 09:33:49.162606219 +0000 UTC m=+0.279888960 container attach a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:33:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct  3 09:33:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct  3 09:33:49 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct  3 09:33:49 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 635f50d1-7f93-4cab-a606-113524f43586 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  3 09:33:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:33:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:49 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3601011276' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  3 09:33:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=13.270923615s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active pruub 65.515945435s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=13.270923615s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown pruub 65.515945435s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct  3 09:33:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211331142' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  3 09:33:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:50 compute-0 ceph-mon[191783]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct  3 09:33:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211331142' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  3 09:33:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct  3 09:33:50 compute-0 vigorous_swirles[214624]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct  3 09:33:50 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct  3 09:33:50 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 9e125a51-7e4a-4e05-a620-01a4e7d73adc (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct  3 09:33:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:33:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:50 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35 pruub=8.337759972s) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active pruub 55.460075378s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:50 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35 pruub=8.337759972s) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown pruub 55.460075378s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=33/35 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:50 compute-0 systemd[1]: libpod-a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5.scope: Deactivated successfully.
Oct  3 09:33:50 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/211331142' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  3 09:33:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:50 compute-0 ceph-mon[191783]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  3 09:33:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:50 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/211331142' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  3 09:33:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:33:50 compute-0 podman[214650]: 2025-10-03 09:33:50.52921462 +0000 UTC m=+0.047013951 container died a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-847dce9671fd0fccac9218197153e5f694ba7a4fefc26fbab1d0e7e1b6e437b4-merged.mount: Deactivated successfully.
Oct  3 09:33:50 compute-0 podman[214650]: 2025-10-03 09:33:50.596665156 +0000 UTC m=+0.114464437 container remove a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5 (image=quay.io/ceph/ceph:v18, name=vigorous_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 09:33:50 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=15.114091873s) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 74.751762390s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:50 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=15.114091873s) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown pruub 74.751762390s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:50 compute-0 systemd[1]: libpod-conmon-a44d1025e7022dc645e38457b302319d856f68ea4ece4d337a76584deedd83f5.scope: Deactivated successfully.
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Oct  3 09:33:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct  3 09:33:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct  3 09:33:51 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev f275a80a-c26c-42ea-ae2b-5dc703a34cdc (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 69ba619a-47e9-4bae-9ace-4b21883628e6 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 69ba619a-47e9-4bae-9ace-4b21883628e6 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 309d8e83-40c9-47b8-94c9-93a09d15c2e1 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 309d8e83-40c9-47b8-94c9-93a09d15c2e1 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev f4ec5d93-920a-4498-b7a7-3c3b6b2869df (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event f4ec5d93-920a-4498-b7a7-3c3b6b2869df (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 635f50d1-7f93-4cab-a606-113524f43586 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 635f50d1-7f93-4cab-a606-113524f43586 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 9e125a51-7e4a-4e05-a620-01a4e7d73adc (PG autoscaler increasing pool 6 PGs from 1 to 32)
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 9e125a51-7e4a-4e05-a620-01a4e7d73adc (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev f275a80a-c26c-42ea-ae2b-5dc703a34cdc (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event f275a80a-c26c-42ea-ae2b-5dc703a34cdc (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=22/23 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:51 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=35/36 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=35/36 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=22/22 les/c/f=23/23/0 sis=35) [2] r=0 lpr=35 pi=[22,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:51 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:33:51 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct  3 09:33:51 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct  3 09:33:51 compute-0 python3[214739]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:33:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 93 unknown, 38 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:52 compute-0 python3[214810]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759484031.3112807-33783-126815772893706/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:33:52 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct  3 09:33:52 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct  3 09:33:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct  3 09:33:52 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  3 09:33:52 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  3 09:33:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct  3 09:33:52 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct  3 09:33:52 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=10.293997765s) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 65.453018188s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:52 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=10.293997765s) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown pruub 65.453018188s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:52 compute-0 python3[214912]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:33:53 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct  3 09:33:53 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct  3 09:33:53 compute-0 python3[214987]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759484032.4835505-33797-218193946433911/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=f09164a8c2a2861a824cb36e8939a5acd81855e7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:33:53 compute-0 ceph-mon[191783]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  3 09:33:53 compute-0 ceph-mon[191783]: Cluster is now healthy
Oct  3 09:33:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:33:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct  3 09:33:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct  3 09:33:53 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=26/27 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=26/26 les/c/f=27/27/0 sis=37) [1] r=0 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:53 compute-0 python3[215037]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:33:53 compute-0 podman[215038]: 2025-10-03 09:33:53.88453775 +0000 UTC m=+0.071051063 container create e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6 (image=quay.io/ceph/ceph:v18, name=dreamy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:33:53 compute-0 systemd[193541]: Starting Mark boot as successful...
Oct  3 09:33:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:53 compute-0 systemd[193541]: Finished Mark boot as successful.
Oct  3 09:33:53 compute-0 podman[215038]: 2025-10-03 09:33:53.861175751 +0000 UTC m=+0.047689073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:53 compute-0 systemd[1]: Started libpod-conmon-e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6.scope.
Oct  3 09:33:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83974c5a207c5b295957da8ead6258228205d0e6ab430cd18b0c2d3bcdc8b99d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83974c5a207c5b295957da8ead6258228205d0e6ab430cd18b0c2d3bcdc8b99d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83974c5a207c5b295957da8ead6258228205d0e6ab430cd18b0c2d3bcdc8b99d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:54 compute-0 podman[215038]: 2025-10-03 09:33:54.067716704 +0000 UTC m=+0.254230036 container init e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6 (image=quay.io/ceph/ceph:v18, name=dreamy_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 09:33:54 compute-0 podman[215038]: 2025-10-03 09:33:54.080641769 +0000 UTC m=+0.267155071 container start e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6 (image=quay.io/ceph/ceph:v18, name=dreamy_liskov, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:33:54 compute-0 podman[215038]: 2025-10-03 09:33:54.096017123 +0000 UTC m=+0.282530455 container attach e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6 (image=quay.io/ceph/ceph:v18, name=dreamy_liskov, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=14.342964172s) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 77.886878967s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=14.342964172s) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown pruub 77.886878967s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.10( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.18( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.16( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=24/25 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:54 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct  3 09:33:54 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct  3 09:33:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  3 09:33:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3235360332' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  3 09:33:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct  3 09:33:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3235360332' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  3 09:33:54 compute-0 dreamy_liskov[215054]: 
Oct  3 09:33:54 compute-0 dreamy_liskov[215054]: [global]
Oct  3 09:33:54 compute-0 dreamy_liskov[215054]: #011fsid = 9b4e8c9a-5555-5510-a631-4742a1182561
Oct  3 09:33:54 compute-0 dreamy_liskov[215054]: #011mon_host = 192.168.122.100
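(The dreamy_liskov stdout above, with rsyslog's #011 tab escapes expanded, is the residual minimal config that assimilate-conf left outside the monitor database:

    [global]
    	fsid = 9b4e8c9a-5555-5510-a631-4742a1182561
    	mon_host = 192.168.122.100

i.e. only the bootstrap identity needed to reach the cluster remains in the file.)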
Oct  3 09:33:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct  3 09:33:54 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3235360332' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  3 09:33:54 compute-0 systemd[1]: libpod-e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6.scope: Deactivated successfully.
Oct  3 09:33:54 compute-0 podman[215038]: 2025-10-03 09:33:54.726487621 +0000 UTC m=+0.913000923 container died e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6 (image=quay.io/ceph/ceph:v18, name=dreamy_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:54 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.1a( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.16( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.10( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=37/39 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.1b( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.12( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.18( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.19( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.9( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=24/24 les/c/f=25/25/0 sis=37) [0] r=0 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-83974c5a207c5b295957da8ead6258228205d0e6ab430cd18b0c2d3bcdc8b99d-merged.mount: Deactivated successfully.
Oct  3 09:33:54 compute-0 podman[215038]: 2025-10-03 09:33:54.834050535 +0000 UTC m=+1.020563837 container remove e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6 (image=quay.io/ceph/ceph:v18, name=dreamy_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 09:33:54 compute-0 systemd[1]: libpod-conmon-e14c0c81d0c07840ee0eb87c572de476864086fce4c394dadae7891add7581c6.scope: Deactivated successfully.
Oct  3 09:33:54 compute-0 podman[215081]: 2025-10-03 09:33:54.864618307 +0000 UTC m=+0.127486355 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct  3 09:33:54 compute-0 podman[215080]: 2025-10-03 09:33:54.900915053 +0000 UTC m=+0.164350959 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 09:33:55 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct  3 09:33:55 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct  3 09:33:55 compute-0 python3[215242]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
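(Same pattern as the assimilate-conf task: another throwaway ceph client container, this time storing an SSL/TLS policy string in the cluster's config-key store. Reflowed, the command is:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1

The mon's handle_command for key=ssl_option and the container's "set ssl_option" reply a few lines below correspond to this call.)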
Oct  3 09:33:55 compute-0 podman[215266]: 2025-10-03 09:33:55.250133098 +0000 UTC m=+0.068034225 container create a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d (image=quay.io/ceph/ceph:v18, name=cool_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:55 compute-0 podman[215266]: 2025-10-03 09:33:55.222613114 +0000 UTC m=+0.040514261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:55 compute-0 systemd[1]: Started libpod-conmon-a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d.scope.
Oct  3 09:33:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:33:55 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abbaed2e05af3a40b85f9460a0859c58a146b277a442ae36ad06637b3ffe80d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abbaed2e05af3a40b85f9460a0859c58a146b277a442ae36ad06637b3ffe80d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abbaed2e05af3a40b85f9460a0859c58a146b277a442ae36ad06637b3ffe80d4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:55 compute-0 podman[215266]: 2025-10-03 09:33:55.468905574 +0000 UTC m=+0.286806731 container init a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d (image=quay.io/ceph/ceph:v18, name=cool_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 09:33:55 compute-0 podman[215266]: 2025-10-03 09:33:55.48183267 +0000 UTC m=+0.299733817 container start a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d (image=quay.io/ceph/ceph:v18, name=cool_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct  3 09:33:55 compute-0 podman[215266]: 2025-10-03 09:33:55.491300584 +0000 UTC m=+0.309201711 container attach a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d (image=quay.io/ceph/ceph:v18, name=cool_euclid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 09:33:55 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct  3 09:33:55 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct  3 09:33:55 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3235360332' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  3 09:33:55 compute-0 podman[215352]: 2025-10-03 09:33:55.857817135 +0000 UTC m=+0.086676345 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:33:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 31 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:55 compute-0 podman[215352]: 2025-10-03 09:33:55.977908872 +0000 UTC m=+0.206768072 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:56 compute-0 ceph-mgr[192071]: [progress INFO root] Writing back 9 completed events
Oct  3 09:33:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  3 09:33:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct  3 09:33:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2200823509' entity='client.admin' 
Oct  3 09:33:56 compute-0 cool_euclid[215304]: set ssl_option
Oct  3 09:33:56 compute-0 podman[215406]: 2025-10-03 09:33:56.301777103 +0000 UTC m=+0.089824925 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:33:56 compute-0 systemd[1]: libpod-a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d.scope: Deactivated successfully.
Oct  3 09:33:56 compute-0 podman[215266]: 2025-10-03 09:33:56.309717508 +0000 UTC m=+1.127618635 container died a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d (image=quay.io/ceph/ceph:v18, name=cool_euclid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-abbaed2e05af3a40b85f9460a0859c58a146b277a442ae36ad06637b3ffe80d4-merged.mount: Deactivated successfully.
Oct  3 09:33:56 compute-0 podman[215266]: 2025-10-03 09:33:56.438539246 +0000 UTC m=+1.256440373 container remove a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d (image=quay.io/ceph/ceph:v18, name=cool_euclid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 09:33:56 compute-0 systemd[1]: libpod-conmon-a2f9d194c45bbd262c61b4b1362e34df1eda5032f77d02ceec11571dd37abb5d.scope: Deactivated successfully.
Oct  3 09:33:56 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct  3 09:33:56 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct  3 09:33:56 compute-0 python3[215535]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
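(The third one-shot container applies the RGW service spec that was mounted into the container as /home/ceph_spec.yaml; reflowed:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v18 \
      --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch apply --in-file /home/ceph_spec.yaml

cephadm acknowledges it below with "Saving service rgw.rgw spec with placement compute-0" and the festive_rubin container's "Scheduled rgw.rgw update..." output.)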
Oct  3 09:33:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:33:56 compute-0 podman[215552]: 2025-10-03 09:33:56.848882294 +0000 UTC m=+0.074409230 container create 6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5 (image=quay.io/ceph/ceph:v18, name=festive_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:56 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/2200823509' entity='client.admin' 
Oct  3 09:33:56 compute-0 podman[215552]: 2025-10-03 09:33:56.804096585 +0000 UTC m=+0.029623551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:33:57 compute-0 systemd[1]: Started libpod-conmon-6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5.scope.
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7b244788-7a22-4f47-94c8-3cd0e0647722 does not exist
Oct  3 09:33:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 56531cea-1d6f-4ae2-911b-0f7a444c63a1 does not exist
Oct  3 09:33:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev de4af651-3737-4590-8b81-2da68b6f7c0e does not exist
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:33:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571bbd77c33928226d32dde1ab51895fdb6ad1bc7a7c99a9c47584fdc28e1a9e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571bbd77c33928226d32dde1ab51895fdb6ad1bc7a7c99a9c47584fdc28e1a9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571bbd77c33928226d32dde1ab51895fdb6ad1bc7a7c99a9c47584fdc28e1a9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:57 compute-0 podman[215552]: 2025-10-03 09:33:57.221187691 +0000 UTC m=+0.446714627 container init 6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5 (image=quay.io/ceph/ceph:v18, name=festive_rubin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:33:57 compute-0 podman[215552]: 2025-10-03 09:33:57.234717726 +0000 UTC m=+0.460244662 container start 6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5 (image=quay.io/ceph/ceph:v18, name=festive_rubin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:57 compute-0 podman[215552]: 2025-10-03 09:33:57.248722776 +0000 UTC m=+0.474249712 container attach 6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5 (image=quay.io/ceph/ceph:v18, name=festive_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:33:57 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct  3 09:33:57 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct  3 09:33:57 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct  3 09:33:57 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  3 09:33:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v96: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:57 compute-0 podman[215727]: 2025-10-03 09:33:57.938137277 +0000 UTC m=+0.093092791 container create 5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 09:33:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:57 compute-0 festive_rubin[215567]: Scheduled rgw.rgw update...
Oct  3 09:33:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:33:57 compute-0 ceph-mon[191783]: Saving service rgw.rgw spec with placement compute-0
Oct  3 09:33:57 compute-0 podman[215727]: 2025-10-03 09:33:57.875698342 +0000 UTC m=+0.030653886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:57 compute-0 systemd[1]: libpod-6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5.scope: Deactivated successfully.
Oct  3 09:33:57 compute-0 podman[215552]: 2025-10-03 09:33:57.974943839 +0000 UTC m=+1.200470775 container died 6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5 (image=quay.io/ceph/ceph:v18, name=festive_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:33:57 compute-0 systemd[1]: Started libpod-conmon-5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441.scope.
Oct  3 09:33:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:58 compute-0 podman[215727]: 2025-10-03 09:33:58.061327013 +0000 UTC m=+0.216282557 container init 5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  3 09:33:58 compute-0 podman[215727]: 2025-10-03 09:33:58.071881023 +0000 UTC m=+0.226836537 container start 5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 09:33:58 compute-0 podman[215727]: 2025-10-03 09:33:58.076646795 +0000 UTC m=+0.231602309 container attach 5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:58 compute-0 zen_satoshi[215750]: 167 167
Oct  3 09:33:58 compute-0 systemd[1]: libpod-5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441.scope: Deactivated successfully.
Oct  3 09:33:58 compute-0 podman[215727]: 2025-10-03 09:33:58.078629549 +0000 UTC m=+0.233585103 container died 5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:58 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  3 09:33:58 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  3 09:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-571bbd77c33928226d32dde1ab51895fdb6ad1bc7a7c99a9c47584fdc28e1a9e-merged.mount: Deactivated successfully.
Oct  3 09:33:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b873ea37f11a2eaec3cfecc5ffb93bffec1f751aab8341a3fa83c7f6ed753af6-merged.mount: Deactivated successfully.
Oct  3 09:33:58 compute-0 podman[215727]: 2025-10-03 09:33:58.154268478 +0000 UTC m=+0.309223992 container remove 5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_satoshi, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct  3 09:33:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct  3 09:33:58 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289967537s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.510810852s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.614885330s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835815430s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289916992s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.510810852s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.607693672s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.828659058s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.614836693s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835815430s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.607668877s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.828659058s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289447784s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.510620117s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289402962s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.510604858s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289427757s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.510620117s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289371490s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.510604858s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.607445717s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.828758240s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289269447s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.510589600s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.607425690s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.828758240s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289250374s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.510589600s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.607344627s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.828781128s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289457321s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.510894775s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.289440155s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.510894775s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.607331276s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.828781128s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.288967133s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.510559082s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.288488388s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.510482788s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.606643677s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.828781128s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.606554985s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.828781128s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.613299370s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835655212s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.613283157s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835655212s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.288949013s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.510559082s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.278334618s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500816345s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.278319359s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500816345s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.613159180s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835678101s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.613138199s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835678101s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.287804604s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.510482788s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.613582611s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.836296082s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.613565445s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.836296082s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.277907372s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500854492s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.277885437s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500854492s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.612765312s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835769653s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.277624130s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500755310s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.277608871s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500755310s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.612745285s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835769653s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.277336121s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500785828s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.277318001s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500785828s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.612053871s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835845947s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.612034798s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835845947s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.277213097s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.501296997s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.277196884s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.501296997s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.611712456s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835884094s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.611699104s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835884094s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.276438713s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500679016s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.611287117s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835876465s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.611266136s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835876465s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.276007652s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500717163s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275988579s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500717163s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275808334s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500648499s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275750160s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500610352s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275731087s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500610352s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275777817s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500648499s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275522232s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500617981s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275493622s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500617981s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.610898018s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835929871s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.610665321s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835929871s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.610638618s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.835952759s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.610611916s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.835952759s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.285115242s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.510528564s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.285096169s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.510528564s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275069237s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500572205s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.275043488s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500572205s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.610629082s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.836181641s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.610610008s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.836181641s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.276424408s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500679016s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.602094650s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.828689575s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.602049828s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.828689575s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.273570061s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 76.500419617s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.273550987s) [2] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.500419617s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.609318733s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.836288452s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.609294891s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.836288452s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.609737396s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.836265564s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.608867645s) [1] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.836265564s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.608525276s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 79.836235046s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=37/39 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40 pruub=12.608503342s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.836235046s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[6.14( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[6.15( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[6.11( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[6.13( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[6.1f( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.345734596s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240989685s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.345707893s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240989685s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.270542145s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.166313171s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.270384789s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.166313171s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259319305s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.155349731s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259281158s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.155349731s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344802856s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240982056s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344931602s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240989685s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344807625s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.241027832s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344775200s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240982056s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344769478s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.241027832s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344839096s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.241195679s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259654045s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.156021118s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259620667s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.156021118s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344315529s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240837097s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344294548s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240837097s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259353638s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.156028748s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259334564s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.156028748s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259200096s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.156044006s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344134331s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240989685s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259181023s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.156044006s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.344012260s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240974426s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343989372s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240974426s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259078026s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.156074524s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259058952s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.156074524s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259050369s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.156143188s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343696594s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240806580s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.259033203s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.156143188s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343679428s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240806580s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268515587s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.165702820s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268499374s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.165702820s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343555450s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240806580s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343536377s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240806580s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268513680s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.165847778s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268497467s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.165847778s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343396187s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240791321s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343377113s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240791321s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268374443s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.165901184s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343211174s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240760803s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268355370s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.165901184s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343168259s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240760803s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.342954636s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240638733s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.342933655s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240638733s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268200874s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.165962219s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.342782021s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240562439s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.342751503s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240562439s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268153191s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.165992737s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268136978s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.165992737s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.281305313s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.179237366s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.343278885s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.241195679s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268007278s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.165954590s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.281287193s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.179237366s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267990112s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.165954590s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.281177521s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.179222107s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.281163216s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.179222107s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268058777s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.166145325s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268037796s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.166145325s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267918587s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.166069031s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267904282s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.166069031s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280828476s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.179054260s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267860413s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.166107178s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280810356s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.179054260s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267843246s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.166107178s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280637741s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.178962708s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280621529s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.178962708s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280636787s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.179023743s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280621529s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.179023743s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280479431s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.178962708s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267647743s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.166152954s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280460358s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.178962708s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267629623s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.166152954s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280334473s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.178955078s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.342740059s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.241378784s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.280318260s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.178955078s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.342720985s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.241378784s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267517090s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.166275024s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267498970s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.166275024s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267441750s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.166252136s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267422676s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.166252136s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.279885292s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.178756714s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.341692924s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 70.240585327s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.279869080s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.178756714s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40 pruub=15.341670036s) [1] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.240585327s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267367363s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active pruub 64.166290283s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.267349243s) [1] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.166290283s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40 pruub=9.268178940s) [0] r=-1 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.165962219s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.1c( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.1d( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=0/0 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.521029472s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.189178467s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.521007538s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.189178467s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.238135338s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906425476s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.11( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.11( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.238121986s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906425476s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.15( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.238043785s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906410217s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.238032341s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906410217s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.534381866s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.202819824s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.534369469s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.202819824s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237854004s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906379700s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237841606s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906379700s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.a( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.8( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237717628s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906326294s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.5( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237701416s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906326294s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.534736633s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203445435s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.534723282s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203445435s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.1( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.2( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237274170s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906082153s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237261772s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906082153s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.c( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237232208s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906127930s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237218857s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906127930s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.e( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.534005165s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203002930s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.533991814s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203002930s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.13( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237255096s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906333923s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.237243652s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906333923s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.236867905s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906028748s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.236857414s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906028748s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[7.1a( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.533797264s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203063965s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 podman[215552]: 2025-10-03 09:33:58.220856446 +0000 UTC m=+1.446383382 container remove 6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5 (image=quay.io/ceph/ceph:v18, name=festive_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.9( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 40 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.533782959s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203063965s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.533822060s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203178406s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.533809662s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203178406s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.236624718s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.906059265s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.236611366s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.906059265s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532588005s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203475952s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532569885s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203475952s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532231331s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203193665s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532209396s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203193665s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532543182s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203559875s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532529831s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203559875s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.f( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.4( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.3( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532382965s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203506470s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.18( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.234834671s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905990601s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532367706s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203506470s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[7.1b( empty local-lis/les=0/0 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 systemd[1]: libpod-conmon-6b1f1ac58270fe56269846af127c787858d04ac71aed81fa7e20e1afdc0487a5.scope: Deactivated successfully.
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.234820366s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905990601s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.532148361s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203544617s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.234280586s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905944824s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.234068871s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905944824s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.233994484s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905952454s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.531481743s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203544617s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.233852386s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905952454s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.233751297s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905937195s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.531484604s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203697205s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.233730316s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905937195s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.531709671s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203926086s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.531592369s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203926086s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.531468391s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203697205s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.530985832s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203590393s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.530958176s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203590393s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.233146667s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905899048s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.233124733s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905899048s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.530370712s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203567505s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.530351639s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203567505s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.232521057s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905921936s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.232502937s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905921936s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.232386589s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905883789s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.232281685s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905860901s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.232256889s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905860901s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529863358s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203727722s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529822350s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203727722s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529659271s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203613281s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.232354164s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905883789s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529632568s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203613281s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.231644630s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905776978s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.231625557s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905776978s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529371262s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203697205s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529353142s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203697205s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.231453896s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905921936s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.231437683s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905921936s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529312134s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203910828s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529293060s) [2] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203910828s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.231010437s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.905662537s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.230977058s) [2] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.905662537s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529126167s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active pruub 72.203872681s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40 pruub=11.529108047s) [0] r=-1 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 72.203872681s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.238215446s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active pruub 68.913085938s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40 pruub=8.238185883s) [0] r=-1 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.913085938s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:33:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
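The burst of osd.1 messages above is re-peering at osdmap epoch 40: PGs whose acting set moved away from osd.1 (up [1] -> [0] or [1] -> [2], role 0 -> -1) run start_peering_interval and drop to Stray, while PGs that osd.1 still heads ([1], r=0) go to Primary. A throwaway sketch for tallying those transitions out of the journal; the regex and names are illustrative, not part of any Ceph tooling:

import re
import sys
from collections import Counter

# Matches the state-machine lines above, e.g.
#   ... pg[7.1( ... )] ... state<Start>: transitioning to Stray
LINE = re.compile(
    r"pg\[(?P<pgid>\d+\.[0-9a-f]+)\(.*state<Start>: transitioning to (?P<state>\w+)"
)

def tally(lines):
    """Count Start->Stray vs Start->Primary transitions per pool."""
    counts = Counter()
    for line in lines:
        m = LINE.search(line)
        if m:
            pool = m.group("pgid").split(".")[0]
            counts[pool, m.group("state")] += 1
    return counts

for (pool, state), n in sorted(tally(sys.stdin).items()):
    print(f"pool {pool}: {n:3d} PGs -> {state}")

Fed this slice of the journal, it would show pools 3 and 7 going Stray on osd.1 while pools 2 and 5 go Primary, matching the [0]/[2] versus [1] acting sets visible in the lines themselves.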
Oct  3 09:33:58 compute-0 systemd[1]: libpod-conmon-5da73163e0a675a0d4718cdc49daadbc0a97d415333c2c56f07b6878f1195441.scope: Deactivated successfully.
Oct  3 09:33:58 compute-0 podman[215779]: 2025-10-03 09:33:58.366780333 +0000 UTC m=+0.063606243 container create 436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:33:58 compute-0 podman[215779]: 2025-10-03 09:33:58.344087424 +0000 UTC m=+0.040913364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:33:58 compute-0 systemd[1]: Started libpod-conmon-436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c.scope.
Oct  3 09:33:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b8187e9acaf9d72855f72db3386f28aa853a691299eeaab7557972718b813d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b8187e9acaf9d72855f72db3386f28aa853a691299eeaab7557972718b813d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b8187e9acaf9d72855f72db3386f28aa853a691299eeaab7557972718b813d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b8187e9acaf9d72855f72db3386f28aa853a691299eeaab7557972718b813d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b8187e9acaf9d72855f72db3386f28aa853a691299eeaab7557972718b813d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:33:58 compute-0 podman[215779]: 2025-10-03 09:33:58.48594172 +0000 UTC m=+0.182767660 container init 436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 09:33:58 compute-0 podman[215779]: 2025-10-03 09:33:58.505308353 +0000 UTC m=+0.202134263 container start 436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:33:58 compute-0 podman[215779]: 2025-10-03 09:33:58.509887189 +0000 UTC m=+0.206713099 container attach 436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 09:33:58 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Oct  3 09:33:58 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:33:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
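The six dispatch/finished pairs above are mgr.compute-0.vtkhde walking pgp_num_actual up to 32 on each pool; the same change can be issued by hand with `ceph osd pool set <pool> pgp_num_actual 32`. A minimal sketch driving it from Python, assuming the ceph CLI and an admin keyring are available on the host; the pool list is taken verbatim from the log:

import subprocess

POOLS = ["backups", "cephfs.cephfs.data", "cephfs.cephfs.meta",
         "images", "vms", "volumes"]

def set_pgp_num_actual(pool: str, value: int) -> None:
    # Equivalent to the mon command logged above:
    #   {"prefix": "osd pool set", "pool": ..., "var": "pgp_num_actual", "val": ...}
    subprocess.run(
        ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(value)],
        check=True,
    )

for pool in POOLS:
    set_pgp_num_actual(pool, 32)

Changing pgp_num_actual is what actually remaps placement groups, which is why the osdmap epoch rolls to e41 immediately below.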
Oct  3 09:33:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct  3 09:33:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct  3 09:33:59 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.1b( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.11( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[6.f( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[6.8( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[6.14( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[6.15( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[6.13( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[6.11( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[6.1f( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.16( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.8( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.b( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.f( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.3( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.1f( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.1c( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.11( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [2] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.15( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.a( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.8( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.5( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.2( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.c( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.e( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [2] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.1a( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 41 pg[7.1( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [2] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.13( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.2( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.f( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.1c( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.6( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.18( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.1d( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.1e( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.c( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.9( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.4( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[7.1f( empty local-lis/les=40/41 n=0 ec=37/26 lis/c=37/37 les/c/f=38/38/0 sis=40) [0] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.18( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [0] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 41 pg[2.19( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [0] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.d( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.6( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.1( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.4( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.e( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.2( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.b( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.17( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.1c( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=40/41 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=40) [1] r=0 lpr=40 pi=[33,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[6.1d( empty local-lis/les=40/41 n=0 ec=37/24 lis/c=37/37 les/c/f=39/39/0 sis=40) [1] r=0 lpr=40 pi=[37,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=35/21 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:33:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=35/22 lis/c=35/35 les/c/f=36/36/0 sis=40) [1] r=0 lpr=40 pi=[35,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
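[annotation] The run of entries above is osd.1's peering state machine logging the final transition for each placement group where it is primary (acting set [1], r=0): on AllReplicasActivated the PG finishes activating and goes active. Since the acting set contains only osd.1 itself, "all replicas" is trivially satisfied. A quick way to watch the same convergence from the CLI (standard Ceph commands, not taken from this log):

    ceph pg stat                      # e.g. "193 pgs: 46 peering, 147 active+clean"
    ceph pg dump pgs_brief | head     # per-PG state and acting set

The pgmap line a little further down (v99: 46 peering, 147 active+clean) shows the cluster midway through exactly that convergence.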
Oct  3 09:33:59 compute-0 python3[215875]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:33:59 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct  3 09:33:59 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct  3 09:33:59 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct  3 09:33:59 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct  3 09:33:59 compute-0 inspiring_lovelace[215795]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:33:59 compute-0 inspiring_lovelace[215795]: --> relative data size: 1.0
Oct  3 09:33:59 compute-0 inspiring_lovelace[215795]: --> All data devices are unavailable
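[annotation] inspiring_lovelace is a short-lived ceph-volume container, and the three lines above read like the tail of a `ceph-volume lvm batch` report against the node's three LVM data devices. "All data devices are unavailable" is the expected outcome here, not an error: each LV already carries a BlueStore OSD, as the `ceph-volume lvm list` dump later in this log confirms. A minimal sketch of the kind of invocation that produces this output (the device arguments are assumed from context, not copied from this log):

    ceph-volume lvm batch --report \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2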
Oct  3 09:33:59 compute-0 systemd[1]: libpod-436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c.scope: Deactivated successfully.
Oct  3 09:33:59 compute-0 podman[215779]: 2025-10-03 09:33:59.634486347 +0000 UTC m=+1.331312257 container died 436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 09:33:59 compute-0 systemd[1]: libpod-436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c.scope: Consumed 1.040s CPU time.
Oct  3 09:33:59 compute-0 python3[215963]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759484038.9372394-33838-223977674422561/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:33:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3b8187e9acaf9d72855f72db3386f28aa853a691299eeaab7557972718b813d-merged.mount: Deactivated successfully.
Oct  3 09:33:59 compute-0 podman[215779]: 2025-10-03 09:33:59.711530392 +0000 UTC m=+1.408356302 container remove 436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:33:59 compute-0 systemd[1]: libpod-conmon-436e56a4b7f89814509ef2e16c7bf20394b50d62b3a7822782f77021df65950c.scope: Deactivated successfully.
Oct  3 09:33:59 compute-0 podman[157165]: time="2025-10-03T09:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:33:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29174 "" "Go-http-client/1.1"
Oct  3 09:33:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5828 "" "Go-http-client/1.1"
Oct  3 09:33:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 46 peering, 147 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:00 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct  3 09:34:00 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct  3 09:34:00 compute-0 python3[216129]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
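[annotation] Decoded from the _raw_params above (#012 is syslog's escape for an embedded newline, so the final '--placement=compute-0 ' argument ends in a newline), the Ansible task effectively ran this one-off admin container:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        fs volume create cephfs '--placement=compute-0 '

The mgr-side effects of this call appear below: two pool creations, an `fs new`, and a transient MDS_ALL_DOWN health check.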
Oct  3 09:34:00 compute-0 podman[216134]: 2025-10-03 09:34:00.232662969 +0000 UTC m=+0.055033950 container create 301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6 (image=quay.io/ceph/ceph:v18, name=blissful_raman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:00 compute-0 systemd[1]: Started libpod-conmon-301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6.scope.
Oct  3 09:34:00 compute-0 podman[216134]: 2025-10-03 09:34:00.213596606 +0000 UTC m=+0.035967587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87afe41ef40f697e280602d989cc47ca58a145854ccf6ee4d87c0e7714fa7e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87afe41ef40f697e280602d989cc47ca58a145854ccf6ee4d87c0e7714fa7e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87afe41ef40f697e280602d989cc47ca58a145854ccf6ee4d87c0e7714fa7e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:00 compute-0 podman[216134]: 2025-10-03 09:34:00.353827459 +0000 UTC m=+0.176198450 container init 301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6 (image=quay.io/ceph/ceph:v18, name=blissful_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:00 compute-0 podman[216134]: 2025-10-03 09:34:00.368897954 +0000 UTC m=+0.191268925 container start 301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6 (image=quay.io/ceph/ceph:v18, name=blissful_raman, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:00 compute-0 podman[216134]: 2025-10-03 09:34:00.373069808 +0000 UTC m=+0.195440809 container attach 301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6 (image=quay.io/ceph/ceph:v18, name=blissful_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:34:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:00 compute-0 podman[216188]: 2025-10-03 09:34:00.566897882 +0000 UTC m=+0.058861461 container create 5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 09:34:00 compute-0 systemd[1]: Started libpod-conmon-5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d.scope.
Oct  3 09:34:00 compute-0 podman[216188]: 2025-10-03 09:34:00.5444011 +0000 UTC m=+0.036364699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:00 compute-0 podman[216188]: 2025-10-03 09:34:00.77388706 +0000 UTC m=+0.265850649 container init 5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_napier, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:00 compute-0 podman[216188]: 2025-10-03 09:34:00.783446237 +0000 UTC m=+0.275409816 container start 5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_napier, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct  3 09:34:00 compute-0 quizzical_napier[216204]: 167 167
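[annotation] The "167 167" printed by quizzical_napier (and again by wizardly_burnell further down) is a uid/gid probe: cephadm launches a throwaway container to learn which uid and gid own the ceph directories inside the image, and 167:167 is the `ceph` user and group in the upstream images. A hedged approximation of that probe, not copied from cephadm's source:

    podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 \
        -c '%u %g' /var/lib/ceph
    # prints: 167 167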
Oct  3 09:34:00 compute-0 systemd[1]: libpod-5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d.scope: Deactivated successfully.
Oct  3 09:34:00 compute-0 podman[216188]: 2025-10-03 09:34:00.842166333 +0000 UTC m=+0.334129912 container attach 5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_napier, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:34:00 compute-0 podman[216188]: 2025-10-03 09:34:00.842845285 +0000 UTC m=+0.334808864 container died 5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:00 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:34:00 compute-0 ceph-mgr[192071]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  3 09:34:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct  3 09:34:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  3 09:34:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct  3 09:34:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  3 09:34:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct  3 09:34:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  3 09:34:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct  3 09:34:00 compute-0 ceph-mon[191783]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  3 09:34:00 compute-0 ceph-mon[191783]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  3 09:34:00 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0[191779]: 2025-10-03T09:34:00.974+0000 7f4072929640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
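[annotation] MDS_ALL_DOWN (and MDS_UP_LESS_THAN_MAX) firing immediately after `fs new` is expected rather than alarming: the filesystem now exists, but no MDS daemon has been scheduled yet; that only happens at the `orch apply` / "Scheduled mds.cephfs update..." step below. To verify the state at this point one could run (standard commands, not taken from this log):

    ceph health detail        # shows MDS_ALL_DOWN / MDS_UP_LESS_THAN_MAX
    ceph fs status cephfs     # 0 MDS up; pools cephfs.cephfs.meta / cephfs.cephfs.data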
Oct  3 09:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b59946af2a6f8f18971ce2185d6f54dc04706fb0019e1d60158a5aea9a4f0694-merged.mount: Deactivated successfully.
Oct  3 09:34:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  3 09:34:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e2 new map
Oct  3 09:34:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e2 print_map
    e2
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-10-03T09:34:00.975180+0000
    modified  2025-10-03T09:34:00.975214+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
Oct  3 09:34:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct  3 09:34:01 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct  3 09:34:01 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct  3 09:34:01 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  3 09:34:01 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  3 09:34:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  3 09:34:01 compute-0 podman[216188]: 2025-10-03 09:34:01.084144615 +0000 UTC m=+0.576108234 container remove 5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:01 compute-0 ceph-mgr[192071]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  3 09:34:01 compute-0 systemd[1]: libpod-301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6.scope: Deactivated successfully.
Oct  3 09:34:01 compute-0 podman[216134]: 2025-10-03 09:34:01.1185657 +0000 UTC m=+0.940936681 container died 301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6 (image=quay.io/ceph/ceph:v18, name=blissful_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87afe41ef40f697e280602d989cc47ca58a145854ccf6ee4d87c0e7714fa7e6-merged.mount: Deactivated successfully.
Oct  3 09:34:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  3 09:34:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  3 09:34:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  3 09:34:01 compute-0 ceph-mon[191783]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  3 09:34:01 compute-0 ceph-mon[191783]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  3 09:34:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  3 09:34:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:01 compute-0 podman[216134]: 2025-10-03 09:34:01.262966488 +0000 UTC m=+1.085337459 container remove 301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6 (image=quay.io/ceph/ceph:v18, name=blissful_raman, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:34:01 compute-0 systemd[1]: libpod-conmon-301ea69dab33da73d529a76005642bb6d0b17e4540ee9411dbdd1859136bc3c6.scope: Deactivated successfully.
Oct  3 09:34:01 compute-0 systemd[1]: libpod-conmon-5014f9c0d1ee13a7b8c99e8515a03cadf6ef56febbdc96b6716341ae12d10c9d.scope: Deactivated successfully.
Oct  3 09:34:01 compute-0 podman[216263]: 2025-10-03 09:34:01.309848473 +0000 UTC m=+0.070428913 container create 180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:01 compute-0 podman[216263]: 2025-10-03 09:34:01.273743724 +0000 UTC m=+0.034324184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:01 compute-0 systemd[1]: Started libpod-conmon-180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db.scope.
Oct  3 09:34:01 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.e scrub starts
Oct  3 09:34:01 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.e scrub ok
Oct  3 09:34:01 compute-0 openstack_network_exporter[159287]: ERROR   09:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:34:01 compute-0 openstack_network_exporter[159287]: ERROR   09:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:34:01 compute-0 openstack_network_exporter[159287]: ERROR   09:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:34:01 compute-0 openstack_network_exporter[159287]: ERROR   09:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:34:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:01 compute-0 openstack_network_exporter[159287]: ERROR   09:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d768eb257c84a19590af1990048f59c6298dd5b1c51f658cb41da40c3735dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d768eb257c84a19590af1990048f59c6298dd5b1c51f658cb41da40c3735dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d768eb257c84a19590af1990048f59c6298dd5b1c51f658cb41da40c3735dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d768eb257c84a19590af1990048f59c6298dd5b1c51f658cb41da40c3735dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:01 compute-0 podman[216263]: 2025-10-03 09:34:01.472972943 +0000 UTC m=+0.233553393 container init 180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 09:34:01 compute-0 podman[216263]: 2025-10-03 09:34:01.489154442 +0000 UTC m=+0.249734882 container start 180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_panini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:01 compute-0 podman[216263]: 2025-10-03 09:34:01.499719701 +0000 UTC m=+0.260300141 container attach 180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 09:34:01 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct  3 09:34:01 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct  3 09:34:01 compute-0 python3[216308]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
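[annotation] This second task feeds the rendered spec file to the orchestrator. Stripped of the Ansible parameter wrapping, it is equivalent to:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        orch apply --in-file /home/ceph_spec.yaml

The contents of /tmp/ceph_mds.yml are not captured in this log; given the "Saving service mds.cephfs spec with placement compute-0" lines, a plausible shape (an assumption, not the actual file) would be:

    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
    EOF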
Oct  3 09:34:01 compute-0 podman[216310]: 2025-10-03 09:34:01.696836992 +0000 UTC m=+0.061530607 container create 9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 09:34:01 compute-0 podman[216310]: 2025-10-03 09:34:01.67154827 +0000 UTC m=+0.036241905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:01 compute-0 systemd[1]: Started libpod-conmon-9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898.scope.
Oct  3 09:34:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a3c61fabaf402f22358d3f7fd648c744a067b2d727effd3efc743014804ff/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a3c61fabaf402f22358d3f7fd648c744a067b2d727effd3efc743014804ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a67a3c61fabaf402f22358d3f7fd648c744a067b2d727effd3efc743014804ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:01 compute-0 podman[216310]: 2025-10-03 09:34:01.916989622 +0000 UTC m=+0.281683267 container init 9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:01 compute-0 podman[216310]: 2025-10-03 09:34:01.924976799 +0000 UTC m=+0.289670414 container start 9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 46 peering, 147 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:01 compute-0 podman[216310]: 2025-10-03 09:34:01.932445959 +0000 UTC m=+0.297139584 container attach 9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:02 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct  3 09:34:02 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct  3 09:34:02 compute-0 ceph-mon[191783]: Saving service mds.cephfs spec with placement compute-0
Oct  3 09:34:02 compute-0 fervent_panini[216286]: {
Oct  3 09:34:02 compute-0 fervent_panini[216286]:    "0": [
Oct  3 09:34:02 compute-0 fervent_panini[216286]:        {
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "devices": [
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "/dev/loop3"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            ],
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_name": "ceph_lv0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_size": "21470642176",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "name": "ceph_lv0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "tags": {
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.crush_device_class": "",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.encrypted": "0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osd_id": "0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.type": "block",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.vdo": "0"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            },
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "type": "block",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "vg_name": "ceph_vg0"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:        }
Oct  3 09:34:02 compute-0 fervent_panini[216286]:    ],
Oct  3 09:34:02 compute-0 fervent_panini[216286]:    "1": [
Oct  3 09:34:02 compute-0 fervent_panini[216286]:        {
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "devices": [
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "/dev/loop4"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            ],
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_name": "ceph_lv1",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_size": "21470642176",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "name": "ceph_lv1",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "tags": {
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.crush_device_class": "",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.encrypted": "0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osd_id": "1",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.type": "block",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.vdo": "0"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            },
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "type": "block",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "vg_name": "ceph_vg1"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:        }
Oct  3 09:34:02 compute-0 fervent_panini[216286]:    ],
Oct  3 09:34:02 compute-0 fervent_panini[216286]:    "2": [
Oct  3 09:34:02 compute-0 fervent_panini[216286]:        {
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "devices": [
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "/dev/loop5"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            ],
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_name": "ceph_lv2",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_size": "21470642176",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "name": "ceph_lv2",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "tags": {
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.crush_device_class": "",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.encrypted": "0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osd_id": "2",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.type": "block",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:                "ceph.vdo": "0"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            },
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "type": "block",
Oct  3 09:34:02 compute-0 fervent_panini[216286]:            "vg_name": "ceph_vg2"
Oct  3 09:34:02 compute-0 fervent_panini[216286]:        }
Oct  3 09:34:02 compute-0 fervent_panini[216286]:    ]
Oct  3 09:34:02 compute-0 fervent_panini[216286]: }
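[annotation] The JSON block printed by fervent_panini is `ceph-volume lvm list --format json` output: a map from OSD id to its backing logical volume. It confirms the earlier "All data devices are unavailable" report, since OSDs 0-2 already own ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2, backed by /dev/loop3-5. A small sketch to flatten it (assumes jq is available on the host):

    ceph-volume lvm list --format json \
        | jq -r 'to_entries[] | "osd.\(.key) -> \(.value[0].lv_path) (\(.value[0].devices[0]))"'
    # osd.0 -> /dev/ceph_vg0/ceph_lv0 (/dev/loop3)
    # osd.1 -> /dev/ceph_vg1/ceph_lv1 (/dev/loop4)
    # osd.2 -> /dev/ceph_vg2/ceph_lv2 (/dev/loop5)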
Oct  3 09:34:02 compute-0 systemd[1]: libpod-180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db.scope: Deactivated successfully.
Oct  3 09:34:02 compute-0 podman[216263]: 2025-10-03 09:34:02.352130478 +0000 UTC m=+1.112710938 container died 180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 09:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d768eb257c84a19590af1990048f59c6298dd5b1c51f658cb41da40c3735dd-merged.mount: Deactivated successfully.
Oct  3 09:34:02 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 09:34:02 compute-0 ceph-mgr[192071]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  3 09:34:02 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  3 09:34:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  3 09:34:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:02 compute-0 eager_ishizaka[216325]: Scheduled mds.cephfs update...
Oct  3 09:34:02 compute-0 podman[216263]: 2025-10-03 09:34:02.561677787 +0000 UTC m=+1.322258267 container remove 180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_panini, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:02 compute-0 systemd[1]: libpod-conmon-180cf0d627ddc93f303021284378585049021f64edc6fc77239553cec20113db.scope: Deactivated successfully.
Oct  3 09:34:02 compute-0 systemd[1]: libpod-9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898.scope: Deactivated successfully.
Oct  3 09:34:02 compute-0 podman[216310]: 2025-10-03 09:34:02.595847855 +0000 UTC m=+0.960541470 container died 9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 09:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a67a3c61fabaf402f22358d3f7fd648c744a067b2d727effd3efc743014804ff-merged.mount: Deactivated successfully.
Oct  3 09:34:02 compute-0 podman[216310]: 2025-10-03 09:34:02.686956841 +0000 UTC m=+1.051650456 container remove 9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898 (image=quay.io/ceph/ceph:v18, name=eager_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 09:34:02 compute-0 systemd[1]: libpod-conmon-9921e3695cba00e6e2c54316be3fb714f60b6628b736417c39e1a57cacfd4898.scope: Deactivated successfully.
Oct  3 09:34:03 compute-0 ceph-mon[191783]: Saving service mds.cephfs spec with placement compute-0
Oct  3 09:34:03 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:03 compute-0 podman[216592]: 2025-10-03 09:34:03.374351728 +0000 UTC m=+0.050925817 container create 8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:34:03 compute-0 systemd[1]: Started libpod-conmon-8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa.scope.
Oct  3 09:34:03 compute-0 python3[216591]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  3 09:34:03 compute-0 podman[216592]: 2025-10-03 09:34:03.35545666 +0000 UTC m=+0.032030779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:03 compute-0 podman[216592]: 2025-10-03 09:34:03.479817495 +0000 UTC m=+0.156391614 container init 8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:03 compute-0 podman[216592]: 2025-10-03 09:34:03.488415761 +0000 UTC m=+0.164989850 container start 8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:03 compute-0 systemd[1]: libpod-8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa.scope: Deactivated successfully.
Oct  3 09:34:03 compute-0 wizardly_burnell[216608]: 167 167
Oct  3 09:34:03 compute-0 conmon[216608]: conmon 8c058fa1d38fe3f6a2dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa.scope/container/memory.events
Oct  3 09:34:03 compute-0 podman[216592]: 2025-10-03 09:34:03.500625632 +0000 UTC m=+0.177199721 container attach 8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct  3 09:34:03 compute-0 podman[216592]: 2025-10-03 09:34:03.506685807 +0000 UTC m=+0.183259896 container died 8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d420eeded651e1c7aa65ff18075516a63554d4e508ae12493e66d83ffc3f3f2-merged.mount: Deactivated successfully.
Oct  3 09:34:03 compute-0 podman[216592]: 2025-10-03 09:34:03.55661083 +0000 UTC m=+0.233184919 container remove 8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_burnell, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 09:34:03 compute-0 systemd[1]: libpod-conmon-8c058fa1d38fe3f6a2dcf3dd60f522ea6fe1696d8eb30b31771c7f186064b7aa.scope: Deactivated successfully.
Oct  3 09:34:03 compute-0 podman[216697]: 2025-10-03 09:34:03.756961315 +0000 UTC m=+0.061073322 container create 2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_torvalds, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:34:03 compute-0 systemd[1]: Started libpod-conmon-2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b.scope.
Oct  3 09:34:03 compute-0 podman[216697]: 2025-10-03 09:34:03.72503319 +0000 UTC m=+0.029145237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becc0648b4dbb5d550634e2bcb0ecc145c91648a24069af638ad6c5bb5c05ef5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becc0648b4dbb5d550634e2bcb0ecc145c91648a24069af638ad6c5bb5c05ef5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becc0648b4dbb5d550634e2bcb0ecc145c91648a24069af638ad6c5bb5c05ef5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becc0648b4dbb5d550634e2bcb0ecc145c91648a24069af638ad6c5bb5c05ef5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:03 compute-0 podman[216697]: 2025-10-03 09:34:03.888290083 +0000 UTC m=+0.192402130 container init 2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_torvalds, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 09:34:03 compute-0 python3[216715]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759484043.102588-33868-183434433415610/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=51658b1d55381b8ed429b76b468ec988b4f1d33b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:34:03 compute-0 podman[216697]: 2025-10-03 09:34:03.904324528 +0000 UTC m=+0.208436545 container start 2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 09:34:03 compute-0 podman[216697]: 2025-10-03 09:34:03.922192462 +0000 UTC m=+0.226304509 container attach 2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 09:34:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 46 peering, 147 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:04 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct  3 09:34:04 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct  3 09:34:04 compute-0 python3[216774]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:04 compute-0 podman[216775]: 2025-10-03 09:34:04.550638645 +0000 UTC m=+0.072087626 container create ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432 (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:34:04 compute-0 systemd[1]: Started libpod-conmon-ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432.scope.
Oct  3 09:34:04 compute-0 podman[216775]: 2025-10-03 09:34:04.524586489 +0000 UTC m=+0.046035500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/195fa682e45d1e757c1864d2575b27e39831b2045da879a266bf26e7a74ef49a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/195fa682e45d1e757c1864d2575b27e39831b2045da879a266bf26e7a74ef49a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:04 compute-0 podman[216775]: 2025-10-03 09:34:04.686751887 +0000 UTC m=+0.208200878 container init ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432 (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:34:04 compute-0 podman[216775]: 2025-10-03 09:34:04.695508068 +0000 UTC m=+0.216957049 container start ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432 (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 09:34:04 compute-0 podman[216775]: 2025-10-03 09:34:04.705743517 +0000 UTC m=+0.227192518 container attach ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432 (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]: {
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "osd_id": 1,
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "type": "bluestore"
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:    },
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "osd_id": 2,
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "type": "bluestore"
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:    },
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "osd_id": 0,
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:        "type": "bluestore"
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]:    }
Oct  3 09:34:05 compute-0 intelligent_torvalds[216720]: }
Oct  3 09:34:05 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct  3 09:34:05 compute-0 systemd[1]: libpod-2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b.scope: Deactivated successfully.
Oct  3 09:34:05 compute-0 podman[216697]: 2025-10-03 09:34:05.061690238 +0000 UTC m=+1.365802255 container died 2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:05 compute-0 systemd[1]: libpod-2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b.scope: Consumed 1.139s CPU time.
Oct  3 09:34:05 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct  3 09:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-becc0648b4dbb5d550634e2bcb0ecc145c91648a24069af638ad6c5bb5c05ef5-merged.mount: Deactivated successfully.
Oct  3 09:34:05 compute-0 podman[216697]: 2025-10-03 09:34:05.146736909 +0000 UTC m=+1.450848926 container remove 2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 09:34:05 compute-0 systemd[1]: libpod-conmon-2a3ca644a50cbb59434140049b1260ac4043019e80e226350c1278c93a044b2b.scope: Deactivated successfully.
Oct  3 09:34:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct  3 09:34:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1724835426' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  3 09:34:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1724835426' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  3 09:34:05 compute-0 systemd[1]: libpod-ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432.scope: Deactivated successfully.
Oct  3 09:34:05 compute-0 podman[216932]: 2025-10-03 09:34:05.529633127 +0000 UTC m=+0.032491185 container died ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432 (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-195fa682e45d1e757c1864d2575b27e39831b2045da879a266bf26e7a74ef49a-merged.mount: Deactivated successfully.
Oct  3 09:34:05 compute-0 podman[216932]: 2025-10-03 09:34:05.587134963 +0000 UTC m=+0.089992991 container remove ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432 (image=quay.io/ceph/ceph:v18, name=relaxed_leavitt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:05 compute-0 systemd[1]: libpod-conmon-ae579dc1eb280b2875e9720f54bcda275d45d28eb259cd04941653c467e06432.scope: Deactivated successfully.
Oct  3 09:34:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:06 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 475f7d67-672e-4a6a-8c89-d63581960f23 (Global Recovery Event) in 15 seconds
Oct  3 09:34:06 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1724835426' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  3 09:34:06 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1724835426' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  3 09:34:06 compute-0 python3[217099]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:06 compute-0 podman[217110]: 2025-10-03 09:34:06.385493674 +0000 UTC m=+0.116526183 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:34:06 compute-0 podman[217130]: 2025-10-03 09:34:06.517503493 +0000 UTC m=+0.128571370 container create 0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315 (image=quay.io/ceph/ceph:v18, name=infallible_herschel, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:06 compute-0 podman[217110]: 2025-10-03 09:34:06.519710294 +0000 UTC m=+0.250742843 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:34:06 compute-0 podman[217130]: 2025-10-03 09:34:06.450321556 +0000 UTC m=+0.061389433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:06 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct  3 09:34:06 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct  3 09:34:06 compute-0 systemd[1]: Started libpod-conmon-0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315.scope.
Oct  3 09:34:06 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a069cc0d923f5ece2a67dc42fd8ab68ab5cfed151e24174925722e68891b45/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a069cc0d923f5ece2a67dc42fd8ab68ab5cfed151e24174925722e68891b45/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:06 compute-0 podman[217130]: 2025-10-03 09:34:06.699620402 +0000 UTC m=+0.310688259 container init 0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315 (image=quay.io/ceph/ceph:v18, name=infallible_herschel, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 09:34:06 compute-0 podman[217130]: 2025-10-03 09:34:06.714600353 +0000 UTC m=+0.325668220 container start 0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315 (image=quay.io/ceph/ceph:v18, name=infallible_herschel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:06 compute-0 podman[217130]: 2025-10-03 09:34:06.738538332 +0000 UTC m=+0.349606209 container attach 0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315 (image=quay.io/ceph/ceph:v18, name=infallible_herschel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  3 09:34:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3875222264' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  3 09:34:07 compute-0 infallible_herschel[217161]: 
Oct  3 09:34:07 compute-0 infallible_herschel[217161]: {"fsid":"9b4e8c9a-5555-5510-a631-4742a1182561","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":192,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1759483989,"num_in_osds":3,"osd_in_since":1759483955,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84201472,"bytes_avail":64327725056,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-10-03T09:34:05.931433+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"475f7d67-672e-4a6a-8c89-d63581960f23":{"message":"Global Recovery Event (10s)\n      [=====================.......] (remaining: 3s)","progress":0.76165801286697388,"add_to_ceph_s":true}}}
Oct  3 09:34:07 compute-0 systemd[1]: libpod-0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315.scope: Deactivated successfully.
Oct  3 09:34:07 compute-0 podman[217130]: 2025-10-03 09:34:07.376568243 +0000 UTC m=+0.987636080 container died 0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315 (image=quay.io/ceph/ceph:v18, name=infallible_herschel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-41a069cc0d923f5ece2a67dc42fd8ab68ab5cfed151e24174925722e68891b45-merged.mount: Deactivated successfully.
Oct  3 09:34:07 compute-0 podman[217130]: 2025-10-03 09:34:07.66666656 +0000 UTC m=+1.277734397 container remove 0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315 (image=quay.io/ceph/ceph:v18, name=infallible_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:34:07 compute-0 systemd[1]: libpod-conmon-0a099f41647d93b9f791931fb4ee71a428c3b9f36fc5d030462c1bdbb7e54315.scope: Deactivated successfully.
Oct  3 09:34:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:07 compute-0 python3[217413]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:08 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct  3 09:34:08 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct  3 09:34:08 compute-0 podman[217422]: 2025-10-03 09:34:08.150169729 +0000 UTC m=+0.125757400 container create 1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d (image=quay.io/ceph/ceph:v18, name=laughing_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:34:08 compute-0 podman[217422]: 2025-10-03 09:34:08.069848078 +0000 UTC m=+0.045435779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:34:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:34:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:34:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:34:08 compute-0 systemd[1]: Started libpod-conmon-1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d.scope.
Oct  3 09:34:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:34:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9853584c6978df6aa2ee0abec95c0e89cd4f89bc62088e71f172429ddb76f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9853584c6978df6aa2ee0abec95c0e89cd4f89bc62088e71f172429ddb76f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7a50c7f9-cab1-4a82-8748-16a446dcb2b2 does not exist
Oct  3 09:34:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 55d99c26-9e01-44e4-af17-e3107eb30933 does not exist
Oct  3 09:34:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5c34aa82-026a-4a19-bf06-380614fab8b4 does not exist
Oct  3 09:34:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:34:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:34:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:34:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:34:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:34:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:34:08 compute-0 podman[217422]: 2025-10-03 09:34:08.321600634 +0000 UTC m=+0.297188325 container init 1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d (image=quay.io/ceph/ceph:v18, name=laughing_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:08 compute-0 podman[217422]: 2025-10-03 09:34:08.332013259 +0000 UTC m=+0.307600930 container start 1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d (image=quay.io/ceph/ceph:v18, name=laughing_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:34:08 compute-0 podman[217422]: 2025-10-03 09:34:08.36535786 +0000 UTC m=+0.340945531 container attach 1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d (image=quay.io/ceph/ceph:v18, name=laughing_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:34:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:34:08 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct  3 09:34:08 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct  3 09:34:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 09:34:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511977290' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 09:34:09 compute-0 laughing_matsumoto[217453]: 
Oct  3 09:34:09 compute-0 laughing_matsumoto[217453]: {"epoch":1,"fsid":"9b4e8c9a-5555-5510-a631-4742a1182561","modified":"2025-10-03T09:30:48.618005Z","created":"2025-10-03T09:30:48.618005Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct  3 09:34:09 compute-0 laughing_matsumoto[217453]: dumped monmap epoch 1
Oct  3 09:34:09 compute-0 systemd[1]: libpod-1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d.scope: Deactivated successfully.
Oct  3 09:34:09 compute-0 podman[217422]: 2025-10-03 09:34:09.033478117 +0000 UTC m=+1.009065788 container died 1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d (image=quay.io/ceph/ceph:v18, name=laughing_matsumoto, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-df9853584c6978df6aa2ee0abec95c0e89cd4f89bc62088e71f172429ddb76f6-merged.mount: Deactivated successfully.
Oct  3 09:34:09 compute-0 podman[217613]: 2025-10-03 09:34:09.061313551 +0000 UTC m=+0.042446385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:09 compute-0 podman[217422]: 2025-10-03 09:34:09.169632259 +0000 UTC m=+1.145219930 container remove 1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d (image=quay.io/ceph/ceph:v18, name=laughing_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 09:34:09 compute-0 systemd[1]: libpod-conmon-1d6333468cf7477b93eafec57dcbe048942eabca16e5c059a1bd54badf3b190d.scope: Deactivated successfully.
Oct  3 09:34:09 compute-0 podman[217613]: 2025-10-03 09:34:09.220551805 +0000 UTC m=+0.201684659 container create d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mahavira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 09:34:09 compute-0 systemd[1]: Started libpod-conmon-d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea.scope.
Oct  3 09:34:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:09 compute-0 podman[217613]: 2025-10-03 09:34:09.34713411 +0000 UTC m=+0.328266964 container init d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:09 compute-0 podman[217613]: 2025-10-03 09:34:09.363607969 +0000 UTC m=+0.344740783 container start d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:09 compute-0 blissful_mahavira[217644]: 167 167
Oct  3 09:34:09 compute-0 systemd[1]: libpod-d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea.scope: Deactivated successfully.
Oct  3 09:34:09 compute-0 podman[217613]: 2025-10-03 09:34:09.375492441 +0000 UTC m=+0.356625345 container attach d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 09:34:09 compute-0 podman[217613]: 2025-10-03 09:34:09.376406361 +0000 UTC m=+0.357539225 container died d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mahavira, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 09:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a5ac323ec9af98d125deeff26163d71659e30869f7230f95dd5b0025aa00cb9-merged.mount: Deactivated successfully.
Oct  3 09:34:09 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Oct  3 09:34:09 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Oct  3 09:34:09 compute-0 podman[217613]: 2025-10-03 09:34:09.594916389 +0000 UTC m=+0.576049203 container remove d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:34:09 compute-0 systemd[1]: libpod-conmon-d1b38a0f07017076aded54a3bf7291de8694de057336637190f66ea70b09cfea.scope: Deactivated successfully.
Oct  3 09:34:09 compute-0 python3[217688]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:09 compute-0 podman[217694]: 2025-10-03 09:34:09.793148945 +0000 UTC m=+0.064437841 container create 8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:09 compute-0 podman[217694]: 2025-10-03 09:34:09.762848012 +0000 UTC m=+0.034136908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:09 compute-0 systemd[1]: Started libpod-conmon-8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde.scope.
Oct  3 09:34:09 compute-0 podman[217706]: 2025-10-03 09:34:09.876059327 +0000 UTC m=+0.085238368 container create 286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d (image=quay.io/ceph/ceph:v18, name=interesting_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17fe3fcfc15b167850eab4ff6bb26fe4d8c48fcb6beb473a9402bc45bce94b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17fe3fcfc15b167850eab4ff6bb26fe4d8c48fcb6beb473a9402bc45bce94b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17fe3fcfc15b167850eab4ff6bb26fe4d8c48fcb6beb473a9402bc45bce94b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17fe3fcfc15b167850eab4ff6bb26fe4d8c48fcb6beb473a9402bc45bce94b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba17fe3fcfc15b167850eab4ff6bb26fe4d8c48fcb6beb473a9402bc45bce94b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:09 compute-0 podman[217694]: 2025-10-03 09:34:09.925800884 +0000 UTC m=+0.197089791 container init 8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:09 compute-0 systemd[1]: Started libpod-conmon-286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d.scope.
Oct  3 09:34:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:09 compute-0 podman[217694]: 2025-10-03 09:34:09.940168976 +0000 UTC m=+0.211457872 container start 8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 09:34:09 compute-0 podman[217706]: 2025-10-03 09:34:09.853183782 +0000 UTC m=+0.062362833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6296e251303f19beb1b2a8c423d32e43f8fc30e7724738c86086277764abcf52/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6296e251303f19beb1b2a8c423d32e43f8fc30e7724738c86086277764abcf52/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:10 compute-0 podman[217694]: 2025-10-03 09:34:10.022843942 +0000 UTC m=+0.294132928 container attach 8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 09:34:10 compute-0 podman[217706]: 2025-10-03 09:34:10.046386617 +0000 UTC m=+0.255565808 container init 286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d (image=quay.io/ceph/ceph:v18, name=interesting_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 09:34:10 compute-0 podman[217706]: 2025-10-03 09:34:10.05393085 +0000 UTC m=+0.263109881 container start 286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d (image=quay.io/ceph/ceph:v18, name=interesting_tesla, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 09:34:10 compute-0 podman[217706]: 2025-10-03 09:34:10.141656077 +0000 UTC m=+0.350835128 container attach 286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d (image=quay.io/ceph/ceph:v18, name=interesting_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:34:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:10 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct  3 09:34:10 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct  3 09:34:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct  3 09:34:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1083950780' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  3 09:34:10 compute-0 interesting_tesla[217729]: [client.openstack]
Oct  3 09:34:10 compute-0 interesting_tesla[217729]: 	key = AQCcl99oAAAAABAAeilSs1C+Gkk5Z0s9jJBH2g==
Oct  3 09:34:10 compute-0 interesting_tesla[217729]: 	caps mgr = "allow *"
Oct  3 09:34:10 compute-0 interesting_tesla[217729]: 	caps mon = "profile rbd"
Oct  3 09:34:10 compute-0 interesting_tesla[217729]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct  3 09:34:10 compute-0 systemd[1]: libpod-286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d.scope: Deactivated successfully.
Oct  3 09:34:10 compute-0 conmon[217729]: conmon 286fe4811d44f1a17bf9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d.scope/container/memory.events
Oct  3 09:34:10 compute-0 podman[217763]: 2025-10-03 09:34:10.782158657 +0000 UTC m=+0.037980030 container died 286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d (image=quay.io/ceph/ceph:v18, name=interesting_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 09:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6296e251303f19beb1b2a8c423d32e43f8fc30e7724738c86086277764abcf52-merged.mount: Deactivated successfully.
Oct  3 09:34:11 compute-0 pedantic_chatelet[217723]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:34:11 compute-0 pedantic_chatelet[217723]: --> relative data size: 1.0
Oct  3 09:34:11 compute-0 pedantic_chatelet[217723]: --> All data devices are unavailable
Oct  3 09:34:11 compute-0 systemd[1]: libpod-8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde.scope: Deactivated successfully.
Oct  3 09:34:11 compute-0 systemd[1]: libpod-8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde.scope: Consumed 1.055s CPU time.
Oct  3 09:34:11 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Oct  3 09:34:11 compute-0 ceph-mgr[192071]: [progress INFO root] Writing back 10 completed events
Oct  3 09:34:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  3 09:34:11 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Oct  3 09:34:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:11 compute-0 podman[217763]: 2025-10-03 09:34:11.20099724 +0000 UTC m=+0.456818643 container remove 286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d (image=quay.io/ceph/ceph:v18, name=interesting_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:11 compute-0 podman[217694]: 2025-10-03 09:34:11.20539413 +0000 UTC m=+1.476683026 container died 8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 09:34:11 compute-0 systemd[1]: libpod-conmon-286fe4811d44f1a17bf9ef326e6b3c126e66943c59867e2ee4245e6c3a1f805d.scope: Deactivated successfully.
Oct  3 09:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba17fe3fcfc15b167850eab4ff6bb26fe4d8c48fcb6beb473a9402bc45bce94b-merged.mount: Deactivated successfully.
Oct  3 09:34:11 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/1083950780' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  3 09:34:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:11 compute-0 podman[217694]: 2025-10-03 09:34:11.871633447 +0000 UTC m=+2.142922353 container remove 8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:34:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:11 compute-0 systemd[1]: libpod-conmon-8a91afceb673271c9a958cc6e5ff6f97689e60ecab4f9a4e47195d2dc32acdde.scope: Deactivated successfully.
Oct  3 09:34:12 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  3 09:34:12 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  3 09:34:12 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct  3 09:34:12 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct  3 09:34:12 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Oct  3 09:34:12 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Oct  3 09:34:12 compute-0 podman[217941]: 2025-10-03 09:34:12.681308611 +0000 UTC m=+0.059545393 container create bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:12 compute-0 podman[217941]: 2025-10-03 09:34:12.654565193 +0000 UTC m=+0.032801915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:12 compute-0 systemd[1]: Started libpod-conmon-bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b.scope.
Oct  3 09:34:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:12 compute-0 podman[217941]: 2025-10-03 09:34:12.83756886 +0000 UTC m=+0.215805562 container init bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldberg, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 09:34:12 compute-0 podman[217941]: 2025-10-03 09:34:12.855143884 +0000 UTC m=+0.233380576 container start bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 09:34:12 compute-0 strange_goldberg[217956]: 167 167
Oct  3 09:34:12 compute-0 systemd[1]: libpod-bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b.scope: Deactivated successfully.
Oct  3 09:34:12 compute-0 podman[217941]: 2025-10-03 09:34:12.87060688 +0000 UTC m=+0.248843582 container attach bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:34:12 compute-0 podman[217941]: 2025-10-03 09:34:12.871229421 +0000 UTC m=+0.249466103 container died bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-64a59a2f484db3151fcc2b65caa1088869ed0af41368ad449c763254f84696b5-merged.mount: Deactivated successfully.
Oct  3 09:34:13 compute-0 podman[217941]: 2025-10-03 09:34:13.038992898 +0000 UTC m=+0.417229580 container remove bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:13 compute-0 systemd[1]: libpod-conmon-bd1b9ca46a1062cfbd56f913f2331f243bf9d4190171bf2512780b8c0a3ec78b.scope: Deactivated successfully.
Oct  3 09:34:13 compute-0 podman[218060]: 2025-10-03 09:34:13.255934416 +0000 UTC m=+0.092003845 container create 72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:34:13 compute-0 podman[218060]: 2025-10-03 09:34:13.191077763 +0000 UTC m=+0.027147212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:13 compute-0 systemd[1]: Started libpod-conmon-72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9.scope.
Oct  3 09:34:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:13 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct  3 09:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a41c9931d458361147e313edf6d0dbf79d3cf5d009fbb79b86b2bcb2b23452d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a41c9931d458361147e313edf6d0dbf79d3cf5d009fbb79b86b2bcb2b23452d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a41c9931d458361147e313edf6d0dbf79d3cf5d009fbb79b86b2bcb2b23452d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a41c9931d458361147e313edf6d0dbf79d3cf5d009fbb79b86b2bcb2b23452d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:13 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct  3 09:34:13 compute-0 podman[218060]: 2025-10-03 09:34:13.482636737 +0000 UTC m=+0.318706196 container init 72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:34:13 compute-0 podman[218060]: 2025-10-03 09:34:13.492546685 +0000 UTC m=+0.328616114 container start 72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Oct  3 09:34:13 compute-0 podman[218060]: 2025-10-03 09:34:13.554691401 +0000 UTC m=+0.390760840 container attach 72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:34:13 compute-0 ansible-async_wrapper.py[218148]: Invoked with j180678054115 30 /home/zuul/.ansible/tmp/ansible-tmp-1759484052.9333377-33940-228901792304343/AnsiballZ_command.py _
Oct  3 09:34:13 compute-0 ansible-async_wrapper.py[218153]: Starting module and watcher
Oct  3 09:34:13 compute-0 ansible-async_wrapper.py[218153]: Start watching 218154 (30)
Oct  3 09:34:13 compute-0 ansible-async_wrapper.py[218154]: Start module (218154)
Oct  3 09:34:13 compute-0 ansible-async_wrapper.py[218148]: Return async_wrapper task started.
Oct  3 09:34:13 compute-0 python3[218155]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:13 compute-0 podman[218156]: 2025-10-03 09:34:13.887295043 +0000 UTC m=+0.068198191 container create b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36 (image=quay.io/ceph/ceph:v18, name=hopeful_ride, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:13 compute-0 podman[218156]: 2025-10-03 09:34:13.850011836 +0000 UTC m=+0.030915014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:14 compute-0 systemd[1]: Started libpod-conmon-b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36.scope.
Oct  3 09:34:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1e97203661beafa4cb2661a52826d2bcbfcdbfa43fd094da21cab1999f110d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1e97203661beafa4cb2661a52826d2bcbfcdbfa43fd094da21cab1999f110d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:14 compute-0 podman[218156]: 2025-10-03 09:34:14.082832673 +0000 UTC m=+0.263735841 container init b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36 (image=quay.io/ceph/ceph:v18, name=hopeful_ride, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 09:34:14 compute-0 podman[218156]: 2025-10-03 09:34:14.092507604 +0000 UTC m=+0.273410752 container start b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36 (image=quay.io/ceph/ceph:v18, name=hopeful_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 09:34:14 compute-0 podman[218156]: 2025-10-03 09:34:14.107385832 +0000 UTC m=+0.288288980 container attach b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36 (image=quay.io/ceph/ceph:v18, name=hopeful_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:34:14 compute-0 agitated_austin[218143]: {
Oct  3 09:34:14 compute-0 agitated_austin[218143]:    "0": [
Oct  3 09:34:14 compute-0 agitated_austin[218143]:        {
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "devices": [
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "/dev/loop3"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            ],
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_name": "ceph_lv0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_size": "21470642176",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "name": "ceph_lv0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "tags": {
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.crush_device_class": "",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.encrypted": "0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osd_id": "0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.type": "block",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.vdo": "0"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            },
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "type": "block",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "vg_name": "ceph_vg0"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:        }
Oct  3 09:34:14 compute-0 agitated_austin[218143]:    ],
Oct  3 09:34:14 compute-0 agitated_austin[218143]:    "1": [
Oct  3 09:34:14 compute-0 agitated_austin[218143]:        {
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "devices": [
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "/dev/loop4"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            ],
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_name": "ceph_lv1",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_size": "21470642176",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "name": "ceph_lv1",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "tags": {
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.crush_device_class": "",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.encrypted": "0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osd_id": "1",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.type": "block",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.vdo": "0"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            },
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "type": "block",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "vg_name": "ceph_vg1"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:        }
Oct  3 09:34:14 compute-0 agitated_austin[218143]:    ],
Oct  3 09:34:14 compute-0 agitated_austin[218143]:    "2": [
Oct  3 09:34:14 compute-0 agitated_austin[218143]:        {
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "devices": [
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "/dev/loop5"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            ],
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_name": "ceph_lv2",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_size": "21470642176",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "name": "ceph_lv2",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "tags": {
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.crush_device_class": "",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.encrypted": "0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osd_id": "2",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.type": "block",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:                "ceph.vdo": "0"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            },
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "type": "block",
Oct  3 09:34:14 compute-0 agitated_austin[218143]:            "vg_name": "ceph_vg2"
Oct  3 09:34:14 compute-0 agitated_austin[218143]:        }
Oct  3 09:34:14 compute-0 agitated_austin[218143]:    ]
Oct  3 09:34:14 compute-0 agitated_austin[218143]: }
Oct  3 09:34:14 compute-0 systemd[1]: libpod-72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9.scope: Deactivated successfully.
Oct  3 09:34:14 compute-0 podman[218060]: 2025-10-03 09:34:14.346030505 +0000 UTC m=+1.182099944 container died 72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a41c9931d458361147e313edf6d0dbf79d3cf5d009fbb79b86b2bcb2b23452d-merged.mount: Deactivated successfully.
Oct  3 09:34:14 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct  3 09:34:14 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct  3 09:34:14 compute-0 podman[218060]: 2025-10-03 09:34:14.541228185 +0000 UTC m=+1.377297624 container remove 72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:34:14 compute-0 systemd[1]: libpod-conmon-72069cd6bc8003078059f330612b65ea7e43ba22ec97d757dba6cf21561b39a9.scope: Deactivated successfully.
Oct  3 09:34:14 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  3 09:34:14 compute-0 hopeful_ride[218169]: 
Oct  3 09:34:14 compute-0 hopeful_ride[218169]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  3 09:34:14 compute-0 systemd[1]: libpod-b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36.scope: Deactivated successfully.
Oct  3 09:34:14 compute-0 podman[218156]: 2025-10-03 09:34:14.743818411 +0000 UTC m=+0.924721559 container died b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36 (image=quay.io/ceph/ceph:v18, name=hopeful_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c1e97203661beafa4cb2661a52826d2bcbfcdbfa43fd094da21cab1999f110d-merged.mount: Deactivated successfully.
Oct  3 09:34:14 compute-0 podman[218156]: 2025-10-03 09:34:14.947559725 +0000 UTC m=+1.128462883 container remove b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36 (image=quay.io/ceph/ceph:v18, name=hopeful_ride, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 09:34:14 compute-0 systemd[1]: libpod-conmon-b44474787983478b59769dc05c055e65cccb154a2fd226b4585db425f8833b36.scope: Deactivated successfully.
Oct  3 09:34:14 compute-0 ansible-async_wrapper.py[218154]: Module complete (218154)
Oct  3 09:34:15 compute-0 python3[218338]: ansible-ansible.legacy.async_status Invoked with jid=j180678054115.218148 mode=status _async_dir=/root/.ansible_async
Oct  3 09:34:15 compute-0 python3[218438]: ansible-ansible.legacy.async_status Invoked with jid=j180678054115.218148 mode=cleanup _async_dir=/root/.ansible_async
Oct  3 09:34:15 compute-0 podman[218453]: 2025-10-03 09:34:15.403696454 +0000 UTC m=+0.056206806 container create dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 09:34:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:15 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct  3 09:34:15 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct  3 09:34:15 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.b scrub starts
Oct  3 09:34:15 compute-0 podman[218453]: 2025-10-03 09:34:15.379954672 +0000 UTC m=+0.032465044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:15 compute-0 systemd[1]: Started libpod-conmon-dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd.scope.
Oct  3 09:34:15 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.b scrub ok
Oct  3 09:34:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:15 compute-0 podman[218453]: 2025-10-03 09:34:15.538772972 +0000 UTC m=+0.191283374 container init dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct  3 09:34:15 compute-0 podman[218453]: 2025-10-03 09:34:15.551541402 +0000 UTC m=+0.204051764 container start dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 09:34:15 compute-0 naughty_torvalds[218469]: 167 167
Oct  3 09:34:15 compute-0 systemd[1]: libpod-dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd.scope: Deactivated successfully.
Oct  3 09:34:15 compute-0 podman[218453]: 2025-10-03 09:34:15.583193909 +0000 UTC m=+0.235704281 container attach dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:15 compute-0 podman[218453]: 2025-10-03 09:34:15.583660173 +0000 UTC m=+0.236170525 container died dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:34:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a84b75ba0492fb45bf598b456d0e1463871c1aaf2d09c6c73a0732cac37fff3-merged.mount: Deactivated successfully.
Oct  3 09:34:15 compute-0 podman[218453]: 2025-10-03 09:34:15.89615505 +0000 UTC m=+0.548665412 container remove dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:34:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:15 compute-0 systemd[1]: libpod-conmon-dbcd2818c3423e334869f4fdcd6b44b5c12f0991dc78e85f7274badd133f55fd.scope: Deactivated successfully.
Oct  3 09:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:34:16 compute-0 python3[218510]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:34:16 compute-0 podman[218518]: 2025-10-03 09:34:16.083847358 +0000 UTC m=+0.039267162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:16 compute-0 podman[218518]: 2025-10-03 09:34:16.187519097 +0000 UTC m=+0.142938881 container create d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:34:16 compute-0 podman[218524]: 2025-10-03 09:34:16.103771778 +0000 UTC m=+0.047162087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:16 compute-0 podman[218524]: 2025-10-03 09:34:16.241919624 +0000 UTC m=+0.185309903 container create 41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d (image=quay.io/ceph/ceph:v18, name=gifted_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:16 compute-0 systemd[1]: Started libpod-conmon-d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e.scope.
Oct  3 09:34:16 compute-0 systemd[1]: Started libpod-conmon-41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d.scope.
Oct  3 09:34:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772179d2a458e6408ad140d207721926f7babf6db35995e3ce7806849cb1ef99/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772179d2a458e6408ad140d207721926f7babf6db35995e3ce7806849cb1ef99/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de7bf8ea79a0c3833653d72a1c691d21bde2babb5fdd42443044a746e42ba19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de7bf8ea79a0c3833653d72a1c691d21bde2babb5fdd42443044a746e42ba19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de7bf8ea79a0c3833653d72a1c691d21bde2babb5fdd42443044a746e42ba19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0de7bf8ea79a0c3833653d72a1c691d21bde2babb5fdd42443044a746e42ba19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:16 compute-0 podman[218524]: 2025-10-03 09:34:16.402888634 +0000 UTC m=+0.346278953 container init 41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d (image=quay.io/ceph/ceph:v18, name=gifted_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:16 compute-0 podman[218524]: 2025-10-03 09:34:16.411127799 +0000 UTC m=+0.354518078 container start 41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d (image=quay.io/ceph/ceph:v18, name=gifted_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 09:34:16 compute-0 podman[218524]: 2025-10-03 09:34:16.525004075 +0000 UTC m=+0.468394434 container attach 41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d (image=quay.io/ceph/ceph:v18, name=gifted_neumann, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:16 compute-0 podman[218518]: 2025-10-03 09:34:16.561026052 +0000 UTC m=+0.516445936 container init d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:16 compute-0 podman[218518]: 2025-10-03 09:34:16.577013426 +0000 UTC m=+0.532433240 container start d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:16 compute-0 podman[218518]: 2025-10-03 09:34:16.585923992 +0000 UTC m=+0.541343816 container attach d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 09:34:16 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  3 09:34:16 compute-0 gifted_neumann[218550]: 
Oct  3 09:34:16 compute-0 gifted_neumann[218550]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  3 09:34:17 compute-0 systemd[1]: libpod-41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d.scope: Deactivated successfully.
Oct  3 09:34:17 compute-0 podman[218524]: 2025-10-03 09:34:17.011977995 +0000 UTC m=+0.955368304 container died 41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d (image=quay.io/ceph/ceph:v18, name=gifted_neumann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-772179d2a458e6408ad140d207721926f7babf6db35995e3ce7806849cb1ef99-merged.mount: Deactivated successfully.
Oct  3 09:34:17 compute-0 podman[218524]: 2025-10-03 09:34:17.245773564 +0000 UTC m=+1.189163883 container remove 41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d (image=quay.io/ceph/ceph:v18, name=gifted_neumann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:34:17 compute-0 systemd[1]: libpod-conmon-41daabda56e5fa1d3ce4e41da6ade0a3364ccbe9f1bdd313e8901ae335ca346d.scope: Deactivated successfully.
Oct  3 09:34:17 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct  3 09:34:17 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct  3 09:34:17 compute-0 intelligent_tu[218548]: {
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "osd_id": 1,
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "type": "bluestore"
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:    },
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "osd_id": 2,
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "type": "bluestore"
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:    },
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "osd_id": 0,
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:        "type": "bluestore"
Oct  3 09:34:17 compute-0 intelligent_tu[218548]:    }
Oct  3 09:34:17 compute-0 intelligent_tu[218548]: }
Oct  3 09:34:17 compute-0 systemd[1]: libpod-d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e.scope: Deactivated successfully.
Oct  3 09:34:17 compute-0 systemd[1]: libpod-d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e.scope: Consumed 1.082s CPU time.
Oct  3 09:34:17 compute-0 podman[218518]: 2025-10-03 09:34:17.67785321 +0000 UTC m=+1.633273064 container died d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0de7bf8ea79a0c3833653d72a1c691d21bde2babb5fdd42443044a746e42ba19-merged.mount: Deactivated successfully.
Oct  3 09:34:17 compute-0 podman[218518]: 2025-10-03 09:34:17.818288061 +0000 UTC m=+1.773707845 container remove d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:17 compute-0 systemd[1]: libpod-conmon-d341036cb7f479c814eed073d9190d418fe5c872fdfd36ff788a58463152383e.scope: Deactivated successfully.
Oct  3 09:34:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:17 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 5bce410d-60e5-4249-8c18-9763bca9477a (Updating rgw.rgw deployment (+1 -> 1))
Oct  3 09:34:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pbmwsx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct  3 09:34:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pbmwsx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  3 09:34:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pbmwsx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  3 09:34:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct  3 09:34:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:34:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:34:17 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.pbmwsx on compute-0
Oct  3 09:34:17 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.pbmwsx on compute-0
Oct  3 09:34:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pbmwsx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  3 09:34:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pbmwsx", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  3 09:34:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:18 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct  3 09:34:18 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct  3 09:34:18 compute-0 podman[218677]: 2025-10-03 09:34:18.118943567 +0000 UTC m=+0.089483424 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 09:34:18 compute-0 podman[218678]: 2025-10-03 09:34:18.120270529 +0000 UTC m=+0.088186082 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:34:18 compute-0 podman[218676]: 2025-10-03 09:34:18.140364425 +0000 UTC m=+0.119651003 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.component=ubi9-container, vcs-type=git, architecture=x86_64, name=ubi9)
Oct  3 09:34:18 compute-0 podman[218679]: 2025-10-03 09:34:18.163606672 +0000 UTC m=+0.132687233 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 09:34:18 compute-0 python3[218687]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:18 compute-0 podman[218801]: 2025-10-03 09:34:18.230327854 +0000 UTC m=+0.030106917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:18 compute-0 podman[218801]: 2025-10-03 09:34:18.414139307 +0000 UTC m=+0.213918370 container create e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6 (image=quay.io/ceph/ceph:v18, name=charming_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 09:34:18 compute-0 systemd[1]: Started libpod-conmon-e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6.scope.
Oct  3 09:34:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc9d823d34ca0e3cdca58b122bd0f5528f851626eb8e3b13214129ab4fe8bb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc9d823d34ca0e3cdca58b122bd0f5528f851626eb8e3b13214129ab4fe8bb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:18 compute-0 podman[218801]: 2025-10-03 09:34:18.516725193 +0000 UTC m=+0.316504256 container init e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6 (image=quay.io/ceph/ceph:v18, name=charming_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:18 compute-0 podman[218801]: 2025-10-03 09:34:18.525622859 +0000 UTC m=+0.325401912 container start e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6 (image=quay.io/ceph/ceph:v18, name=charming_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 09:34:18 compute-0 ansible-async_wrapper.py[218153]: Done in kid B.
Oct  3 09:34:18 compute-0 podman[218801]: 2025-10-03 09:34:18.655093226 +0000 UTC m=+0.454872309 container attach e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6 (image=quay.io/ceph/ceph:v18, name=charming_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:34:18 compute-0 podman[218891]: 2025-10-03 09:34:18.836617346 +0000 UTC m=+0.074355589 container create ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct  3 09:34:18 compute-0 podman[218891]: 2025-10-03 09:34:18.800907879 +0000 UTC m=+0.038646112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:18 compute-0 systemd[1]: Started libpod-conmon-ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7.scope.
Oct  3 09:34:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:18 compute-0 podman[218891]: 2025-10-03 09:34:18.969545595 +0000 UTC m=+0.207283818 container init ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:18 compute-0 podman[218891]: 2025-10-03 09:34:18.978369399 +0000 UTC m=+0.216107612 container start ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 09:34:18 compute-0 adoring_joliot[218927]: 167 167
Oct  3 09:34:18 compute-0 systemd[1]: libpod-ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7.scope: Deactivated successfully.
Oct  3 09:34:18 compute-0 podman[218891]: 2025-10-03 09:34:18.992862014 +0000 UTC m=+0.230600257 container attach ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_joliot, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:18 compute-0 podman[218891]: 2025-10-03 09:34:18.993275507 +0000 UTC m=+0.231013720 container died ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_joliot, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 09:34:18 compute-0 ceph-mon[191783]: Deploying daemon rgw.rgw.compute-0.pbmwsx on compute-0
Oct  3 09:34:19 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  3 09:34:19 compute-0 charming_bhaskara[218859]: 
Oct  3 09:34:19 compute-0 charming_bhaskara[218859]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct  3 09:34:19 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct  3 09:34:19 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct  3 09:34:19 compute-0 systemd[1]: libpod-e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6.scope: Deactivated successfully.
Oct  3 09:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-60c3f71b073d22dff2e42cc7dfce71b4fcfc3a16568fa16d3e7b92358c4bb7df-merged.mount: Deactivated successfully.
Oct  3 09:34:19 compute-0 podman[218801]: 2025-10-03 09:34:19.189124618 +0000 UTC m=+0.988903701 container died e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6 (image=quay.io/ceph/ceph:v18, name=charming_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 09:34:19 compute-0 podman[218891]: 2025-10-03 09:34:19.289915235 +0000 UTC m=+0.527653438 container remove ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddc9d823d34ca0e3cdca58b122bd0f5528f851626eb8e3b13214129ab4fe8bb5-merged.mount: Deactivated successfully.
Oct  3 09:34:19 compute-0 podman[218801]: 2025-10-03 09:34:19.350530272 +0000 UTC m=+1.150309315 container remove e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6 (image=quay.io/ceph/ceph:v18, name=charming_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:34:19 compute-0 systemd[1]: Reloading.
Oct  3 09:34:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:34:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:34:19 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct  3 09:34:19 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct  3 09:34:19 compute-0 systemd[1]: libpod-conmon-e4d59c8c5b1ad08f2f69f8bb1a382e364cefa2afdc680df91b8cfa07883a7ae6.scope: Deactivated successfully.
Oct  3 09:34:19 compute-0 systemd[1]: libpod-conmon-ae758e21bb51c82da8ab7d61ef0c8c0bacbb87d82aebf40df9a7a8bb561437e7.scope: Deactivated successfully.
Oct  3 09:34:19 compute-0 systemd[1]: Reloading.
Oct  3 09:34:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:19 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:34:19 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:34:20 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  3 09:34:20 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  3 09:34:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:20 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.pbmwsx for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:34:20 compute-0 python3[219063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:20 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct  3 09:34:20 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct  3 09:34:20 compute-0 podman[219092]: 2025-10-03 09:34:20.641202155 +0000 UTC m=+0.043629253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:20 compute-0 podman[219092]: 2025-10-03 09:34:20.778113392 +0000 UTC m=+0.180540510 container create be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3 (image=quay.io/ceph/ceph:v18, name=kind_chaum, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:20 compute-0 podman[219118]: 2025-10-03 09:34:20.805828052 +0000 UTC m=+0.119086766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:20 compute-0 systemd[1]: Started libpod-conmon-be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3.scope.
Oct  3 09:34:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de70acb98433019dcc25da862d1e0012f6150c59edf3fa219a1c8c55ab0e8983/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de70acb98433019dcc25da862d1e0012f6150c59edf3fa219a1c8c55ab0e8983/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:20 compute-0 podman[219118]: 2025-10-03 09:34:20.994347927 +0000 UTC m=+0.307606551 container create ba44c4f9c8b051ec5817f2aec1c777d33f242f014115b778ea827a9bba73b812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-rgw-rgw-compute-0-pbmwsx, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:21 compute-0 podman[219092]: 2025-10-03 09:34:21.08911202 +0000 UTC m=+0.491539188 container init be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3 (image=quay.io/ceph/ceph:v18, name=kind_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 09:34:21 compute-0 podman[219092]: 2025-10-03 09:34:21.100524686 +0000 UTC m=+0.502951764 container start be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3 (image=quay.io/ceph/ceph:v18, name=kind_chaum, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:34:21 compute-0 podman[219092]: 2025-10-03 09:34:21.187789459 +0000 UTC m=+0.590216627 container attach be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3 (image=quay.io/ceph/ceph:v18, name=kind_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db17ffdbe8d81041c5691f5fe002eed622e19ba02f574b19123ab45e7697b14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db17ffdbe8d81041c5691f5fe002eed622e19ba02f574b19123ab45e7697b14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db17ffdbe8d81041c5691f5fe002eed622e19ba02f574b19123ab45e7697b14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db17ffdbe8d81041c5691f5fe002eed622e19ba02f574b19123ab45e7697b14/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.pbmwsx supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:21 compute-0 podman[219118]: 2025-10-03 09:34:21.537974786 +0000 UTC m=+0.851233430 container init ba44c4f9c8b051ec5817f2aec1c777d33f242f014115b778ea827a9bba73b812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-rgw-rgw-compute-0-pbmwsx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:21 compute-0 podman[219118]: 2025-10-03 09:34:21.54684836 +0000 UTC m=+0.860106984 container start ba44c4f9c8b051ec5817f2aec1c777d33f242f014115b778ea827a9bba73b812 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-rgw-rgw-compute-0-pbmwsx, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:34:21 compute-0 bash[219118]: ba44c4f9c8b051ec5817f2aec1c777d33f242f014115b778ea827a9bba73b812
Oct  3 09:34:21 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.pbmwsx for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:34:21 compute-0 radosgw[219161]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:34:21 compute-0 radosgw[219161]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct  3 09:34:21 compute-0 radosgw[219161]: framework: beast
Oct  3 09:34:21 compute-0 radosgw[219161]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct  3 09:34:21 compute-0 radosgw[219161]: init_numa not setting numa affinity
Oct  3 09:34:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  3 09:34:21 compute-0 kind_chaum[219132]: 
Oct  3 09:34:21 compute-0 kind_chaum[219132]: [{"container_id": "97c57d436adb", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.38%", "created": "2025-10-03T09:32:17.303398Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-03T09:32:17.685889Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-03T09:34:07.274442Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-10-03T09:32:17.034758Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-9b4e8c9a-5555-5510-a631-4742a1182561@crash.compute-0", "version": "18.2.7"}, {"container_id": "b32ef6df7b93", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "27.44%", "created": "2025-10-03T09:30:59.030853Z", "daemon_id": "compute-0.vtkhde", "daemon_name": "mgr.compute-0.vtkhde", "daemon_type": "mgr", "events": ["2025-10-03T09:33:22.137627Z daemon:mgr.compute-0.vtkhde [INFO] \"Reconfigured mgr.compute-0.vtkhde on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-03T09:34:07.274345Z", "memory_usage": 551655833, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-03T09:30:58.391520Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-9b4e8c9a-5555-5510-a631-4742a1182561@mgr.compute-0.vtkhde", "version": "18.2.7"}, {"container_id": "5224f5bf68a0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.58%", "created": "2025-10-03T09:30:51.536609Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-03T09:33:21.130931Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-03T09:34:07.273854Z", "memory_request": 2147483648, "memory_usage": 38807797, "ports": [], "service_name": "mon", "started": "2025-10-03T09:30:55.079692Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-9b4e8c9a-5555-5510-a631-4742a1182561@mon.compute-0", "version": "18.2.7"}, {"container_id": "afc3a3d88412", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.63%", "created": "2025-10-03T09:32:49.371394Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-03T09:32:49.596229Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-03T09:34:07.274526Z", "memory_request": 4294967296, "memory_usage": 65965916, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-03T09:32:49.242410Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-9b4e8c9a-5555-5510-a631-4742a1182561@osd.0", "version": "18.2.7"}, {"container_id": "04521b7b64d4", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.02%", "created": "2025-10-03T09:32:56.004596Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-10-03T09:32:56.081596Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-03T09:34:07.274608Z", "memory_request": 4294967296, "memory_usage": 67339550, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-03T09:32:55.828869Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-9b4e8c9a-5555-5510-a631-4742a1182561@osd.1", "version": "18.2.7"}, {"container_id": "166adbb3547d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.05%", "created": "2025-10-03T09:33:01.800774Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-03T09:33:01.875718Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-03T09:34:07.274688Z", "memory_request": 4294967296, "memory_usage": 65515028, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-03T09:33:01.558711Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-9b4e8c9a-5555-5510-a631-4742a1182561@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.pbmwsx", "daemon_name": "rgw.rgw.compute-0.pbmwsx", "daemon_type": "rgw", "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Oct  3 09:34:21 compute-0 systemd[1]: libpod-be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3.scope: Deactivated successfully.
Oct  3 09:34:21 compute-0 podman[219092]: 2025-10-03 09:34:21.746459321 +0000 UTC m=+1.148886399 container died be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3 (image=quay.io/ceph/ceph:v18, name=kind_chaum, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 09:34:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  3 09:34:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 5bce410d-60e5-4249-8c18-9763bca9477a (Updating rgw.rgw deployment (+1 -> 1))
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 5bce410d-60e5-4249-8c18-9763bca9477a (Updating rgw.rgw deployment (+1 -> 1)) in 4 seconds
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  3 09:34:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  3 09:34:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  3 09:34:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 85b04583-db15-4bc5-9ea2-2b926f6e212c (Updating mds.cephfs deployment (+1 -> 1))
Oct  3 09:34:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-de70acb98433019dcc25da862d1e0012f6150c59edf3fa219a1c8c55ab0e8983-merged.mount: Deactivated successfully.
Oct  3 09:34:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.svanmi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct  3 09:34:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.svanmi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  3 09:34:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.svanmi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  3 09:34:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:34:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.svanmi on compute-0
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.svanmi on compute-0
Oct  3 09:34:21 compute-0 podman[219092]: 2025-10-03 09:34:21.874166062 +0000 UTC m=+1.276593160 container remove be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3 (image=quay.io/ceph/ceph:v18, name=kind_chaum, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 09:34:21 compute-0 systemd[1]: libpod-conmon-be6b9bf9da6606cfb1453b9ff73ae62594ec799da01de9c429267ccdc27e57f3.scope: Deactivated successfully.
Oct  3 09:34:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:22 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct  3 09:34:22 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct  3 09:34:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct  3 09:34:22 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  3 09:34:22 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  3 09:34:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct  3 09:34:22 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct  3 09:34:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct  3 09:34:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  3 09:34:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:22 compute-0 ceph-mon[191783]: Saving service rgw.rgw spec with placement compute-0
Oct  3 09:34:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.svanmi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  3 09:34:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.svanmi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  3 09:34:22 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct  3 09:34:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 43 pg[8.0( empty local-lis/les=0/0 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:22 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct  3 09:34:22 compute-0 podman[219380]: 2025-10-03 09:34:22.773632459 +0000 UTC m=+0.072642874 container create 3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:34:22 compute-0 podman[219380]: 2025-10-03 09:34:22.740863377 +0000 UTC m=+0.039873892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:22 compute-0 systemd[1]: Started libpod-conmon-3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247.scope.
Oct  3 09:34:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:22 compute-0 podman[219380]: 2025-10-03 09:34:22.900735642 +0000 UTC m=+0.199746077 container init 3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:22 compute-0 podman[219380]: 2025-10-03 09:34:22.910391281 +0000 UTC m=+0.209401696 container start 3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:34:22 compute-0 podman[219380]: 2025-10-03 09:34:22.915197226 +0000 UTC m=+0.214207641 container attach 3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:34:22 compute-0 bold_kepler[219412]: 167 167
Oct  3 09:34:22 compute-0 systemd[1]: libpod-3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247.scope: Deactivated successfully.
Oct  3 09:34:22 compute-0 podman[219380]: 2025-10-03 09:34:22.918796881 +0000 UTC m=+0.217807306 container died 3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:34:22 compute-0 python3[219408]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-eebdd160cdd2085f35733441b830d25f36e57e2ab6d6450a6716f94d836c1755-merged.mount: Deactivated successfully.
Oct  3 09:34:22 compute-0 podman[219380]: 2025-10-03 09:34:22.970871144 +0000 UTC m=+0.269881559 container remove 3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kepler, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 09:34:22 compute-0 systemd[1]: libpod-conmon-3f69f060655136e539da7acae13856380eccef0e1e1c5b5033ebeefc68414247.scope: Deactivated successfully.
Oct  3 09:34:23 compute-0 podman[219424]: 2025-10-03 09:34:23.008949287 +0000 UTC m=+0.051152464 container create 2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb (image=quay.io/ceph/ceph:v18, name=quirky_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 09:34:23 compute-0 systemd[1]: Reloading.
Oct  3 09:34:23 compute-0 podman[219424]: 2025-10-03 09:34:22.989817653 +0000 UTC m=+0.032020850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:34:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:34:23 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct  3 09:34:23 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct  3 09:34:23 compute-0 systemd[1]: Started libpod-conmon-2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb.scope.
Oct  3 09:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct  3 09:34:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  3 09:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct  3 09:34:23 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct  3 09:34:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 44 pg[8.0( empty local-lis/les=43/44 n=0 ec=43/43 lis/c=0/0 les/c/f=0/0/0 sis=43) [1] r=0 lpr=43 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:23 compute-0 ceph-mon[191783]: Deploying daemon mds.cephfs.compute-0.svanmi on compute-0
Oct  3 09:34:23 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  3 09:34:23 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  3 09:34:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a18fd00bb3eaa004f502b2009729aaf0e26823620a41913702fbf36d082e428/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a18fd00bb3eaa004f502b2009729aaf0e26823620a41913702fbf36d082e428/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:23 compute-0 systemd[1]: Reloading.
Oct  3 09:34:23 compute-0 podman[219424]: 2025-10-03 09:34:23.597650594 +0000 UTC m=+0.639853811 container init 2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb (image=quay.io/ceph/ceph:v18, name=quirky_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:23 compute-0 podman[219424]: 2025-10-03 09:34:23.615135486 +0000 UTC m=+0.657338663 container start 2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb (image=quay.io/ceph/ceph:v18, name=quirky_diffie, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 09:34:23 compute-0 podman[219424]: 2025-10-03 09:34:23.619347441 +0000 UTC m=+0.661550618 container attach 2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb (image=quay.io/ceph/ceph:v18, name=quirky_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:23 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct  3 09:34:23 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct  3 09:34:23 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:34:23 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:34:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v114: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:24 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.svanmi for 9b4e8c9a-5555-5510-a631-4742a1182561...
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154205283' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  3 09:34:24 compute-0 quirky_diffie[219482]: 
Oct  3 09:34:24 compute-0 quirky_diffie[219482]: {"fsid":"9b4e8c9a-5555-5510-a631-4742a1182561","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":208,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":44,"num_osds":3,"num_up_osds":3,"osd_up_since":1759483989,"num_in_osds":3,"osd_in_since":1759483955,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84180992,"bytes_avail":64327745536,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-10-03T09:34:11.933701+0000","services":{}},"progress_events":{"85b04583-db15-4bc5-9ea2-2b926f6e212c":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Oct  3 09:34:24 compute-0 podman[219592]: 2025-10-03 09:34:24.30715106 +0000 UTC m=+0.049217641 container create 5c2634ba7c52eeb0d47950141ad96eadd063fe266a57ec426eff6b82a7079f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mds-cephfs-compute-0-svanmi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:34:24 compute-0 systemd[1]: libpod-2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb.scope: Deactivated successfully.
Oct  3 09:34:24 compute-0 conmon[219482]: conmon 2ce97d3a6c4445cf2a0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb.scope/container/memory.events
Oct  3 09:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61e4cd7db882bf5868f43878794549dbc065e2b47bdc660953706079bba8d9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61e4cd7db882bf5868f43878794549dbc065e2b47bdc660953706079bba8d9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61e4cd7db882bf5868f43878794549dbc065e2b47bdc660953706079bba8d9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61e4cd7db882bf5868f43878794549dbc065e2b47bdc660953706079bba8d9c/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.svanmi supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:24 compute-0 podman[219607]: 2025-10-03 09:34:24.371563388 +0000 UTC m=+0.035482120 container died 2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb (image=quay.io/ceph/ceph:v18, name=quirky_diffie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 09:34:24 compute-0 podman[219592]: 2025-10-03 09:34:24.287315403 +0000 UTC m=+0.029382004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:24 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct  3 09:34:24 compute-0 podman[219592]: 2025-10-03 09:34:24.394217516 +0000 UTC m=+0.136284097 container init 5c2634ba7c52eeb0d47950141ad96eadd063fe266a57ec426eff6b82a7079f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mds-cephfs-compute-0-svanmi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:34:24 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct  3 09:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a18fd00bb3eaa004f502b2009729aaf0e26823620a41913702fbf36d082e428-merged.mount: Deactivated successfully.
Oct  3 09:34:24 compute-0 podman[219592]: 2025-10-03 09:34:24.410402596 +0000 UTC m=+0.152469177 container start 5c2634ba7c52eeb0d47950141ad96eadd063fe266a57ec426eff6b82a7079f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mds-cephfs-compute-0-svanmi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:34:24 compute-0 podman[219607]: 2025-10-03 09:34:24.432528907 +0000 UTC m=+0.096447619 container remove 2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb (image=quay.io/ceph/ceph:v18, name=quirky_diffie, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 09:34:24 compute-0 systemd[1]: libpod-conmon-2ce97d3a6c4445cf2a0c7b10f3553f0cc6879fa09e439d68c92b5f522670effb.scope: Deactivated successfully.
Oct  3 09:34:24 compute-0 bash[219592]: 5c2634ba7c52eeb0d47950141ad96eadd063fe266a57ec426eff6b82a7079f4b
Oct  3 09:34:24 compute-0 ceph-mds[219626]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:34:24 compute-0 ceph-mds[219626]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct  3 09:34:24 compute-0 ceph-mds[219626]: main not setting numa affinity
Oct  3 09:34:24 compute-0 ceph-mds[219626]: pidfile_write: ignore empty --pid-file
Oct  3 09:34:24 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mds-cephfs-compute-0-svanmi[219616]: starting mds.cephfs.compute-0.svanmi at 
Oct  3 09:34:24 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.svanmi for 9b4e8c9a-5555-5510-a631-4742a1182561.
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi Updating MDS map to version 2 from mon.0
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 85b04583-db15-4bc5-9ea2-2b926f6e212c (Updating mds.cephfs deployment (+1 -> 1))
Oct  3 09:34:24 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 85b04583-db15-4bc5-9ea2-2b926f6e212c (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  3 09:34:24 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e3 new map
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-03T09:34:00.975180+0000#012modified#0112025-10-03T09:34:00.975214+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.svanmi{-1:14271} state up:standby seq 1 addr [v2:192.168.122.100:6814/2283251280,v1:192.168.122.100:6815/2283251280] compat {c=[1],r=[1],i=[7ff]}]
Oct  3 09:34:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  3 09:34:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi Updating MDS map to version 3 from mon.0
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi Monitors have assigned me to become a standby.
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2283251280,v1:192.168.122.100:6815/2283251280] up:boot
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2283251280,v1:192.168.122.100:6815/2283251280] as mds.0
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.svanmi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.svanmi"} v 0) v1
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.svanmi"}]: dispatch
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e3 all = 0
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e4 new map
Oct  3 09:34:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-03T09:34:00.975180+0000#012modified#0112025-10-03T09:34:24.573805+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.svanmi{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/2283251280,v1:192.168.122.100:6815/2283251280] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi Updating MDS map to version 4 from mon.0
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x1
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x100
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x600
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.svanmi=up:creating}
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x601
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x602
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x603
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x604
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x605
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x606
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x607
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x608
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.cache creating system inode with ino:0x609
Oct  3 09:34:24 compute-0 ceph-mds[219626]: mds.0.4 creating_done
Oct  3 09:34:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.svanmi is now active in filesystem cephfs as rank 0
Oct  3 09:34:25 compute-0 podman[219754]: 2025-10-03 09:34:25.054644206 +0000 UTC m=+0.086186858 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Oct  3 09:34:25 compute-0 podman[219753]: 2025-10-03 09:34:25.086164279 +0000 UTC m=+0.118577340 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  3 09:34:25 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Oct  3 09:34:25 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Oct  3 09:34:25 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct  3 09:34:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:25 compute-0 python3[219879]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:25 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct  3 09:34:25 compute-0 podman[219898]: 2025-10-03 09:34:25.510537139 +0000 UTC m=+0.074800754 container create ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2 (image=quay.io/ceph/ceph:v18, name=bold_shockley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 09:34:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct  3 09:34:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  3 09:34:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct  3 09:34:25 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct  3 09:34:25 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 46 pg[9.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:25 compute-0 systemd[1]: Started libpod-conmon-ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2.scope.
Oct  3 09:34:25 compute-0 podman[219898]: 2025-10-03 09:34:25.487346424 +0000 UTC m=+0.051610059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:25 compute-0 ceph-mon[191783]: daemon mds.cephfs.compute-0.svanmi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  3 09:34:25 compute-0 ceph-mon[191783]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  3 09:34:25 compute-0 ceph-mon[191783]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  3 09:34:25 compute-0 ceph-mon[191783]: Cluster is now healthy
Oct  3 09:34:25 compute-0 ceph-mon[191783]: daemon mds.cephfs.compute-0.svanmi is now active in filesystem cephfs as rank 0
Oct  3 09:34:25 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  3 09:34:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e5 new map
Oct  3 09:34:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).mds e5 print_map
e5
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-10-03T09:34:00.975180+0000
modified	2025-10-03T09:34:25.590589+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=14271}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-0.svanmi{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2283251280,v1:192.168.122.100:6815/2283251280] compat {c=[1],r=[1],i=[7ff]}]
Oct  3 09:34:25 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi Updating MDS map to version 5 from mon.0
Oct  3 09:34:25 compute-0 ceph-mds[219626]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  3 09:34:25 compute-0 ceph-mds[219626]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct  3 09:34:25 compute-0 ceph-mds[219626]: mds.0.4 recovery_done -- successful recovery!
Oct  3 09:34:25 compute-0 ceph-mds[219626]: mds.0.4 active_start
Oct  3 09:34:25 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2283251280,v1:192.168.122.100:6815/2283251280] up:active
Oct  3 09:34:25 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.svanmi=up:active}
Oct  3 09:34:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f632bee555decc348e5185ae75cbec48a5e1c89a5fc92d820d6310197dcae6c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f632bee555decc348e5185ae75cbec48a5e1c89a5fc92d820d6310197dcae6c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:25 compute-0 podman[219898]: 2025-10-03 09:34:25.658833572 +0000 UTC m=+0.223097197 container init ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2 (image=quay.io/ceph/ceph:v18, name=bold_shockley, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 09:34:25 compute-0 podman[219898]: 2025-10-03 09:34:25.672507931 +0000 UTC m=+0.236771536 container start ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2 (image=quay.io/ceph/ceph:v18, name=bold_shockley, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:25 compute-0 podman[219898]: 2025-10-03 09:34:25.677585023 +0000 UTC m=+0.241848638 container attach ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2 (image=quay.io/ceph/ceph:v18, name=bold_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 09:34:25 compute-0 podman[219962]: 2025-10-03 09:34:25.797692801 +0000 UTC m=+0.093001328 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:34:25 compute-0 podman[219962]: 2025-10-03 09:34:25.892909859 +0000 UTC m=+0.188218376 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 09:34:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v117: 195 pgs: 1 unknown, 194 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.2 KiB/s wr, 6 op/s
Oct  3 09:34:26 compute-0 ceph-mgr[192071]: [progress INFO root] Writing back 12 completed events
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/518360951' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  3 09:34:26 compute-0 bold_shockley[219941]: 
Oct  3 09:34:26 compute-0 systemd[1]: libpod-ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2.scope: Deactivated successfully.
Oct  3 09:34:26 compute-0 bold_shockley[219941]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_i
nsecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.pbmwsx","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct  3 09:34:26 compute-0 podman[219898]: 2025-10-03 09:34:26.255739231 +0000 UTC m=+0.820002836 container died ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2 (image=quay.io/ceph/ceph:v18, name=bold_shockley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 09:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f632bee555decc348e5185ae75cbec48a5e1c89a5fc92d820d6310197dcae6c6-merged.mount: Deactivated successfully.
Oct  3 09:34:26 compute-0 podman[219898]: 2025-10-03 09:34:26.330873945 +0000 UTC m=+0.895137550 container remove ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2 (image=quay.io/ceph/ceph:v18, name=bold_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:26 compute-0 systemd[1]: libpod-conmon-ffab2668523219abf1437dae07984aa94406b89d501ad5652d04be07be23f4d2.scope: Deactivated successfully.
Oct  3 09:34:26 compute-0 podman[220098]: 2025-10-03 09:34:26.460006441 +0000 UTC m=+0.086970944 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  3 09:34:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:26 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 726b8965-d6f3-41be-9a96-10a73ab57b49 does not exist
Oct  3 09:34:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev dfcdecb8-9f6d-4108-852b-e74d44db4923 does not exist
Oct  3 09:34:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev dde5a66d-7c5d-4806-81b9-cf247a7de533 does not exist
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:34:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:34:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:34:27 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct  3 09:34:27 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct  3 09:34:27 compute-0 python3[220297]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:27 compute-0 podman[220320]: 2025-10-03 09:34:27.362085223 +0000 UTC m=+0.051550666 container create 1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a (image=quay.io/ceph/ceph:v18, name=happy_margulis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  3 09:34:27 compute-0 systemd[1]: Started libpod-conmon-1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a.scope.
Oct  3 09:34:27 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Oct  3 09:34:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:27 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Oct  3 09:34:27 compute-0 podman[220320]: 2025-10-03 09:34:27.346013447 +0000 UTC m=+0.035478910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82125c1aab68810a3a2c227505101a6dfb7e8de12c5fb380c930894df9984bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82125c1aab68810a3a2c227505101a6dfb7e8de12c5fb380c930894df9984bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:27 compute-0 podman[220320]: 2025-10-03 09:34:27.472650614 +0000 UTC m=+0.162116077 container init 1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a (image=quay.io/ceph/ceph:v18, name=happy_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:27 compute-0 podman[220320]: 2025-10-03 09:34:27.481055264 +0000 UTC m=+0.170520707 container start 1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a (image=quay.io/ceph/ceph:v18, name=happy_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:27 compute-0 podman[220320]: 2025-10-03 09:34:27.485668892 +0000 UTC m=+0.175134355 container attach 1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a (image=quay.io/ceph/ceph:v18, name=happy_margulis, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:34:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct  3 09:34:27 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  3 09:34:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct  3 09:34:27 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct  3 09:34:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 48 pg[10.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [2] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:27 compute-0 podman[220356]: 2025-10-03 09:34:27.574595958 +0000 UTC m=+0.059433690 container create a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:34:27 compute-0 systemd[1]: Started libpod-conmon-a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75.scope.
Oct  3 09:34:27 compute-0 podman[220356]: 2025-10-03 09:34:27.544974307 +0000 UTC m=+0.029812079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:34:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:34:27 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/264610143' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  3 09:34:27 compute-0 podman[220356]: 2025-10-03 09:34:27.688423604 +0000 UTC m=+0.173261326 container init a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_proskuriakova, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:34:27 compute-0 podman[220356]: 2025-10-03 09:34:27.697478574 +0000 UTC m=+0.182316286 container start a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 09:34:27 compute-0 youthful_proskuriakova[220371]: 167 167
Oct  3 09:34:27 compute-0 podman[220356]: 2025-10-03 09:34:27.701842485 +0000 UTC m=+0.186680217 container attach a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:34:27 compute-0 systemd[1]: libpod-a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75.scope: Deactivated successfully.
Oct  3 09:34:27 compute-0 podman[220356]: 2025-10-03 09:34:27.703528779 +0000 UTC m=+0.188366491 container died a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 09:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c912559ca98b8f2e8d45c6ada4d3a831b91159eef026025739002939600bbfe-merged.mount: Deactivated successfully.
Oct  3 09:34:27 compute-0 podman[220356]: 2025-10-03 09:34:27.752213653 +0000 UTC m=+0.237051365 container remove a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_proskuriakova, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:27 compute-0 systemd[1]: libpod-conmon-a40d79a69cf34fb3cabf6608e4decdb8da92ff3cccb44f7cd46aa82908423d75.scope: Deactivated successfully.
Oct  3 09:34:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v120: 196 pgs: 2 unknown, 194 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.2 KiB/s wr, 10 op/s
Oct  3 09:34:27 compute-0 podman[220425]: 2025-10-03 09:34:27.9404963 +0000 UTC m=+0.051421333 container create 9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 09:34:27 compute-0 systemd[1]: Started libpod-conmon-9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e.scope.
Oct  3 09:34:28 compute-0 podman[220425]: 2025-10-03 09:34:27.918567655 +0000 UTC m=+0.029492738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb06f84cc65d1b48fb3c9cc91e8d0258afa4cc5192d89cf33721ea00d613fb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb06f84cc65d1b48fb3c9cc91e8d0258afa4cc5192d89cf33721ea00d613fb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb06f84cc65d1b48fb3c9cc91e8d0258afa4cc5192d89cf33721ea00d613fb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb06f84cc65d1b48fb3c9cc91e8d0258afa4cc5192d89cf33721ea00d613fb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb06f84cc65d1b48fb3c9cc91e8d0258afa4cc5192d89cf33721ea00d613fb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct  3 09:34:28 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542605655' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct  3 09:34:28 compute-0 podman[220425]: 2025-10-03 09:34:28.08001004 +0000 UTC m=+0.190935093 container init 9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:28 compute-0 happy_margulis[220351]: mimic
Oct  3 09:34:28 compute-0 podman[220425]: 2025-10-03 09:34:28.092125189 +0000 UTC m=+0.203050222 container start 9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 09:34:28 compute-0 podman[220425]: 2025-10-03 09:34:28.097172962 +0000 UTC m=+0.208097995 container attach 9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:34:28 compute-0 systemd[1]: libpod-1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a.scope: Deactivated successfully.
Oct  3 09:34:28 compute-0 podman[220320]: 2025-10-03 09:34:28.11489364 +0000 UTC m=+0.804359083 container died 1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a (image=quay.io/ceph/ceph:v18, name=happy_margulis, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 09:34:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e82125c1aab68810a3a2c227505101a6dfb7e8de12c5fb380c930894df9984bc-merged.mount: Deactivated successfully.
Oct  3 09:34:28 compute-0 podman[220320]: 2025-10-03 09:34:28.171511769 +0000 UTC m=+0.860977212 container remove 1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a (image=quay.io/ceph/ceph:v18, name=happy_margulis, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:28 compute-0 systemd[1]: libpod-conmon-1bf42dcba303f13fc141cef8c7f9e6d48b1300adc5eb94bfda1c5f0eb395615a.scope: Deactivated successfully.
Oct  3 09:34:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct  3 09:34:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct  3 09:34:28 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct  3 09:34:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct  3 09:34:28 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3036234377' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  3 09:34:28 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3036234377' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  3 09:34:28 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:29 compute-0 python3[220503]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:29 compute-0 charming_shaw[220440]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:34:29 compute-0 charming_shaw[220440]: --> relative data size: 1.0
Oct  3 09:34:29 compute-0 charming_shaw[220440]: --> All data devices are unavailable
Oct  3 09:34:29 compute-0 systemd[1]: libpod-9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e.scope: Deactivated successfully.
Oct  3 09:34:29 compute-0 systemd[1]: libpod-9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e.scope: Consumed 1.073s CPU time.
Oct  3 09:34:29 compute-0 podman[220425]: 2025-10-03 09:34:29.242279678 +0000 UTC m=+1.353204711 container died 9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:34:29 compute-0 podman[220514]: 2025-10-03 09:34:29.268588213 +0000 UTC m=+0.055437072 container create d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529 (image=quay.io/ceph/ceph:v18, name=cool_brown, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 09:34:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb06f84cc65d1b48fb3c9cc91e8d0258afa4cc5192d89cf33721ea00d613fb4-merged.mount: Deactivated successfully.
Oct  3 09:34:29 compute-0 podman[220425]: 2025-10-03 09:34:29.314471047 +0000 UTC m=+1.425396080 container remove 9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 09:34:29 compute-0 systemd[1]: Started libpod-conmon-d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529.scope.
Oct  3 09:34:29 compute-0 systemd[1]: libpod-conmon-9a9be672488aa74d839a666a2a27481492bf380aebba0beaf708479c07aab72e.scope: Deactivated successfully.
Oct  3 09:34:29 compute-0 podman[220514]: 2025-10-03 09:34:29.247162185 +0000 UTC m=+0.034011064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af015b590309aab666237dda590afcf537620867bdd68c24eabdd3da35009d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55af015b590309aab666237dda590afcf537620867bdd68c24eabdd3da35009d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:29 compute-0 podman[220514]: 2025-10-03 09:34:29.397782222 +0000 UTC m=+0.184631081 container init d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529 (image=quay.io/ceph/ceph:v18, name=cool_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:34:29 compute-0 podman[220514]: 2025-10-03 09:34:29.409579961 +0000 UTC m=+0.196428820 container start d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529 (image=quay.io/ceph/ceph:v18, name=cool_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 09:34:29 compute-0 podman[220514]: 2025-10-03 09:34:29.413323571 +0000 UTC m=+0.200172440 container attach d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529 (image=quay.io/ceph/ceph:v18, name=cool_brown, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct  3 09:34:29 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3036234377' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  3 09:34:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct  3 09:34:29 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct  3 09:34:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct  3 09:34:29 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3036234377' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  3 09:34:29 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:29 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mds-cephfs-compute-0-svanmi[219616]: 2025-10-03T09:34:29.594+0000 7fc8710dd640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct  3 09:34:29 compute-0 ceph-mds[219626]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct  3 09:34:29 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3036234377' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  3 09:34:29 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3036234377' entity='client.rgw.rgw.compute-0.pbmwsx' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  3 09:34:29 compute-0 podman[157165]: time="2025-10-03T09:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:34:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34274 "" "Go-http-client/1.1"
Oct  3 09:34:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7191 "" "Go-http-client/1.1"
Oct  3 09:34:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 1 creating+peering, 1 unknown, 195 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 767 B/s wr, 7 op/s
Oct  3 09:34:30 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct  3 09:34:30 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct  3 09:34:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct  3 09:34:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1705310009' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct  3 09:34:30 compute-0 podman[220704]: 2025-10-03 09:34:30.116641689 +0000 UTC m=+0.071806827 container create 8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:30 compute-0 cool_brown[220540]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
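The single-line JSON above is the monitor's reply to the {"prefix": "versions"} command dispatched a moment earlier (handle_command at 09:34:30). A minimal stdlib sketch that flags any daemon not running the release reported under "overall":

    import json

    def check_uniform_versions(raw: str) -> None:
        # Parse the payload printed by the versions command above.
        data = json.loads(raw)
        overall = data.pop("overall", {})
        # The most common version across the cluster (18.2.7 reef here).
        expected = max(overall, key=overall.get) if overall else None
        for daemon_type, counts in data.items():
            for version, count in counts.items():
                if version != expected:
                    print(f"{daemon_type}: {count} daemon(s) on {version}")
        print(f"checked {sum(overall.values())} daemons against: {expected}")

Fed the payload above, it reports all 6 daemons (1 mon, 1 mgr, 3 osd, 1 mds) on 18.2.7 reef with no mismatches.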
Oct  3 09:34:30 compute-0 systemd[1]: libpod-d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529.scope: Deactivated successfully.
Oct  3 09:34:30 compute-0 podman[220514]: 2025-10-03 09:34:30.125584876 +0000 UTC m=+0.912433735 container died d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529 (image=quay.io/ceph/ceph:v18, name=cool_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 09:34:30 compute-0 systemd[1]: Started libpod-conmon-8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e.scope.
Oct  3 09:34:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-55af015b590309aab666237dda590afcf537620867bdd68c24eabdd3da35009d-merged.mount: Deactivated successfully.
Oct  3 09:34:30 compute-0 podman[220704]: 2025-10-03 09:34:30.08805133 +0000 UTC m=+0.043216448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:30 compute-0 podman[220514]: 2025-10-03 09:34:30.191888515 +0000 UTC m=+0.978737384 container remove d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529 (image=quay.io/ceph/ceph:v18, name=cool_brown, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 09:34:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:30 compute-0 systemd[1]: libpod-conmon-d0d9cf2452279f05daae790273c84686c38ee7a1ef5a2234041bacfae5049529.scope: Deactivated successfully.
Oct  3 09:34:30 compute-0 podman[220704]: 2025-10-03 09:34:30.219327107 +0000 UTC m=+0.174492215 container init 8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:30 compute-0 podman[220704]: 2025-10-03 09:34:30.227599323 +0000 UTC m=+0.182764411 container start 8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 09:34:30 compute-0 podman[220704]: 2025-10-03 09:34:30.231616761 +0000 UTC m=+0.186781849 container attach 8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 09:34:30 compute-0 vibrant_wu[220735]: 167 167
Oct  3 09:34:30 compute-0 systemd[1]: libpod-8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e.scope: Deactivated successfully.
Oct  3 09:34:30 compute-0 podman[220704]: 2025-10-03 09:34:30.23312329 +0000 UTC m=+0.188288378 container died 8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:34:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd3ecf3f394e370c5165e9cb7e0fb8df11c3765cf7982a693198df6b4d033f11-merged.mount: Deactivated successfully.
Oct  3 09:34:30 compute-0 podman[220704]: 2025-10-03 09:34:30.275958235 +0000 UTC m=+0.231123323 container remove 8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:30 compute-0 systemd[1]: libpod-conmon-8033541aab8a7c1cacbe91bb567cd9209a58480256eb902628db574754294e0e.scope: Deactivated successfully.
Oct  3 09:34:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:30 compute-0 podman[220758]: 2025-10-03 09:34:30.470672529 +0000 UTC m=+0.064136430 container create 3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:30 compute-0 systemd[1]: Started libpod-conmon-3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5.scope.
Oct  3 09:34:30 compute-0 podman[220758]: 2025-10-03 09:34:30.438293669 +0000 UTC m=+0.031757570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct  3 09:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56efa8a67a4a444916760d39808ed2dfdff264d4a9ae1e3db8c86303ec35392c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56efa8a67a4a444916760d39808ed2dfdff264d4a9ae1e3db8c86303ec35392c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56efa8a67a4a444916760d39808ed2dfdff264d4a9ae1e3db8c86303ec35392c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56efa8a67a4a444916760d39808ed2dfdff264d4a9ae1e3db8c86303ec35392c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3036234377' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  3 09:34:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct  3 09:34:30 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct  3 09:34:30 compute-0 podman[220758]: 2025-10-03 09:34:30.586784108 +0000 UTC m=+0.180247999 container init 3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:34:30 compute-0 podman[220758]: 2025-10-03 09:34:30.601723298 +0000 UTC m=+0.195187169 container start 3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:30 compute-0 podman[220758]: 2025-10-03 09:34:30.605334154 +0000 UTC m=+0.198798045 container attach 3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  3 09:34:30 compute-0 ceph-mon[191783]: from='client.? 192.168.122.100:0/3036234377' entity='client.rgw.rgw.compute-0.pbmwsx' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  3 09:34:30 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct  3 09:34:30 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct  3 09:34:30 compute-0 radosgw[219161]: LDAP not started since no server URIs were provided in the configuration.
Oct  3 09:34:30 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-rgw-rgw-compute-0-pbmwsx[219138]: 2025-10-03T09:34:30.833+0000 7f7ccdcb5940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  3 09:34:30 compute-0 radosgw[219161]: framework: beast
Oct  3 09:34:30 compute-0 radosgw[219161]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  3 09:34:30 compute-0 radosgw[219161]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  3 09:34:30 compute-0 radosgw[219161]: starting handler: beast
Oct  3 09:34:30 compute-0 radosgw[219161]: set uid:gid to 167:167 (ceph:ceph)
Oct  3 09:34:30 compute-0 radosgw[219161]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.pbmwsx,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=0bf4c7fa-89e0-4bb9-b6d5-f53c71878a8b,zone_name=default,zonegroup_id=b6cd11f1-8f83-4833-97f8-e3cc479ef978,zonegroup_name=default}
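The metadata blob in the service_daemon_register line above is a flat {k=v,k=v,...} rendering, not JSON. A naive parser is enough for this particular line, since none of its values contain a comma (an assumption about this sample, not a general property of the format):

    def parse_register_metadata(blob: str) -> dict:
        # Strip the surrounding braces, then split items on commas and
        # each item on its first '='; 'frontend_config#0=beast endpoint=...'
        # keeps its embedded '=' intact thanks to the maxsplit.
        inner = blob.strip().lstrip("{").rstrip("}")
        return dict(item.split("=", 1) for item in inner.split(","))

For the rgw line above, parse_register_metadata(...)["zone_name"] yields "default" and ["frontend_type#0"] yields "beast".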
Oct  3 09:34:31 compute-0 openstack_network_exporter[159287]: ERROR   09:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:34:31 compute-0 openstack_network_exporter[159287]: ERROR   09:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:34:31 compute-0 openstack_network_exporter[159287]: ERROR   09:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:34:31 compute-0 openstack_network_exporter[159287]: ERROR   09:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:34:31 compute-0 openstack_network_exporter[159287]: ERROR   09:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:34:31 compute-0 amazing_banach[220774]: {
Oct  3 09:34:31 compute-0 amazing_banach[220774]:    "0": [
Oct  3 09:34:31 compute-0 amazing_banach[220774]:        {
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "devices": [
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "/dev/loop3"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            ],
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_name": "ceph_lv0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_size": "21470642176",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "name": "ceph_lv0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "tags": {
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.crush_device_class": "",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.encrypted": "0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osd_id": "0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.type": "block",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.vdo": "0"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            },
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "type": "block",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "vg_name": "ceph_vg0"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:        }
Oct  3 09:34:31 compute-0 amazing_banach[220774]:    ],
Oct  3 09:34:31 compute-0 amazing_banach[220774]:    "1": [
Oct  3 09:34:31 compute-0 amazing_banach[220774]:        {
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "devices": [
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "/dev/loop4"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            ],
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_name": "ceph_lv1",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_size": "21470642176",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "name": "ceph_lv1",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "tags": {
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.crush_device_class": "",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.encrypted": "0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osd_id": "1",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.type": "block",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.vdo": "0"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            },
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "type": "block",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "vg_name": "ceph_vg1"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:        }
Oct  3 09:34:31 compute-0 amazing_banach[220774]:    ],
Oct  3 09:34:31 compute-0 amazing_banach[220774]:    "2": [
Oct  3 09:34:31 compute-0 amazing_banach[220774]:        {
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "devices": [
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "/dev/loop5"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            ],
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_name": "ceph_lv2",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_size": "21470642176",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "name": "ceph_lv2",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "tags": {
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.crush_device_class": "",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.encrypted": "0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osd_id": "2",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.type": "block",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:                "ceph.vdo": "0"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            },
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "type": "block",
Oct  3 09:34:31 compute-0 amazing_banach[220774]:            "vg_name": "ceph_vg2"
Oct  3 09:34:31 compute-0 amazing_banach[220774]:        }
Oct  3 09:34:31 compute-0 amazing_banach[220774]:    ]
Oct  3 09:34:31 compute-0 amazing_banach[220774]: }
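The JSON printed by amazing_banach is keyed by OSD id, and its shape is consistent with ceph-volume lvm list --format json (an inference from the content; the log does not show the container's command line). A sketch that reduces it to the mapping an operator usually wants, backing device and OSD fsid per id:

    import json

    def osd_devices(lvm_list_json: str) -> dict:
        # osd_id -> {devices, lv_path, osd_fsid}, taken from the
        # per-OSD records in the listing above.
        out = {}
        for osd_id, records in json.loads(lvm_list_json).items():
            for rec in records:
                out[int(osd_id)] = {
                    "devices": rec["devices"],        # e.g. ["/dev/loop3"]
                    "lv_path": rec["lv_path"],        # /dev/ceph_vg0/ceph_lv0
                    "osd_fsid": rec["tags"]["ceph.osd_fsid"],
                }
        return out

For the listing above this yields three entries: osd.0 on /dev/loop3 through osd.2 on /dev/loop5, each backing a 21470642176-byte LV.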
Oct  3 09:34:31 compute-0 systemd[1]: libpod-3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5.scope: Deactivated successfully.
Oct  3 09:34:31 compute-0 podman[221326]: 2025-10-03 09:34:31.512472648 +0000 UTC m=+0.026055708 container died 3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Oct  3 09:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-56efa8a67a4a444916760d39808ed2dfdff264d4a9ae1e3db8c86303ec35392c-merged.mount: Deactivated successfully.
Oct  3 09:34:31 compute-0 podman[221326]: 2025-10-03 09:34:31.590964319 +0000 UTC m=+0.104547389 container remove 3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 09:34:31 compute-0 systemd[1]: libpod-conmon-3f1544689c0c046a05f05586d2bbd60bd704fadfe2d7a31097ac9a1e3311fbb5.scope: Deactivated successfully.
Oct  3 09:34:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 1 creating+peering, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 233 B/s rd, 699 B/s wr, 7 op/s
Oct  3 09:34:32 compute-0 podman[221474]: 2025-10-03 09:34:32.378412398 +0000 UTC m=+0.056451473 container create 257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gould, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:34:32 compute-0 systemd[1]: Started libpod-conmon-257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659.scope.
Oct  3 09:34:32 compute-0 podman[221474]: 2025-10-03 09:34:32.357951412 +0000 UTC m=+0.035990517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:32 compute-0 podman[221474]: 2025-10-03 09:34:32.483832044 +0000 UTC m=+0.161871199 container init 257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:32 compute-0 podman[221474]: 2025-10-03 09:34:32.494089974 +0000 UTC m=+0.172129079 container start 257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gould, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:32 compute-0 podman[221474]: 2025-10-03 09:34:32.500037625 +0000 UTC m=+0.178076740 container attach 257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 09:34:32 compute-0 awesome_gould[221490]: 167 167
Oct  3 09:34:32 compute-0 systemd[1]: libpod-257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659.scope: Deactivated successfully.
Oct  3 09:34:32 compute-0 podman[221474]: 2025-10-03 09:34:32.504677953 +0000 UTC m=+0.182717028 container died 257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gould, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:32 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct  3 09:34:32 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct  3 09:34:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e550e58bf6726631e8ef518bb6f045b54456e43edac42f004ee544cbd336d765-merged.mount: Deactivated successfully.
Oct  3 09:34:32 compute-0 podman[221474]: 2025-10-03 09:34:32.567558703 +0000 UTC m=+0.245597778 container remove 257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_gould, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:32 compute-0 systemd[1]: libpod-conmon-257fa0b992cf81d94bef2e9cd164794b2ccb5b4c3ea6bca399161e05dda07659.scope: Deactivated successfully.
Oct  3 09:34:32 compute-0 podman[221514]: 2025-10-03 09:34:32.756753369 +0000 UTC m=+0.053844070 container create 7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcnulty, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 09:34:32 compute-0 systemd[1]: Started libpod-conmon-7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249.scope.
Oct  3 09:34:32 compute-0 podman[221514]: 2025-10-03 09:34:32.733309277 +0000 UTC m=+0.030400018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258cc8e122fb5bacaa7f9c159643ab69e32a7f1be82ad280a42c94423c69b6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258cc8e122fb5bacaa7f9c159643ab69e32a7f1be82ad280a42c94423c69b6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258cc8e122fb5bacaa7f9c159643ab69e32a7f1be82ad280a42c94423c69b6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258cc8e122fb5bacaa7f9c159643ab69e32a7f1be82ad280a42c94423c69b6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:32 compute-0 podman[221514]: 2025-10-03 09:34:32.865126459 +0000 UTC m=+0.162217160 container init 7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:32 compute-0 podman[221514]: 2025-10-03 09:34:32.885819184 +0000 UTC m=+0.182909885 container start 7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 09:34:32 compute-0 podman[221514]: 2025-10-03 09:34:32.892172099 +0000 UTC m=+0.189262840 container attach 7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcnulty, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 09:34:33 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct  3 09:34:33 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]: {
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "osd_id": 1,
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "type": "bluestore"
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:    },
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "osd_id": 2,
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "type": "bluestore"
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:    },
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "osd_id": 0,
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:        "type": "bluestore"
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]:    }
Oct  3 09:34:33 compute-0 wonderful_mcnulty[221531]: }
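This second listing is keyed by OSD fsid rather than OSD id, consistent with ceph-volume raw list output (again inferred; the command itself is not logged). A sketch that cross-checks it against the LVM listing printed a moment earlier, joining on the ceph.osd_fsid tag:

    import json

    def cross_check(lvm_json: str, raw_json: str) -> None:
        # Join the id-keyed LVM listing with the fsid-keyed listing
        # above and assert they describe the same three OSDs.
        lvm, raw = json.loads(lvm_json), json.loads(raw_json)
        for osd_id, records in lvm.items():
            fsid = records[0]["tags"]["ceph.osd_fsid"]
            entry = raw.get(fsid)
            assert entry is not None, f"osd.{osd_id}: {fsid} missing"
            assert entry["osd_id"] == int(osd_id), f"osd_id mismatch: {fsid}"
        print(f"{len(lvm)} OSDs consistent across both listings")

Run on the two payloads in this section it passes: fsids 25b10821..., 16cef594... and 19fdbf19... map to osd 0, 1 and 2 in both.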
Oct  3 09:34:33 compute-0 systemd[1]: libpod-7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249.scope: Deactivated successfully.
Oct  3 09:34:33 compute-0 conmon[221531]: conmon 7256e65a514d66f4365c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249.scope/container/memory.events
Oct  3 09:34:33 compute-0 podman[221514]: 2025-10-03 09:34:33.873224236 +0000 UTC m=+1.170314937 container died 7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcnulty, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1258cc8e122fb5bacaa7f9c159643ab69e32a7f1be82ad280a42c94423c69b6c-merged.mount: Deactivated successfully.
Oct  3 09:34:33 compute-0 podman[221514]: 2025-10-03 09:34:33.939274788 +0000 UTC m=+1.236365489 container remove 7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 1 creating+peering, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 511 B/s wr, 2 op/s
Oct  3 09:34:33 compute-0 systemd[1]: libpod-conmon-7256e65a514d66f4365cfca23b49bbbd6be26858f89db3d23310b60137941249.scope: Deactivated successfully.
Oct  3 09:34:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e9e770df-d25d-47e8-9bc1-ad086e1d35ca does not exist
Oct  3 09:34:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0015a9ce-2ada-4e4d-8332-73843da59ab0 does not exist
Oct  3 09:34:35 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Oct  3 09:34:35 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Oct  3 09:34:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:35 compute-0 podman[221795]: 2025-10-03 09:34:35.481155553 +0000 UTC m=+0.307736697 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 09:34:35 compute-0 podman[221795]: 2025-10-03 09:34:35.631125427 +0000 UTC m=+0.457706551 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 09:34:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 4.7 KiB/s wr, 150 op/s
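The mgr's pgmap DBG lines carry the whole PG state census on one line. A small regex, written against the pgmap samples in this section (the pattern is an assumption fitted to these lines, not the canonical mgr format), pulls the state counts apart:

    import re

    PGMAP_RE = re.compile(r"pgmap v(\d+): (\d+) pgs: ([^;]+);")

    def pg_states(line: str) -> dict:
        # '1 creating+peering, 196 active+clean' -> {'creating+peering': 1, ...}
        m = PGMAP_RE.search(line)
        if not m:
            return {}
        states = {}
        for part in m.group(3).split(","):
            count, state = part.strip().split(" ", 1)
            states[state] = int(count)
        return states

On the pgmap v127 line above it returns {'active+clean': 197}; on the earlier v123 line, {'creating+peering': 1, 'unknown': 1, 'active+clean': 195}.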
Oct  3 09:34:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:36 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:36 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:34:36 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:34:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:34:36 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:34:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:34:36 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:36 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ce6ddd72-9351-4b91-9aff-a6ed9666103e does not exist
Oct  3 09:34:36 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 367acb6e-f752-47d7-abb9-1c928a1fab74 does not exist
Oct  3 09:34:36 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 40ba5af6-1f20-48af-a988-f5689044d580 does not exist
Oct  3 09:34:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:34:36 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:34:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:34:36 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:34:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:34:36 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:34:37 compute-0 python3[222067]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
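The ansible task above shells out to podman to run radosgw-admin user info --uid openstack inside the ceph image. A sketch of the same probe from Python, mirroring the flags recorded in the log line (minus the playbook-specific assimilate_ceph.conf mount); radosgw-admin prints JSON on success:

    import json
    import subprocess

    def rgw_user_info(uid: str = "openstack") -> dict:
        # Same image, config and keyring paths as the ansible invocation above.
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "radosgw-admin", "quay.io/ceph/ceph:v18",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "user", "info", "--uid", uid,
        ]
        res = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return json.loads(res.stdout)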
Oct  3 09:34:37 compute-0 podman[222075]: 2025-10-03 09:34:37.103120883 +0000 UTC m=+0.055998587 container create 432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac (image=quay.io/ceph/ceph:v18, name=tender_saha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:37 compute-0 systemd[1]: Started libpod-conmon-432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac.scope.
Oct  3 09:34:37 compute-0 podman[222075]: 2025-10-03 09:34:37.082104512 +0000 UTC m=+0.034982246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d43a93c8f8ffcc4702a74726c7d26e067186eaa58968756568760c212b7ecf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d43a93c8f8ffcc4702a74726c7d26e067186eaa58968756568760c212b7ecf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:37 compute-0 podman[222075]: 2025-10-03 09:34:37.217841453 +0000 UTC m=+0.170719177 container init 432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac (image=quay.io/ceph/ceph:v18, name=tender_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 09:34:37 compute-0 podman[222075]: 2025-10-03 09:34:37.227126978 +0000 UTC m=+0.180004682 container start 432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac (image=quay.io/ceph/ceph:v18, name=tender_saha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:34:37 compute-0 podman[222075]: 2025-10-03 09:34:37.231341593 +0000 UTC m=+0.184219327 container attach 432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac (image=quay.io/ceph/ceph:v18, name=tender_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:37 compute-0 podman[222140]: 2025-10-03 09:34:37.42311332 +0000 UTC m=+0.075575972 container create c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:34:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:34:37 compute-0 systemd[1]: Started libpod-conmon-c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26.scope.
Oct  3 09:34:37 compute-0 podman[222140]: 2025-10-03 09:34:37.383034382 +0000 UTC m=+0.035497034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:37 compute-0 tender_saha[222103]: could not fetch user info: no user info saved
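
"could not fetch user info: no user info saved" is radosgw-admin's normal reply when the queried uid does not exist yet, so the non-zero exit here is what tells the playbook to fall through to the "user create" call further below. A sketch of that probe-then-create pattern, reusing the containerized invocation from the ansible command above (second volume mount omitted for brevity):

    import subprocess

    # Base radosgw-admin invocation copied from the ansible command in this log.
    BASE = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--entrypoint", "radosgw-admin", "quay.io/ceph/ceph:v18",
        "--fsid", "9b4e8c9a-5555-5510-a631-4742a1182561",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
    ]

    probe = subprocess.run(BASE + ["user", "info", "--uid", "openstack"])
    if probe.returncode != 0:  # uid missing: "no user info saved"
        subprocess.run(
            BASE + ["user", "create", "--uid", "openstack",
                    "--display-name", "openstack"],
            check=True,
        )
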
Oct  3 09:34:37 compute-0 podman[222140]: 2025-10-03 09:34:37.522320295 +0000 UTC m=+0.174782947 container init c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_nightingale, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:37 compute-0 podman[222140]: 2025-10-03 09:34:37.534206225 +0000 UTC m=+0.186668877 container start c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_nightingale, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:34:37 compute-0 angry_nightingale[222194]: 167 167
Oct  3 09:34:37 compute-0 podman[222140]: 2025-10-03 09:34:37.538902154 +0000 UTC m=+0.191364896 container attach c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_nightingale, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 09:34:37 compute-0 systemd[1]: libpod-c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26.scope: Deactivated successfully.
Oct  3 09:34:37 compute-0 conmon[222194]: conmon c0c708e1758e05fbcc79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26.scope/container/memory.events
Oct  3 09:34:37 compute-0 podman[222218]: 2025-10-03 09:34:37.58514333 +0000 UTC m=+0.031476776 container died c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:34:37 compute-0 systemd[1]: libpod-432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac.scope: Deactivated successfully.
Oct  3 09:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c19dddc02f9c76675bc1edf8cc44d5400bd4be2da16f4b99f60c48ae39e3d644-merged.mount: Deactivated successfully.
Oct  3 09:34:37 compute-0 podman[222075]: 2025-10-03 09:34:37.61651749 +0000 UTC m=+0.569395204 container died 432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac (image=quay.io/ceph/ceph:v18, name=tender_saha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 09:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-12d43a93c8f8ffcc4702a74726c7d26e067186eaa58968756568760c212b7ecf-merged.mount: Deactivated successfully.
Oct  3 09:34:37 compute-0 podman[222218]: 2025-10-03 09:34:37.658043954 +0000 UTC m=+0.104377350 container remove c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:37 compute-0 systemd[1]: libpod-conmon-c0c708e1758e05fbcc79dd39b04d5869258e75d5757785f6d6d8bc5ee70ecd26.scope: Deactivated successfully.
Oct  3 09:34:37 compute-0 podman[222075]: 2025-10-03 09:34:37.673363053 +0000 UTC m=+0.626240757 container remove 432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac (image=quay.io/ceph/ceph:v18, name=tender_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:37 compute-0 systemd[1]: libpod-conmon-432fa673ab9a45b15672c35f06f89a505fd56e5d6377c2cc49468a8d7c47beac.scope: Deactivated successfully.
Oct  3 09:34:37 compute-0 podman[222252]: 2025-10-03 09:34:37.861514876 +0000 UTC m=+0.056347749 container create 46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:34:37 compute-0 systemd[1]: Started libpod-conmon-46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214.scope.
Oct  3 09:34:37 compute-0 podman[222252]: 2025-10-03 09:34:37.839658218 +0000 UTC m=+0.034491091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.8 KiB/s wr, 130 op/s
Oct  3 09:34:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278ff1e881a6edc68d68dfbda789ac79eaf58a821380043ae0ce8d7141152e0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278ff1e881a6edc68d68dfbda789ac79eaf58a821380043ae0ce8d7141152e0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278ff1e881a6edc68d68dfbda789ac79eaf58a821380043ae0ce8d7141152e0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278ff1e881a6edc68d68dfbda789ac79eaf58a821380043ae0ce8d7141152e0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278ff1e881a6edc68d68dfbda789ac79eaf58a821380043ae0ce8d7141152e0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:38 compute-0 podman[222252]: 2025-10-03 09:34:38.003415972 +0000 UTC m=+0.198248875 container init 46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:34:38 compute-0 podman[222252]: 2025-10-03 09:34:38.01778438 +0000 UTC m=+0.212617233 container start 46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 09:34:38 compute-0 podman[222252]: 2025-10-03 09:34:38.022340995 +0000 UTC m=+0.217173878 container attach 46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:34:38 compute-0 python3[222287]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 9b4e8c9a-5555-5510-a631-4742a1182561 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:38 compute-0 podman[222296]: 2025-10-03 09:34:38.126769377 +0000 UTC m=+0.069665503 container create 6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5 (image=quay.io/ceph/ceph:v18, name=busy_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:34:38 compute-0 systemd[1]: Started libpod-conmon-6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5.scope.
Oct  3 09:34:38 compute-0 podman[222296]: 2025-10-03 09:34:38.09927374 +0000 UTC m=+0.042169896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  3 09:34:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4580ee64f73cce28c8a53d4c0ca19ad8b565cad7722eeb6e8c8804cd80e1907e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4580ee64f73cce28c8a53d4c0ca19ad8b565cad7722eeb6e8c8804cd80e1907e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:38 compute-0 podman[222296]: 2025-10-03 09:34:38.237109687 +0000 UTC m=+0.180005843 container init 6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5 (image=quay.io/ceph/ceph:v18, name=busy_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 09:34:38 compute-0 podman[222296]: 2025-10-03 09:34:38.246133984 +0000 UTC m=+0.189030120 container start 6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5 (image=quay.io/ceph/ceph:v18, name=busy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:34:38 compute-0 podman[222296]: 2025-10-03 09:34:38.250395781 +0000 UTC m=+0.193292027 container attach 6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5 (image=quay.io/ceph/ceph:v18, name=busy_sutherland, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 09:34:38 compute-0 busy_sutherland[222309]: {
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "user_id": "openstack",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "display_name": "openstack",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "email": "",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "suspended": 0,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "max_buckets": 1000,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "subusers": [],
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "keys": [
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        {
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:            "user": "openstack",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:            "access_key": "SDQNQ4YYDWF6OZN3D255",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:            "secret_key": "zhKB0PqrHEX4ZomJaK60XeRp79LmzMVLb2NvrzdT"
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        }
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    ],
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "swift_keys": [],
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "caps": [],
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "op_mask": "read, write, delete",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "default_placement": "",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "default_storage_class": "",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "placement_tags": [],
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "bucket_quota": {
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "enabled": false,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "check_on_raw": false,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "max_size": -1,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "max_size_kb": 0,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "max_objects": -1
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    },
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "user_quota": {
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "enabled": false,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "check_on_raw": false,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "max_size": -1,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "max_size_kb": 0,
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:        "max_objects": -1
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    },
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "temp_url_keys": [],
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "type": "rgw",
Oct  3 09:34:38 compute-0 busy_sutherland[222309]:    "mfa_ids": []
Oct  3 09:34:38 compute-0 busy_sutherland[222309]: }
Oct  3 09:34:38 compute-0 busy_sutherland[222309]: 
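
The "user create" output above is plain JSON on stdout, so the generated S3 credentials can be recovered straight from the captured command result. A minimal parsing sketch, with the JSON trimmed to the fields shown in the log:

    import json

    # `raw` stands in for the captured stdout of the "user create" call above.
    raw = '''
    {
      "user_id": "openstack",
      "keys": [
        {"user": "openstack",
         "access_key": "SDQNQ4YYDWF6OZN3D255",
         "secret_key": "zhKB0PqrHEX4ZomJaK60XeRp79LmzMVLb2NvrzdT"}
      ],
      "type": "rgw"
    }
    '''
    user = json.loads(raw)
    key = user["keys"][0]                      # first (and only) S3 keypair
    print(user["user_id"], key["access_key"])  # credentials for the rgw user
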
Oct  3 09:34:38 compute-0 systemd[1]: libpod-6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5.scope: Deactivated successfully.
Oct  3 09:34:38 compute-0 podman[222296]: 2025-10-03 09:34:38.66951499 +0000 UTC m=+0.612411226 container died 6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5 (image=quay.io/ceph/ceph:v18, name=busy_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 09:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4580ee64f73cce28c8a53d4c0ca19ad8b565cad7722eeb6e8c8804cd80e1907e-merged.mount: Deactivated successfully.
Oct  3 09:34:38 compute-0 podman[222296]: 2025-10-03 09:34:38.751456334 +0000 UTC m=+0.694352470 container remove 6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5 (image=quay.io/ceph/ceph:v18, name=busy_sutherland, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:34:38 compute-0 systemd[1]: libpod-conmon-6a0d806f21b806e9e9e3d9eebeb1b3185375707e0ee23069f3a1fbc4174a2ef5.scope: Deactivated successfully.
Oct  3 09:34:39 compute-0 nifty_hermann[222291]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:34:39 compute-0 nifty_hermann[222291]: --> relative data size: 1.0
Oct  3 09:34:39 compute-0 nifty_hermann[222291]: --> All data devices are unavailable
Oct  3 09:34:39 compute-0 systemd[1]: libpod-46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214.scope: Deactivated successfully.
Oct  3 09:34:39 compute-0 systemd[1]: libpod-46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214.scope: Consumed 1.052s CPU time.
Oct  3 09:34:39 compute-0 podman[222430]: 2025-10-03 09:34:39.192662838 +0000 UTC m=+0.038447957 container died 46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hermann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-278ff1e881a6edc68d68dfbda789ac79eaf58a821380043ae0ce8d7141152e0a-merged.mount: Deactivated successfully.
Oct  3 09:34:39 compute-0 podman[222430]: 2025-10-03 09:34:39.263192188 +0000 UTC m=+0.108977287 container remove 46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 09:34:39 compute-0 systemd[1]: libpod-conmon-46c67108a0761600b7f53e4266037cd97d44243db6c4610d05460ca119296214.scope: Deactivated successfully.
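
"passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is the ceph-volume batch planner reporting that every LV in the drive group is already tagged as an OSD, so there is nothing new to deploy; the lvm listing further below confirms all three LVs carry ceph lv_tags. One way to check that state directly on the host (needs root, assumes the LVM tools are installed):

    import subprocess

    # List every LV with its tags; LVs already provisioned as OSDs carry
    # ceph.* tags such as ceph.osd_id and ceph.osd_fsid.
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_path,lv_tags"],
        check=True, capture_output=True, text=True,
    ).stdout

    for line in out.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2 and "ceph.osd_id=" in parts[1]:
            print(parts[0], "-> already in use as a Ceph OSD")
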
Oct  3 09:34:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.4 KiB/s wr, 111 op/s
Oct  3 09:34:40 compute-0 podman[222582]: 2025-10-03 09:34:40.043106227 +0000 UTC m=+0.046922358 container create 7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mclaren, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:40 compute-0 systemd[1]: Started libpod-conmon-7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515.scope.
Oct  3 09:34:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:40 compute-0 podman[222582]: 2025-10-03 09:34:40.025616559 +0000 UTC m=+0.029432730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:40 compute-0 podman[222582]: 2025-10-03 09:34:40.13694864 +0000 UTC m=+0.140764781 container init 7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:40 compute-0 podman[222582]: 2025-10-03 09:34:40.145764812 +0000 UTC m=+0.149580953 container start 7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 09:34:40 compute-0 ecstatic_mclaren[222598]: 167 167
Oct  3 09:34:40 compute-0 podman[222582]: 2025-10-03 09:34:40.151375161 +0000 UTC m=+0.155191372 container attach 7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mclaren, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:34:40 compute-0 systemd[1]: libpod-7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515.scope: Deactivated successfully.
Oct  3 09:34:40 compute-0 podman[222582]: 2025-10-03 09:34:40.153579841 +0000 UTC m=+0.157396032 container died 7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4d8d3fc3b3fa9ac456bdc803cf3a96aaa45bec1c47f80903a9dfa35f1f556b6-merged.mount: Deactivated successfully.
Oct  3 09:34:40 compute-0 podman[222582]: 2025-10-03 09:34:40.220551847 +0000 UTC m=+0.224368008 container remove 7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:34:40 compute-0 systemd[1]: libpod-conmon-7919b5ffbd2180d4e7d33f77a76aaa8db988d6a04d8896cc9827aa16e850f515.scope: Deactivated successfully.
Oct  3 09:34:40 compute-0 podman[222621]: 2025-10-03 09:34:40.410591859 +0000 UTC m=+0.066626866 container create 5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:40 compute-0 systemd[1]: Started libpod-conmon-5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366.scope.
Oct  3 09:34:40 compute-0 podman[222621]: 2025-10-03 09:34:40.379210818 +0000 UTC m=+0.035245875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535a9d178ec3eb4fc03f06a787e3b9077de918e54b655dc09db036966b6d7c28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535a9d178ec3eb4fc03f06a787e3b9077de918e54b655dc09db036966b6d7c28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535a9d178ec3eb4fc03f06a787e3b9077de918e54b655dc09db036966b6d7c28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535a9d178ec3eb4fc03f06a787e3b9077de918e54b655dc09db036966b6d7c28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:40 compute-0 podman[222621]: 2025-10-03 09:34:40.543476219 +0000 UTC m=+0.199511196 container init 5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 09:34:40 compute-0 podman[222621]: 2025-10-03 09:34:40.558211149 +0000 UTC m=+0.214246126 container start 5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 09:34:40 compute-0 podman[222621]: 2025-10-03 09:34:40.562864957 +0000 UTC m=+0.218899944 container attach 5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:34:41 compute-0 epic_ellis[222638]: {
Oct  3 09:34:41 compute-0 epic_ellis[222638]:    "0": [
Oct  3 09:34:41 compute-0 epic_ellis[222638]:        {
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "devices": [
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "/dev/loop3"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            ],
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_name": "ceph_lv0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_size": "21470642176",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "name": "ceph_lv0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "tags": {
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.crush_device_class": "",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.encrypted": "0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osd_id": "0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.type": "block",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.vdo": "0"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            },
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "type": "block",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "vg_name": "ceph_vg0"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:        }
Oct  3 09:34:41 compute-0 epic_ellis[222638]:    ],
Oct  3 09:34:41 compute-0 epic_ellis[222638]:    "1": [
Oct  3 09:34:41 compute-0 epic_ellis[222638]:        {
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "devices": [
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "/dev/loop4"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            ],
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_name": "ceph_lv1",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_size": "21470642176",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "name": "ceph_lv1",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "tags": {
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.crush_device_class": "",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.encrypted": "0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osd_id": "1",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.type": "block",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.vdo": "0"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            },
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "type": "block",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "vg_name": "ceph_vg1"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:        }
Oct  3 09:34:41 compute-0 epic_ellis[222638]:    ],
Oct  3 09:34:41 compute-0 epic_ellis[222638]:    "2": [
Oct  3 09:34:41 compute-0 epic_ellis[222638]:        {
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "devices": [
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "/dev/loop5"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            ],
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_name": "ceph_lv2",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_size": "21470642176",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "name": "ceph_lv2",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "tags": {
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.cluster_name": "ceph",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.crush_device_class": "",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.encrypted": "0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osd_id": "2",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.type": "block",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:                "ceph.vdo": "0"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            },
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "type": "block",
Oct  3 09:34:41 compute-0 epic_ellis[222638]:            "vg_name": "ceph_vg2"
Oct  3 09:34:41 compute-0 epic_ellis[222638]:        }
Oct  3 09:34:41 compute-0 epic_ellis[222638]:    ]
Oct  3 09:34:41 compute-0 epic_ellis[222638]: }
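
The JSON block above matches the shape of "ceph-volume lvm list --format json": a dict keyed by osd_id, one block LV per OSD, backed here by loop devices since this is a virtualized lab node. A minimal sketch that reduces it to an osd_id -> device table (JSON trimmed to two OSDs):

    import json

    # `raw` stands in for the JSON printed above, trimmed to the fields used.
    raw = '''
    {
      "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"]}],
      "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"]}]
    }
    '''
    for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:  # one "block" LV per OSD in this deployment
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")
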
Oct  3 09:34:41 compute-0 systemd[1]: libpod-5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366.scope: Deactivated successfully.
Oct  3 09:34:41 compute-0 podman[222621]: 2025-10-03 09:34:41.355878844 +0000 UTC m=+1.011913831 container died 5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-535a9d178ec3eb4fc03f06a787e3b9077de918e54b655dc09db036966b6d7c28-merged.mount: Deactivated successfully.
Oct  3 09:34:41 compute-0 podman[222621]: 2025-10-03 09:34:41.626826687 +0000 UTC m=+1.282861654 container remove 5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:34:41 compute-0 systemd[1]: libpod-conmon-5d55aa75e5f4613160719f15ecc61c6a8937a14873ba7f08319d3e34eded6366.scope: Deactivated successfully.
Oct  3 09:34:41 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct  3 09:34:41 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct  3 09:34:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 3.0 KiB/s wr, 97 op/s
Oct  3 09:34:42 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct  3 09:34:42 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct  3 09:34:42 compute-0 podman[222796]: 2025-10-03 09:34:42.464578891 +0000 UTC m=+0.064063436 container create 5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:42 compute-0 systemd[1]: Started libpod-conmon-5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219.scope.
Oct  3 09:34:42 compute-0 podman[222796]: 2025-10-03 09:34:42.439976905 +0000 UTC m=+0.039461500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:42 compute-0 podman[222796]: 2025-10-03 09:34:42.568412493 +0000 UTC m=+0.167897078 container init 5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 09:34:42 compute-0 podman[222796]: 2025-10-03 09:34:42.577056798 +0000 UTC m=+0.176541343 container start 5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:34:42 compute-0 podman[222796]: 2025-10-03 09:34:42.581408637 +0000 UTC m=+0.180893262 container attach 5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:34:42 compute-0 gracious_kirch[222812]: 167 167
Oct  3 09:34:42 compute-0 systemd[1]: libpod-5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219.scope: Deactivated successfully.
Oct  3 09:34:42 compute-0 podman[222796]: 2025-10-03 09:34:42.586449838 +0000 UTC m=+0.185934383 container died 5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:42 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct  3 09:34:42 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct  3 09:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-914831e3d2bcbd5cf3a18bf8d6e0332343b0b94d999b0dc4836270da12c193f8-merged.mount: Deactivated successfully.
Oct  3 09:34:42 compute-0 podman[222796]: 2025-10-03 09:34:42.652818665 +0000 UTC m=+0.252303240 container remove 5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kirch, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:34:42 compute-0 systemd[1]: libpod-conmon-5faff8978cfee2bfac092a15c010d1910a0d23d86ca04747e7ffa7f440cef219.scope: Deactivated successfully.
Oct  3 09:34:42 compute-0 podman[222835]: 2025-10-03 09:34:42.871195771 +0000 UTC m=+0.072513114 container create e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 09:34:42 compute-0 podman[222835]: 2025-10-03 09:34:42.831521116 +0000 UTC m=+0.032838509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:34:42 compute-0 systemd[1]: Started libpod-conmon-e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1.scope.
Oct  3 09:34:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f37f4cd242c8f792c43ad743ea6c2f4d561be12533b8ea38ca7bba9245cf571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f37f4cd242c8f792c43ad743ea6c2f4d561be12533b8ea38ca7bba9245cf571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f37f4cd242c8f792c43ad743ea6c2f4d561be12533b8ea38ca7bba9245cf571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f37f4cd242c8f792c43ad743ea6c2f4d561be12533b8ea38ca7bba9245cf571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:34:43 compute-0 podman[222835]: 2025-10-03 09:34:43.001072354 +0000 UTC m=+0.202389677 container init e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:34:43 compute-0 podman[222835]: 2025-10-03 09:34:43.017287241 +0000 UTC m=+0.218604554 container start e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:43 compute-0 podman[222835]: 2025-10-03 09:34:43.021843067 +0000 UTC m=+0.223160400 container attach e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:34:43 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Oct  3 09:34:43 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Oct  3 09:34:43 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Oct  3 09:34:43 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Oct  3 09:34:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 92 op/s
Oct  3 09:34:44 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Oct  3 09:34:44 compute-0 dreamy_turing[222850]: {
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "osd_id": 1,
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "type": "bluestore"
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:    },
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "osd_id": 2,
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "type": "bluestore"
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:    },
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "osd_id": 0,
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:        "type": "bluestore"
Oct  3 09:34:44 compute-0 dreamy_turing[222850]:    }
Oct  3 09:34:44 compute-0 dreamy_turing[222850]: }
Oct  3 09:34:44 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Oct  3 09:34:44 compute-0 systemd[1]: libpod-e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1.scope: Deactivated successfully.
Oct  3 09:34:44 compute-0 systemd[1]: libpod-e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1.scope: Consumed 1.069s CPU time.
Oct  3 09:34:44 compute-0 conmon[222850]: conmon e56cc9c4704b8bcb033b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1.scope/container/memory.events
Oct  3 09:34:44 compute-0 podman[222835]: 2025-10-03 09:34:44.087886063 +0000 UTC m=+1.289203386 container died e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f37f4cd242c8f792c43ad743ea6c2f4d561be12533b8ea38ca7bba9245cf571-merged.mount: Deactivated successfully.
Oct  3 09:34:44 compute-0 podman[222835]: 2025-10-03 09:34:44.159308451 +0000 UTC m=+1.360625754 container remove e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:34:44 compute-0 systemd[1]: libpod-conmon-e56cc9c4704b8bcb033b0e94560de4f6b02d8be5a814189db5d4e8c4ebd585c1.scope: Deactivated successfully.
Oct  3 09:34:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:34:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:34:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8ad5d371-84ea-4bb8-bb32-a8d7406dc33c does not exist
Oct  3 09:34:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 54fe8264-2ea7-4f33-9847-d9954c221392 does not exist
Oct  3 09:34:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:45 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct  3 09:34:45 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct  3 09:34:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:34:45
Oct  3 09:34:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:34:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:34:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'images']
Oct  3 09:34:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:34:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 93 op/s
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:34:46 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:34:46 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:34:46 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct  3 09:34:46 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct  3 09:34:46 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct  3 09:34:46 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct  3 09:34:47 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Oct  3 09:34:47 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Oct  3 09:34:47 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  3 09:34:47 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  3 09:34:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  3 09:34:48 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct  3 09:34:48 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct  3 09:34:48 compute-0 podman[222946]: 2025-10-03 09:34:48.829907502 +0000 UTC m=+0.082460502 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 09:34:48 compute-0 podman[222944]: 2025-10-03 09:34:48.843361851 +0000 UTC m=+0.103863064 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=kepler, version=9.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release-0.7.12=)
Oct  3 09:34:48 compute-0 podman[222945]: 2025-10-03 09:34:48.842909367 +0000 UTC m=+0.103227084 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:34:48 compute-0 podman[222947]: 2025-10-03 09:34:48.849757156 +0000 UTC m=+0.105585049 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 09:34:49 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Oct  3 09:34:49 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Oct  3 09:34:49 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct  3 09:34:49 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct  3 09:34:49 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Oct  3 09:34:49 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Oct  3 09:34:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct  3 09:34:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:34:50 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct  3 09:34:50 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Oct  3 09:34:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:34:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:34:51 compute-0 systemd-logind[798]: New session 42 of user zuul.
Oct  3 09:34:51 compute-0 systemd[1]: Started Session 42 of User zuul.
Oct  3 09:34:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct  3 09:34:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:34:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct  3 09:34:51 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 74c424f0-8183-4ea6-b93c-49c414f39dbd (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  3 09:34:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:34:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:34:51 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:34:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:34:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:34:52 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Oct  3 09:34:52 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Oct  3 09:34:52 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct  3 09:34:52 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct  3 09:34:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct  3 09:34:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:34:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:34:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct  3 09:34:52 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct  3 09:34:52 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev b080513f-d7ea-4708-ab3e-3cdb9d1d9f3b (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  3 09:34:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:34:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:34:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:34:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:34:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:34:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:34:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:34:52 compute-0 python3.9[223171]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 53 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=43/44 n=4 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.327832222s) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 44'3 mlcod 44'3 active pruub 126.016410828s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 53 pg[8.0( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=10.327832222s) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 44'3 mlcod 0'0 unknown pruub 126.016410828s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Oct  3 09:34:53 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Oct  3 09:34:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct  3 09:34:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:34:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.11( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.12( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.13( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.18( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.19( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.4( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.5( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.6( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.9( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.a( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.7( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.b( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.e( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.c( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.8( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.3( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.2( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=1 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.10( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.17( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.15( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=43/44 n=1 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.14( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.16( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=43/44 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:53 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev 0aa93614-5118-4ccc-bfeb-3ad8d276970c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct  3 09:34:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:34:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:34:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.19( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.0( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=43/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 44'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.13( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1e( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.5( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.8( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.7( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.3( v 44'4 (0'0,44'4] local-lis/les=53/54 n=1 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.17( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=53/54 n=1 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.16( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.a( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.1( v 44'4 (0'0,44'4] local-lis/les=53/54 n=1 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 54 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=53/54 n=1 ec=53/43 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v139: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:34:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:34:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:34:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
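
The mon_command payloads above are the JSON form of the ceph CLI: the mgr's PG autoscaler raises a pool's actual PG count by stepping pg_num_actual toward the pg_num target. A minimal shell equivalent, reusing the pool names from these audit entries (illustration only; the autoscaler already issues these itself):

    ceph osd pool set default.rgw.control pg_num_actual 32
    ceph osd pool set default.rgw.log pg_num_actual 32
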
Oct  3 09:34:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct  3 09:34:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:34:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:34:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:34:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct  3 09:34:54 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] update: starting ev b839468a-5a24-4ded-8243-e87588021294 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 74c424f0-8183-4ea6-b93c-49c414f39dbd (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 74c424f0-8183-4ea6-b93c-49c414f39dbd (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev b080513f-d7ea-4708-ab3e-3cdb9d1d9f3b (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event b080513f-d7ea-4708-ab3e-3cdb9d1d9f3b (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev 0aa93614-5118-4ccc-bfeb-3ad8d276970c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event 0aa93614-5118-4ccc-bfeb-3ad8d276970c (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 second
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] complete: finished ev b839468a-5a24-4ded-8243-e87588021294 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  3 09:34:54 compute-0 ceph-mgr[192071]: [progress INFO root] Completed event b839468a-5a24-4ded-8243-e87588021294 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
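
The events logged here come from the mgr progress module, which can also be queried directly. A quick sketch, assuming an admin keyring is available on the node:

    ceph progress                     # human-readable list of in-flight events
    ceph progress json                # full event dump, including completed events
    ceph osd pool autoscale-status    # per-pool pg_num targets the autoscaler is working toward
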
Oct  3 09:34:54 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 55 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=47/48 n=8 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.782315254s) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 51'63 mlcod 51'63 active pruub 124.263282776s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:34:54 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 55 pg[10.0( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=12.782315254s) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 51'63 mlcod 0'0 unknown pruub 124.263282776s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  3 09:34:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:34:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:34:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  3 09:34:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:34:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:34:55 compute-0 python3.9[223397]: ansible-ansible.legacy.command Invoked with _raw_params=
        set -euxo pipefail
        pushd /var/tmp
        curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
        pushd repo-setup-main
        python3 -m venv ./venv
        PBR_VERSION=0.0.0 ./venv/bin/pip install ./
        ./venv/bin/repo-setup current-podified -b antelope
        popd
        rm -rf repo-setup-main
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:34:55 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Oct  3 09:34:55 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 6.1b scrub ok
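
Scrub pairs like the two lines above are emitted for every PG scrub; the same scrubs can be requested by hand. A sketch using the PG id from these lines:

    ceph pg scrub 6.1b        # request a shallow scrub of PG 6.1b
    ceph pg deep-scrub 6.1b   # request a deep scrub of the same PG
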
Oct  3 09:34:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
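
The _set_new_cache_sizes line is the monitor's memory autotuner redistributing its cache budget; the overall budget is governed by the mon_memory_target option. A sketch for inspecting or changing it (the 2 GiB value below is only an example):

    ceph config get mon mon_memory_target
    ceph config set mon mon_memory_target 2147483648   # example: 2 GiB
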
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 55 pg[9.0( v 51'389 (0'0,51'389] local-lis/les=45/46 n=177 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=9.904476166s) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 51'388 mlcod 51'388 active pruub 128.024154663s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 55 pg[9.0( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=9.904476166s) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 51'388 mlcod 0'0 unknown pruub 128.024154663s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct  3 09:34:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct  3 09:34:55 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.b( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.13( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.12( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.11( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.10( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1d( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1a( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.18( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.19( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.7( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.6( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.5( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.4( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.8( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.f( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.9( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.c( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.e( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.3( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.2( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.16( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.14( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.17( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.15( v 51'64 lc 0'0 (0'0,51'64] local-lis/les=47/48 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.15( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.14( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.17( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.16( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.11( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.3( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.2( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.d( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.c( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.9( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.b( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.f( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.8( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.6( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.7( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.4( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.5( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1a( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.18( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.a( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.19( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1e( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1f( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1c( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1d( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.12( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.13( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.10( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.e( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.a( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1b( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.d( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1b( v 51'389 lc 0'0 (0'0,51'389] local-lis/les=45/46 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1d( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1f( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1c( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.14( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.5( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.0( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 51'388 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.9( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.2( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.c( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.0( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 51'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.3( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.e( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.14( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.15( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.18( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.4( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1a( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.a( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.12( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.10( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 56 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=51'389 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 56 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=47/47 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[47,55)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
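
The peering chatter above is verbose but uniformly structured, so it summarizes well with standard text tools. A sketch assuming these journal lines have been saved to a file, here hypothetically named ceph-osd.log:

    # PGs seen in peering messages, most-mentioned first
    grep -oE 'pg\[[0-9]+\.[0-9a-f]+' ceph-osd.log | sort | uniq -c | sort -rn | head
    # how many PGs finished activation
    grep -c 'react AllReplicasActivated Activating complete' ceph-osd.log
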
Oct  3 09:34:55 compute-0 podman[223409]: 2025-10-03 09:34:55.871314889 +0000 UTC m=+0.118389457 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Oct  3 09:34:55 compute-0 podman[223408]: 2025-10-03 09:34:55.901996448 +0000 UTC m=+0.155915805 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
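
The health_status=healthy fields in the two podman events above come from each container's configured healthcheck, and both checks can be exercised manually. A sketch using the container names from these lines:

    podman healthcheck run ovn_controller; echo $?                        # exit 0 == healthy
    podman ps --filter name=openstack_network_exporter --format '{{.Names}} {{.Status}}'
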
Oct  3 09:34:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v142: 290 pgs: 1 peering, 62 unknown, 227 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:34:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  3 09:34:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:34:56 compute-0 ceph-mgr[192071]: [progress INFO root] Writing back 16 completed events
Oct  3 09:34:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  3 09:34:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
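
The config-key set above is the progress module writing its 16 completed events back to the monitors' key/value store; the stored blob can be read back under the key shown in the command. A sketch (head just limits the output):

    ceph config-key get mgr/progress/completed | head -c 400
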
Oct  3 09:34:56 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct  3 09:34:56 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct  3 09:34:56 compute-0 podman[223451]: 2025-10-03 09:34:56.809763926 +0000 UTC m=+0.075127739 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:34:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct  3 09:34:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  3 09:34:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:34:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:34:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct  3 09:34:56 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct  3 09:34:57 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct  3 09:34:57 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct  3 09:34:57 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct  3 09:34:57 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct  3 09:34:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  3 09:34:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 1 peering, 93 unknown, 227 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
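
The shrinking "unknown" count in these pgmap lines is the newly split PGs that have not yet reported in; it drains to zero as the Activating messages above complete. A sketch for watching it live:

    ceph pg stat              # one-line summary, e.g. "321 pgs: 93 unknown, 227 active+clean ..."
    watch -n 2 ceph pg stat   # refresh every 2 seconds until everything is active+clean
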
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 57 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=49/50 n=2 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.407567024s) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 51'1 mlcod 51'1 active pruub 132.054550171s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 57 pg[11.0( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=11.407567024s) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 51'1 mlcod 0'0 unknown pruub 132.054550171s@ mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct  3 09:34:58 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct  3 09:34:58 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct  3 09:34:58 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct  3 09:34:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct  3 09:34:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct  3 09:34:58 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.17( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.16( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.15( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.14( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.13( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.2( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=49/50 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.8( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.3( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.4( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.5( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.6( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.7( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.18( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1a( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1b( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1c( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1f( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1e( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.10( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.11( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.19( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.12( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=49/50 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.16( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.13( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.0( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 51'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=57/58 n=1 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.c( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.a( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.5( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.7( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1d( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 58 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=49/49 les/c/f=50/50/0 sis=57) [1] r=0 lpr=57 pi=[49,57)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:34:59 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct  3 09:34:59 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct  3 09:34:59 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct  3 09:34:59 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct  3 09:34:59 compute-0 podman[157165]: time="2025-10-03T09:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:34:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:34:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6788 "" "Go-http-client/1.1"
Oct  3 09:34:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v146: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:00 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Oct  3 09:35:00 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Oct  3 09:35:01 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Oct  3 09:35:01 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Oct  3 09:35:01 compute-0 openstack_network_exporter[159287]: ERROR   09:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:35:01 compute-0 openstack_network_exporter[159287]: ERROR   09:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:35:01 compute-0 openstack_network_exporter[159287]: ERROR   09:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:35:01 compute-0 openstack_network_exporter[159287]: ERROR   09:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:35:01 compute-0 openstack_network_exporter[159287]: ERROR   09:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:35:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v147: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:35:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:35:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:35:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:35:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct  3 09:35:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  3 09:35:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:35:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:35:02 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Oct  3 09:35:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct  3 09:35:02 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Oct  3 09:35:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:35:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:35:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  3 09:35:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:35:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct  3 09:35:02 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct  3 09:35:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:35:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:35:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  3 09:35:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.619317055s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.498168945s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.619229317s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.498168945s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.619455338s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.498474121s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.619361877s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.498443604s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.618800163s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.498260498s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.618773460s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.498260498s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.618719101s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.498504639s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.618694305s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.498504639s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.619308472s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.498443604s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.618486404s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.498474121s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.631162643s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.511672974s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.631141663s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.511672974s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630320549s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.511016846s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630928040s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.511642456s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630185127s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.511016846s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.632422447s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.513534546s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.632390976s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.513534546s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630788803s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.512100220s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630766869s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.512100220s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630588531s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.512023926s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630571365s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.512039185s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630557060s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.512023926s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630532265s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.512084961s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630385399s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.512039185s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.9( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630236626s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 128.512069702s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.9( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630203247s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 128.512069702s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.e( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630264282s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 128.512268066s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.e( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.630241394s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 128.512268066s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.629306793s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.511642456s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.629610062s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.512191772s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.629592896s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.512191772s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.629537582s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.512252808s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.629464149s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.512252808s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.d( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.618962288s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 128.498214722s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.d( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.614502907s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 128.498214722s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.14( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.627970695s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 128.512313843s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.14( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.627921104s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 128.512313843s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.627778053s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.512298584s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.627761841s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.512298584s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.627707481s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active pruub 128.512344360s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.627371788s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.512344360s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.15( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.627080917s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 51'64 active pruub 128.512329102s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.15( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.627045631s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 51'64 mlcod 0'0 unknown NOTIFY pruub 128.512329102s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=55/56 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.624920845s) [1] r=-1 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.512084961s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.9( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.10( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.11( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.4( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.7( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.1a( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.17( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.d( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.e( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.1e( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.16( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[10.1( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.6( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.b( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.f( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.12( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.801014900s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.467544556s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.17( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.14( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.800983429s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.467544556s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.608276367s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.275299072s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.608256340s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.275299072s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.607912064s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.275299072s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.607891083s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.275299072s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.805477142s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.474655151s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.612756729s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282699585s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.612725258s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282699585s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.804554939s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.474685669s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.804538727s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.474685669s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.611598015s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282058716s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.611578941s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282058716s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.804456711s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475021362s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.804409981s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475021362s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.804148674s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.474761963s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=57/58 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.804124832s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.474761963s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=53/54 n=1 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.604227066s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.275009155s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=53/54 n=1 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.604211807s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.275009155s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.803914070s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.474838257s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.803896904s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.474838257s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.603793144s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.274810791s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.603778839s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.274810791s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.611220360s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282318115s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.611207008s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282318115s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.604133606s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.275299072s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.804168701s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475357056s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.604105949s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.275299072s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.804154396s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475357056s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.603460312s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.274810791s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.d( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.603444099s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.274810791s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.803999901s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475494385s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.d( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.803983688s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475494385s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.603043556s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.274581909s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.611022949s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282608032s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.611008644s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282608032s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.610583305s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282226562s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.803689003s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475341797s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.803671837s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475341797s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.610556602s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282226562s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.600825310s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.272521973s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.600761414s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.272521973s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.610565186s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282424927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.602708817s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.274581909s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.610545158s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282424927s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.803321838s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475280762s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.9( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.803298950s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475280762s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.610493660s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282501221s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.610478401s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282501221s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.602389336s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.274566650s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.f( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.602367401s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.274566650s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.802659035s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475280762s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.805455208s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.474655151s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.802632332s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475280762s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.599947929s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.272796631s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.599925995s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.272796631s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.600023270s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.272903442s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.600001335s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.272903442s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.802361488s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475341797s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.14( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.1( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.f( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.801729202s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475418091s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.801695824s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475418091s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.10( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.3( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[8.15( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.2( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[8.2( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[8.d( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.d( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.9( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.15( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.8( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[8.4( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.18( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.1b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.1c( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.9( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[8.1b( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.1e( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.1f( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[8.1c( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.598716736s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.272613525s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.1a( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.598697662s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.272613525s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.608640671s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282653809s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.e( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.608624458s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282653809s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.801329613s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475463867s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.801312447s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475463867s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=53/54 n=1 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.601016998s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.275344849s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.4( v 44'4 (0'0,44'4] local-lis/les=53/54 n=1 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.600997925s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.275344849s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.608406067s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282897949s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.608385086s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282897949s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.12( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.800858498s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475463867s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.800840378s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475463867s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.597208977s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.272003174s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.609463692s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.282638550s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.607436180s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.282638550s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.9( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.799698830s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475494385s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.4( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.6( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798921585s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475524902s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798900604s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475524902s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.606242180s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.283004761s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.11( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.606222153s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.283004761s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.592761040s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.269668579s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.592746735s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.269668579s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798504829s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475509644s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798489571s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475509644s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.605961800s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.283096313s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.605947495s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.283096313s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.593948364s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.271118164s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.594836235s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.272003174s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.6( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.593928337s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.271118164s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798414230s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475708008s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798390388s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475708008s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.595207214s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.272613525s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798307419s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475723267s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798292160s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475723267s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.595187187s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.272613525s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.605646133s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.283172607s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.605629921s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.283172607s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.580449104s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.258056641s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.580430031s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.258056641s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.799674034s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475494385s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798088074s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475723267s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.798066139s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475723267s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.580367088s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.258148193s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.797972679s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475769043s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.797944069s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475753784s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.797950745s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475769043s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.580338478s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.258148193s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.797924995s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475753784s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.580015182s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.257995605s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.579999924s) [2] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.257995605s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.797538757s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active pruub 137.475738525s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.604867935s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.283187866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.797347069s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475738525s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.604723930s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.283187866s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.590605736s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active pruub 140.269104004s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=53/54 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=15.590590477s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.269104004s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.600451469s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 134.283111572s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59 pruub=9.599127769s) [0] r=-1 lpr=59 pi=[55,59)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.283111572s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 59 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=57/58 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=12.802341461s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.475341797s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[8.11( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[8.12( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.5( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 59 pg[11.3( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.1( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.b( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.18( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.1d( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.1f( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.10( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[11.19( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[8.1a( empty local-lis/les=0/0 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 59 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:02 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Oct  3 09:35:02 compute-0 systemd[1]: session-42.scope: Consumed 9.476s CPU time.
Oct  3 09:35:02 compute-0 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Oct  3 09:35:02 compute-0 systemd-logind[798]: Removed session 42.
Oct  3 09:35:03 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct  3 09:35:03 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct  3 09:35:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct  3 09:35:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct  3 09:35:03 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:03 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:35:03 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:35:03 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  3 09:35:03 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.5( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.5( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.9( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.9( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.1( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.1( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.3( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.3( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] r=-1 lpr=60 pi=[55,60)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.14( v 56'65 lc 51'54 (0'0,56'65] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.12( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.2( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.b( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.1a( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.f( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.19( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.11( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.10( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.13( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 60 pg[10.6( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.1f( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.15( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[8.2( v 44'4 (0'0,44'4] local-lis/les=59/60 n=1 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.3( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.8( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.d( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[8.d( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=44'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.b( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.10( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.4( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.2( v 51'2 (0'0,51'2] local-lis/les=59/60 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.18( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[8.1b( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.9( v 51'2 lc 0'0 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.1b( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.1c( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.1e( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.11( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.12( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[8.11( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[8.4( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=59/60 n=1 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=44'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.1a( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[11.b( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [2] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[8.1c( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[8.15( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.6( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.14( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.6( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.10( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.e( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.f( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.c( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.18( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.19( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.f( v 44'4 lc 0'0 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.17( v 51'2 (0'0,51'2] local-lis/les=59/60 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.14( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.1d( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.1f( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.1a( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.1( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.16( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[11.1( v 51'2 (0'0,51'2] local-lis/les=59/60 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=59) [0] r=0 lpr=59 pi=[57,59)/1 crt=51'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.1e( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.e( v 56'65 lc 51'48 (0'0,56'65] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.d( v 56'65 lc 51'50 (0'0,56'65] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.7( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.17( v 51'64 (0'0,51'64] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.4( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.15( v 56'65 lc 51'46 (0'0,56'65] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.8( v 51'64 (0'0,51'64] local-lis/les=59/60 n=1 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=51'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.e( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[10.9( v 56'65 lc 51'56 (0'0,56'65] local-lis/les=59/60 n=0 ec=55/47 lis/c=55/55 les/c/f=56/56/0 sis=59) [0] r=0 lpr=59 pi=[55,59)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 60 pg[8.12( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [2] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 60 pg[8.9( v 44'4 (0'0,44'4] local-lis/les=59/60 n=0 ec=53/43 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=44'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:04 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Oct  3 09:35:04 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Oct  3 09:35:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct  3 09:35:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct  3 09:35:04 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct  3 09:35:04 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct  3 09:35:04 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 61 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=60) [0]/[1] async=[0] r=0 lpr=60 pi=[55,60)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 16 unknown, 32 peering, 273 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 278 B/s, 0 objects/s recovering
Oct  3 09:35:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct  3 09:35:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct  3 09:35:06 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct  3 09:35:06 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 62 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62 pruub=14.738067627s) [0] async=[0] r=-1 lpr=62 pi=[55,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681564331s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:06 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 62 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62 pruub=14.737992287s) [0] r=-1 lpr=62 pi=[55,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681564331s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:06 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 62 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62 pruub=14.737572670s) [0] async=[0] r=-1 lpr=62 pi=[55,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681594849s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:06 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 62 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62 pruub=14.737487793s) [0] r=-1 lpr=62 pi=[55,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681594849s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:06 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 62 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62 pruub=14.736252785s) [0] async=[0] r=-1 lpr=62 pi=[55,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681594849s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:06 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 62 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62 pruub=14.736191750s) [0] r=-1 lpr=62 pi=[55,62)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681594849s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:06 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 62 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:06 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 62 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:06 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 62 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:06 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 62 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:06 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 62 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:06 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 62 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct  3 09:35:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct  3 09:35:07 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.729127884s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.682495117s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.728312492s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681777954s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.729019165s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.682495117s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.728261948s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681777954s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727966309s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681732178s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727922440s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681732178s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727503777s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681732178s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727926254s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.682205200s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727436066s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681732178s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727875710s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.682205200s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727317810s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681716919s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727282524s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681716919s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727426529s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.682022095s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727373123s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.682022095s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727059364s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681762695s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727014542s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681762695s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.727729797s) [0] async=[0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.682098389s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:07 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 63 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63 pruub=13.726643562s) [0] r=-1 lpr=63 pi=[55,63)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.682098389s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 63 pg[9.9( v 51'389 (0'0,51'389] local-lis/les=62/63 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=62) [0] r=0 lpr=62 pi=[55,62)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 4 unknown, 9 peering, 308 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.1 KiB/s wr, 86 op/s; 439 B/s, 5 objects/s recovering
Oct  3 09:35:08 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct  3 09:35:08 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct  3 09:35:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct  3 09:35:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct  3 09:35:08 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:08 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 64 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64 pruub=12.720149040s) [0] async=[0] r=-1 lpr=64 pi=[55,64)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.682144165s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:08 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 64 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64 pruub=12.720049858s) [0] r=-1 lpr=64 pi=[55,64)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.682144165s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:08 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 64 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64 pruub=12.720027924s) [0] async=[0] r=-1 lpr=64 pi=[55,64)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.682556152s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:08 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 64 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=60/61 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64 pruub=12.719973564s) [0] r=-1 lpr=64 pi=[55,64)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.682556152s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:08 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 64 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64 pruub=12.718785286s) [0] async=[0] r=-1 lpr=64 pi=[55,64)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.681884766s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:08 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 64 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64 pruub=12.718717575s) [0] r=-1 lpr=64 pi=[55,64)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.681884766s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:08 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 64 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64 pruub=12.718741417s) [0] async=[0] r=-1 lpr=64 pi=[55,64)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 143.682144165s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:08 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 64 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=60/61 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64 pruub=12.718704224s) [0] r=-1 lpr=64 pi=[55,64)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.682144165s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.1( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.d( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.11( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 64 pg[9.3( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:09 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Oct  3 09:35:09 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Oct  3 09:35:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct  3 09:35:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct  3 09:35:09 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct  3 09:35:09 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 65 pg[9.1d( v 51'389 (0'0,51'389] local-lis/les=64/65 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:09 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 65 pg[9.1b( v 51'389 (0'0,51'389] local-lis/les=64/65 n=5 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:09 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 65 pg[9.5( v 51'389 (0'0,51'389] local-lis/les=64/65 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:09 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 65 pg[9.b( v 51'389 (0'0,51'389] local-lis/les=64/65 n=6 ec=55/45 lis/c=60/55 les/c/f=61/56/0 sis=64) [0] r=0 lpr=64 pi=[55,64)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v158: 321 pgs: 4 unknown, 9 peering, 308 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 104 op/s; 105 B/s, 4 objects/s recovering
Oct  3 09:35:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:11 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct  3 09:35:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 4 unknown, 9 peering, 308 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 75 op/s; 76 B/s, 3 objects/s recovering
Oct  3 09:35:12 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Oct  3 09:35:12 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Oct  3 09:35:12 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct  3 09:35:12 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct  3 09:35:13 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct  3 09:35:13 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct  3 09:35:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v160: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.6 KiB/s wr, 85 op/s; 360 B/s, 14 objects/s recovering
Oct  3 09:35:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct  3 09:35:13 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  3 09:35:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct  3 09:35:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  3 09:35:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct  3 09:35:14 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct  3 09:35:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  3 09:35:14 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct  3 09:35:14 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct  3 09:35:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:15 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct  3 09:35:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  3 09:35:15 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct  3 09:35:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 410 B/s wr, 17 op/s; 255 B/s, 9 objects/s recovering
Oct  3 09:35:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct  3 09:35:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  3 09:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:35:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct  3 09:35:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  3 09:35:16 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  3 09:35:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct  3 09:35:16 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct  3 09:35:17 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct  3 09:35:17 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct  3 09:35:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v164: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 383 B/s wr, 16 op/s; 239 B/s, 9 objects/s recovering
Oct  3 09:35:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct  3 09:35:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  3 09:35:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct  3 09:35:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  3 09:35:18 compute-0 systemd-logind[798]: New session 43 of user zuul.
Oct  3 09:35:18 compute-0 systemd[1]: Started Session 43 of User zuul.
Oct  3 09:35:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  3 09:35:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct  3 09:35:18 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct  3 09:35:19 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct  3 09:35:19 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct  3 09:35:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  3 09:35:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  3 09:35:19 compute-0 podman[223654]: 2025-10-03 09:35:19.19393857 +0000 UTC m=+0.085607352 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Oct  3 09:35:19 compute-0 podman[223652]: 2025-10-03 09:35:19.199152246 +0000 UTC m=+0.105729594 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 09:35:19 compute-0 podman[223653]: 2025-10-03 09:35:19.211108287 +0000 UTC m=+0.115677141 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 09:35:19 compute-0 podman[223656]: 2025-10-03 09:35:19.216148908 +0000 UTC m=+0.106994654 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 09:35:19 compute-0 python3.9[223753]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  3 09:35:19 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct  3 09:35:19 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct  3 09:35:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v166: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct  3 09:35:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  3 09:35:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct  3 09:35:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  3 09:35:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct  3 09:35:20 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct  3 09:35:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  3 09:35:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:20 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Oct  3 09:35:20 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Oct  3 09:35:20 compute-0 python3.9[223928]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:35:21 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct  3 09:35:21 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct  3 09:35:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  3 09:35:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v168: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct  3 09:35:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  3 09:35:21 compute-0 python3.9[224084]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:35:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct  3 09:35:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  3 09:35:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct  3 09:35:22 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct  3 09:35:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  3 09:35:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 70 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=13.405026436s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 158.283370972s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 70 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=13.404897690s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.283370972s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 70 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=13.405897141s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 158.285079956s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:22 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 70 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=13.405852318s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.285079956s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 70 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=13.405037880s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 158.285125732s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 70 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=13.405014992s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.285125732s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 70 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=13.404057503s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 158.285034180s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 70 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=13.404001236s) [2] r=-1 lpr=70 pi=[55,70)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.285034180s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:22 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:22 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:22 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 70 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:23 compute-0 python3.9[224237]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:35:23 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  3 09:35:23 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  3 09:35:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct  3 09:35:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  3 09:35:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct  3 09:35:23 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct  3 09:35:23 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:23 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 71 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:23 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:23 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 71 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:23 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:23 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:23 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:23 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 71 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 71 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 71 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 71 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 71 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 71 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 71 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 71 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 71 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v171: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct  3 09:35:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  3 09:35:24 compute-0 python3.9[224391]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:35:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct  3 09:35:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  3 09:35:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct  3 09:35:24 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Oct  3 09:35:24 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct  3 09:35:24 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Oct  3 09:35:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  3 09:35:24 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 72 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=15.905326843s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'389 mlcod 0'0 active pruub 169.549346924s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:24 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 72 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=15.905237198s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 169.549346924s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:24 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 72 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=15.906095505s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'389 mlcod 0'0 active pruub 169.550674438s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:24 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 72 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=15.906071663s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 169.550674438s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:24 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 72 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=72 pruub=14.915143967s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=51'389 mlcod 0'0 active pruub 168.560150146s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:24 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 72 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=15.905404091s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'389 mlcod 0'0 active pruub 169.550476074s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:24 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 72 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=72 pruub=14.915039062s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 168.560150146s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:24 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 72 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72 pruub=15.905357361s) [2] r=-1 lpr=72 pi=[63,72)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 169.550476074s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:24 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 72 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=72) [2] r=0 lpr=72 pi=[62,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:24 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 72 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72) [2] r=0 lpr=72 pi=[63,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:24 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 72 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72) [2] r=0 lpr=72 pi=[63,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:24 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 72 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=72) [2] r=0 lpr=72 pi=[63,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:25 compute-0 python3.9[224541]: ansible-ansible.builtin.service_facts Invoked
Oct  3 09:35:25 compute-0 network[224558]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 09:35:25 compute-0 network[224559]: 'network-scripts' will be removed from distribution in near future.
Oct  3 09:35:25 compute-0 network[224560]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 09:35:25 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 72 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:25 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 72 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:25 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 72 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:25 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 72 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[55,71)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct  3 09:35:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct  3 09:35:25 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct  3 09:35:25 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:25 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:25 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:25 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:25 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[62,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:25 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[62,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:25 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:25 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=-1 lpr=73 pi=[63,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:25 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 73 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:25 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 73 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:25 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 73 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:25 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 73 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:25 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 73 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:25 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 73 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:25 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 73 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:25 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 73 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=63/64 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  3 09:35:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:26 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct  3 09:35:26 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct  3 09:35:26 compute-0 podman[224569]: 2025-10-03 09:35:26.3031294 +0000 UTC m=+0.072757822 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, release=1755695350, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Oct  3 09:35:26 compute-0 podman[224567]: 2025-10-03 09:35:26.344786949 +0000 UTC m=+0.105940350 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 09:35:26 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  3 09:35:26 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  3 09:35:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct  3 09:35:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct  3 09:35:26 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct  3 09:35:26 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 74 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:26 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 74 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:26 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 74 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:26 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 74 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:26 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 74 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:26 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 74 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:26 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 74 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:26 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 74 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:26 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 74 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74 pruub=14.800114632s) [2] async=[2] r=-1 lpr=74 pi=[55,74)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 163.738555908s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:26 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 74 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74 pruub=14.800045013s) [2] r=-1 lpr=74 pi=[55,74)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.738555908s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:26 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 74 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74 pruub=14.800000191s) [2] async=[2] r=-1 lpr=74 pi=[55,74)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 163.738998413s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:26 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 74 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74 pruub=14.799955368s) [2] r=-1 lpr=74 pi=[55,74)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.738998413s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:26 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 74 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74 pruub=14.798035622s) [2] async=[2] r=-1 lpr=74 pi=[55,74)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 163.737136841s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:26 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 74 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=71/72 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74 pruub=14.797936440s) [2] r=-1 lpr=74 pi=[55,74)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.737136841s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:26 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 74 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74 pruub=14.796911240s) [2] async=[2] r=-1 lpr=74 pi=[55,74)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 163.737136841s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:26 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 74 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=71/72 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74 pruub=14.796828270s) [2] r=-1 lpr=74 pi=[55,74)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.737136841s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:26 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 74 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:26 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 74 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=73/74 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:26 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 74 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=6 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[63,73)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:26 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 74 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[62,73)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:26 compute-0 podman[224641]: 2025-10-03 09:35:26.92510191 +0000 UTC m=+0.058339802 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:35:27 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Oct  3 09:35:27 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Oct  3 09:35:27 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct  3 09:35:27 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct  3 09:35:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct  3 09:35:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct  3 09:35:27 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:27 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 75 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=14.991152763s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 51'389 active pruub 171.526779175s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:27 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 75 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=14.991075516s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 171.526779175s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:27 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 75 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=55/45 lis/c=73/62 les/c/f=74/63/0 sis=75 pruub=14.989570618s) [2] async=[2] r=-1 lpr=75 pi=[62,75)/1 crt=51'389 mlcod 51'389 active pruub 171.526916504s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:27 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 75 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=73/74 n=5 ec=55/45 lis/c=73/62 les/c/f=74/63/0 sis=75 pruub=14.989523888s) [2] r=-1 lpr=75 pi=[62,75)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 171.526916504s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:27 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 75 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=73/74 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=14.989185333s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 51'389 active pruub 171.526824951s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:27 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 75 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=73/74 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=14.989103317s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 171.526824951s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:27 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 75 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=14.988967896s) [2] async=[2] r=-1 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 51'389 active pruub 171.526855469s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:27 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 75 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=73/74 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75 pruub=14.987938881s) [2] r=-1 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 171.526855469s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.6( v 51'389 (0'0,51'389] local-lis/les=74/75 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.e( v 51'389 (0'0,51'389] local-lis/les=74/75 n=6 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:27 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 75 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=71/55 les/c/f=72/56/0 sis=74) [2] r=0 lpr=74 pi=[55,74)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:27 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct  3 09:35:27 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct  3 09:35:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v177: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:28 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Oct  3 09:35:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct  3 09:35:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct  3 09:35:28 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Oct  3 09:35:28 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct  3 09:35:28 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 76 pg[9.17( v 51'389 (0'0,51'389] local-lis/les=75/76 n=5 ec=55/45 lis/c=73/62 les/c/f=74/63/0 sis=75) [2] r=0 lpr=75 pi=[62,75)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:28 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 76 pg[9.f( v 51'389 (0'0,51'389] local-lis/les=75/76 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:28 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 76 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=75/76 n=5 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:28 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 76 pg[9.7( v 51'389 (0'0,51'389] local-lis/les=75/76 n=6 ec=55/45 lis/c=73/63 les/c/f=74/64/0 sis=75) [2] r=0 lpr=75 pi=[63,75)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:29 compute-0 python3.9[224899]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:35:29 compute-0 podman[157165]: time="2025-10-03T09:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:35:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:35:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6789 "" "Go-http-client/1.1"
Oct  3 09:35:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 4 unknown, 4 remapped+peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:30 compute-0 python3.9[225050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:35:30 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct  3 09:35:30 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct  3 09:35:31 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct  3 09:35:31 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct  3 09:35:31 compute-0 openstack_network_exporter[159287]: ERROR   09:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:35:31 compute-0 openstack_network_exporter[159287]: ERROR   09:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:35:31 compute-0 openstack_network_exporter[159287]: ERROR   09:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:35:31 compute-0 openstack_network_exporter[159287]: ERROR   09:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:35:31 compute-0 openstack_network_exporter[159287]: ERROR   09:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:35:31 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct  3 09:35:31 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct  3 09:35:31 compute-0 python3.9[225204]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:35:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v180: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 511 B/s wr, 22 op/s; 219 B/s, 9 objects/s recovering
Oct  3 09:35:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct  3 09:35:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  3 09:35:32 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Oct  3 09:35:32 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Oct  3 09:35:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct  3 09:35:32 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  3 09:35:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  3 09:35:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct  3 09:35:32 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct  3 09:35:32 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 77 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=77 pruub=11.286869049s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 166.284194946s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:32 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 77 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=77 pruub=11.286744118s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.284194946s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:32 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 77 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=77 pruub=11.287444115s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 166.285339355s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:32 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 77 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=77 pruub=11.287336349s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.285339355s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:32 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  3 09:35:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  3 09:35:32 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 77 pg[9.18( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=77) [2] r=0 lpr=77 pi=[55,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:32 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 77 pg[9.8( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=77) [2] r=0 lpr=77 pi=[55,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:33 compute-0 python3.9[225362]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 09:35:33 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Oct  3 09:35:33 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Oct  3 09:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct  3 09:35:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  3 09:35:33 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct  3 09:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct  3 09:35:33 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct  3 09:35:33 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[55,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:33 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[55,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:33 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[55,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:33 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[55,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:33 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct  3 09:35:33 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 78 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:33 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 78 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:33 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 78 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:33 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 78 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 511 B/s wr, 22 op/s; 219 B/s, 9 objects/s recovering
Oct  3 09:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct  3 09:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  3 09:35:34 compute-0 python3.9[225446]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 09:35:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct  3 09:35:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  3 09:35:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  3 09:35:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct  3 09:35:34 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct  3 09:35:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:35 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct  3 09:35:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  3 09:35:35 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct  3 09:35:35 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 79 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=78/79 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[55,78)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:35 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 79 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=78/79 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[55,78)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:35 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Oct  3 09:35:35 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Oct  3 09:35:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 511 B/s wr, 22 op/s; 219 B/s, 9 objects/s recovering
Oct  3 09:35:36 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct  3 09:35:36 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct  3 09:35:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct  3 09:35:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct  3 09:35:36 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct  3 09:35:36 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 80 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=78/79 n=6 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80 pruub=14.965891838s) [2] async=[2] r=-1 lpr=80 pi=[55,80)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 174.075851440s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:36 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 80 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=78/79 n=6 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80 pruub=14.965827942s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.075851440s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:36 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 80 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=78/79 n=5 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80 pruub=14.963700294s) [2] async=[2] r=-1 lpr=80 pi=[55,80)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 174.075881958s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:36 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 80 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=78/79 n=5 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80 pruub=14.962798119s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.075881958s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:36 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 80 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80) [2] r=0 lpr=80 pi=[55,80)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:36 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 80 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80) [2] r=0 lpr=80 pi=[55,80)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:36 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 80 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80) [2] r=0 lpr=80 pi=[55,80)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:36 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 80 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80) [2] r=0 lpr=80 pi=[55,80)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct  3 09:35:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct  3 09:35:37 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct  3 09:35:37 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 81 pg[9.18( v 51'389 (0'0,51'389] local-lis/les=80/81 n=5 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80) [2] r=0 lpr=80 pi=[55,80)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:37 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 81 pg[9.8( v 51'389 (0'0,51'389] local-lis/les=80/81 n=6 ec=55/45 lis/c=78/55 les/c/f=79/56/0 sis=80) [2] r=0 lpr=80 pi=[55,80)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v188: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:38 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct  3 09:35:38 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.950 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.951 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.953 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.955 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.964 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.964 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.965 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.968 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.968 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.968 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.968 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:35:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:35:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
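The DEBUG burst above is one complete ceilometer polling cycle: each pollster is registered against a shared ThreadPoolExecutor, the local_instances discovery runs once and is cached for the cycle, every meter is skipped because discovery found no instances on this host, and each task then reports finished. What follows is a minimal sketch of that pattern under those assumptions, with hypothetical names rather than ceilometer's actual classes:

    # Minimal sketch of the polling pattern traced by the DEBUG lines above.
    # Names (run_polling_cycle, poll_one, get_sample) are hypothetical; this
    # is an illustration, not ceilometer's implementation.
    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, discover):
        discovery_cache = {}   # e.g. {'local_instances': [...]}, shared per cycle
        pollster_history = {}  # last samples per meter, e.g. {'cpu': []}

        def poll_one(pollster):
            method = pollster["discovery"]
            # Discovery result is cached so N pollsters share one discovery run;
            # a real agent would guard this cache against concurrent writes.
            if method not in discovery_cache:
                discovery_cache[method] = discover(method)
            resources = discovery_cache[method]
            if not resources:
                print(f"Skip pollster {pollster['name']}, no resources found this cycle")
                pollster_history[pollster["name"]] = []
                return []
            samples = [pollster["get_sample"](r) for r in resources]
            pollster_history[pollster["name"]] = samples
            return samples

        with ThreadPoolExecutor() as executor:
            futures = [executor.submit(poll_one, p) for p in pollsters]
            return [f.result() for f in futures]

    # With no instances on the host, every meter is skipped, matching the log:
    run_polling_cycle(
        [{"name": "cpu", "discovery": "local_instances", "get_sample": lambda r: r}],
        discover=lambda method: [],
    )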
Oct  3 09:35:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v189: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:40 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Oct  3 09:35:40 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Oct  3 09:35:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:40 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts
Oct  3 09:35:40 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok
Oct  3 09:35:41 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  3 09:35:41 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  3 09:35:41 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct  3 09:35:41 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct  3 09:35:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Oct  3 09:35:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct  3 09:35:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  3 09:35:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct  3 09:35:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  3 09:35:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct  3 09:35:42 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct  3 09:35:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  3 09:35:43 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct  3 09:35:43 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct  3 09:35:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  3 09:35:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Oct  3 09:35:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct  3 09:35:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  3 09:35:44 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct  3 09:35:44 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct  3 09:35:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct  3 09:35:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  3 09:35:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct  3 09:35:44 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct  3 09:35:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
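The repeated "osd pool set" commands in the audit log above are the mgr walking pgp_num_actual for default.rgw.log upward one placement group at a time (11, 12, 13, ...), so each step produces a new osdmap epoch and remaps only a small slice of data. A hedged sketch of the same ramp driven from the CLI; the pool name and values are taken directly from the log, and this should only ever be run against a cluster you administer:

    import subprocess

    def ramp_pgp_num(pool, start, target):
        # Each iteration is one mon_command, mirroring the dispatch/finished
        # pairs in the audit log; each step yields a new osdmap epoch.
        for val in range(start, target + 1):
            subprocess.run(
                ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(val)],
                check=True,
            )

    # The log shows this progression for default.rgw.log:
    # ramp_pgp_num("default.rgw.log", 11, 14)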
Oct  3 09:35:45 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct  3 09:35:45 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct  3 09:35:45 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct  3 09:35:45 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct  3 09:35:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:45 compute-0 podman[225684]: 2025-10-03 09:35:45.495223338 +0000 UTC m=+0.074442985 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:35:45 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct  3 09:35:45 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct  3 09:35:45 compute-0 podman[225684]: 2025-10-03 09:35:45.621646531 +0000 UTC m=+0.200866168 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:35:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  3 09:35:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:35:45
Oct  3 09:35:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:35:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:35:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root']
Oct  3 09:35:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
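The balancer lines above record one automatic optimization round: mode upmap, at most 5% of PGs allowed to be misplaced, a budget of 10 changes per round, and, with all 321 PGs active+clean, a result of "prepared 0/10 changes". A hypothetical sketch of that gating logic, assuming a stubbed optimizer; this illustrates the logged behaviour and is not the ceph-mgr balancer module:

    def propose_upmap_changes(pool):
        # Stub: the real optimizer walks CRUSH and pg-upmap entries.
        # A balanced pool proposes no remappings, hence "prepared 0/10".
        return []

    def build_upmap_plan(pools, misplaced_ratio, max_misplaced=0.05, max_changes=10):
        if misplaced_ratio >= max_misplaced:
            return []  # too much data already in flight; skip this round
        changes = []
        for pool in pools:
            changes.extend(propose_upmap_changes(pool))
            if len(changes) >= max_changes:
                break
        return changes[:max_changes]

    plan = build_upmap_plan(["vms", "volumes", "images"], misplaced_ratio=0.0)
    print(f"prepared {len(plan)}/10 changes")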
Oct  3 09:35:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct  3 09:35:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  3 09:35:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v194: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:35:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:35:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:35:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:46 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct  3 09:35:46 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct  3 09:35:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct  3 09:35:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  3 09:35:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct  3 09:35:46 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct  3 09:35:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  3 09:35:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:35:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:35:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:35:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:35:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:35:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8081f722-c93d-484f-afac-4ef1219d8de5 does not exist
Oct  3 09:35:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 938896f7-c1ad-4491-af4e-fd9c96b23725 does not exist
Oct  3 09:35:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b0c3ad35-f9eb-4c78-87fb-8e62227b0f11 does not exist
Oct  3 09:35:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:35:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:35:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:35:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:35:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:35:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
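Each handle_command/audit pair above is a JSON mon_command the mgr sends to the monitor ("osd tree", "auth get", "config generate-minimal-conf", and so on). The same command format is exposed to clients through librados; a minimal sketch using the rados Python binding, where the conffile path (and implied client.admin keyring) are assumptions about the deployment:

    import json
    import rados  # Python bindings shipped with Ceph (package 'rados')

    # conffile path is an assumption; adjust for your deployment.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        destroyed = json.loads(outbuf) if ret == 0 else None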
Oct  3 09:35:47 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct  3 09:35:47 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct  3 09:35:47 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct  3 09:35:47 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 84 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=84 pruub=12.310297012s) [2] r=-1 lpr=84 pi=[55,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 182.285568237s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:47 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 84 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=84 pruub=12.310245514s) [2] r=-1 lpr=84 pi=[55,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.285568237s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:47 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 84 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=84 pruub=12.310091972s) [2] r=-1 lpr=84 pi=[55,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 182.286087036s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:47 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 84 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=84 pruub=12.310053825s) [2] r=-1 lpr=84 pi=[55,84)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.286087036s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:47 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 84 pg[9.c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=84) [2] r=0 lpr=84 pi=[55,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:47 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 84 pg[9.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=84) [2] r=0 lpr=84 pi=[55,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
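The pg_epoch 84 lines above show the role flip that follows the pgp_num change: for pgs 9.c and 9.1c the up/acting set moves [1] -> [2], so osd.1 (role 0 -> -1) restarts its peering interval and transitions to Stray while osd.2 (role 0) starts peering as Primary. A toy illustration of that role rule, as a hypothetical sketch rather than Ceph's actual peering state machine:

    def role_after_interval_change(osd_id, new_acting):
        # Role is the OSD's index in the new acting set; -1 means it no
        # longer serves the PG and goes Stray until the data is cleaned up.
        return new_acting.index(osd_id) if osd_id in new_acting else -1

    assert role_after_interval_change(1, [2]) == -1  # osd.1 -> Stray
    assert role_after_interval_change(2, [2]) == 0   # osd.2 -> Primary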
Oct  3 09:35:47 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct  3 09:35:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  3 09:35:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:35:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:35:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct  3 09:35:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  3 09:35:48 compute-0 podman[226112]: 2025-10-03 09:35:48.102914382 +0000 UTC m=+0.089951700 container create 96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 09:35:48 compute-0 podman[226112]: 2025-10-03 09:35:48.060594682 +0000 UTC m=+0.047632070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:35:48 compute-0 systemd[1]: Started libpod-conmon-96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438.scope.
Oct  3 09:35:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:35:48 compute-0 podman[226112]: 2025-10-03 09:35:48.244799678 +0000 UTC m=+0.231837016 container init 96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 09:35:48 compute-0 podman[226112]: 2025-10-03 09:35:48.255906282 +0000 UTC m=+0.242943570 container start 96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamport, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:35:48 compute-0 podman[226112]: 2025-10-03 09:35:48.261098028 +0000 UTC m=+0.248135346 container attach 96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamport, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:35:48 compute-0 elated_lamport[226127]: 167 167
Oct  3 09:35:48 compute-0 systemd[1]: libpod-96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438.scope: Deactivated successfully.
Oct  3 09:35:48 compute-0 podman[226112]: 2025-10-03 09:35:48.265560131 +0000 UTC m=+0.252597459 container died 96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamport, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:35:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct  3 09:35:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  3 09:35:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct  3 09:35:48 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Oct  3 09:35:48 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct  3 09:35:48 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 85 pg[9.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] r=-1 lpr=85 pi=[55,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:48 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 85 pg[9.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] r=-1 lpr=85 pi=[55,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:48 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 85 pg[9.c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] r=-1 lpr=85 pi=[55,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:48 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 85 pg[9.c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] r=-1 lpr=85 pi=[55,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:48 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Oct  3 09:35:48 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 85 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] r=0 lpr=85 pi=[55,85)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:48 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 85 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] r=0 lpr=85 pi=[55,85)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:48 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 85 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] r=0 lpr=85 pi=[55,85)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:48 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 85 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=55/56 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] r=0 lpr=85 pi=[55,85)/1 crt=51'389 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fab66769ccd27efbf77433802f24a5c6d4fd1ec389361a97a8bfe8c22014a4c-merged.mount: Deactivated successfully.
Oct  3 09:35:48 compute-0 podman[226112]: 2025-10-03 09:35:48.34705735 +0000 UTC m=+0.334094638 container remove 96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamport, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 09:35:48 compute-0 systemd[1]: libpod-conmon-96ea11e193daaf93fbdacd483d024f57b4074ffec19e43f9b290878d41140438.scope: Deactivated successfully.
Oct  3 09:35:48 compute-0 podman[226149]: 2025-10-03 09:35:48.550647415 +0000 UTC m=+0.052789285 container create fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:35:48 compute-0 podman[226149]: 2025-10-03 09:35:48.52794463 +0000 UTC m=+0.030086530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:35:48 compute-0 systemd[1]: Started libpod-conmon-fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f.scope.
Oct  3 09:35:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b3c40a386b28842aa62d3754d2d027eea192a23e98f1b041c5e918bf174a2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b3c40a386b28842aa62d3754d2d027eea192a23e98f1b041c5e918bf174a2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b3c40a386b28842aa62d3754d2d027eea192a23e98f1b041c5e918bf174a2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b3c40a386b28842aa62d3754d2d027eea192a23e98f1b041c5e918bf174a2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b3c40a386b28842aa62d3754d2d027eea192a23e98f1b041c5e918bf174a2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:48 compute-0 podman[226149]: 2025-10-03 09:35:48.707923042 +0000 UTC m=+0.210064942 container init fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:35:48 compute-0 podman[226149]: 2025-10-03 09:35:48.732681091 +0000 UTC m=+0.234822961 container start fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 09:35:48 compute-0 podman[226149]: 2025-10-03 09:35:48.738532118 +0000 UTC m=+0.240673998 container attach fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:35:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  3 09:35:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  3 09:35:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct  3 09:35:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct  3 09:35:49 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct  3 09:35:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 86 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=85/86 n=5 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] async=[2] r=0 lpr=85 pi=[55,85)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:49 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 86 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=85/86 n=6 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=85) [2]/[1] async=[2] r=0 lpr=85 pi=[55,85)/1 crt=51'389 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:49 compute-0 tender_pare[226165]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:35:49 compute-0 tender_pare[226165]: --> relative data size: 1.0
Oct  3 09:35:49 compute-0 tender_pare[226165]: --> All data devices are unavailable
Oct  3 09:35:49 compute-0 podman[226197]: 2025-10-03 09:35:49.847337088 +0000 UTC m=+0.092638466 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Oct  3 09:35:49 compute-0 systemd[1]: libpod-fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f.scope: Deactivated successfully.
Oct  3 09:35:49 compute-0 podman[226149]: 2025-10-03 09:35:49.854483166 +0000 UTC m=+1.356625056 container died fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:35:49 compute-0 systemd[1]: libpod-fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f.scope: Consumed 1.054s CPU time.
Oct  3 09:35:49 compute-0 podman[226196]: 2025-10-03 09:35:49.867455339 +0000 UTC m=+0.115489944 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:35:49 compute-0 podman[226198]: 2025-10-03 09:35:49.879133732 +0000 UTC m=+0.115060911 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 09:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0b3c40a386b28842aa62d3754d2d027eea192a23e98f1b041c5e918bf174a2f-merged.mount: Deactivated successfully.
Oct  3 09:35:49 compute-0 podman[226194]: 2025-10-03 09:35:49.913419886 +0000 UTC m=+0.159210239 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, com.redhat.component=ubi9-container, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.buildah.version=1.29.0, name=ubi9)
Oct  3 09:35:49 compute-0 podman[226149]: 2025-10-03 09:35:49.928650972 +0000 UTC m=+1.430792852 container remove fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 09:35:49 compute-0 systemd[1]: libpod-conmon-fe4f422d16955b73d3784244fbabfb3c1e5b819f14a554ce1494d259364d803f.scope: Deactivated successfully.
Oct  3 09:35:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v199: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct  3 09:35:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct  3 09:35:50 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct  3 09:35:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 87 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=85/86 n=5 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87 pruub=14.996243477s) [2] async=[2] r=-1 lpr=87 pi=[55,87)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 187.784194946s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 87 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=85/86 n=5 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87 pruub=14.996150017s) [2] r=-1 lpr=87 pi=[55,87)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.784194946s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 87 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=85/86 n=6 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87 pruub=14.995787621s) [2] async=[2] r=-1 lpr=87 pi=[55,87)/1 crt=51'389 lcod 0'0 mlcod 0'0 active pruub 187.784149170s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:50 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 87 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=85/86 n=6 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87 pruub=14.995145798s) [2] r=-1 lpr=87 pi=[55,87)/1 crt=51'389 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.784149170s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:35:50 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 87 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87) [2] r=0 lpr=87 pi=[55,87)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:50 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 87 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=6 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87) [2] r=0 lpr=87 pi=[55,87)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:50 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 87 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87) [2] r=0 lpr=87 pi=[55,87)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:35:50 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 87 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87) [2] r=0 lpr=87 pi=[55,87)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:35:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:35:50 compute-0 podman[226434]: 2025-10-03 09:35:50.731067559 +0000 UTC m=+0.050360038 container create 30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:35:50 compute-0 systemd[1]: Started libpod-conmon-30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99.scope.
Oct  3 09:35:50 compute-0 podman[226434]: 2025-10-03 09:35:50.713052434 +0000 UTC m=+0.032344933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:35:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:35:50 compute-0 podman[226434]: 2025-10-03 09:35:50.839447726 +0000 UTC m=+0.158740215 container init 30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  3 09:35:50 compute-0 podman[226434]: 2025-10-03 09:35:50.853456222 +0000 UTC m=+0.172748701 container start 30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:35:50 compute-0 condescending_rosalind[226450]: 167 167
Oct  3 09:35:50 compute-0 systemd[1]: libpod-30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99.scope: Deactivated successfully.
Oct  3 09:35:50 compute-0 podman[226434]: 2025-10-03 09:35:50.870435094 +0000 UTC m=+0.189727593 container attach 30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:35:50 compute-0 podman[226434]: 2025-10-03 09:35:50.870860648 +0000 UTC m=+0.190153137 container died 30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rosalind, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3704fd013a97b12c4d570e6566eab458250b3c2a8300ee9aa17f4dd0f49f6797-merged.mount: Deactivated successfully.
Oct  3 09:35:50 compute-0 podman[226434]: 2025-10-03 09:35:50.92360317 +0000 UTC m=+0.242895649 container remove 30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:35:50 compute-0 systemd[1]: libpod-conmon-30845eada0b0f65fedeb664a4d1a278e648de5fdf4df656f8f4d3724e97e5b99.scope: Deactivated successfully.
Oct  3 09:35:51 compute-0 podman[226479]: 2025-10-03 09:35:51.144718104 +0000 UTC m=+0.067222866 container create 394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:35:51 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Oct  3 09:35:51 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Oct  3 09:35:51 compute-0 systemd[1]: Started libpod-conmon-394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063.scope.
Oct  3 09:35:51 compute-0 podman[226479]: 2025-10-03 09:35:51.121594786 +0000 UTC m=+0.044099568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:35:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33d6ba7bf1bf5954348790a001e8b8b6542d353ccbf938482983f701f8156c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33d6ba7bf1bf5954348790a001e8b8b6542d353ccbf938482983f701f8156c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33d6ba7bf1bf5954348790a001e8b8b6542d353ccbf938482983f701f8156c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f33d6ba7bf1bf5954348790a001e8b8b6542d353ccbf938482983f701f8156c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:51 compute-0 podman[226479]: 2025-10-03 09:35:51.258896856 +0000 UTC m=+0.181401618 container init 394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:35:51 compute-0 podman[226479]: 2025-10-03 09:35:51.271029293 +0000 UTC m=+0.193534045 container start 394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:35:51 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct  3 09:35:51 compute-0 podman[226479]: 2025-10-03 09:35:51.276206488 +0000 UTC m=+0.198711250 container attach 394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:35:51 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct  3 09:35:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct  3 09:35:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct  3 09:35:51 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct  3 09:35:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 88 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=87/88 n=5 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87) [2] r=0 lpr=87 pi=[55,87)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 88 pg[9.c( v 51'389 (0'0,51'389] local-lis/les=87/88 n=6 ec=55/45 lis/c=85/55 les/c/f=86/56/0 sis=87) [2] r=0 lpr=87 pi=[55,87)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:35:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:52 compute-0 competent_elion[226496]: {
Oct  3 09:35:52 compute-0 competent_elion[226496]:    "0": [
Oct  3 09:35:52 compute-0 competent_elion[226496]:        {
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "devices": [
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "/dev/loop3"
Oct  3 09:35:52 compute-0 competent_elion[226496]:            ],
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_name": "ceph_lv0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_size": "21470642176",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "name": "ceph_lv0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "tags": {
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cluster_name": "ceph",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.crush_device_class": "",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.encrypted": "0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osd_id": "0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.type": "block",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.vdo": "0"
Oct  3 09:35:52 compute-0 competent_elion[226496]:            },
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "type": "block",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "vg_name": "ceph_vg0"
Oct  3 09:35:52 compute-0 competent_elion[226496]:        }
Oct  3 09:35:52 compute-0 competent_elion[226496]:    ],
Oct  3 09:35:52 compute-0 competent_elion[226496]:    "1": [
Oct  3 09:35:52 compute-0 competent_elion[226496]:        {
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "devices": [
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "/dev/loop4"
Oct  3 09:35:52 compute-0 competent_elion[226496]:            ],
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_name": "ceph_lv1",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_size": "21470642176",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "name": "ceph_lv1",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "tags": {
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cluster_name": "ceph",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.crush_device_class": "",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.encrypted": "0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osd_id": "1",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.type": "block",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.vdo": "0"
Oct  3 09:35:52 compute-0 competent_elion[226496]:            },
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "type": "block",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "vg_name": "ceph_vg1"
Oct  3 09:35:52 compute-0 competent_elion[226496]:        }
Oct  3 09:35:52 compute-0 competent_elion[226496]:    ],
Oct  3 09:35:52 compute-0 competent_elion[226496]:    "2": [
Oct  3 09:35:52 compute-0 competent_elion[226496]:        {
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "devices": [
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "/dev/loop5"
Oct  3 09:35:52 compute-0 competent_elion[226496]:            ],
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_name": "ceph_lv2",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_size": "21470642176",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "name": "ceph_lv2",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "tags": {
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.cluster_name": "ceph",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.crush_device_class": "",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.encrypted": "0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osd_id": "2",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.type": "block",
Oct  3 09:35:52 compute-0 competent_elion[226496]:                "ceph.vdo": "0"
Oct  3 09:35:52 compute-0 competent_elion[226496]:            },
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "type": "block",
Oct  3 09:35:52 compute-0 competent_elion[226496]:            "vg_name": "ceph_vg2"
Oct  3 09:35:52 compute-0 competent_elion[226496]:        }
Oct  3 09:35:52 compute-0 competent_elion[226496]:    ]
Oct  3 09:35:52 compute-0 competent_elion[226496]: }
Oct  3 09:35:52 compute-0 systemd[1]: libpod-394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063.scope: Deactivated successfully.
Oct  3 09:35:52 compute-0 podman[226479]: 2025-10-03 09:35:52.060864388 +0000 UTC m=+0.983369120 container died 394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f33d6ba7bf1bf5954348790a001e8b8b6542d353ccbf938482983f701f8156c2-merged.mount: Deactivated successfully.
Oct  3 09:35:52 compute-0 podman[226479]: 2025-10-03 09:35:52.142296606 +0000 UTC m=+1.064801338 container remove 394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:35:52 compute-0 systemd[1]: libpod-conmon-394ad4c32d1ebbd5c8189dfdc98f60b5fefe1d02c3d7afa130116fd77c9eb063.scope: Deactivated successfully.
Oct  3 09:35:52 compute-0 podman[226680]: 2025-10-03 09:35:52.911665848 +0000 UTC m=+0.066615855 container create b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:35:52 compute-0 systemd[1]: Started libpod-conmon-b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954.scope.
Oct  3 09:35:52 compute-0 podman[226680]: 2025-10-03 09:35:52.877039053 +0000 UTC m=+0.031989130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:35:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:35:53 compute-0 podman[226680]: 2025-10-03 09:35:53.02582254 +0000 UTC m=+0.180772557 container init b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:35:53 compute-0 podman[226680]: 2025-10-03 09:35:53.036591593 +0000 UTC m=+0.191541580 container start b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 09:35:53 compute-0 podman[226680]: 2025-10-03 09:35:53.041455909 +0000 UTC m=+0.196405936 container attach b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:35:53 compute-0 peaceful_brown[226695]: 167 167
Oct  3 09:35:53 compute-0 podman[226680]: 2025-10-03 09:35:53.044876927 +0000 UTC m=+0.199826914 container died b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:35:53 compute-0 systemd[1]: libpod-b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954.scope: Deactivated successfully.
Oct  3 09:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-07b2bad7c76c15df78d4846ab1088f117c07726962e1c9944f1edf6ef0b639fe-merged.mount: Deactivated successfully.
Oct  3 09:35:53 compute-0 podman[226680]: 2025-10-03 09:35:53.100786002 +0000 UTC m=+0.255735989 container remove b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:35:53 compute-0 systemd[1]: libpod-conmon-b600994ed3a22c620a469f828cd93bc5fd4a3e9e9740de40145fb63fd5faf954.scope: Deactivated successfully.
Oct  3 09:35:53 compute-0 podman[226719]: 2025-10-03 09:35:53.311135801 +0000 UTC m=+0.065708347 container create e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 09:35:53 compute-0 systemd[1]: Started libpod-conmon-e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75.scope.
Oct  3 09:35:53 compute-0 podman[226719]: 2025-10-03 09:35:53.287364283 +0000 UTC m=+0.041936849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:35:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d2fa8901294178797cc9ea08ed37d107e2d2b6bcbc5596474512f94d8629408/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d2fa8901294178797cc9ea08ed37d107e2d2b6bcbc5596474512f94d8629408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d2fa8901294178797cc9ea08ed37d107e2d2b6bcbc5596474512f94d8629408/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d2fa8901294178797cc9ea08ed37d107e2d2b6bcbc5596474512f94d8629408/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:35:53 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct  3 09:35:53 compute-0 podman[226719]: 2025-10-03 09:35:53.44775804 +0000 UTC m=+0.202330586 container init e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:35:53 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct  3 09:35:53 compute-0 podman[226719]: 2025-10-03 09:35:53.457448188 +0000 UTC m=+0.212020714 container start e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:35:53 compute-0 podman[226719]: 2025-10-03 09:35:53.462590042 +0000 UTC m=+0.217162588 container attach e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:35:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v203: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:35:54 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct  3 09:35:54 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct  3 09:35:54 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct  3 09:35:54 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]: {
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "osd_id": 1,
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "type": "bluestore"
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:    },
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "osd_id": 2,
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "type": "bluestore"
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:    },
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "osd_id": 0,
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:        "type": "bluestore"
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]:    }
Oct  3 09:35:54 compute-0 dazzling_elbakyan[226735]: }
Oct  3 09:35:54 compute-0 systemd[1]: libpod-e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75.scope: Deactivated successfully.
Oct  3 09:35:54 compute-0 podman[226719]: 2025-10-03 09:35:54.517637087 +0000 UTC m=+1.272209613 container died e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 09:35:54 compute-0 systemd[1]: libpod-e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75.scope: Consumed 1.057s CPU time.
Oct  3 09:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d2fa8901294178797cc9ea08ed37d107e2d2b6bcbc5596474512f94d8629408-merged.mount: Deactivated successfully.
Oct  3 09:35:54 compute-0 podman[226719]: 2025-10-03 09:35:54.594787349 +0000 UTC m=+1.349359875 container remove e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:35:54 compute-0 systemd[1]: libpod-conmon-e2274e804ff1aca01e06aeac807cc09be057641ea3a2ccf433e5fb35310c4e75.scope: Deactivated successfully.
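[annotation] The JSON that dazzling_elbakyan printed above is ceph-volume raw-list-style output keyed by osd_uuid, mapping each OSD to its ceph_fsid, device and type. A minimal parsing sketch; the inline sample is trimmed from the log above:

    # Sketch: map osd_id -> device from the osd_uuid-keyed JSON shown above.
    import json

    sample = """{
      "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
        "osd_id": 2,
        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
        "type": "bluestore"
      }
    }"""
    for osd_uuid, osd in json.loads(sample).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")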
Oct  3 09:35:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:35:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:35:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 23432977-23ff-4e5a-a25c-c8120089f407 does not exist
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9eeac69b-8618-41c0-9326-dfdfeba688a2 does not exist
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:35:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
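[annotation] The pg_autoscaler lines above follow a simple rule: raw pg target = capacity ratio x bias x PG budget, where the budget here works out to 300 (presumably mon_target_pg_per_osd=100 across the 3 OSDs); the raw value is then quantized to a power of two subject to per-pool floors such as pg_num_min, and an existing pool is only resized when the target is off by roughly 3x, which is why the idle 32-PG pools stay put. A sketch of the arithmetic; the budget and floors are assumptions, not values read from this log:

    # Sketch of the pg_autoscaler arithmetic visible above; PG_BUDGET and the
    # pg_num_min floor are assumptions (mon_target_pg_per_osd=100 * 3 OSDs).
    PG_BUDGET = 100 * 3

    def pg_target(capacity_ratio: float, bias: float, pg_num_min: int = 1) -> int:
        raw = capacity_ratio * bias * PG_BUDGET   # e.g. 7.185750e-06 * 1.0 * 300
        n = max(pg_num_min, 1)
        while n < raw:                            # quantize up to a power of two
            n *= 2
        return n

    print(pg_target(7.185749983720779e-06, 1.0))      # '.mgr' -> 1 (matches log)
    print(pg_target(5.087256625643029e-07, 4.0, 16))  # 'cephfs.cephfs.meta' -> 16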
Oct  3 09:35:55 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct  3 09:35:55 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct  3 09:35:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:35:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
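[annotation] _set_new_cache_sizes is the monitor re-splitting its memory budget (cache_size ~= 1.02 GB here) between the inc/full osdmap caches and the kv cache. The budget is driven by mon_memory_target, which can be inspected with the ceph CLI; a sketch, assuming a working admin keyring on this host:

    # Sketch: inspect the knob behind the cache_size figure shown above.
    import subprocess

    val = subprocess.run(
        ["ceph", "config", "get", "mon", "mon_memory_target"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("mon_memory_target =", val, "bytes")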
Oct  3 09:35:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Oct  3 09:35:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct  3 09:35:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  3 09:35:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct  3 09:35:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  3 09:35:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  3 09:35:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct  3 09:35:56 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
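[annotation] Note the pattern from here on: the mgr raises pgp_num_actual on default.rgw.log one step per osdmap epoch (15, 16, ... 22 below), which is how Ceph spreads a pg_num change out gradually. The same mon command the mgr dispatches can be issued by hand; a sketch, assuming admin privileges (the value 16 mirrors the next step in this log):

    # Sketch: CLI equivalent of the mon_command the mgr dispatches above.
    import subprocess

    subprocess.run(
        ["ceph", "osd", "pool", "set", "default.rgw.log", "pgp_num_actual", "16"],
        check=True,
    )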
Oct  3 09:35:56 compute-0 podman[226833]: 2025-10-03 09:35:56.849151372 +0000 UTC m=+0.104272478 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 09:35:56 compute-0 podman[226832]: 2025-10-03 09:35:56.878703804 +0000 UTC m=+0.133396396 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  3 09:35:57 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct  3 09:35:57 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct  3 09:35:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  3 09:35:57 compute-0 podman[226875]: 2025-10-03 09:35:57.805455547 +0000 UTC m=+0.062739193 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:35:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Oct  3 09:35:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct  3 09:35:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  3 09:35:58 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct  3 09:35:58 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct  3 09:35:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct  3 09:35:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  3 09:35:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  3 09:35:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct  3 09:35:58 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct  3 09:35:59 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  3 09:35:59 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  3 09:35:59 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct  3 09:35:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  3 09:35:59 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct  3 09:35:59 compute-0 podman[157165]: time="2025-10-03T09:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:35:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:35:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6790 "" "Go-http-client/1.1"
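[annotation] The two GETs above are the libpod REST API served by podman system service over the podman socket (the same API the podman_exporter container scrapes). A stdlib-only sketch of the containers/json query; the socket path is an assumption taken from the exporter's volume mounts later in this log:

    # Sketch: issue the same libpod query over the unix socket (path assumed).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.socket_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c["Names"], c["State"])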
Oct  3 09:35:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Oct  3 09:35:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct  3 09:35:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  3 09:36:00 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct  3 09:36:00 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct  3 09:36:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct  3 09:36:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  3 09:36:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  3 09:36:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct  3 09:36:00 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct  3 09:36:01 compute-0 openstack_network_exporter[159287]: ERROR   09:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:36:01 compute-0 openstack_network_exporter[159287]: ERROR   09:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:36:01 compute-0 openstack_network_exporter[159287]: ERROR   09:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:36:01 compute-0 openstack_network_exporter[159287]: ERROR   09:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:36:01 compute-0 openstack_network_exporter[159287]: ERROR   09:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
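[annotation] The exporter errors above mean it found no *.ctl control sockets for ovsdb-server or ovn-northd, and the PMD queries fail because no userspace datapath exists yet. A quick check sketch; the glob path is an assumption based on the /var/run/openvswitch mount earlier in this log:

    # Sketch: look for the control sockets the exporter failed to find, then
    # poke ovs-vswitchd the same way ovs-appctl does.
    import glob
    import subprocess

    socks = glob.glob("/var/run/openvswitch/*.ctl")
    print("control sockets:", socks or "none found (matches the errors above)")
    for sock in socks:
        if "ovs-vswitchd" in sock:
            subprocess.run(["ovs-appctl", "-t", sock, "version"], check=False)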
Oct  3 09:36:01 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct  3 09:36:01 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct  3 09:36:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  3 09:36:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct  3 09:36:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  3 09:36:02 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct  3 09:36:02 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct  3 09:36:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct  3 09:36:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  3 09:36:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct  3 09:36:02 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct  3 09:36:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  3 09:36:03 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  3 09:36:03 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  3 09:36:03 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  3 09:36:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct  3 09:36:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  3 09:36:04 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct  3 09:36:04 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct  3 09:36:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct  3 09:36:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  3 09:36:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct  3 09:36:04 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct  3 09:36:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  3 09:36:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:05 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct  3 09:36:05 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct  3 09:36:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  3 09:36:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct  3 09:36:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  3 09:36:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct  3 09:36:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  3 09:36:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  3 09:36:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct  3 09:36:06 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct  3 09:36:06 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 94 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=94 pruub=12.848379135s) [2] r=-1 lpr=94 pi=[62,94)/1 crt=51'389 mlcod 0'0 active pruub 208.562347412s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:06 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 94 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=94 pruub=12.847397804s) [2] r=-1 lpr=94 pi=[62,94)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 208.562347412s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:06 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=94) [2] r=0 lpr=94 pi=[62,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:07 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  3 09:36:07 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct  3 09:36:07 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  3 09:36:07 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct  3 09:36:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct  3 09:36:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  3 09:36:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct  3 09:36:07 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct  3 09:36:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 95 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=95) [2]/[0] r=0 lpr=95 pi=[62,95)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:07 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 95 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=62/63 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=95) [2]/[0] r=0 lpr=95 pi=[62,95)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:07 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 95 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[62,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:07 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 95 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[62,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:08 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.e scrub starts
Oct  3 09:36:08 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.e scrub ok
Oct  3 09:36:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct  3 09:36:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct  3 09:36:08 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct  3 09:36:08 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 96 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=95/96 n=5 ec=55/45 lis/c=62/62 les/c/f=63/63/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[62,95)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:09 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct  3 09:36:09 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct  3 09:36:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct  3 09:36:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct  3 09:36:09 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct  3 09:36:09 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 97 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=95/62 les/c/f=96/63/0 sis=97) [2] r=0 lpr=97 pi=[62,97)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:09 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 97 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=95/62 les/c/f=96/63/0 sis=97) [2] r=0 lpr=97 pi=[62,97)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:09 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 97 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=95/96 n=5 ec=55/45 lis/c=95/62 les/c/f=96/63/0 sis=97 pruub=14.943365097s) [2] async=[2] r=-1 lpr=97 pi=[62,97)/1 crt=51'389 mlcod 51'389 active pruub 213.748672485s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:09 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 97 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=95/96 n=5 ec=55/45 lis/c=95/62 les/c/f=96/63/0 sis=97 pruub=14.943223000s) [2] r=-1 lpr=97 pi=[62,97)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 213.748672485s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  3 09:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct  3 09:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct  3 09:36:10 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct  3 09:36:10 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 98 pg[9.13( v 51'389 (0'0,51'389] local-lis/les=97/98 n=5 ec=55/45 lis/c=95/62 les/c/f=96/63/0 sis=97) [2] r=0 lpr=97 pi=[62,97)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
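[annotation] The osd.0/osd.2 lines above are pg 9.13 changing its acting set ([0] -> [2] -> [0] -> [2]) as the cluster remaps it: each start_peering_interval opens a new peering interval, and each "AllReplicasActivated ... complete" closes one. The resulting state can be inspected directly; a sketch, assuming a working ceph CLI (pg id 9.13 is taken from this log):

    # Sketch: inspect the peering state of the pg seen above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "pg", "9.13", "query", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    q = json.loads(out)
    print(q["state"], "up:", q["up"], "acting:", q["acting"])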
Oct  3 09:36:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v222: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 1 objects/s recovering
Oct  3 09:36:12 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  3 09:36:12 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  3 09:36:13 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct  3 09:36:13 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct  3 09:36:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct  3 09:36:15 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct  3 09:36:15 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct  3 09:36:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Oct  3 09:36:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct  3 09:36:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  3 09:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:36:16 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct  3 09:36:16 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct  3 09:36:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct  3 09:36:16 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  3 09:36:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct  3 09:36:16 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct  3 09:36:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  3 09:36:17 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct  3 09:36:17 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct  3 09:36:17 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  3 09:36:17 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  3 09:36:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  3 09:36:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct  3 09:36:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  3 09:36:18 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct  3 09:36:18 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct  3 09:36:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct  3 09:36:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  3 09:36:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct  3 09:36:19 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct  3 09:36:19 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 100 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=100 pruub=9.408725739s) [1] r=-1 lpr=100 pi=[63,100)/1 crt=51'389 mlcod 0'0 active pruub 217.546508789s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:19 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 100 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=100 pruub=9.408679962s) [1] r=-1 lpr=100 pi=[63,100)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 217.546508789s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  3 09:36:19 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=100) [1] r=0 lpr=100 pi=[63,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:19 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  3 09:36:19 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  3 09:36:19 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Oct  3 09:36:19 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Oct  3 09:36:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:20 compute-0 podman[226928]: 2025-10-03 09:36:20.019036658 +0000 UTC m=+0.085721965 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:36:20 compute-0 podman[226930]: 2025-10-03 09:36:20.021063994 +0000 UTC m=+0.083247898 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 09:36:20 compute-0 podman[226940]: 2025-10-03 09:36:20.041028891 +0000 UTC m=+0.090493708 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, managed_by=edpm_ansible, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 09:36:20 compute-0 podman[226931]: 2025-10-03 09:36:20.054850811 +0000 UTC m=+0.105280010 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Oct  3 09:36:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct  3 09:36:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct  3 09:36:20 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct  3 09:36:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  3 09:36:20 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[63,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:20 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[63,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:20 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 101 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=101) [1]/[0] r=0 lpr=101 pi=[63,101)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:20 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 101 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=101) [1]/[0] r=0 lpr=101 pi=[63,101)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:20 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct  3 09:36:20 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct  3 09:36:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:20 compute-0 python3.9[227156]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:36:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct  3 09:36:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct  3 09:36:21 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct  3 09:36:21 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct  3 09:36:21 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct  3 09:36:21 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 102 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=101/102 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=101) [1]/[0] async=[1] r=0 lpr=101 pi=[63,101)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:21 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  3 09:36:21 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  3 09:36:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct  3 09:36:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct  3 09:36:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 103 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=101/63 les/c/f=102/64/0 sis=103) [1] r=0 lpr=103 pi=[63,103)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:22 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 103 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=101/63 les/c/f=102/64/0 sis=103) [1] r=0 lpr=103 pi=[63,103)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:22 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct  3 09:36:22 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 103 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=101/102 n=5 ec=55/45 lis/c=101/63 les/c/f=102/64/0 sis=103 pruub=15.040397644s) [1] async=[1] r=-1 lpr=103 pi=[63,103)/1 crt=51'389 mlcod 51'389 active pruub 226.273635864s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:22 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 103 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=101/102 n=5 ec=55/45 lis/c=101/63 les/c/f=102/64/0 sis=103 pruub=15.040308952s) [1] r=-1 lpr=103 pi=[63,103)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 226.273635864s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:22 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  3 09:36:22 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  3 09:36:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct  3 09:36:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct  3 09:36:23 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct  3 09:36:23 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 104 pg[9.15( v 51'389 (0'0,51'389] local-lis/les=103/104 n=5 ec=55/45 lis/c=101/63 les/c/f=102/64/0 sis=103) [1] r=0 lpr=103 pi=[63,103)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:23 compute-0 python3.9[227443]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  3 09:36:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:24 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct  3 09:36:24 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct  3 09:36:24 compute-0 python3.9[227595]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  3 09:36:25 compute-0 python3.9[227747]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:36:25 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  3 09:36:25 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  3 09:36:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:25 compute-0 python3.9[227899]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  3 09:36:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 174 B/s wr, 5 op/s; 37 B/s, 1 objects/s recovering
Oct  3 09:36:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  3 09:36:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  3 09:36:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct  3 09:36:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  3 09:36:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct  3 09:36:26 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  3 09:36:26 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct  3 09:36:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  3 09:36:26 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  3 09:36:27 compute-0 podman[228024]: 2025-10-03 09:36:27.073944136 +0000 UTC m=+0.071245594 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9)
Oct  3 09:36:27 compute-0 podman[228023]: 2025-10-03 09:36:27.130017084 +0000 UTC m=+0.128431478 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:36:27 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  3 09:36:27 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  3 09:36:27 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct  3 09:36:27 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct  3 09:36:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  3 09:36:27 compute-0 python3.9[228093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:36:27 compute-0 podman[228248]: 2025-10-03 09:36:27.923334571 +0000 UTC m=+0.065277724 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 09:36:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct  3 09:36:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct  3 09:36:27 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  3 09:36:28 compute-0 python3.9[228249]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:36:28 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct  3 09:36:28 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct  3 09:36:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct  3 09:36:28 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  3 09:36:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct  3 09:36:28 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct  3 09:36:28 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  3 09:36:28 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  3 09:36:28 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  3 09:36:28 compute-0 python3.9[228350]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:36:29 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  3 09:36:29 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  3 09:36:29 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  3 09:36:29 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 105 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=105 pruub=10.179777145s) [0] r=-1 lpr=105 pi=[74,105)/1 crt=51'389 mlcod 0'0 active pruub 216.198547363s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:29 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 106 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=105 pruub=10.179724693s) [0] r=-1 lpr=105 pi=[74,105)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 216.198547363s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:29 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 106 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=105) [0] r=0 lpr=106 pi=[74,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:29 compute-0 podman[157165]: time="2025-10-03T09:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:36:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:36:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6792 "" "Go-http-client/1.1"
Oct  3 09:36:29 compute-0 python3.9[228502]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  3 09:36:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 4 op/s; 32 B/s, 1 objects/s recovering
Oct  3 09:36:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct  3 09:36:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  3 09:36:30 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Oct  3 09:36:30 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Oct  3 09:36:30 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct  3 09:36:30 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct  3 09:36:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct  3 09:36:30 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  3 09:36:30 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Oct  3 09:36:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  3 09:36:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct  3 09:36:30 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct  3 09:36:30 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 107 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[74,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:30 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 107 pg[9.16( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[74,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:30 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Oct  3 09:36:30 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 107 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=107) [0]/[2] r=0 lpr=107 pi=[74,107)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:30 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 107 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=107) [0]/[2] r=0 lpr=107 pi=[74,107)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:30 compute-0 python3.9[228655]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  3 09:36:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct  3 09:36:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  3 09:36:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct  3 09:36:31 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct  3 09:36:31 compute-0 openstack_network_exporter[159287]: ERROR   09:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:36:31 compute-0 openstack_network_exporter[159287]: ERROR   09:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:36:31 compute-0 openstack_network_exporter[159287]: ERROR   09:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:36:31 compute-0 openstack_network_exporter[159287]: ERROR   09:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:36:31 compute-0 openstack_network_exporter[159287]: ERROR   09:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:36:31 compute-0 python3.9[228808]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  3 09:36:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct  3 09:36:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  3 09:36:32 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct  3 09:36:32 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct  3 09:36:32 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 108 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=107/108 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[74,107)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct  3 09:36:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  3 09:36:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct  3 09:36:32 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct  3 09:36:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  3 09:36:32 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 109 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=109 pruub=12.141753197s) [2] r=-1 lpr=109 pi=[63,109)/1 crt=51'389 mlcod 0'0 active pruub 233.553512573s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:32 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 109 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=109 pruub=12.141704559s) [2] r=-1 lpr=109 pi=[63,109)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 233.553512573s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:32 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 109 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=107/108 n=5 ec=55/45 lis/c=107/74 les/c/f=108/75/0 sis=109 pruub=15.884590149s) [0] async=[0] r=-1 lpr=109 pi=[74,109)/1 crt=51'389 mlcod 51'389 active pruub 224.954635620s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:32 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 109 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=107/108 n=5 ec=55/45 lis/c=107/74 les/c/f=108/75/0 sis=109 pruub=15.884515762s) [0] r=-1 lpr=109 pi=[74,109)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 224.954635620s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:32 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 109 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=107/74 les/c/f=108/75/0 sis=109) [0] r=0 lpr=109 pi=[74,109)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:32 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 109 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=107/74 les/c/f=108/75/0 sis=109) [0] r=0 lpr=109 pi=[74,109)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:32 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=109) [2] r=0 lpr=109 pi=[63,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:32 compute-0 python3.9[228960]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  3 09:36:33 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct  3 09:36:33 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct  3 09:36:33 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.1c deep-scrub starts
Oct  3 09:36:33 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.1c deep-scrub ok
Oct  3 09:36:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct  3 09:36:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  3 09:36:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct  3 09:36:33 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct  3 09:36:33 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 110 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=110) [2]/[0] r=0 lpr=110 pi=[63,110)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:33 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 110 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=63/64 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=110) [2]/[0] r=0 lpr=110 pi=[63,110)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:33 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[63,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:33 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[63,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:33 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 110 pg[9.16( v 51'389 (0'0,51'389] local-lis/les=109/110 n=5 ec=55/45 lis/c=107/74 les/c/f=108/75/0 sis=109) [0] r=0 lpr=109 pi=[74,109)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:33 compute-0 python3.9[229112]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 09:36:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v245: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct  3 09:36:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct  3 09:36:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct  3 09:36:34 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct  3 09:36:34 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 111 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=110/111 n=5 ec=55/45 lis/c=63/63 les/c/f=64/64/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[63,110)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:35 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct  3 09:36:35 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct  3 09:36:35 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  3 09:36:35 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  3 09:36:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct  3 09:36:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct  3 09:36:35 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct  3 09:36:35 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 112 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=110/111 n=5 ec=55/45 lis/c=110/63 les/c/f=111/64/0 sis=112 pruub=14.989250183s) [2] async=[2] r=-1 lpr=112 pi=[63,112)/1 crt=51'389 mlcod 51'389 active pruub 239.466110229s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:35 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 112 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=110/111 n=5 ec=55/45 lis/c=110/63 les/c/f=111/64/0 sis=112 pruub=14.989178658s) [2] r=-1 lpr=112 pi=[63,112)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 239.466110229s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:35 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 112 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=110/63 les/c/f=111/64/0 sis=112) [2] r=0 lpr=112 pi=[63,112)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:35 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 112 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=110/63 les/c/f=111/64/0 sis=112) [2] r=0 lpr=112 pi=[63,112)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:35 compute-0 python3.9[229265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:36:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v248: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct  3 09:36:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct  3 09:36:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct  3 09:36:36 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct  3 09:36:36 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 113 pg[9.19( v 51'389 (0'0,51'389] local-lis/les=112/113 n=5 ec=55/45 lis/c=110/63 les/c/f=111/64/0 sis=112) [2] r=0 lpr=112 pi=[63,112)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:36 compute-0 python3.9[229417]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:36:37 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct  3 09:36:37 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct  3 09:36:37 compute-0 python3.9[229495]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:36:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct  3 09:36:38 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  3 09:36:38 compute-0 python3.9[229647]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:36:38 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Oct  3 09:36:38 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Oct  3 09:36:38 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct  3 09:36:38 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct  3 09:36:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct  3 09:36:38 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  3 09:36:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct  3 09:36:38 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct  3 09:36:38 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  3 09:36:38 compute-0 python3.9[229725]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:36:39 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  3 09:36:39 compute-0 python3.9[229877]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 09:36:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Oct  3 09:36:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct  3 09:36:40 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  3 09:36:40 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct  3 09:36:40 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct  3 09:36:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct  3 09:36:40 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  3 09:36:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct  3 09:36:40 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct  3 09:36:40 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  3 09:36:41 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct  3 09:36:41 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct  3 09:36:41 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  3 09:36:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct  3 09:36:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct  3 09:36:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  3 09:36:42 compute-0 python3.9[230028]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:36:42 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Oct  3 09:36:42 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Oct  3 09:36:42 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Oct  3 09:36:42 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Oct  3 09:36:42 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Oct  3 09:36:42 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Oct  3 09:36:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct  3 09:36:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  3 09:36:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  3 09:36:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct  3 09:36:42 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct  3 09:36:42 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 116 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=87/88 n=5 ec=55/45 lis/c=87/87 les/c/f=88/88/0 sis=116 pruub=12.794911385s) [0] r=-1 lpr=116 pi=[87,116)/1 crt=51'389 mlcod 0'0 active pruub 232.036926270s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:42 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 116 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=87/88 n=5 ec=55/45 lis/c=87/87 les/c/f=88/88/0 sis=116 pruub=12.794004440s) [0] r=-1 lpr=116 pi=[87,116)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 232.036926270s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:42 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 116 pg[9.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=87/87 les/c/f=88/88/0 sis=116) [0] r=0 lpr=116 pi=[87,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:43 compute-0 python3.9[230180]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  3 09:36:43 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct  3 09:36:43 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct  3 09:36:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct  3 09:36:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  3 09:36:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct  3 09:36:43 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct  3 09:36:43 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 117 pg[9.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=87/87 les/c/f=88/88/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[87,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:43 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 117 pg[9.1c( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=87/87 les/c/f=88/88/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[87,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:43 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 117 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=87/88 n=5 ec=55/45 lis/c=87/87 les/c/f=88/88/0 sis=117) [0]/[2] r=0 lpr=117 pi=[87,117)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:43 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 117 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=87/88 n=5 ec=55/45 lis/c=87/87 les/c/f=88/88/0 sis=117) [0]/[2] r=0 lpr=117 pi=[87,117)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:43 compute-0 python3.9[230330]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:36:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:44 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct  3 09:36:44 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct  3 09:36:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct  3 09:36:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct  3 09:36:44 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct  3 09:36:44 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 118 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=117/118 n=5 ec=55/45 lis/c=87/87 les/c/f=88/88/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[87,117)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:45 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct  3 09:36:45 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct  3 09:36:45 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct  3 09:36:45 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct  3 09:36:45 compute-0 python3.9[230482]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:36:45 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  3 09:36:45 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Oct  3 09:36:45 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  3 09:36:45 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  3 09:36:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct  3 09:36:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct  3 09:36:45 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct  3 09:36:45 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 119 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=117/118 n=5 ec=55/45 lis/c=117/87 les/c/f=118/88/0 sis=119 pruub=15.095081329s) [0] async=[0] r=-1 lpr=119 pi=[87,119)/1 crt=51'389 mlcod 51'389 active pruub 237.281799316s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:45 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 119 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=117/118 n=5 ec=55/45 lis/c=117/87 les/c/f=118/88/0 sis=119 pruub=15.094964981s) [0] r=-1 lpr=119 pi=[87,119)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 237.281799316s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:45 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 119 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=117/87 les/c/f=118/88/0 sis=119) [0] r=0 lpr=119 pi=[87,119)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:45 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 119 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=117/87 les/c/f=118/88/0 sis=119) [0] r=0 lpr=119 pi=[87,119)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:45 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  3 09:36:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:36:45
Oct  3 09:36:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:36:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:36:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', '.rgw.root', '.mgr', 'default.rgw.meta']
Oct  3 09:36:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  3 09:36:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct  3 09:36:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:36:46 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct  3 09:36:46 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct  3 09:36:46 compute-0 python3.9[230643]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  3 09:36:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct  3 09:36:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  3 09:36:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct  3 09:36:46 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct  3 09:36:46 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 120 pg[9.1c( v 51'389 (0'0,51'389] local-lis/les=119/120 n=5 ec=55/45 lis/c=117/87 les/c/f=118/88/0 sis=119) [0] r=0 lpr=119 pi=[87,119)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  3 09:36:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  3 09:36:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 6 op/s; 49 B/s, 2 objects/s recovering
Oct  3 09:36:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct  3 09:36:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  3 09:36:48 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct  3 09:36:48 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct  3 09:36:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct  3 09:36:48 compute-0 python3.9[230795]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:36:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  3 09:36:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct  3 09:36:48 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct  3 09:36:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  3 09:36:48 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 121 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=121 pruub=14.839378357s) [0] r=-1 lpr=121 pi=[74,121)/1 crt=51'389 mlcod 0'0 active pruub 240.204544067s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:48 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 121 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=121 pruub=14.839303017s) [0] r=-1 lpr=121 pi=[74,121)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 240.204544067s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:48 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 121 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=121) [0] r=0 lpr=121 pi=[74,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:49 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct  3 09:36:49 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct  3 09:36:49 compute-0 python3.9[230949]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:36:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct  3 09:36:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  3 09:36:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct  3 09:36:49 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct  3 09:36:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 122 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=122) [0]/[2] r=0 lpr=122 pi=[74,122)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:49 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 122 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=74/75 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=122) [0]/[2] r=0 lpr=122 pi=[74,122)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:49 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[74,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:49 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[74,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:50 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Oct  3 09:36:50 compute-0 systemd[1]: session-43.scope: Consumed 1min 12.432s CPU time.
Oct  3 09:36:50 compute-0 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Oct  3 09:36:50 compute-0 systemd-logind[798]: Removed session 43.
Oct  3 09:36:50 compute-0 podman[230979]: 2025-10-03 09:36:50.189028065 +0000 UTC m=+0.086287903 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.build-date=20250930)
Oct  3 09:36:50 compute-0 podman[230977]: 2025-10-03 09:36:50.198874009 +0000 UTC m=+0.093001188 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, version=9.4, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:36:50 compute-0 podman[230980]: 2025-10-03 09:36:50.213616279 +0000 UTC m=+0.107419858 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 09:36:50 compute-0 podman[230978]: 2025-10-03 09:36:50.216395768 +0000 UTC m=+0.114051849 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:36:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct  3 09:36:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct  3 09:36:50 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct  3 09:36:50 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 123 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=122/123 n=5 ec=55/45 lis/c=74/74 les/c/f=75/75/0 sis=122) [0]/[2] async=[0] r=0 lpr=122 pi=[74,122)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct  3 09:36:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct  3 09:36:51 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct  3 09:36:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 124 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=122/123 n=5 ec=55/45 lis/c=122/74 les/c/f=123/75/0 sis=124 pruub=15.250021935s) [0] async=[0] r=-1 lpr=124 pi=[74,124)/1 crt=51'389 mlcod 51'389 active pruub 243.676239014s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:51 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 124 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=122/123 n=5 ec=55/45 lis/c=122/74 les/c/f=123/75/0 sis=124 pruub=15.249917030s) [0] r=-1 lpr=124 pi=[74,124)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 243.676239014s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 124 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=122/74 les/c/f=123/75/0 sis=124) [0] r=0 lpr=124 pi=[74,124)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:51 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 124 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=122/74 les/c/f=123/75/0 sis=124) [0] r=0 lpr=124 pi=[74,124)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:52 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct  3 09:36:52 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct  3 09:36:52 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct  3 09:36:52 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct  3 09:36:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct  3 09:36:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct  3 09:36:52 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct  3 09:36:52 compute-0 ceph-osd[205584]: osd.0 pg_epoch: 125 pg[9.1e( v 51'389 (0'0,51'389] local-lis/les=124/125 n=5 ec=55/45 lis/c=122/74 les/c/f=123/75/0 sis=124) [0] r=0 lpr=124 pi=[74,124)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  3 09:36:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  3 09:36:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:36:54 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 11.d scrub starts
Oct  3 09:36:54 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 11.d scrub ok
Oct  3 09:36:54 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Oct  3 09:36:54 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Oct  3 09:36:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct  3 09:36:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:36:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct  3 09:36:54 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct  3 09:36:54 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 126 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=75/76 n=5 ec=55/45 lis/c=75/75 les/c/f=76/76/0 sis=126 pruub=9.763368607s) [1] r=-1 lpr=126 pi=[75,126)/1 crt=51'389 mlcod 0'0 active pruub 241.221633911s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  3 09:36:54 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 126 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=75/76 n=5 ec=55/45 lis/c=75/75 les/c/f=76/76/0 sis=126 pruub=9.763294220s) [1] r=-1 lpr=126 pi=[75,126)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 241.221633911s@ mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:54 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 126 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=75/75 les/c/f=76/76/0 sis=126) [1] r=0 lpr=126 pi=[75,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:36:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 09:36:54 compute-0 systemd-logind[798]: New session 44 of user zuul.
Oct  3 09:36:54 compute-0 systemd[1]: Started Session 44 of User zuul.
Oct  3 09:36:55 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  3 09:36:55 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct  3 09:36:55 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct  3 09:36:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=75/75 les/c/f=76/76/0 sis=127) [1]/[2] r=-1 lpr=127 pi=[75,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:55 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/45 lis/c=75/75 les/c/f=76/76/0 sis=127) [1]/[2] r=-1 lpr=127 pi=[75,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  3 09:36:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 127 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=75/76 n=5 ec=55/45 lis/c=75/75 les/c/f=76/76/0 sis=127) [1]/[2] r=0 lpr=127 pi=[75,127)/1 crt=51'389 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:55 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 127 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=75/76 n=5 ec=55/45 lis/c=75/75 les/c/f=76/76/0 sis=127) [1]/[2] r=0 lpr=127 pi=[75,127)/1 crt=51'389 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:36:55 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:36:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:36:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:36:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d76fb773-10a3-4e77-b31b-977d78b09ed4 does not exist
Oct  3 09:36:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b9008e98-4f25-4e47-adad-5f89a9b69f26 does not exist
Oct  3 09:36:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f150a0ff-6113-4dd4-aee0-e2ffc3a100f1 does not exist
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:36:55 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:36:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:36:55 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:36:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  3 09:36:56 compute-0 python3.9[231339]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:36:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct  3 09:36:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct  3 09:36:56 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct  3 09:36:56 compute-0 podman[231507]: 2025-10-03 09:36:56.589605397 +0000 UTC m=+0.049172279 container create f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:36:56 compute-0 systemd[193541]: Created slice User Background Tasks Slice.
Oct  3 09:36:56 compute-0 systemd[193541]: Starting Cleanup of User's Temporary Files and Directories...
Oct  3 09:36:56 compute-0 systemd[1]: Started libpod-conmon-f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65.scope.
Oct  3 09:36:56 compute-0 systemd[193541]: Finished Cleanup of User's Temporary Files and Directories.
Oct  3 09:36:56 compute-0 podman[231507]: 2025-10-03 09:36:56.570640372 +0000 UTC m=+0.030207274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:36:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:36:56 compute-0 podman[231507]: 2025-10-03 09:36:56.685975781 +0000 UTC m=+0.145542673 container init f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:36:56 compute-0 podman[231507]: 2025-10-03 09:36:56.695267888 +0000 UTC m=+0.154834770 container start f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:36:56 compute-0 podman[231507]: 2025-10-03 09:36:56.699563024 +0000 UTC m=+0.159129936 container attach f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:36:56 compute-0 suspicious_elgamal[231525]: 167 167
Oct  3 09:36:56 compute-0 podman[231507]: 2025-10-03 09:36:56.703977736 +0000 UTC m=+0.163544618 container died f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:36:56 compute-0 systemd[1]: libpod-f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65.scope: Deactivated successfully.
Oct  3 09:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f158303955c21e7e7f1ecf18122e5e054a67ff60b96d1ffd62e7bff66d812c38-merged.mount: Deactivated successfully.
Oct  3 09:36:56 compute-0 podman[231507]: 2025-10-03 09:36:56.755043234 +0000 UTC m=+0.214610116 container remove f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 09:36:56 compute-0 systemd[1]: libpod-conmon-f1fc294f9d85ba473bdd9302f114cb776d33316fc6d7661463e2260ba28c6e65.scope: Deactivated successfully.
Oct  3 09:36:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:36:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:36:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:36:56 compute-0 podman[231599]: 2025-10-03 09:36:56.927460675 +0000 UTC m=+0.053233259 container create a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:36:56 compute-0 systemd[1]: Started libpod-conmon-a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc.scope.
Oct  3 09:36:56 compute-0 podman[231599]: 2025-10-03 09:36:56.905542466 +0000 UTC m=+0.031315050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:36:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed617bc81e3436218ccaa4e1fc668d70f544a480c0d1c25b483f0f25faf1935/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed617bc81e3436218ccaa4e1fc668d70f544a480c0d1c25b483f0f25faf1935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed617bc81e3436218ccaa4e1fc668d70f544a480c0d1c25b483f0f25faf1935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed617bc81e3436218ccaa4e1fc668d70f544a480c0d1c25b483f0f25faf1935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed617bc81e3436218ccaa4e1fc668d70f544a480c0d1c25b483f0f25faf1935/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:36:57 compute-0 podman[231599]: 2025-10-03 09:36:57.048168375 +0000 UTC m=+0.173940979 container init a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:36:57 compute-0 podman[231599]: 2025-10-03 09:36:57.066327434 +0000 UTC m=+0.192100008 container start a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  3 09:36:57 compute-0 podman[231599]: 2025-10-03 09:36:57.071343274 +0000 UTC m=+0.197115878 container attach a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:36:57 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 128 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=127/128 n=5 ec=55/45 lis/c=75/75 les/c/f=76/76/0 sis=127) [1]/[2] async=[1] r=0 lpr=127 pi=[75,127)/1 crt=51'389 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:57 compute-0 podman[231668]: 2025-10-03 09:36:57.304379968 +0000 UTC m=+0.104646739 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Oct  3 09:36:57 compute-0 podman[231667]: 2025-10-03 09:36:57.31887802 +0000 UTC m=+0.117819459 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 09:36:57 compute-0 python3.9[231732]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  3 09:36:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct  3 09:36:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct  3 09:36:57 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct  3 09:36:57 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 129 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=127/75 les/c/f=128/76/0 sis=129) [1] r=0 lpr=129 pi=[75,129)/1 luod=0'0 crt=51'389 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:57 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 129 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=0/0 n=5 ec=55/45 lis/c=127/75 les/c/f=128/76/0 sis=129) [1] r=0 lpr=129 pi=[75,129)/1 crt=51'389 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  3 09:36:57 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 129 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=127/128 n=5 ec=55/45 lis/c=127/75 les/c/f=128/76/0 sis=129 pruub=15.328033447s) [1] async=[1] r=-1 lpr=129 pi=[75,129)/1 crt=51'389 mlcod 51'389 active pruub 249.904479980s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  3 09:36:57 compute-0 ceph-osd[207741]: osd.2 pg_epoch: 129 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=127/128 n=5 ec=55/45 lis/c=127/75 les/c/f=128/76/0 sis=129 pruub=15.327948570s) [1] r=-1 lpr=129 pi=[75,129)/1 crt=51'389 mlcod 0'0 unknown NOTIFY pruub 249.904479980s@ mbc={}] state<Start>: transitioning to Stray
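
These four ceph-osd lines record one peering interval change for pg 9.1f: the acting set moved from [2] to [1], so osd.1 takes over as Primary and osd.2 drops to Stray. A minimal sketch of pulling that transition out of such lines (the regex is tuned only to the fields shown here, not a general ceph log grammar):

    import re

    PEERING = re.compile(
        r"osd\.(?P<osd>\d+) pg_epoch: (?P<epoch>\d+) pg\[(?P<pgid>[^(]+)\(.*"
        r"acting \[(?P<old>\d+)\] -> \[(?P<new>\d+)\], "
        r"acting_primary (?P<oldp>-?\d+) -> (?P<newp>-?\d+)"
    )

    line = ("osd.1 pg_epoch: 129 pg[9.1f( v 51'389 ... sis=129) [1] r=0 ... ] "
            "start_peering_interval up [1] -> [1], acting [2] -> [1], "
            "acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0")

    m = PEERING.search(line)
    if m:
        print(f"pg {m['pgid']}: acting [{m['old']}] -> [{m['new']}], "
              f"primary {m['oldp']} -> {m['newp']}")
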
Oct  3 09:36:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:36:58 compute-0 hungry_kowalevski[231615]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:36:58 compute-0 hungry_kowalevski[231615]: --> relative data size: 1.0
Oct  3 09:36:58 compute-0 hungry_kowalevski[231615]: --> All data devices are unavailable
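
This hungry_kowalevski output reads like the tail of a ceph-volume batch-style report: three LVM data devices were passed, none physical, and all were judged unavailable, the expected verdict when the LVs already carry OSDs (as the lvm list JSON further down confirms). A sketch, assuming it runs where ceph-volume is installed (e.g. inside a cephadm shell), of asking for per-device availability directly:

    import json
    import subprocess

    # 'ceph-volume inventory --format json' reports, per device, whether it
    # is available for a new OSD and why not ('rejected_reasons') otherwise.
    out = subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"], text=True
    )
    for dev in json.loads(out):
        status = ("available" if dev["available"]
                  else f"unavailable: {dev['rejected_reasons']}")
        print(dev["path"], status)
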
Oct  3 09:36:58 compute-0 systemd[1]: libpod-a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc.scope: Deactivated successfully.
Oct  3 09:36:58 compute-0 systemd[1]: libpod-a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc.scope: Consumed 1.058s CPU time.
Oct  3 09:36:58 compute-0 podman[231599]: 2025-10-03 09:36:58.2063432 +0000 UTC m=+1.332115814 container died a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:36:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ed617bc81e3436218ccaa4e1fc668d70f544a480c0d1c25b483f0f25faf1935-merged.mount: Deactivated successfully.
Oct  3 09:36:58 compute-0 podman[231599]: 2025-10-03 09:36:58.308065115 +0000 UTC m=+1.433837689 container remove a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:36:58 compute-0 systemd[1]: libpod-conmon-a78563356d98921b36a693698fd7bd0075c98cdbff18784d7f831cf798009cdc.scope: Deactivated successfully.
Oct  3 09:36:58 compute-0 podman[231868]: 2025-10-03 09:36:58.357852373 +0000 UTC m=+0.116207958 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
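
node_exporter here listens on host port 9100 with TLS settings taken from the web.config.file named in its config_data, so a scrape has to speak HTTPS. A minimal sketch; the decision to skip verification is a lab shortcut because the CA behind /etc/node_exporter/tls is not known from the log:

    import ssl
    import urllib.request

    ctx = ssl.create_default_context()
    # Sketch only: disable verification rather than guess at the CA file.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen("https://localhost:9100/metrics",
                                context=ctx) as r:
        body = r.read().decode()

    # --collector.systemd with the unit-include pattern above yields these:
    for sample in body.splitlines():
        if sample.startswith("node_systemd_unit_state"):
            print(sample)
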
Oct  3 09:36:58 compute-0 python3.9[231969]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 09:36:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct  3 09:36:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct  3 09:36:58 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct  3 09:36:58 compute-0 ceph-osd[206733]: osd.1 pg_epoch: 130 pg[9.1f( v 51'389 (0'0,51'389] local-lis/les=129/130 n=5 ec=55/45 lis/c=127/75 les/c/f=128/76/0 sis=129) [1] r=0 lpr=129 pi=[75,129)/1 crt=51'389 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  3 09:36:59 compute-0 podman[232094]: 2025-10-03 09:36:59.058261386 +0000 UTC m=+0.056754112 container create 91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 09:36:59 compute-0 systemd[1]: Started libpod-conmon-91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a.scope.
Oct  3 09:36:59 compute-0 podman[232094]: 2025-10-03 09:36:59.035924284 +0000 UTC m=+0.034417060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:36:59 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:36:59 compute-0 podman[232094]: 2025-10-03 09:36:59.152498202 +0000 UTC m=+0.150990948 container init 91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:36:59 compute-0 podman[232094]: 2025-10-03 09:36:59.167707507 +0000 UTC m=+0.166200223 container start 91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:36:59 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct  3 09:36:59 compute-0 podman[232094]: 2025-10-03 09:36:59.172204471 +0000 UTC m=+0.170697197 container attach 91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 09:36:59 compute-0 busy_lamarr[232110]: 167 167
Oct  3 09:36:59 compute-0 podman[232094]: 2025-10-03 09:36:59.174980059 +0000 UTC m=+0.173472805 container died 91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:36:59 compute-0 systemd[1]: libpod-91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a.scope: Deactivated successfully.
Oct  3 09:36:59 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct  3 09:36:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a78bd525b53a112382a955751ef14f833ee19224a63ff0fc570bf46f1c17c5-merged.mount: Deactivated successfully.
Oct  3 09:36:59 compute-0 podman[232094]: 2025-10-03 09:36:59.227543656 +0000 UTC m=+0.226036372 container remove 91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:36:59 compute-0 systemd[1]: libpod-conmon-91c35511812e32c53097614eb5ec08e4b8f16ed4baeaaf8d2e586dcfec36466a.scope: Deactivated successfully.
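
busy_lamarr lived for a few milliseconds, printed "167 167", and was removed: the create/init/start/attach/died/remove sequence is the pattern of a one-shot probe container, and 167:167 is the ceph uid/gid inside the ceph image. Plausibly this was cephadm's uid/gid probe; a hedged reproduction (the stat target /var/lib/ceph is an assumption, not visible in the log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: stat the ceph state directory inside the image to
    # learn which uid/gid the daemons should run as on the host.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    print(out.strip())  # expected: 167 167
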
Oct  3 09:36:59 compute-0 podman[232159]: 2025-10-03 09:36:59.406415111 +0000 UTC m=+0.043613051 container create 84d2986d529db703c5b16df6472b9c1f808cff3a942feae67805cbf7971069a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:36:59 compute-0 systemd[1]: Started libpod-conmon-84d2986d529db703c5b16df6472b9c1f808cff3a942feae67805cbf7971069a3.scope.
Oct  3 09:36:59 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:36:59 compute-0 podman[232159]: 2025-10-03 09:36:59.387888391 +0000 UTC m=+0.025086361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4bcd62e7ae1e5a2a8315d1285a15b41d1e70a66268d410d6b8ab6d66102b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4bcd62e7ae1e5a2a8315d1285a15b41d1e70a66268d410d6b8ab6d66102b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4bcd62e7ae1e5a2a8315d1285a15b41d1e70a66268d410d6b8ab6d66102b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:36:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4bcd62e7ae1e5a2a8315d1285a15b41d1e70a66268d410d6b8ab6d66102b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
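
The four xfs messages are the kernel noting that these overlay-mounted filesystems use 32-bit inode timestamps (xfs without the bigtime feature), which top out at 0x7fffffff seconds after the epoch. The date that limit decodes to:

    from datetime import datetime, timezone

    limit = 0x7fffffff  # 2147483647, the largest signed 32-bit timestamp
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
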
Oct  3 09:36:59 compute-0 podman[232159]: 2025-10-03 09:36:59.512988591 +0000 UTC m=+0.150186551 container init 84d2986d529db703c5b16df6472b9c1f808cff3a942feae67805cbf7971069a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:36:59 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct  3 09:36:59 compute-0 podman[232159]: 2025-10-03 09:36:59.527934618 +0000 UTC m=+0.165132558 container start 84d2986d529db703c5b16df6472b9c1f808cff3a942feae67805cbf7971069a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 09:36:59 compute-0 podman[232159]: 2025-10-03 09:36:59.532203194 +0000 UTC m=+0.169401134 container attach 84d2986d529db703c5b16df6472b9c1f808cff3a942feae67805cbf7971069a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_poincare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:36:59 compute-0 ceph-osd[205584]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct  3 09:36:59 compute-0 podman[157165]: time="2025-10-03T09:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:36:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34390 "" "Go-http-client/1.1"
Oct  3 09:36:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7218 "" "Go-http-client/1.1"
Oct  3 09:36:59 compute-0 python3.9[232226]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
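
The dnf module call above runs with download_only=True: it resolves and fetches the openvswitch package set without installing anything. The CLI equivalent, wrapped in Python to match the other sketches here:

    import subprocess

    # Same effect as the ansible task: download 'openvswitch' and its
    # dependencies into the dnf cache without touching the installed set.
    subprocess.run(
        ["dnf", "install", "--downloadonly", "-y", "openvswitch"],
        check=True,
    )
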
Oct  3 09:37:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  3 09:37:00 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct  3 09:37:00 compute-0 ceph-osd[206733]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct  3 09:37:00 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct  3 09:37:00 compute-0 ceph-osd[207741]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]: {
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:    "0": [
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:        {
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "devices": [
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "/dev/loop3"
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            ],
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_name": "ceph_lv0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_size": "21470642176",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "name": "ceph_lv0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "tags": {
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.cluster_name": "ceph",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.crush_device_class": "",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.encrypted": "0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.osd_id": "0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.type": "block",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.vdo": "0"
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            },
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "type": "block",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "vg_name": "ceph_vg0"
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:        }
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:    ],
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:    "1": [
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:        {
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "devices": [
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "/dev/loop4"
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            ],
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_name": "ceph_lv1",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_size": "21470642176",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "name": "ceph_lv1",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:            "tags": {
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.cluster_name": "ceph",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.crush_device_class": "",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.encrypted": "0",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.osd_id": "1",
Oct  3 09:37:00 compute-0 inspiring_poincare[232216]:                "ceph.osdspec_affinity": "default_drive_group",
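
The JSON above breaks off mid-object: the rsyslogd line just below reports 2085 messages dropped by imjournal rate limiting, which swallowed the rest of this report and everything else between 09:37:00 and 09:39:27. The surviving structure looks like per-OSD output of ceph-volume lvm list --format json, with each LV's tags duplicated as a flat lv_tags string; a minimal sketch of splitting that string (sample abridged from the log):

    lv_tags = (
        "ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
        "ceph.cluster_name=ceph,ceph.osd_id=0,"
        "ceph.osdspec_affinity=default_drive_group,ceph.type=block"
    )

    # Each tag is 'key=value'; none of the values here contain commas.
    tags = dict(item.split("=", 1) for item in lv_tags.split(","))
    print(tags["ceph.osd_id"], tags["ceph.type"])  # 0 block

If the message loss matters, imjournal's ratelimit.interval and ratelimit.burst module parameters in rsyslog.conf can be raised above the default noted in the warning (20000 per 600 seconds).
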
Oct  3 09:39:27 compute-0 python3.9[249755]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  3 09:39:27 compute-0 rsyslogd[187556]: imjournal: 2085 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  3 09:39:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:28 compute-0 python3.9[249907]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:39:29 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  3 09:39:29 compute-0 podman[249987]: 2025-10-03 09:39:29.23994212 +0000 UTC m=+0.086306523 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 09:39:29 compute-0 podman[249986]: 2025-10-03 09:39:29.2883172 +0000 UTC m=+0.140595224 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  3 09:39:29 compute-0 python3.9[250109]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  3 09:39:29 compute-0 podman[157165]: time="2025-10-03T09:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:39:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:39:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6804 "" "Go-http-client/1.1"
Oct  3 09:39:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:30 compute-0 python3.9[250261]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.lsn52clm follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:39:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:39:30 compute-0 podman[250308]: 2025-10-03 09:39:30.814147769 +0000 UTC m=+0.080707653 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:39:31 compute-0 openstack_network_exporter[159287]: ERROR   09:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:39:31 compute-0 openstack_network_exporter[159287]: ERROR   09:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:39:31 compute-0 openstack_network_exporter[159287]: ERROR   09:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:39:31 compute-0 openstack_network_exporter[159287]: ERROR   09:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:39:31 compute-0 openstack_network_exporter[159287]: ERROR   09:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
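
These exporter errors mean it found no ovsdb-server or ovn-northd control sockets to query; on a compute node ovn-northd legitimately does not run, and the pmd-* appctl calls need a userspace (netdev) datapath this host doesn't have. A quick check of what the exporter is looking for, using the host paths from the volume maps in its config_data above (the <daemon>.<pid>.ctl naming is the usual ovs/ovn convention, assumed here):

    import glob

    # Control sockets appear once the daemons are up, named <daemon>.<pid>.ctl.
    print(glob.glob("/var/run/openvswitch/ovsdb-server.*.ctl"))
    print(glob.glob("/var/lib/openvswitch/ovn/ovn-northd.*.ctl"))  # empty: no northd here
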
Oct  3 09:39:31 compute-0 python3.9[250410]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.lsn52clm mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484369.9024966-44-93968640761058/.source.lsn52clm _original_basename=.s2lsrty4 follow=False checksum=ddaa840ce4fcd66a1d008fee730f44d5d6239ead backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:39:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:32 compute-0 python3.9[250562]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:39:33 compute-0 python3.9[250714]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDK4WTt4xfXLVwcRwaUjEcQuTzZaA6+xfZoKvSEmlHXlteBKLg49zRr9WMOm7VuGvF7VdZSGZC2rKIhWoMtv/Znw8t2sgD9s7fQHlhJarA5UzZOuA8UEmlEVuJiIuxqO0U/3vocfIPfFsINVOJJSQcsXmBmar2rJHMSLTcxSZ1gIJKbt4zWALA2xd4rm0RJPMmAbCVBx//Q3Tq/agJ2+esCcGprB3rJZ1KETzXEaZTnp1ea7xZsb4B+QM07L7PAvMed0ELxdUlDDtPWDl3nVmt8mTFmVUF8XkQMWDrXfT8L5r9vBDYFTXbmUT6hwYElNZuSJRsz2AKj8T1Ww4RjWM/3+nwJzUIFYQ1qDgTnfO/gQb2hkSPHxm+uYPCy8XJUvX0JTq4Dy9phAnvTBBiZRUBL7IJWCoAUrQqgQDzz/cmBGY/h/9WXab5t62pvyq5GjSyKQeCgl6C+LNizUU5DJqjNzstWIz1tzFVetJjz68d7g9MMwlniuw/XjIuijz9+zj8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC/sH/Biub/ue+Yt01F7tQoZjOq2HzQ6x0hqVBc5qpVY#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKW8xTeb68zZuBoaFZxe0+liZSD/t9iQ2YlLG27C9NpUXcRSwJq28L2aw0M8BztmsIWjN+83014f6s2TAnQ4raE=#012 create=True mode=0644 path=/tmp/ansible.lsn52clm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:39:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:34 compute-0 python3.9[250866]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lsn52clm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:39:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:39:35 compute-0 python3.9[251020]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lsn52clm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
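
The run from 09:39:27 to 09:39:35 is one ansible pattern end to end: create a temp file, blockinfile the gathered host keys into it, 'cat' it over /etc/ssh/ssh_known_hosts, then remove the temp file. The same flow in plain Python (key strings abbreviated; the full ones are in the blockinfile call above):

    import os
    import tempfile

    entries = [
        "compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...",
        # ... one line per gathered host key type, as in the blockinfile block
    ]

    fd, tmp = tempfile.mkstemp(prefix="ansible.")
    try:
        with os.fdopen(fd, "w") as f:
            f.write("# BEGIN ANSIBLE MANAGED BLOCK\n")
            f.write("\n".join(entries) + "\n")
            f.write("# END ANSIBLE MANAGED BLOCK\n")
        os.chmod(tmp, 0o644)
        # The play used 'cat tmp > /etc/ssh/ssh_known_hosts'; a rename onto
        # the target is the atomic equivalent on the same filesystem.
        os.replace(tmp, "/etc/ssh/ssh_known_hosts")
    finally:
        if os.path.exists(tmp):
            os.unlink(tmp)
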
Oct  3 09:39:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:36 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Oct  3 09:39:36 compute-0 systemd[1]: session-49.scope: Consumed 7.347s CPU time.
Oct  3 09:39:36 compute-0 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Oct  3 09:39:36 compute-0 systemd-logind[798]: Removed session 49.
Oct  3 09:39:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:38 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Oct  3 09:39:38 compute-0 systemd[1]: session-26.scope: Consumed 2min 30.096s CPU time.
Oct  3 09:39:38 compute-0 systemd-logind[798]: Session 26 logged out. Waiting for processes to exit.
Oct  3 09:39:38 compute-0 systemd-logind[798]: Removed session 26.
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.951 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.952 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.968 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.969 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.970 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.970 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.971 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.973 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.974 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.975 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.976 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.976 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.976 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.976 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.977 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.977 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.977 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.977 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.978 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.978 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:39:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:39:38.978 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
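
Every meter in this cycle follows the same three steps visible above: run the [local_instances] discovery, skip the pollster when discovery finds nothing (the "no resources found this cycle" lines; no guests are running on this node yet), then log completion. A compressed sketch of that control flow, with the discovery function and the meter list as stand-ins:

    # Condensed view of the discover -> skip/poll -> finish cycle above.
    # discover_local_instances() is a stand-in: on this idle compute node
    # it finds no guests, so every meter is skipped.
    def discover_local_instances():
        return []

    meters = ["disk.device.read.bytes", "cpu", "memory.usage",
              "network.incoming.bytes"]  # subset of the meters logged above

    for meter in meters:
        resources = discover_local_instances()
        if not resources:
            print(f"Skip pollster {meter}, no resources found this cycle")
            continue
        # ... get_samples(resources) would run here ...
    # each task still ends with "Finished processing pollster [<meter>]"
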
Oct  3 09:39:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
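
The recurring ceph-mon line is the monitor's cache autotuner splitting its memory target into incremental-map, full-map, and key/value (RocksDB) allocations. Pure arithmetic on the logged values confirms the three allocations sit just under the reported cache_size:

    # Arithmetic check on the _set_new_cache_sizes line above.
    cache_size = 1020054731            # bytes, as logged
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 322961408

    total = inc_alloc + full_alloc + kv_alloc
    print(total, cache_size - total)   # 1019215872, 838859 bytes headroom
    print(f"{total / 2**20:.0f} MiB of {cache_size / 2**20:.0f} MiB target")
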
Oct  3 09:39:41 compute-0 systemd-logind[798]: New session 50 of user zuul.
Oct  3 09:39:41 compute-0 systemd[1]: Started Session 50 of User zuul.
Oct  3 09:39:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:42 compute-0 python3.9[251200]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:39:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:44 compute-0 python3.9[251356]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  3 09:39:45 compute-0 python3.9[251510]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:39:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:39:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:39:45
Oct  3 09:39:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:39:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:39:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups']
Oct  3 09:39:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
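
The balancer pass above ran in upmap mode, capped at 5% misplaced PGs, and prepared no changes: with all 321 PGs active+clean on an essentially empty cluster there is nothing to move. A toy rendering of that gate (the real module evaluates per-pool PG distributions, not this single ratio):

    # Simplified view of the balancer throttle implied by the log:
    # only prepare changes while misplaced PGs stay under max_misplaced.
    total_pgs = 321
    misplaced_pgs = 0          # cluster is fully active+clean
    max_misplaced = 0.05       # "max misplaced 0.050000" above

    if misplaced_pgs / total_pgs >= max_misplaced:
        print("defer balancing")
    else:
        prepared = 0           # nothing to improve on an empty cluster
        print(f"prepared {prepared}/10 changes")
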
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:39:46 compute-0 python3.9[251663]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:39:47 compute-0 python3.9[251816]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:39:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:48 compute-0 python3.9[251968]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
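
The nft reload plus the stat/absent pair on edpm-rules.nft.changed is a change-marker idiom: an earlier step drops the .changed flag when it rewrites the rules, and this run consumes the flag so the reload is one-shot. A minimal sketch of the same idiom (the marker path comes from the log; the ruleset filename and the exact ordering used by edpm_ansible are assumptions):

    # Change-marker idiom suggested by the stat/absent pair above: act
    # only when a writer left a ".changed" flag file, then consume it.
    import os
    import subprocess

    marker = "/etc/nftables/edpm-rules.nft.changed"   # path from the log

    if os.path.exists(marker):
        # Reload the ruleset (filename assumed from the marker name),
        # then clear the marker so the reload does not repeat needlessly.
        subprocess.run(["nft", "-f", "/etc/nftables/edpm-rules.nft"],
                       check=True)
        os.remove(marker)
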
Oct  3 09:39:49 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Oct  3 09:39:49 compute-0 systemd[1]: session-50.scope: Consumed 5.643s CPU time.
Oct  3 09:39:49 compute-0 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Oct  3 09:39:49 compute-0 systemd-logind[798]: Removed session 50.
Oct  3 09:39:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:39:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:52 compute-0 podman[251995]: 2025-10-03 09:39:52.830818157 +0000 UTC m=+0.072511889 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 09:39:52 compute-0 podman[251994]: 2025-10-03 09:39:52.842829893 +0000 UTC m=+0.084656490 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9)
Oct  3 09:39:52 compute-0 podman[251997]: 2025-10-03 09:39:52.860670959 +0000 UTC m=+0.083455962 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Oct  3 09:39:52 compute-0 podman[251996]: 2025-10-03 09:39:52.896568746 +0000 UTC m=+0.126753058 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
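
Each podman health_status journal entry above records one healthcheck execution (health_status=healthy, failing streak 0). The same state can be read back from podman itself; a short sketch using `podman inspect` (container name taken from the log; assumes the container defines a healthcheck, as these do):

    # Read a container's health state the same way these journal lines
    # report it, via podman's standard inspect Go-template output.
    import json
    import subprocess

    name = "ceilometer_agent_compute"   # container_name from the log line
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], health.get("FailingStreak", 0))
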
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:39:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
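
The pg_autoscaler figures above are internally consistent: every logged "pg target" equals used_ratio × bias × 300, and 300 is what the default 100 target PGs per OSD would give on a 3-OSD, 60 GiB cluster (an inference from the log, not stated in it). A check against three of the pools:

    # Reproducing the pg_autoscaler "pg target" numbers logged above.
    # Assumption: mon_target_pg_per_osd=100 with 3 OSDs -> factor of 300.
    target_pgs = 100 * 3

    pools = {                     # name: (used_ratio, bias) from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * target_pgs)  # matches the logged targets
    # The final step ("quantized to 1/16/32" above) additionally rounds to
    # a power of two and respects per-pool minimums and the current pg_num.
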
Oct  3 09:39:54 compute-0 systemd-logind[798]: New session 51 of user zuul.
Oct  3 09:39:54 compute-0 systemd[1]: Started Session 51 of User zuul.
Oct  3 09:39:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:39:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:56 compute-0 python3.9[252223]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:39:57 compute-0 python3.9[252379]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 09:39:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:39:58 compute-0 python3.9[252463]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  3 09:39:59 compute-0 podman[157165]: time="2025-10-03T09:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:39:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:39:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6805 "" "Go-http-client/1.1"
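
The two GET requests above are libpod REST calls arriving over podman's UNIX socket, the same /run/podman/podman.sock that the podman_exporter container mounts. A bare-bones way to issue the first call yourself (requires access to the socket, typically root):

    # Minimal raw-HTTP client for the libpod containers/json call seen
    # in the access log above, over podman's UNIX socket.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.socket_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)   # 200, as in the access log above
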
Oct  3 09:39:59 compute-0 podman[252466]: 2025-10-03 09:39:59.85215851 +0000 UTC m=+0.083369910 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 09:39:59 compute-0 podman[252465]: 2025-10-03 09:39:59.900308522 +0000 UTC m=+0.148864700 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  3 09:40:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:00 compute-0 python3.9[252660]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
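
needs-restarting -r comes from yum-utils, installed by the dnf task just above; it exits 0 when no reboot is needed and 1 when the kernel or core libraries have changed since boot. The check the playbook is effectively making:

    # needs-restarting -r signals "reboot required" via its exit code.
    import subprocess

    rc = subprocess.run(["needs-restarting", "-r"]).returncode
    reboot_required = (rc == 1)
    print("reboot required:", reboot_required)
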
Oct  3 09:40:01 compute-0 podman[252662]: 2025-10-03 09:40:01.336473241 +0000 UTC m=+0.091982437 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 09:40:01 compute-0 openstack_network_exporter[159287]: ERROR   09:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:40:01 compute-0 openstack_network_exporter[159287]: ERROR   09:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:40:01 compute-0 openstack_network_exporter[159287]: ERROR   09:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:40:01 compute-0 openstack_network_exporter[159287]: ERROR   09:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:40:01 compute-0 openstack_network_exporter[159287]: ERROR   09:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
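
These exporter errors are lookups for daemon control sockets the container cannot find: ovn-northd runs on the controller nodes, and no userspace (netdev) datapath exists here, so the pmd-* queries have nothing to answer them. Listing the sockets that are actually present shows what the exporter can reach (standard rundir paths; containerized deployments may remap them):

    # List the OVS/OVN control sockets the exporter is probing for.
    # /run/openvswitch and /run/ovn are the usual rundirs on this host.
    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))
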
Oct  3 09:40:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:02 compute-0 python3.9[252834]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 09:40:03 compute-0 python3.9[252984]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:40:03 compute-0 python3.9[253134]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:40:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:04 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Oct  3 09:40:04 compute-0 systemd[1]: session-51.scope: Consumed 7.476s CPU time.
Oct  3 09:40:04 compute-0 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Oct  3 09:40:04 compute-0 systemd-logind[798]: Removed session 51.
Oct  3 09:40:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:09 compute-0 systemd-logind[798]: New session 52 of user zuul.
Oct  3 09:40:09 compute-0 systemd[1]: Started Session 52 of User zuul.
Oct  3 09:40:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:10 compute-0 python3.9[253314]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:40:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:13 compute-0 python3.9[253470]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:13 compute-0 python3.9[253622]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:14 compute-0 python3.9[253774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:15 compute-0 python3.9[253852]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:40:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:16 compute-0 python3.9[254004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:17 compute-0 python3.9[254082]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:17 compute-0 python3.9[254234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:18 compute-0 python3.9[254312]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:19 compute-0 python3.9[254464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:20 compute-0 python3.9[254617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:21 compute-0 python3.9[254769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:21 compute-0 python3.9[254847]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:22 compute-0 python3.9[254999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:23 compute-0 podman[255054]: 2025-10-03 09:40:23.14135814 +0000 UTC m=+0.071990103 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:40:23 compute-0 podman[255050]: 2025-10-03 09:40:23.147049013 +0000 UTC m=+0.078100589 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:40:23 compute-0 podman[255059]: 2025-10-03 09:40:23.165594211 +0000 UTC m=+0.086831761 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 09:40:23 compute-0 podman[255057]: 2025-10-03 09:40:23.219267622 +0000 UTC m=+0.133399562 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:40:23 compute-0 python3.9[255141]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:24 compute-0 python3.9[255308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:24 compute-0 python3.9[255386]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:40:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:40:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:40:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:40:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:40:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:40:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:40:25 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d7b718ff-a4a9-491d-a444-a25d0b4127d6 does not exist
Oct  3 09:40:25 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6a6efaf4-b3c6-4f08-b787-644733010fff does not exist
Oct  3 09:40:25 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 94a6eb49-6f44-4ba9-be4b-2486f65c342e does not exist
Oct  3 09:40:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:40:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:40:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:40:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:40:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:40:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:40:25 compute-0 python3.9[255657]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:26 compute-0 podman[255961]: 2025-10-03 09:40:26.615196734 +0000 UTC m=+0.054663784 container create c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:40:26 compute-0 systemd[1]: Started libpod-conmon-c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417.scope.
Oct  3 09:40:26 compute-0 podman[255961]: 2025-10-03 09:40:26.593480424 +0000 UTC m=+0.032947484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:40:26 compute-0 python3.9[255953]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:40:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:40:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:40:26 compute-0 podman[255961]: 2025-10-03 09:40:26.735622437 +0000 UTC m=+0.175089557 container init c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:40:26 compute-0 podman[255961]: 2025-10-03 09:40:26.74845962 +0000 UTC m=+0.187926660 container start c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:40:26 compute-0 podman[255961]: 2025-10-03 09:40:26.756419207 +0000 UTC m=+0.195886287 container attach c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:40:26 compute-0 happy_morse[255977]: 167 167
Oct  3 09:40:26 compute-0 systemd[1]: libpod-c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417.scope: Deactivated successfully.
Oct  3 09:40:26 compute-0 podman[255961]: 2025-10-03 09:40:26.761040417 +0000 UTC m=+0.200507447 container died c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc893b5aa2ef0897ae3ebc1248ef37437f002b9473988e5ca78d4ee27749e0e8-merged.mount: Deactivated successfully.
Oct  3 09:40:26 compute-0 podman[255961]: 2025-10-03 09:40:26.824938027 +0000 UTC m=+0.264405067 container remove c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_morse, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:40:26 compute-0 systemd[1]: libpod-conmon-c9031fef2301b99976afa333b7d8a0ed39391ab998c1e9bfc5a8b10df14de417.scope: Deactivated successfully.
Oct  3 09:40:27 compute-0 podman[256049]: 2025-10-03 09:40:27.039979551 +0000 UTC m=+0.062001511 container create d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:40:27 compute-0 systemd[1]: Started libpod-conmon-d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1.scope.
Oct  3 09:40:27 compute-0 podman[256049]: 2025-10-03 09:40:27.01575973 +0000 UTC m=+0.037781740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:40:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd36fe8d3b76da129423d035b1cb86cf902ddf1a12c5d8b73c4362b044d6b4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd36fe8d3b76da129423d035b1cb86cf902ddf1a12c5d8b73c4362b044d6b4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd36fe8d3b76da129423d035b1cb86cf902ddf1a12c5d8b73c4362b044d6b4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd36fe8d3b76da129423d035b1cb86cf902ddf1a12c5d8b73c4362b044d6b4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd36fe8d3b76da129423d035b1cb86cf902ddf1a12c5d8b73c4362b044d6b4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:27 compute-0 podman[256049]: 2025-10-03 09:40:27.174437896 +0000 UTC m=+0.196459866 container init d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moser, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:40:27 compute-0 podman[256049]: 2025-10-03 09:40:27.183312522 +0000 UTC m=+0.205334472 container start d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:40:27 compute-0 podman[256049]: 2025-10-03 09:40:27.188636544 +0000 UTC m=+0.210658494 container attach d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 09:40:27 compute-0 python3.9[256172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:28 compute-0 goofy_moser[256115]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:40:28 compute-0 goofy_moser[256115]: --> relative data size: 1.0
Oct  3 09:40:28 compute-0 goofy_moser[256115]: --> All data devices are unavailable
Oct  3 09:40:28 compute-0 systemd[1]: libpod-d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1.scope: Deactivated successfully.
Oct  3 09:40:28 compute-0 systemd[1]: libpod-d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1.scope: Consumed 1.074s CPU time.
Oct  3 09:40:28 compute-0 podman[256049]: 2025-10-03 09:40:28.336446545 +0000 UTC m=+1.358468595 container died d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dd36fe8d3b76da129423d035b1cb86cf902ddf1a12c5d8b73c4362b044d6b4f-merged.mount: Deactivated successfully.
Oct  3 09:40:28 compute-0 podman[256049]: 2025-10-03 09:40:28.431536731 +0000 UTC m=+1.453558681 container remove d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:40:28 compute-0 systemd[1]: libpod-conmon-d5ce0607ff04e95c3c1daa79d94383dba2fcad802279054e30ac9b743b66a5d1.scope: Deactivated successfully.
Oct  3 09:40:28 compute-0 python3.9[256318]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759484426.932101-165-53820038124456/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=30f05c498f967ab4ade94cffa9bf6856920ebe7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:29 compute-0 podman[256616]: 2025-10-03 09:40:29.286696466 +0000 UTC m=+0.074595367 container create 817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:40:29 compute-0 python3.9[256603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:29 compute-0 systemd[1]: Started libpod-conmon-817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37.scope.
Oct  3 09:40:29 compute-0 podman[256616]: 2025-10-03 09:40:29.255525951 +0000 UTC m=+0.043424822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:40:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:40:29 compute-0 podman[256616]: 2025-10-03 09:40:29.405341501 +0000 UTC m=+0.193240392 container init 817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 09:40:29 compute-0 podman[256616]: 2025-10-03 09:40:29.413912828 +0000 UTC m=+0.201811689 container start 817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:40:29 compute-0 podman[256616]: 2025-10-03 09:40:29.41739091 +0000 UTC m=+0.205289771 container attach 817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:40:29 compute-0 trusting_mendel[256633]: 167 167
Oct  3 09:40:29 compute-0 systemd[1]: libpod-817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37.scope: Deactivated successfully.
Oct  3 09:40:29 compute-0 conmon[256633]: conmon 817f56cf9c5f970c1ae4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37.scope/container/memory.events
Oct  3 09:40:29 compute-0 podman[256616]: 2025-10-03 09:40:29.423374213 +0000 UTC m=+0.211273074 container died 817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c6fd269594ec9b8080e509ef5429b945b65f29cd5956a25a63a5088e1b04092-merged.mount: Deactivated successfully.
Oct  3 09:40:29 compute-0 podman[256616]: 2025-10-03 09:40:29.482102377 +0000 UTC m=+0.270001248 container remove 817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:40:29 compute-0 systemd[1]: libpod-conmon-817f56cf9c5f970c1ae446939b7c33e756d3b1abce99945e8ef16e4415bdea37.scope: Deactivated successfully.
Oct  3 09:40:29 compute-0 podman[256712]: 2025-10-03 09:40:29.69680935 +0000 UTC m=+0.066099202 container create bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:40:29 compute-0 podman[157165]: time="2025-10-03T09:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:40:29 compute-0 systemd[1]: Started libpod-conmon-bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8.scope.
Oct  3 09:40:29 compute-0 podman[256712]: 2025-10-03 09:40:29.677729545 +0000 UTC m=+0.047019417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:40:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c54cce425e9880069ce433eefbe3fb1cf5b742f10df272a3e972ecba1202e94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c54cce425e9880069ce433eefbe3fb1cf5b742f10df272a3e972ecba1202e94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c54cce425e9880069ce433eefbe3fb1cf5b742f10df272a3e972ecba1202e94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c54cce425e9880069ce433eefbe3fb1cf5b742f10df272a3e972ecba1202e94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:29 compute-0 podman[256712]: 2025-10-03 09:40:29.806119185 +0000 UTC m=+0.175409057 container init bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:40:29 compute-0 podman[256712]: 2025-10-03 09:40:29.829495939 +0000 UTC m=+0.198785801 container start bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wescoff, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:40:29 compute-0 podman[256712]: 2025-10-03 09:40:29.835028757 +0000 UTC m=+0.204318619 container attach bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:40:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34387 "" "Go-http-client/1.1"
Oct  3 09:40:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7226 "" "Go-http-client/1.1"
Oct  3 09:40:30 compute-0 podman[256799]: 2025-10-03 09:40:30.037524476 +0000 UTC m=+0.094388164 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Oct  3 09:40:30 compute-0 podman[256800]: 2025-10-03 09:40:30.07114682 +0000 UTC m=+0.129612100 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 09:40:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:30 compute-0 python3.9[256801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759484428.7057118-165-78382970301502/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=05088e0c1f92b3abd58f217858280bd4aaa04b30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]: {
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:    "0": [
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:        {
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "devices": [
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "/dev/loop3"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            ],
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_name": "ceph_lv0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_size": "21470642176",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "name": "ceph_lv0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "tags": {
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cluster_name": "ceph",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.crush_device_class": "",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.encrypted": "0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osd_id": "0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.type": "block",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.vdo": "0"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            },
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "type": "block",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "vg_name": "ceph_vg0"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:        }
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:    ],
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:    "1": [
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:        {
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "devices": [
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "/dev/loop4"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            ],
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_name": "ceph_lv1",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_size": "21470642176",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "name": "ceph_lv1",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "tags": {
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cluster_name": "ceph",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.crush_device_class": "",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.encrypted": "0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osd_id": "1",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.type": "block",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.vdo": "0"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            },
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "type": "block",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "vg_name": "ceph_vg1"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:        }
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:    ],
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:    "2": [
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:        {
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "devices": [
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "/dev/loop5"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            ],
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_name": "ceph_lv2",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_size": "21470642176",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "name": "ceph_lv2",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "tags": {
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.cluster_name": "ceph",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.crush_device_class": "",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.encrypted": "0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osd_id": "2",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.type": "block",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:                "ceph.vdo": "0"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            },
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "type": "block",
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:            "vg_name": "ceph_vg2"
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:        }
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]:    ]
Oct  3 09:40:30 compute-0 agitated_wescoff[256766]: }
Oct  3 09:40:30 compute-0 systemd[1]: libpod-bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8.scope: Deactivated successfully.
Oct  3 09:40:30 compute-0 podman[256712]: 2025-10-03 09:40:30.673274963 +0000 UTC m=+1.042564845 container died bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 09:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c54cce425e9880069ce433eefbe3fb1cf5b742f10df272a3e972ecba1202e94-merged.mount: Deactivated successfully.
Oct  3 09:40:30 compute-0 podman[256712]: 2025-10-03 09:40:30.7565898 +0000 UTC m=+1.125879652 container remove bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct  3 09:40:30 compute-0 systemd[1]: libpod-conmon-bdeb4ae15ad0685dfad86554db2acd18f965cb23ecad2c7dc17f22a71f6b3de8.scope: Deactivated successfully.
Oct  3 09:40:31 compute-0 python3.9[257033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:31 compute-0 openstack_network_exporter[159287]: ERROR   09:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:40:31 compute-0 openstack_network_exporter[159287]: ERROR   09:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:40:31 compute-0 openstack_network_exporter[159287]: ERROR   09:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:40:31 compute-0 openstack_network_exporter[159287]: ERROR   09:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:40:31 compute-0 openstack_network_exporter[159287]: ERROR   09:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:40:31 compute-0 podman[257242]: 2025-10-03 09:40:31.572775867 +0000 UTC m=+0.101997190 container create 64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:40:31 compute-0 podman[257241]: 2025-10-03 09:40:31.579171653 +0000 UTC m=+0.104525632 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:40:31 compute-0 podman[257242]: 2025-10-03 09:40:31.51056866 +0000 UTC m=+0.039790003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:40:31 compute-0 systemd[1]: Started libpod-conmon-64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f.scope.
Oct  3 09:40:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:40:31 compute-0 podman[257242]: 2025-10-03 09:40:31.682075831 +0000 UTC m=+0.211297204 container init 64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:40:31 compute-0 podman[257242]: 2025-10-03 09:40:31.698216781 +0000 UTC m=+0.227438104 container start 64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:40:31 compute-0 podman[257242]: 2025-10-03 09:40:31.702794719 +0000 UTC m=+0.232016052 container attach 64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:40:31 compute-0 thirsty_morse[257311]: 167 167
Oct  3 09:40:31 compute-0 systemd[1]: libpod-64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f.scope: Deactivated successfully.
Oct  3 09:40:31 compute-0 podman[257242]: 2025-10-03 09:40:31.709598068 +0000 UTC m=+0.238819391 container died 64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 09:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ba7c01644f96140a6f79f245545bc187ae83e70fe688986381ff98edf5cd35b-merged.mount: Deactivated successfully.
Oct  3 09:40:31 compute-0 podman[257242]: 2025-10-03 09:40:31.763645032 +0000 UTC m=+0.292866355 container remove 64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_morse, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 09:40:31 compute-0 python3.9[257305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759484430.3825402-165-249156867380880/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7dc71eb78a59f7521a9c3ae9491ac8d7874f438a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:31 compute-0 systemd[1]: libpod-conmon-64f88fd83a01c82601650010408db963bac7cfe5f2fc97712e7df321ce14a62f.scope: Deactivated successfully.
Oct  3 09:40:31 compute-0 podman[257356]: 2025-10-03 09:40:31.962774453 +0000 UTC m=+0.059927534 container create 47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 09:40:32 compute-0 systemd[1]: Started libpod-conmon-47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a.scope.
Oct  3 09:40:32 compute-0 podman[257356]: 2025-10-03 09:40:31.93354574 +0000 UTC m=+0.030698901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:40:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ec34649bf90501aa4d2e9c87276ee028c095c4998c7201b851ec77fdb07289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ec34649bf90501aa4d2e9c87276ee028c095c4998c7201b851ec77fdb07289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ec34649bf90501aa4d2e9c87276ee028c095c4998c7201b851ec77fdb07289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ec34649bf90501aa4d2e9c87276ee028c095c4998c7201b851ec77fdb07289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:40:32 compute-0 podman[257356]: 2025-10-03 09:40:32.096857846 +0000 UTC m=+0.194010937 container init 47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:40:32 compute-0 podman[257356]: 2025-10-03 09:40:32.113001986 +0000 UTC m=+0.210155087 container start 47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:40:32 compute-0 podman[257356]: 2025-10-03 09:40:32.11837968 +0000 UTC m=+0.215532771 container attach 47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 09:40:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:32 compute-0 python3.9[257507]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:33 compute-0 recursing_buck[257393]: {
Oct  3 09:40:33 compute-0 recursing_buck[257393]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "osd_id": 1,
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "type": "bluestore"
Oct  3 09:40:33 compute-0 recursing_buck[257393]:    },
Oct  3 09:40:33 compute-0 recursing_buck[257393]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "osd_id": 2,
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "type": "bluestore"
Oct  3 09:40:33 compute-0 recursing_buck[257393]:    },
Oct  3 09:40:33 compute-0 recursing_buck[257393]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "osd_id": 0,
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:40:33 compute-0 recursing_buck[257393]:        "type": "bluestore"
Oct  3 09:40:33 compute-0 recursing_buck[257393]:    }
Oct  3 09:40:33 compute-0 recursing_buck[257393]: }
Oct  3 09:40:33 compute-0 systemd[1]: libpod-47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a.scope: Deactivated successfully.
Oct  3 09:40:33 compute-0 systemd[1]: libpod-47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a.scope: Consumed 1.074s CPU time.
Oct  3 09:40:33 compute-0 podman[257356]: 2025-10-03 09:40:33.188922439 +0000 UTC m=+1.286075550 container died 47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 09:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ec34649bf90501aa4d2e9c87276ee028c095c4998c7201b851ec77fdb07289-merged.mount: Deactivated successfully.
Oct  3 09:40:33 compute-0 podman[257356]: 2025-10-03 09:40:33.265097196 +0000 UTC m=+1.362250297 container remove 47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:40:33 compute-0 systemd[1]: libpod-conmon-47dda51ff772a51dacbd199f176869bcbd626781cd88e1fdc8adc865251b0d4a.scope: Deactivated successfully.
Oct  3 09:40:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:40:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:40:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:40:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:40:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 756a17af-aedd-4a76-8d63-bc91a17da95d does not exist
Oct  3 09:40:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 61b09024-22e5-4b79-b275-1a8338df847c does not exist
Oct  3 09:40:33 compute-0 python3.9[257697]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:40:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:40:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:34 compute-0 python3.9[257902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:34 compute-0 python3.9[257980]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.555651) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484435555680, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1496, "num_deletes": 250, "total_data_size": 2292328, "memory_usage": 2323120, "flush_reason": "Manual Compaction"}
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484435569809, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1342427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7438, "largest_seqno": 8933, "table_properties": {"data_size": 1337345, "index_size": 2286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14016, "raw_average_key_size": 20, "raw_value_size": 1325697, "raw_average_value_size": 1932, "num_data_blocks": 109, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759484286, "oldest_key_time": 1759484286, "file_creation_time": 1759484435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 14259 microseconds, and 5171 cpu microseconds.
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.569904) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1342427 bytes OK
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.569930) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.572693) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.572712) EVENT_LOG_v1 {"time_micros": 1759484435572705, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.572734) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2285614, prev total WAL file size 2285614, number of live WAL files 2.
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.574017) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1310KB)], [20(7170KB)]
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484435574041, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8684599, "oldest_snapshot_seqno": -1}
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3362 keys, 6929552 bytes, temperature: kUnknown
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484435629527, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6929552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6903201, "index_size": 16833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80723, "raw_average_key_size": 24, "raw_value_size": 6838581, "raw_average_value_size": 2034, "num_data_blocks": 746, "num_entries": 3362, "num_filter_entries": 3362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759484435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.629750) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6929552 bytes
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.631995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 124.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.0 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.6) write-amplify(5.2) OK, records in: 3805, records dropped: 443 output_compression: NoCompression
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.632013) EVENT_LOG_v1 {"time_micros": 1759484435632004, "job": 6, "event": "compaction_finished", "compaction_time_micros": 55564, "compaction_time_cpu_micros": 14837, "output_level": 6, "num_output_files": 1, "total_output_size": 6929552, "num_input_records": 3805, "num_output_records": 3362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484435632337, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484435633358, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.573875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.633573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.633578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.633580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.633581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:40:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:40:35.633582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:40:35 compute-0 python3.9[258132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:36 compute-0 python3.9[258210]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:36 compute-0 python3.9[258362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:37 compute-0 python3.9[258440]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:38 compute-0 python3.9[258592]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:39 compute-0 python3.9[258744]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:39 compute-0 python3.9[258896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:40 compute-0 python3.9[258974]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:41 compute-0 python3.9[259126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:41 compute-0 python3.9[259204]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:42 compute-0 python3.9[259356]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:43 compute-0 python3.9[259434]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:44 compute-0 python3.9[259586]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:45 compute-0 python3.9[259738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:40:45
Oct  3 09:40:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:40:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:40:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', 'images', '.rgw.root', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes']
Oct  3 09:40:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:40:45 compute-0 python3.9[259816]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:40:47 compute-0 python3.9[259968]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:48 compute-0 python3.9[260120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:48 compute-0 python3.9[260198]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:49 compute-0 python3.9[260351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:50 compute-0 python3.9[260503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:51 compute-0 python3.9[260626]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759484449.9973118-375-45776514491086/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:52 compute-0 python3.9[260778]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:53 compute-0 podman[260904]: 2025-10-03 09:40:53.404630635 +0000 UTC m=+0.096439691 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true)
Oct  3 09:40:53 compute-0 podman[260903]: 2025-10-03 09:40:53.407599601 +0000 UTC m=+0.088105133 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 09:40:53 compute-0 podman[260902]: 2025-10-03 09:40:53.415371281 +0000 UTC m=+0.098732595 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_id=edpm)
Oct  3 09:40:53 compute-0 podman[260905]: 2025-10-03 09:40:53.437747673 +0000 UTC m=+0.116010123 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:40:53 compute-0 python3.9[260996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:54 compute-0 python3.9[261085]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:40:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
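The pg_autoscaler lines above expose its sizing arithmetic: each pool's raw PG target is its share of raw capacity times its bias times a cluster-wide PG budget, and the result is rounded to a power of two subject to a per-pool minimum. The budget itself is not printed, but the logged ratios imply roughly 300 here (e.g. 100 PGs per OSD across 3 OSDs; an inference, not a logged value). A sketch that reproduces the '.mgr' and 'cephfs.cephfs.meta' numbers:

    # Rough reconstruction of the autoscaler math above; PG_BUDGET = 300 is
    # inferred from the logged ratios, not read from configuration.
    PG_BUDGET = 300.0

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * PG_BUDGET

    def quantize(target, pg_min=1):
        # Round up to the nearest power of two, never below the pool minimum
        # (CephFS metadata pools default to 16; most pools here floor at 32).
        n = pg_min
        while n < target:
            n *= 2
        return n

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 (cephfs meta)
    print(quantize(0.0021557249951162337, pg_min=1))   # 1, matching "quantized to 1"
    print(quantize(0.0006104707950771635, pg_min=16))  # 16, matching "quantized to 16"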
Oct  3 09:40:55 compute-0 python3.9[261237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:40:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1995 writes, 8936 keys, 1995 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 1995 writes, 1995 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1995 writes, 8936 keys, 1995 commit groups, 1.0 writes per commit group, ingest: 11.01 MB, 0.02 MB/s#012Interval WAL: 1995 writes, 1995 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     84.5      0.10              0.04         3    0.033       0      0       0.0       0.0#012  L6      1/0    6.61 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    119.8    106.3      0.13              0.03         2    0.064    7214    733       0.0       0.0#012 Sum      1/0    6.61 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     67.7     96.8      0.23              0.07         5    0.045    7214    733       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     70.6    100.7      0.22              0.07         4    0.054    7214    733       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    119.8    106.3      0.13              0.03         2    0.064    7214    733       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     92.7      0.09              0.04         2    0.045       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 308.00 MB usage: 611.61 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(35,520.39 KB,0.164998%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,62.92 KB,0.0199504%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
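The #012 runs in the RocksDB dump above are rsyslog's octal escapes for control characters (#012 is an embedded newline), which is how a multi-line stats dump survives as a single log line. A small decoder to read such messages as originally emitted:

    # Undo rsyslog-style #NNN octal escapes, e.g. "#012" -> "\n".
    import re

    def unescape(msg):
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), msg)

    print(unescape("------- DUMPING STATS -------#012** DB Stats **#012Uptime(secs): 600.0"))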
Oct  3 09:40:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:40:56 compute-0 python3.9[261389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:56 compute-0 python3.9[261467]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:57 compute-0 python3.9[261619]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:40:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:40:58 compute-0 python3.9[261771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:40:58 compute-0 python3.9[261894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759484457.6553867-441-7135352063021/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0840bbf95bceacbb578c810a3eebb37e12b74c19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:40:59 compute-0 podman[157165]: time="2025-10-03T09:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:40:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:40:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6806 "" "Go-http-client/1.1"
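The two GETs above are podman_exporter polling the libpod REST API over the socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). The same endpoint can be queried with the standard library alone; a sketch in which the socket path and API version are taken from the log:

    # Query the libpod REST API over its unix socket, as the exporter does.
    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])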
Oct  3 09:40:59 compute-0 python3.9[262046]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:41:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:00 compute-0 podman[262171]: 2025-10-03 09:41:00.431335181 +0000 UTC m=+0.092170273 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 09:41:00 compute-0 podman[262170]: 2025-10-03 09:41:00.465032248 +0000 UTC m=+0.130104727 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  3 09:41:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:00 compute-0 python3.9[262237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:01 compute-0 python3.9[262318]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:01 compute-0 openstack_network_exporter[159287]: ERROR   09:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:41:01 compute-0 openstack_network_exporter[159287]: ERROR   09:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:41:01 compute-0 openstack_network_exporter[159287]: ERROR   09:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:41:01 compute-0 openstack_network_exporter[159287]: ERROR   09:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:41:01 compute-0 openstack_network_exporter[159287]: ERROR   09:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:41:01 compute-0 podman[262418]: 2025-10-03 09:41:01.810451811 +0000 UTC m=+0.068616643 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:41:02 compute-0 python3.9[262493]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:41:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:02 compute-0 python3.9[262645]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:03 compute-0 python3.9[262723]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:03 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Oct  3 09:41:03 compute-0 systemd[1]: session-52.scope: Consumed 46.691s CPU time.
Oct  3 09:41:03 compute-0 systemd-logind[798]: Session 52 logged out. Waiting for processes to exit.
Oct  3 09:41:03 compute-0 systemd-logind[798]: Removed session 52.
Oct  3 09:41:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:09 compute-0 systemd-logind[798]: New session 53 of user zuul.
Oct  3 09:41:09 compute-0 systemd[1]: Started Session 53 of User zuul.
Oct  3 09:41:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:10 compute-0 python3.9[262903]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:11 compute-0 python3.9[263055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:12 compute-0 python3.9[263178]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484470.919946-34-48901999155656/.source.conf _original_basename=ceph.conf follow=False checksum=eaa276b6ce57e0a51bcc3ce74d3b797e1a24d5d7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:12 compute-0 python3.9[263330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:13 compute-0 python3.9[263453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484472.4977643-34-99785980684504/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=51658b1d55381b8ed429b76b468ec988b4f1d33b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:13 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Oct  3 09:41:13 compute-0 systemd[1]: session-53.scope: Consumed 3.098s CPU time.
Oct  3 09:41:13 compute-0 systemd-logind[798]: Session 53 logged out. Waiting for processes to exit.
Oct  3 09:41:13 compute-0 systemd-logind[798]: Removed session 53.
Oct  3 09:41:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:41:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:20 compute-0 systemd-logind[798]: New session 54 of user zuul.
Oct  3 09:41:20 compute-0 systemd[1]: Started Session 54 of User zuul.
Oct  3 09:41:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:21 compute-0 python3.9[263632]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:41:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:22 compute-0 python3.9[263788]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:41:23 compute-0 python3.9[263940]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:41:23 compute-0 podman[263942]: 2025-10-03 09:41:23.594049707 +0000 UTC m=+0.079906150 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:41:23 compute-0 podman[263941]: 2025-10-03 09:41:23.59506821 +0000 UTC m=+0.085431938 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.buildah.version=1.29.0, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible)
Oct  3 09:41:23 compute-0 podman[263943]: 2025-10-03 09:41:23.623697258 +0000 UTC m=+0.106244985 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  3 09:41:23 compute-0 podman[263944]: 2025-10-03 09:41:23.627865041 +0000 UTC m=+0.111175933 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 09:41:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:24 compute-0 python3.9[264168]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:41:25 compute-0 python3.9[264320]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  3 09:41:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:27 compute-0 python3.9[264472]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 09:41:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:28 compute-0 python3.9[264556]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 09:41:29 compute-0 podman[157165]: time="2025-10-03T09:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:41:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:41:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6813 "" "Go-http-client/1.1"
Oct  3 09:41:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:30 compute-0 podman[264635]: 2025-10-03 09:41:30.84562058 +0000 UTC m=+0.108149726 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Oct  3 09:41:30 compute-0 podman[264634]: 2025-10-03 09:41:30.868947398 +0000 UTC m=+0.136326530 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 09:41:31 compute-0 python3.9[264753]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 09:41:31 compute-0 openstack_network_exporter[159287]: ERROR   09:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:41:31 compute-0 openstack_network_exporter[159287]: ERROR   09:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:41:31 compute-0 openstack_network_exporter[159287]: ERROR   09:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:41:31 compute-0 openstack_network_exporter[159287]: ERROR   09:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:41:31 compute-0 openstack_network_exporter[159287]: ERROR   09:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:41:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:32 compute-0 podman[264880]: 2025-10-03 09:41:32.212357227 +0000 UTC m=+0.057391500 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:41:32 compute-0 python3[264933]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
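Decoded (#012 -> newline), the snippet the task above writes to /var/lib/edpm-config/firewall/ovn.yaml reads:

    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 119 neutron geneve networks
      rule:
        proto: udp
        dport: 6081
        state: ["UNTRACKED"]
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: OUTPUT
        jump: NOTRACK
        action: append
        state: []
    - rule_name: 121 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: PREROUTING
        jump: NOTRACK
        action: append
        state: []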
Oct  3 09:41:33 compute-0 python3.9[265085]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:41:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:41:34 compute-0 python3.9[265350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:41:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:41:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:41:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:41:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 27799391-30c3-4958-8f10-5840b16ac8d6 does not exist
Oct  3 09:41:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6433edd8-29fe-49c0-b99c-500e11148f65 does not exist
Oct  3 09:41:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6db062c9-486f-4777-a477-a61d3c0e007f does not exist
Oct  3 09:41:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:41:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:41:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:41:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:41:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:41:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:41:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:41:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:41:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
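The audited mon commands in this burst are the cephadm mgr module (entity='mgr.compute-0.vtkhde') doing periodic reconciliation: regenerating a minimal ceph.conf, fetching the client.admin and client.bootstrap-osd keyrings, persisting its OSD-removal queue under the mgr/cephadm/osd_remove_queue config-key, and listing destroyed OSDs. The bare from=.../entity=... lines are the same audit entries re-emitted without the channel prefix. The read-only queries can be reproduced from the CLI; a sketch, assuming an admin keyring is available on this host:

    import subprocess

    # Each command mirrors one mon_command payload from the audit log.
    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.admin"],
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    ):
        out = subprocess.run(cmd, capture_output=True, text=True)
        print(" ".join(cmd), "-> exit", out.returncode)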
Oct  3 09:41:35 compute-0 python3.9[265518]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:35 compute-0 podman[265610]: 2025-10-03 09:41:35.316178176 +0000 UTC m=+0.089800129 container create e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:41:35 compute-0 podman[265610]: 2025-10-03 09:41:35.267436665 +0000 UTC m=+0.041058638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:41:35 compute-0 systemd[1]: Started libpod-conmon-e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0.scope.
Oct  3 09:41:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:41:35 compute-0 podman[265610]: 2025-10-03 09:41:35.534085506 +0000 UTC m=+0.307707459 container init e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shamir, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:41:35 compute-0 podman[265610]: 2025-10-03 09:41:35.547726103 +0000 UTC m=+0.321348056 container start e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:41:35 compute-0 cool_shamir[265691]: 167 167
Oct  3 09:41:35 compute-0 systemd[1]: libpod-e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0.scope: Deactivated successfully.
Oct  3 09:41:35 compute-0 conmon[265691]: conmon e265a73f849f33e39bf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0.scope/container/memory.events
Oct  3 09:41:35 compute-0 podman[265610]: 2025-10-03 09:41:35.594982368 +0000 UTC m=+0.368604321 container attach e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 09:41:35 compute-0 podman[265610]: 2025-10-03 09:41:35.59630902 +0000 UTC m=+0.369930973 container died e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:41:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:35 compute-0 python3.9[265764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b1c19ffe5de40e597ebaba8f023dc38f9a6228bd8820cee3b3a6e4f3d367944-merged.mount: Deactivated successfully.
Oct  3 09:41:35 compute-0 podman[265610]: 2025-10-03 09:41:35.923515193 +0000 UTC m=+0.697137146 container remove e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 09:41:35 compute-0 systemd[1]: libpod-conmon-e265a73f849f33e39bf7256d62641a83c502354cef7971b4f4286ee4351fa8c0.scope: Deactivated successfully.
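The create → init → start → attach → died → remove sequence for cool_shamir is the signature of a short-lived podman run --rm: cephadm launches a throwaway ceph container, reads its stdout, and podman cleans it up. The single output line "167 167" matches the ceph user/group id (167) on RHEL-family images, consistent with a uid/gid probe; the exact command cephadm ran is not in the log, so the probe below is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm yields exactly the create/start/died/remove event sequence
    # seen in the journal, then removes the container.
    uid_gid = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],  # hypothetical probe target
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(uid_gid)  # expected "167 167" for the ceph user in this image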
Oct  3 09:41:36 compute-0 podman[265798]: 2025-10-03 09:41:36.122462527 +0000 UTC m=+0.062983638 container create 22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:41:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:36 compute-0 systemd[1]: Started libpod-conmon-22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4.scope.
Oct  3 09:41:36 compute-0 podman[265798]: 2025-10-03 09:41:36.100049059 +0000 UTC m=+0.040570190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:41:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b506cebb60b6161d17a8c9e5e874a69e33da07810e7b31cea840c83d4eeb8ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b506cebb60b6161d17a8c9e5e874a69e33da07810e7b31cea840c83d4eeb8ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b506cebb60b6161d17a8c9e5e874a69e33da07810e7b31cea840c83d4eeb8ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b506cebb60b6161d17a8c9e5e874a69e33da07810e7b31cea840c83d4eeb8ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b506cebb60b6161d17a8c9e5e874a69e33da07810e7b31cea840c83d4eeb8ff9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
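The "supports timestamps until 2038" warnings are the kernel noting that the XFS filesystem backing these overlay mounts was created without big timestamps; they are harmless until 2038. Whether a given XFS filesystem has the wider range can be checked from its bigtime feature flag, which recent xfsprogs report in xfs_info output; a sketch:

    import subprocess

    # xfs_info prints filesystem geometry, including "bigtime=0|1"
    # on xfsprogs versions new enough to know the feature.
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("bigtime enabled" if "bigtime=1" in info
          else "bigtime disabled (y2038 timestamp limit)")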
Oct  3 09:41:36 compute-0 podman[265798]: 2025-10-03 09:41:36.235738307 +0000 UTC m=+0.176259438 container init 22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:41:36 compute-0 podman[265798]: 2025-10-03 09:41:36.249796437 +0000 UTC m=+0.190317548 container start 22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 09:41:36 compute-0 podman[265798]: 2025-10-03 09:41:36.254473326 +0000 UTC m=+0.194994437 container attach 22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:41:36 compute-0 python3.9[265870]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ay_0tz4s recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
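The _original_basename=.ay_0tz4s above is the temporary file from ansible's copy-then-rename write: content is rendered to a temp file beside the destination and atomically moved into place, after which the file module fixes up mode and ownership on the final path. A sketch of that pattern (not ansible's implementation), assuming the destination directory exists:

    import os
    import tempfile

    dest = "/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml"

    # Write to a temp file in the same directory, then atomically
    # replace the destination -- readers never see a partial file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
    with os.fdopen(fd, "w") as f:
        f.write("# rendered rules go here\n")  # placeholder content
    os.replace(tmp, dest)
    os.chmod(dest, 0o644)  # matches mode=0644 in the task above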
Oct  3 09:41:37 compute-0 python3.9[266030]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:37 compute-0 strange_noether[265847]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:41:37 compute-0 strange_noether[265847]: --> relative data size: 1.0
Oct  3 09:41:37 compute-0 strange_noether[265847]: --> All data devices are unavailable
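These three strange_noether lines are ceph-volume's lvm batch report running inside the throwaway ceph container: it was handed 0 physical and 3 LVM data devices, computed a relative data size of 1.0, and found all data devices already consumed, so no new OSDs were created. The same dry run can be requested directly; a sketch with placeholder device paths, since the actual LVs are not named in the log:

    import subprocess

    # --report makes 'lvm batch' print its plan without touching
    # the devices; the /dev/vg_osd/lv_* paths are placeholders.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/vg_osd/lv_0", "/dev/vg_osd/lv_1", "/dev/vg_osd/lv_2"],
        check=False,  # a "nothing to do" report is fine for this sketch
    )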
Oct  3 09:41:37 compute-0 systemd[1]: libpod-22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4.scope: Deactivated successfully.
Oct  3 09:41:37 compute-0 systemd[1]: libpod-22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4.scope: Consumed 1.155s CPU time.
Oct  3 09:41:37 compute-0 podman[265798]: 2025-10-03 09:41:37.486526359 +0000 UTC m=+1.427047530 container died 22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noether, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b506cebb60b6161d17a8c9e5e874a69e33da07810e7b31cea840c83d4eeb8ff9-merged.mount: Deactivated successfully.
Oct  3 09:41:37 compute-0 podman[265798]: 2025-10-03 09:41:37.579502718 +0000 UTC m=+1.520023829 container remove 22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:41:37 compute-0 systemd[1]: libpod-conmon-22b3adb193216a834fa39cde52013afd2f7bbf33e82fc03cfc51a702c4db13f4.scope: Deactivated successfully.
Oct  3 09:41:37 compute-0 python3.9[266149]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:38 compute-0 podman[266348]: 2025-10-03 09:41:38.356761669 +0000 UTC m=+0.074161117 container create f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:41:38 compute-0 podman[266348]: 2025-10-03 09:41:38.315213428 +0000 UTC m=+0.032612986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:41:38 compute-0 systemd[1]: Started libpod-conmon-f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9.scope.
Oct  3 09:41:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:41:38 compute-0 podman[266348]: 2025-10-03 09:41:38.483334674 +0000 UTC m=+0.200734152 container init f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:41:38 compute-0 podman[266348]: 2025-10-03 09:41:38.491174805 +0000 UTC m=+0.208574253 container start f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:41:38 compute-0 adoring_franklin[266386]: 167 167
Oct  3 09:41:38 compute-0 systemd[1]: libpod-f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9.scope: Deactivated successfully.
Oct  3 09:41:38 compute-0 podman[266348]: 2025-10-03 09:41:38.49975223 +0000 UTC m=+0.217151678 container attach f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:41:38 compute-0 podman[266348]: 2025-10-03 09:41:38.500589397 +0000 UTC m=+0.217988855 container died f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-67241bf9a6fffaf56032f4fdfa6ab196a92a71222feb716b57b5b94348b5540b-merged.mount: Deactivated successfully.
Oct  3 09:41:38 compute-0 podman[266348]: 2025-10-03 09:41:38.576507749 +0000 UTC m=+0.293907187 container remove f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:41:38 compute-0 systemd[1]: libpod-conmon-f0100547ee5baa301867e1cfec35105a975affa874d7af23c70d40848667e9e9.scope: Deactivated successfully.
Oct  3 09:41:38 compute-0 podman[266465]: 2025-10-03 09:41:38.762617691 +0000 UTC m=+0.059417294 container create cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:41:38 compute-0 python3.9[266459]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
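The command task above captures the live ruleset with nft -j list ruleset, which emits the entire ruleset as one JSON document wrapped in {"nftables": [...]}. A sketch that extracts the table families and names from that output, the same data the playbook consumes:

    import json
    import subprocess

    # 'nft -j list ruleset' wraps every object in {"nftables": [ ... ]}.
    doc = json.loads(subprocess.run(
        ["nft", "-j", "list", "ruleset"],
        capture_output=True, text=True, check=True,
    ).stdout)

    # Each ruleset object is keyed by its type: metainfo, table, chain, rule...
    for obj in doc.get("nftables", []):
        if "table" in obj:
            t = obj["table"]
            print(t.get("family"), t.get("name"))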
Oct  3 09:41:38 compute-0 systemd[1]: Started libpod-conmon-cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f.scope.
Oct  3 09:41:38 compute-0 podman[266465]: 2025-10-03 09:41:38.73105361 +0000 UTC m=+0.027853233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:41:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caea8d5feb6103803e089d402e8ee78fcaef36ea07cf939befed1563711f3db3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caea8d5feb6103803e089d402e8ee78fcaef36ea07cf939befed1563711f3db3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caea8d5feb6103803e089d402e8ee78fcaef36ea07cf939befed1563711f3db3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caea8d5feb6103803e089d402e8ee78fcaef36ea07cf939befed1563711f3db3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:38 compute-0 podman[266465]: 2025-10-03 09:41:38.893819325 +0000 UTC m=+0.190618928 container init cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:41:38 compute-0 podman[266465]: 2025-10-03 09:41:38.911323486 +0000 UTC m=+0.208123089 container start cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 09:41:38 compute-0 podman[266465]: 2025-10-03 09:41:38.918875897 +0000 UTC m=+0.215675500 container attach cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.952 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.953 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
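The "Executing discovery ... / Skip pollster ..." pairs above are one polling cycle on an empty hypervisor: the local_instances discovery returns no instances, so every compute pollster bails out before sampling. A minimal Python sketch of that control flow — run_pollster and its arguments are illustrative names only, not the real ceilometer.polling.manager API:

    def run_pollster(name, discover, discovery_cache, method="local_instances"):
        # Discovery results are cached per cycle, so every pollster sharing a
        # discovery method reuses the same (possibly empty) resource list --
        # matching the single discovery cache [{'local_instances': []}] logged
        # during registration above.
        if method not in discovery_cache:
            discovery_cache[method] = discover()
        resources = discovery_cache[method]
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return resources  # the real agent would build one sample per resource

    cache = {}
    for meter in ("network.incoming.packets", "disk.root.size"):
        run_pollster(meter, lambda: [], cache)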
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:41:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:41:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
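The burst of "Finished processing pollster [...]" lines closes the polling task: one line per meter, and the names match the keys of the pollster-history dict logged at registration time. When skimming journal excerpts like this, a small regex is enough to recover the completed meter list; this helper is tailored to the exact message format shown above and nothing more:

    import re

    FINISHED = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    def finished_meters(journal_text):
        # Returns meter names in completion order, e.g. ['cpu', 'memory.usage', ...]
        return FINISHED.findall(journal_text)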
Oct  3 09:41:39 compute-0 infallible_turing[266482]: {
Oct  3 09:41:39 compute-0 infallible_turing[266482]:    "0": [
Oct  3 09:41:39 compute-0 infallible_turing[266482]:        {
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "devices": [
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "/dev/loop3"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            ],
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_name": "ceph_lv0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_size": "21470642176",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "name": "ceph_lv0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "tags": {
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cluster_name": "ceph",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.crush_device_class": "",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.encrypted": "0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osd_id": "0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.type": "block",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.vdo": "0"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            },
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "type": "block",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "vg_name": "ceph_vg0"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:        }
Oct  3 09:41:39 compute-0 infallible_turing[266482]:    ],
Oct  3 09:41:39 compute-0 infallible_turing[266482]:    "1": [
Oct  3 09:41:39 compute-0 infallible_turing[266482]:        {
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "devices": [
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "/dev/loop4"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            ],
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_name": "ceph_lv1",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_size": "21470642176",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "name": "ceph_lv1",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "tags": {
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cluster_name": "ceph",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.crush_device_class": "",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.encrypted": "0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osd_id": "1",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.type": "block",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.vdo": "0"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            },
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "type": "block",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "vg_name": "ceph_vg1"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:        }
Oct  3 09:41:39 compute-0 infallible_turing[266482]:    ],
Oct  3 09:41:39 compute-0 infallible_turing[266482]:    "2": [
Oct  3 09:41:39 compute-0 infallible_turing[266482]:        {
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "devices": [
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "/dev/loop5"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            ],
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_name": "ceph_lv2",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_size": "21470642176",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "name": "ceph_lv2",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "tags": {
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.cluster_name": "ceph",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.crush_device_class": "",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.encrypted": "0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osd_id": "2",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.type": "block",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:                "ceph.vdo": "0"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            },
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "type": "block",
Oct  3 09:41:39 compute-0 infallible_turing[266482]:            "vg_name": "ceph_vg2"
Oct  3 09:41:39 compute-0 infallible_turing[266482]:        }
Oct  3 09:41:39 compute-0 infallible_turing[266482]:    ]
Oct  3 09:41:39 compute-0 infallible_turing[266482]: }
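The JSON the infallible_turing container printed has the shape of ceph-volume's LVM listing (an inference from the content; the invoking command itself is not logged here): OSD-id keys, each mapping to the logical volumes backing that OSD, with the ceph.* LV tags given both flattened (lv_tags) and as a dict (tags). A short sketch that reduces it to an osd_id -> device summary, using only field names visible above:

    import json

    def osd_devices(lvm_listing: str) -> dict:
        # Map each OSD id to its LV path and underlying physical devices.
        out = {}
        for osd_id, lvs in json.loads(lvm_listing).items():
            for lv in lvs:
                out[int(osd_id)] = (lv["lv_path"], lv["devices"])
        return out

    # For the block above: {0: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']), ...}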
Oct  3 09:41:39 compute-0 systemd[1]: libpod-cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f.scope: Deactivated successfully.
Oct  3 09:41:39 compute-0 podman[266465]: 2025-10-03 09:41:39.797371682 +0000 UTC m=+1.094171295 container died cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:41:39 compute-0 python3[266639]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  3 09:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-caea8d5feb6103803e089d402e8ee78fcaef36ea07cf939befed1563711f3db3-merged.mount: Deactivated successfully.
Oct  3 09:41:39 compute-0 podman[266465]: 2025-10-03 09:41:39.950096556 +0000 UTC m=+1.246896159 container remove cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:41:39 compute-0 systemd[1]: libpod-conmon-cbbac717c36dbdab2e5f91849ffc038f60c338f9e51ceb41c7bec77256d8fa7f.scope: Deactivated successfully.
Oct  3 09:41:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:40 compute-0 podman[266947]: 2025-10-03 09:41:40.755931903 +0000 UTC m=+0.078669602 container create 416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:41:40 compute-0 podman[266947]: 2025-10-03 09:41:40.708341388 +0000 UTC m=+0.031079087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:41:40 compute-0 systemd[1]: Started libpod-conmon-416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0.scope.
Oct  3 09:41:40 compute-0 python3.9[266939]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:41:40 compute-0 podman[266947]: 2025-10-03 09:41:40.887431405 +0000 UTC m=+0.210169094 container init 416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:41:40 compute-0 podman[266947]: 2025-10-03 09:41:40.896130844 +0000 UTC m=+0.218868513 container start 416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 09:41:40 compute-0 gallant_raman[266963]: 167 167
Oct  3 09:41:40 compute-0 systemd[1]: libpod-416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0.scope: Deactivated successfully.
Oct  3 09:41:40 compute-0 podman[266947]: 2025-10-03 09:41:40.905115292 +0000 UTC m=+0.227852971 container attach 416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:41:40 compute-0 podman[266947]: 2025-10-03 09:41:40.906155585 +0000 UTC m=+0.228893234 container died 416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 09:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd4f25b96682f13c8223ee27b7a89d7ee520da0205a0c6379de2b74886bd6d4a-merged.mount: Deactivated successfully.
Oct  3 09:41:40 compute-0 podman[266947]: 2025-10-03 09:41:40.974444093 +0000 UTC m=+0.297181752 container remove 416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:41:41 compute-0 systemd[1]: libpod-conmon-416c7a4d2a04558293ca43068a3d5a8ac981c7cdab414faf03f13a0d9297f7b0.scope: Deactivated successfully.
Oct  3 09:41:41 compute-0 podman[267023]: 2025-10-03 09:41:41.15413657 +0000 UTC m=+0.050952773 container create 39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 09:41:41 compute-0 systemd[1]: Started libpod-conmon-39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea.scope.
Oct  3 09:41:41 compute-0 podman[267023]: 2025-10-03 09:41:41.133093676 +0000 UTC m=+0.029909889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:41:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de5c9752f4d6bb32eed6138bdf467408a552b6fd5dfd959ce12cb7e4913a11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de5c9752f4d6bb32eed6138bdf467408a552b6fd5dfd959ce12cb7e4913a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de5c9752f4d6bb32eed6138bdf467408a552b6fd5dfd959ce12cb7e4913a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60de5c9752f4d6bb32eed6138bdf467408a552b6fd5dfd959ce12cb7e4913a11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:41:41 compute-0 podman[267023]: 2025-10-03 09:41:41.293754073 +0000 UTC m=+0.190570296 container init 39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 09:41:41 compute-0 podman[267023]: 2025-10-03 09:41:41.3127069 +0000 UTC m=+0.209523103 container start 39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:41:41 compute-0 podman[267023]: 2025-10-03 09:41:41.331728029 +0000 UTC m=+0.228544242 container attach 39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:41:41 compute-0 python3.9[267081]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:42 compute-0 python3.9[267242]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]: {
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "osd_id": 1,
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "type": "bluestore"
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:    },
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "osd_id": 2,
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "type": "bluestore"
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:    },
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "osd_id": 0,
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:        "type": "bluestore"
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]:    }
Oct  3 09:41:42 compute-0 peaceful_blackburn[267078]: }
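This second JSON block, keyed by OSD UUID, is the bluestore device inventory for the same three OSDs: same ceph_fsid as the LVM listing, with the devices now reported as /dev/mapper names for the same LVs. A hedged companion to the earlier sketch, indexing it by integer osd_id:

    import json

    def bluestore_index(inventory: str) -> dict:
        # {1: '/dev/mapper/ceph_vg1-ceph_lv1', ...} for the block above
        return {entry["osd_id"]: entry["device"]
                for entry in json.loads(inventory).values()
                if entry.get("type") == "bluestore"}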
Oct  3 09:41:42 compute-0 systemd[1]: libpod-39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea.scope: Deactivated successfully.
Oct  3 09:41:42 compute-0 systemd[1]: libpod-39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea.scope: Consumed 1.069s CPU time.
Oct  3 09:41:42 compute-0 podman[267289]: 2025-10-03 09:41:42.438615692 +0000 UTC m=+0.029823606 container died 39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-60de5c9752f4d6bb32eed6138bdf467408a552b6fd5dfd959ce12cb7e4913a11-merged.mount: Deactivated successfully.
Oct  3 09:41:42 compute-0 podman[267289]: 2025-10-03 09:41:42.526371874 +0000 UTC m=+0.117579768 container remove 39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_blackburn, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:41:42 compute-0 systemd[1]: libpod-conmon-39b5b97a89c48899cdf77ca1e6d021343e238d48bb79302652c54274431e43ea.scope: Deactivated successfully.
Oct  3 09:41:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:41:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:41:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:41:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:41:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev deedaa36-753e-4ab1-8ac0-45e6f0ebddd1 does not exist
Oct  3 09:41:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f52ebc84-f5d6-44cb-8771-3600e45c295a does not exist
Oct  3 09:41:42 compute-0 python3.9[267355]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:43 compute-0 python3.9[267557]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:41:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:41:44 compute-0 python3.9[267635]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:44 compute-0 python3.9[267787]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:45 compute-0 python3.9[267865]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:41:45
Oct  3 09:41:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:41:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:41:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.control']
Oct  3 09:41:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:41:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:46 compute-0 python3.9[268017]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:46 compute-0 python3.9[268095]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:47 compute-0 python3.9[268247]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:41:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:48 compute-0 python3.9[268402]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:49 compute-0 python3.9[268554]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:41:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:50 compute-0 python3.9[268708]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:41:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:51 compute-0 python3.9[268860]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:52 compute-0 python3.9[269010]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:41:53 compute-0 python3.9[269163]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:41:53 compute-0 ovs-vsctl[269164]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.703984) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484513704025, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 846, "num_deletes": 251, "total_data_size": 1149729, "memory_usage": 1166368, "flush_reason": "Manual Compaction"}
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484513712851, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1139263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8934, "largest_seqno": 9779, "table_properties": {"data_size": 1135036, "index_size": 1943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8927, "raw_average_key_size": 18, "raw_value_size": 1126549, "raw_average_value_size": 2351, "num_data_blocks": 90, "num_entries": 479, "num_filter_entries": 479, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759484435, "oldest_key_time": 1759484435, "file_creation_time": 1759484513, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8942 microseconds, and 3941 cpu microseconds.
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.712922) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1139263 bytes OK
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.712946) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.715624) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.715651) EVENT_LOG_v1 {"time_micros": 1759484513715644, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.715674) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1145572, prev total WAL file size 1145572, number of live WAL files 2.
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.716561) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1112KB)], [23(6767KB)]
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484513716649, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8068815, "oldest_snapshot_seqno": -1}
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3327 keys, 6470074 bytes, temperature: kUnknown
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484513796151, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6470074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6445079, "index_size": 15606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80725, "raw_average_key_size": 24, "raw_value_size": 6382138, "raw_average_value_size": 1918, "num_data_blocks": 680, "num_entries": 3327, "num_filter_entries": 3327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759484513, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.796414) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6470074 bytes
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.819176) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 101.4 rd, 81.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.6 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(12.8) write-amplify(5.7) OK, records in: 3841, records dropped: 514 output_compression: NoCompression
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.819215) EVENT_LOG_v1 {"time_micros": 1759484513819199, "job": 8, "event": "compaction_finished", "compaction_time_micros": 79559, "compaction_time_cpu_micros": 19353, "output_level": 6, "num_output_files": 1, "total_output_size": 6470074, "num_input_records": 3841, "num_output_records": 3327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484513819909, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484513821129, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.716311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.821995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.821999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.822000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.822002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:41:53 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:41:53.822003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:41:53 compute-0 podman[269166]: 2025-10-03 09:41:53.822792845 +0000 UTC m=+0.081162822 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:41:53 compute-0 podman[269168]: 2025-10-03 09:41:53.830530432 +0000 UTC m=+0.076614236 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:41:53 compute-0 podman[269165]: 2025-10-03 09:41:53.859899893 +0000 UTC m=+0.119401786 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9)
Oct  3 09:41:53 compute-0 podman[269167]: 2025-10-03 09:41:53.862579778 +0000 UTC m=+0.111102040 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.license=GPLv2)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:54 compute-0 python3.9[269390]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:41:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 09:41:55 compute-0 python3.9[269543]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:41:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:41:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:56 compute-0 python3.9[269697]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:41:57 compute-0 python3.9[269849]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:57 compute-0 python3.9[269927]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:41:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:41:58 compute-0 python3.9[270079]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:41:58 compute-0 python3.9[270157]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:41:59 compute-0 python3.9[270309]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:41:59 compute-0 podman[157165]: time="2025-10-03T09:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:41:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:41:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6810 "" "Go-http-client/1.1"
Oct  3 09:42:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:00 compute-0 python3.9[270461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:00 compute-0 python3.9[270539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:42:01 compute-0 openstack_network_exporter[159287]: ERROR   09:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:42:01 compute-0 openstack_network_exporter[159287]: ERROR   09:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:42:01 compute-0 openstack_network_exporter[159287]: ERROR   09:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:42:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:42:01 compute-0 openstack_network_exporter[159287]: ERROR   09:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:42:01 compute-0 openstack_network_exporter[159287]: ERROR   09:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:42:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:42:01 compute-0 podman[270664]: 2025-10-03 09:42:01.601505667 +0000 UTC m=+0.074442156 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Oct  3 09:42:01 compute-0 podman[270663]: 2025-10-03 09:42:01.621469786 +0000 UTC m=+0.095901033 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 09:42:01 compute-0 python3.9[270730]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:02 compute-0 python3.9[270813]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:42:02 compute-0 podman[270913]: 2025-10-03 09:42:02.793754822 +0000 UTC m=+0.065242782 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 09:42:03 compute-0 python3.9[270987]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:42:03 compute-0 systemd[1]: Reloading.
Oct  3 09:42:03 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:42:03 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:42:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:04 compute-0 python3.9[271176]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:05 compute-0 python3.9[271254]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:42:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:05 compute-0 python3.9[271406]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:06 compute-0 python3.9[271484]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:42:07 compute-0 python3.9[271636]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:42:07 compute-0 systemd[1]: Reloading.
Oct  3 09:42:07 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:42:07 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:42:08 compute-0 systemd[1]: Starting Create netns directory...
Oct  3 09:42:08 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  3 09:42:08 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  3 09:42:08 compute-0 systemd[1]: Finished Create netns directory.
Oct  3 09:42:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:08 compute-0 python3.9[271830]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:09 compute-0 python3.9[271982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:10 compute-0 python3.9[272060]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ovn_controller/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ovn_controller/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:11 compute-0 python3.9[272212]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:12 compute-0 python3.9[272364]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:12 compute-0 python3.9[272442]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ovn_controller.json _original_basename=.2nwp7qa2 recurse=False state=file path=/var/lib/kolla/config_files/ovn_controller.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:42:13 compute-0 python3.9[272594]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:42:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:15 compute-0 python3.9[272976]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  3 09:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:42:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:16 compute-0 python3.9[273128]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:42:17 compute-0 python3.9[273280]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  3 09:42:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:19 compute-0 python3[273459]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:42:20 compute-0 python3[273459]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f",#012          "Digest": "sha256:129e24971fee94cc60b5f440605f1512fb932a884e38e64122f38f11f942e3b9",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:129e24971fee94cc60b5f440605f1512fb932a884e38e64122f38f11f942e3b9"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-10-02T06:41:04.763416897Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251001",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "a0eac564d779a7eaac46c9816bff261a",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 345627081,#012          "VirtualSize": 345627081,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/8f7765a0696e83d3cfed8c8e9a70a4344fabd76a523317a36aee407406588981/diff:/var/lib/containers/storage/overlay/661e15e0dfc445ecdff08d434d5cb11b0b9a54f42dd69506bb77f4c8cd8adb25/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/4757c1c767cdaf82eab12df0ed4287df67b8e29aa1208326d810f1ccc3ae859d/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/4757c1c767cdaf82eab12df0ed4287df67b8e29aa1208326d810f1ccc3ae859d/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012                    "sha256:c7c80f27a004d53fb75b6d30a961f2416ea855138d9e550000fa093a1e5e384d",#012                    "sha256:2581cff67e17c51811bac9607dcd596a85156992ccb768e403301479a37d51fb",#012                    "sha256:b0f2967db57e02040537a064ba6efcf4aa5c9caf2b7b1633852dac7a10163ec7"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012 
              "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251001",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "a0eac564d779a7eaac46c9816bff261a",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-10-01T03:48:01.636308726Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-01T03:48:01.636415187Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251001\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-01T03:48:09.404099909Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757191184Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757211565Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757229405Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757245856Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757279147Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757304688Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:10.233672718Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:47.227633956Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini 
&& crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:50.639117027Z",#012                    "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-
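The block above is the tail of a podman image-inspect record for the tcib-built base image; the closing dnf install command is truncated in the capture and stays that way. A minimal sketch of reading the same History entries programmatically, assuming podman is installed and the image is present in local storage:

    import json
    import subprocess

    # Image reference taken from the ceilometer_agent_ipmi events later in
    # this log; any locally stored image works the same way.
    IMAGE = "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"

    raw = subprocess.run(
        ["podman", "image", "inspect", IMAGE],
        capture_output=True, text=True, check=True,
    ).stdout
    record = json.loads(raw)[0]        # inspect returns a JSON array
    for entry in record.get("History", []):
        # empty_layer entries are metadata-only steps (LABEL, ENV, USER, ...)
        print(entry.get("created"), entry.get("created_by", "")[:80])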
Oct  3 09:42:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:20 compute-0 python3.9[273669]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:42:21 compute-0 python3.9[273823]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:42:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:22 compute-0 python3.9[273899]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:42:23 compute-0 python3.9[274050]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759484542.4357188-536-32363705930641/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:42:24 compute-0 python3.9[274126]: ansible-systemd Invoked with state=started name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
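The tasks above (stat the podman drop-in, remove the stale edpm_ovn_controller.requires directory, stat the healthcheck timer, copy the rendered unit file, then start and enable it) are edpm-ansible installing the ovn_controller wrapper service. A rough sketch of what the final ansible-systemd invocation amounts to, not the module's actual implementation:

    import subprocess

    UNIT = "edpm_ovn_controller.service"

    # state=started plus enabled=True collapse into 'systemctl enable --now';
    # the task passed daemon_reload=False, so no reload is issued here either.
    subprocess.run(["systemctl", "enable", "--now", UNIT], check=True)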
Oct  3 09:42:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:24 compute-0 podman[274131]: 2025-10-03 09:42:24.305035323 +0000 UTC m=+0.076432279 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 09:42:24 compute-0 podman[274129]: 2025-10-03 09:42:24.316503831 +0000 UTC m=+0.095060547 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:42:24 compute-0 podman[274128]: 2025-10-03 09:42:24.319071863 +0000 UTC m=+0.096570375 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, distribution-scope=public, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 09:42:24 compute-0 podman[274130]: 2025-10-03 09:42:24.325099006 +0000 UTC m=+0.096496032 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team)
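The four health_status events above come from podman's healthcheck timers running the mounted /openstack/healthcheck scripts; health_failing_streak counts consecutive failures, so 0 together with health_status=healthy is the steady state. A sketch for querying the same state for one container, using the container_name from the first event:

    import json
    import subprocess

    NAME = "ceilometer_agent_ipmi"    # container_name from the event above

    raw = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", NAME],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(raw)
    print(health["Status"], "failing streak:", health["FailingStreak"])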
Oct  3 09:42:25 compute-0 python3.9[274362]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:42:25 compute-0 ovs-vsctl[274363]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  3 09:42:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:25 compute-0 python3.9[274515]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:42:25 compute-0 ovs-vsctl[274517]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  3 09:42:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:26 compute-0 python3.9[274670]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:42:26 compute-0 ovs-vsctl[274671]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
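The ovs-vsctl calls above clear hw-offload and ovn-cms-options from the local Open_vSwitch record. The db_ctl_base ERR at 09:42:25 is harmless: get fails because the key was never set, while the subsequent remove of a missing key exits cleanly. A sketch that sidesteps the error with --if-exists, which makes get print an empty line instead of failing:

    import subprocess

    def get_cms_options():
        out = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
             "external_ids:ovn-cms-options"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out.strip('"')          # ovs-vsctl quotes string values

    print(get_cms_options() or "<unset>")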
Oct  3 09:42:27 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Oct  3 09:42:27 compute-0 systemd[1]: session-54.scope: Consumed 54.825s CPU time.
Oct  3 09:42:27 compute-0 systemd-logind[798]: Session 54 logged out. Waiting for processes to exit.
Oct  3 09:42:27 compute-0 systemd-logind[798]: Removed session 54.
Oct  3 09:42:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:29 compute-0 podman[157165]: time="2025-10-03T09:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:42:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:42:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6819 "" "Go-http-client/1.1"
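The two GET lines above are the podman system service (pid 157165) answering libpod REST calls from podman_exporter over /run/podman/podman.sock. A standard-library sketch of the same containers/json query; the socket path is the CONTAINER_HOST value from the exporter's config_data:

    import http.client
    import json
    import socket

    SOCK = "/run/podman/podman.sock"

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over AF_UNIX; the libpod API listens on a socket, not TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection(SOCK)
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")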
Oct  3 09:42:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:31 compute-0 openstack_network_exporter[159287]: ERROR   09:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:42:31 compute-0 openstack_network_exporter[159287]: ERROR   09:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:42:31 compute-0 openstack_network_exporter[159287]: ERROR   09:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:42:31 compute-0 openstack_network_exporter[159287]: ERROR   09:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:42:31 compute-0 openstack_network_exporter[159287]: ERROR   09:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
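The exporter errors above mean no ovsdb-server or ovn-northd control sockets are visible where the exporter looks; ovn-northd does not run on a compute node, so these messages recur on every scrape. A sketch of the discovery step, with socket patterns assumed from the default rundirs mounted in the exporter's config_data:

    import glob

    # Assumed control-socket patterns under the mounted rundirs
    # (/run/openvswitch, /run/ovn); the <pid> component varies per start.
    PATTERNS = (
        "/run/openvswitch/ovsdb-server.*.ctl",
        "/run/ovn/ovn-northd.*.ctl",
    )

    for pattern in PATTERNS:
        found = glob.glob(pattern)
        print(pattern, "->", found or "no control socket files found")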
Oct  3 09:42:31 compute-0 podman[274697]: 2025-10-03 09:42:31.863279274 +0000 UTC m=+0.106933987 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm)
Oct  3 09:42:31 compute-0 podman[274696]: 2025-10-03 09:42:31.907903353 +0000 UTC m=+0.152494787 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  3 09:42:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:32 compute-0 systemd-logind[798]: New session 55 of user zuul.
Oct  3 09:42:32 compute-0 systemd[1]: Started Session 55 of User zuul.
Oct  3 09:42:33 compute-0 podman[274866]: 2025-10-03 09:42:33.31489908 +0000 UTC m=+0.091044188 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:42:33 compute-0 python3.9[274908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:42:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:35 compute-0 python3.9[275071]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:35 compute-0 python3.9[275223]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:36 compute-0 python3.9[275375]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:37 compute-0 python3.9[275527]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:38 compute-0 python3.9[275679]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:39 compute-0 python3.9[275829]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:42:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:40 compute-0 python3.9[275981]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  3 09:42:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:41 compute-0 python3.9[276131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:42 compute-0 python3.9[276252]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484561.2885735-86-157031567649832/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:42:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:42:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:43 compute-0 python3.9[276521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:44 compute-0 python3.9[276755]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484563.0173976-101-164078159632579/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
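The two ansible-ansible.legacy.copy tasks above install the rendered haproxy.j2 wrapper and kill-script.j2 helper and log their SHA-1 digests; the paired stat tasks use the same digest for idempotence checks. A sketch of the digest computation, assuming the destination paths from the tasks:

    import hashlib

    def sha1sum(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # Digests logged by the copy tasks above:
    #   /var/lib/neutron/ovn_metadata_haproxy_wrapper  95c62e64c8f82dd9393a560d1b052dc98d38f810
    #   /var/lib/neutron/kill_scripts/haproxy-kill     2dfb5489f491f61b95691c3bf95fa1fe48ff3700
    print(sha1sum("/var/lib/neutron/ovn_metadata_haproxy_wrapper"))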
Oct  3 09:42:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:42:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:42:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:42:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:42:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:42:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ca0eb783-99e5-4184-b2b4-93981dad288f does not exist
Oct  3 09:42:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3def219f-1f61-4291-ad88-e883ef02d094 does not exist
Oct  3 09:42:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d5ed05c2-39ee-4637-8ec7-3ab98de3a2d8 does not exist
Oct  3 09:42:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:42:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:42:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:42:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:42:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:42:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:42:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:42:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
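The handle_command/audit pairs above are cephadm's mgr module refreshing host state: config-key writes, minimal-conf generation, auth get for client.admin and client.bootstrap-osd, and an osd tree query filtered to destroyed OSDs. A sketch of dispatching one of those mon commands through the python-rados bindings, assuming admin credentials under /etc/ceph:

    import json
    import rados   # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"],
                      "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print("rc:", ret,
          "destroyed OSDs:", len(json.loads(outbuf)["nodes"]) if ret == 0 else outs)
    cluster.shutdown()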
Oct  3 09:42:45 compute-0 podman[277060]: 2025-10-03 09:42:45.080099549 +0000 UTC m=+0.125111450 container create ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_williams, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:42:45 compute-0 podman[277060]: 2025-10-03 09:42:44.994873858 +0000 UTC m=+0.039885809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:42:45 compute-0 systemd[1]: Started libpod-conmon-ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202.scope.
Oct  3 09:42:45 compute-0 python3.9[277046]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 09:42:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:42:45 compute-0 podman[277060]: 2025-10-03 09:42:45.194640798 +0000 UTC m=+0.239652719 container init ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:42:45 compute-0 podman[277060]: 2025-10-03 09:42:45.203626836 +0000 UTC m=+0.248638737 container start ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:42:45 compute-0 admiring_williams[277076]: 167 167
Oct  3 09:42:45 compute-0 podman[277060]: 2025-10-03 09:42:45.209159474 +0000 UTC m=+0.254171395 container attach ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:42:45 compute-0 systemd[1]: libpod-ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202.scope: Deactivated successfully.
Oct  3 09:42:45 compute-0 podman[277060]: 2025-10-03 09:42:45.209804614 +0000 UTC m=+0.254816515 container died ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-57c90292fe819fec2cfb41068927e2c3ed273f5e0261e5b61bf33cbe150783f2-merged.mount: Deactivated successfully.
Oct  3 09:42:45 compute-0 podman[277060]: 2025-10-03 09:42:45.262257905 +0000 UTC m=+0.307269806 container remove ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:42:45 compute-0 systemd[1]: libpod-conmon-ee4305f6c63bac044f0d5d4bf660b24b0b391c460a6ade3b6d91044d35965202.scope: Deactivated successfully.
Oct  3 09:42:45 compute-0 podman[277108]: 2025-10-03 09:42:45.450465144 +0000 UTC m=+0.055617623 container create 4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 09:42:45 compute-0 systemd[1]: Started libpod-conmon-4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238.scope.
Oct  3 09:42:45 compute-0 podman[277108]: 2025-10-03 09:42:45.426183796 +0000 UTC m=+0.031336325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:42:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2995999b62c64d7a10db7dc63ab2557d0fe1a07e994e10dafee47bf416f03f9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2995999b62c64d7a10db7dc63ab2557d0fe1a07e994e10dafee47bf416f03f9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2995999b62c64d7a10db7dc63ab2557d0fe1a07e994e10dafee47bf416f03f9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2995999b62c64d7a10db7dc63ab2557d0fe1a07e994e10dafee47bf416f03f9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2995999b62c64d7a10db7dc63ab2557d0fe1a07e994e10dafee47bf416f03f9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:45 compute-0 podman[277108]: 2025-10-03 09:42:45.561214182 +0000 UTC m=+0.166366661 container init 4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:42:45 compute-0 podman[277108]: 2025-10-03 09:42:45.574668663 +0000 UTC m=+0.179821132 container start 4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_volhard, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:42:45 compute-0 podman[277108]: 2025-10-03 09:42:45.579813028 +0000 UTC m=+0.184965517 container attach 4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:42:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:42:45
Oct  3 09:42:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:42:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:42:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'volumes', 'images', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Oct  3 09:42:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
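The balancer pass above runs in upmap mode with max misplaced 0.05, walks all eleven pools, and prepares 0 of at most 10 changes, meaning the 321 PGs are already evenly mapped. A sketch of checking the same state from the CLI, assuming a configured ceph client on this host:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status.get("mode"), "active:", status.get("active"))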
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:42:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:46 compute-0 python3.9[277204]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 09:42:46 compute-0 brave_volhard[277124]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:42:46 compute-0 brave_volhard[277124]: --> relative data size: 1.0
Oct  3 09:42:46 compute-0 brave_volhard[277124]: --> All data devices are unavailable
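The brave_volhard output above is a cephadm-driven ceph-volume probe: 0 physical and 3 LVM data devices, all unavailable, so the OSD spec creates nothing new on this host. A sketch of the per-device view via ceph-volume inventory, which cephadm runs inside the same ceph image:

    import json
    import subprocess

    # Assumes ceph-volume is on PATH (e.g. inside quay.io/ceph/ceph).
    inventory = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for dev in inventory:
        print(dev["path"], "available:", dev["available"],
              "rejected:", dev.get("rejected_reasons", []))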
Oct  3 09:42:46 compute-0 systemd[1]: libpod-4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238.scope: Deactivated successfully.
Oct  3 09:42:46 compute-0 systemd[1]: libpod-4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238.scope: Consumed 1.092s CPU time.
Oct  3 09:42:46 compute-0 podman[277230]: 2025-10-03 09:42:46.790642701 +0000 UTC m=+0.041844812 container died 4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2995999b62c64d7a10db7dc63ab2557d0fe1a07e994e10dafee47bf416f03f9e-merged.mount: Deactivated successfully.
Oct  3 09:42:47 compute-0 podman[277230]: 2025-10-03 09:42:47.3256056 +0000 UTC m=+0.576807681 container remove 4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_volhard, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:42:47 compute-0 systemd[1]: libpod-conmon-4d5fde900c14949f1e1c8fb06ffd30e7baeebc33d06e1b97feb1313a1c1d6238.scope: Deactivated successfully.
Oct  3 09:42:48 compute-0 podman[277455]: 2025-10-03 09:42:48.121741345 +0000 UTC m=+0.066890073 container create beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 09:42:48 compute-0 podman[277455]: 2025-10-03 09:42:48.084835233 +0000 UTC m=+0.029983981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:42:48 compute-0 systemd[1]: Started libpod-conmon-beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33.scope.
Oct  3 09:42:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:42:48 compute-0 podman[277455]: 2025-10-03 09:42:48.263937931 +0000 UTC m=+0.209086689 container init beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:42:48 compute-0 podman[277455]: 2025-10-03 09:42:48.273042293 +0000 UTC m=+0.218191021 container start beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:42:48 compute-0 sharp_bardeen[277470]: 167 167
Oct  3 09:42:48 compute-0 podman[277455]: 2025-10-03 09:42:48.279599333 +0000 UTC m=+0.224748061 container attach beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:42:48 compute-0 systemd[1]: libpod-beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33.scope: Deactivated successfully.
Oct  3 09:42:48 compute-0 podman[277455]: 2025-10-03 09:42:48.281576496 +0000 UTC m=+0.226725234 container died beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2a4527045b3f136d41687b72f3dbf50c02d812f80a1279548a02a92b34e8fbd-merged.mount: Deactivated successfully.
Oct  3 09:42:48 compute-0 podman[277455]: 2025-10-03 09:42:48.348307824 +0000 UTC m=+0.293456552 container remove beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:42:48 compute-0 systemd[1]: libpod-conmon-beaf0ee4f2644863f0dc572c5d270304c43dcb96418f8b7531e16e5800880f33.scope: Deactivated successfully.
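Annotation: the create → start → attach → died → remove sequence above for container sharp_bardeen completes within ~100 ms, the signature of a one-shot `podman run --rm` probe (here apparently issued by cephadm against the Ceph image); the lone "167 167" output is plausibly the container printing the ceph user's uid/gid, which the Ceph images fix at 167:167. A minimal sketch for pairing these bursts out of the journal text, assuming plain syslog lines like the ones above on stdin:

import re
import sys
from datetime import datetime

# Match podman's "container create"/"container remove" records and pair them
# by the 64-hex container ID to measure each container's lifetime.
EVENT = re.compile(
    r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(?P<frac>\d+) "
    r"\+0000 UTC m=\+\S+ container (?P<event>create|remove) (?P<cid>[0-9a-f]{64})"
)

def epoch(m):
    # Fractional part has a variable number of digits, so parse it separately.
    base = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S").timestamp()
    return base + float("0." + m["frac"])

created = {}
for line in sys.stdin:
    m = EVENT.search(line)
    if not m:
        continue
    if m["event"] == "create":
        created[m["cid"]] = epoch(m)
    elif m["cid"] in created:
        print(f"{m['cid'][:12]} lived {epoch(m) - created.pop(m['cid']):.3f}s")

Containers whose lifetime is well under a second, like these, are almost always probes rather than services.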
Oct  3 09:42:48 compute-0 podman[277556]: 2025-10-03 09:42:48.552485205 +0000 UTC m=+0.051625514 container create 73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:42:48 compute-0 systemd[1]: Started libpod-conmon-73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950.scope.
Oct  3 09:42:48 compute-0 podman[277556]: 2025-10-03 09:42:48.534756008 +0000 UTC m=+0.033896337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:42:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92902ca322299397604a2bbca4f5e9b51d436b68d9bd7e8fe4d3ecc6572c75db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92902ca322299397604a2bbca4f5e9b51d436b68d9bd7e8fe4d3ecc6572c75db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92902ca322299397604a2bbca4f5e9b51d436b68d9bd7e8fe4d3ecc6572c75db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92902ca322299397604a2bbca4f5e9b51d436b68d9bd7e8fe4d3ecc6572c75db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
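The four kernel lines above mean the bind-mounted paths for this container sit on an XFS filesystem created without the bigtime feature, so its inode timestamps cap out in 2038 (0x7fffffff); harmless today, but worth an inventory. A hedged check, assuming an xfsprogs new enough that xfs_info reports a bigtime= flag:

import re
import subprocess

def has_bigtime(mountpoint):
    """Return True/False for XFS bigtime, or None if xfs_info fails."""
    out = subprocess.run(["xfs_info", mountpoint], capture_output=True, text=True)
    if out.returncode != 0:
        return None  # not mounted, or not an XFS filesystem
    m = re.search(r"bigtime=(\d)", out.stdout)
    return bool(int(m.group(1))) if m else False  # older mkfs: feature absent

print(has_bigtime("/var/lib/containers"))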
Oct  3 09:42:48 compute-0 podman[277556]: 2025-10-03 09:42:48.661968983 +0000 UTC m=+0.161109312 container init 73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ride, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 09:42:48 compute-0 podman[277556]: 2025-10-03 09:42:48.678961218 +0000 UTC m=+0.178101527 container start 73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 09:42:48 compute-0 podman[277556]: 2025-10-03 09:42:48.683420141 +0000 UTC m=+0.182560470 container attach 73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ride, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:42:48 compute-0 python3.9[277581]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 09:42:49 compute-0 affectionate_ride[277585]: {
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:    "0": [
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:        {
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "devices": [
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "/dev/loop3"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            ],
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_name": "ceph_lv0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_size": "21470642176",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "name": "ceph_lv0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "tags": {
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cluster_name": "ceph",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.crush_device_class": "",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.encrypted": "0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osd_id": "0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.type": "block",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.vdo": "0"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            },
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "type": "block",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "vg_name": "ceph_vg0"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:        }
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:    ],
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:    "1": [
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:        {
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "devices": [
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "/dev/loop4"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            ],
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_name": "ceph_lv1",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_size": "21470642176",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "name": "ceph_lv1",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "tags": {
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cluster_name": "ceph",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.crush_device_class": "",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.encrypted": "0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osd_id": "1",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.type": "block",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.vdo": "0"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            },
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "type": "block",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "vg_name": "ceph_vg1"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:        }
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:    ],
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:    "2": [
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:        {
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "devices": [
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "/dev/loop5"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            ],
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_name": "ceph_lv2",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_size": "21470642176",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "name": "ceph_lv2",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "tags": {
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.cluster_name": "ceph",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.crush_device_class": "",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.encrypted": "0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osd_id": "2",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.type": "block",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:                "ceph.vdo": "0"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            },
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "type": "block",
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:            "vg_name": "ceph_vg2"
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:        }
Oct  3 09:42:49 compute-0 affectionate_ride[277585]:    ]
Oct  3 09:42:49 compute-0 affectionate_ride[277585]: }
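The JSON that affectionate_ride just printed has the shape of `ceph-volume lvm list --format json` output (an assumption; the command line itself is not logged): a map of OSD id to LV records whose ceph.* LV tags carry the cluster fsid, OSD fsid, and backing devices. All three OSDs here sit on loop devices backing ~20 GiB LVs, i.e. a loopback test deployment. A small consumer of a captured copy of that output:

import json

def summarize(lvm_list):
    """Flatten ceph-volume lvm list JSON into (osd_id, lv_path, osd_fsid, devices) rows."""
    rows = []
    for osd_id, lvs in lvm_list.items():
        for lv in lvs:
            rows.append((
                int(osd_id),
                lv["lv_path"],                    # e.g. /dev/ceph_vg0/ceph_lv0
                lv["tags"]["ceph.osd_fsid"],
                ",".join(lv["devices"]),          # backing PVs; loop devices here
            ))
    return sorted(rows)

with open("lvm_list.json") as fh:  # hypothetical capture of the JSON above
    for row in summarize(json.load(fh)):
        print(row)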
Oct  3 09:42:49 compute-0 systemd[1]: libpod-73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950.scope: Deactivated successfully.
Oct  3 09:42:49 compute-0 podman[277556]: 2025-10-03 09:42:49.557044469 +0000 UTC m=+1.056184778 container died 73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ride, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-92902ca322299397604a2bbca4f5e9b51d436b68d9bd7e8fe4d3ecc6572c75db-merged.mount: Deactivated successfully.
Oct  3 09:42:49 compute-0 podman[277556]: 2025-10-03 09:42:49.629379057 +0000 UTC m=+1.128519366 container remove 73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ride, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:42:49 compute-0 systemd[1]: libpod-conmon-73c56dd64a40a1c303460f8a19020da465f58d73c62787270bd06459c041e950.scope: Deactivated successfully.
Oct  3 09:42:49 compute-0 python3.9[277746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:50 compute-0 python3.9[277981]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484569.1457305-138-226051477721393/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:50 compute-0 podman[278019]: 2025-10-03 09:42:50.429864823 +0000 UTC m=+0.054699364 container create 3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 09:42:50 compute-0 systemd[1]: Started libpod-conmon-3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7.scope.
Oct  3 09:42:50 compute-0 podman[278019]: 2025-10-03 09:42:50.410093509 +0000 UTC m=+0.034928070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:42:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:42:50 compute-0 podman[278019]: 2025-10-03 09:42:50.563026529 +0000 UTC m=+0.187861090 container init 3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct  3 09:42:50 compute-0 podman[278019]: 2025-10-03 09:42:50.575141137 +0000 UTC m=+0.199975678 container start 3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:42:50 compute-0 podman[278019]: 2025-10-03 09:42:50.579560118 +0000 UTC m=+0.204394689 container attach 3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:42:50 compute-0 practical_lederberg[278066]: 167 167
Oct  3 09:42:50 compute-0 systemd[1]: libpod-3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7.scope: Deactivated successfully.
Oct  3 09:42:50 compute-0 podman[278019]: 2025-10-03 09:42:50.5836539 +0000 UTC m=+0.208488441 container died 3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 09:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-687d3800bb411403e815c788473ba95e0099b1b64b9bbb421b9c429544d51b9b-merged.mount: Deactivated successfully.
Oct  3 09:42:50 compute-0 podman[278019]: 2025-10-03 09:42:50.626072349 +0000 UTC m=+0.250906890 container remove 3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:42:50 compute-0 systemd[1]: libpod-conmon-3fbe2bc8eb3a00031760bbe3619b5a21a014841d2638f24195d813cb750b44e7.scope: Deactivated successfully.
Oct  3 09:42:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:50 compute-0 podman[278157]: 2025-10-03 09:42:50.804403852 +0000 UTC m=+0.049901749 container create 6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 09:42:50 compute-0 systemd[1]: Started libpod-conmon-6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a.scope.
Oct  3 09:42:50 compute-0 podman[278157]: 2025-10-03 09:42:50.782698717 +0000 UTC m=+0.028196644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:42:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7816767b23e4a30a93a53e9cf8feb79d44692bee05239da206ca7d5a19c38187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7816767b23e4a30a93a53e9cf8feb79d44692bee05239da206ca7d5a19c38187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7816767b23e4a30a93a53e9cf8feb79d44692bee05239da206ca7d5a19c38187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7816767b23e4a30a93a53e9cf8feb79d44692bee05239da206ca7d5a19c38187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:42:50 compute-0 podman[278157]: 2025-10-03 09:42:50.914102897 +0000 UTC m=+0.159600824 container init 6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 09:42:50 compute-0 podman[278157]: 2025-10-03 09:42:50.932109343 +0000 UTC m=+0.177607240 container start 6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:42:50 compute-0 podman[278157]: 2025-10-03 09:42:50.937124225 +0000 UTC m=+0.182622142 container attach 6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_buck, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:42:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:42:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 5464 writes, 23K keys, 5464 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5464 writes, 781 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5464 writes, 23K keys, 5464 commit groups, 1.0 writes per commit group, ingest: 18.61 MB, 0.03 MB/s#012Interval WAL: 5464 writes, 781 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d1bcb0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561d1bcb0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
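The #012 runs in the two rocksdb records above are octal control-character escaping applied by the syslog pipeline (#012 = 0o12 = newline): the stats report is an ordinary multi-line RocksDB dump flattened onto one record, re-joined above where it had additionally been wrapped across physical lines. Undoing the escaping restores the readable tables:

import re

def unescape_syslog(s):
    """Turn rsyslog-style #NNN octal escapes (e.g. #012) back into characters."""
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), s)

raw = open("rocksdb_dump.txt").read()  # hypothetical: the raw journal line above
print(unescape_syslog(raw))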
Oct  3 09:42:51 compute-0 python3.9[278227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:51 compute-0 python3.9[278355]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484570.5531874-138-259306128053995/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
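Each of these ansible-generated config files arrives as a stat/copy pair: ansible.legacy.stat fetches the remote file's SHA-1 first, and ansible.legacy.copy rewrites the file only when that differs from the rendered template's checksum= shown in the invocation. The comparison is plain SHA-1 over the file bytes, roughly:

import hashlib
from pathlib import Path

def sha1_of(path):
    """SHA-1 of a file's contents, or None if it does not exist yet."""
    p = Path(path)
    return hashlib.sha1(p.read_bytes()).hexdigest() if p.exists() else None

dest = ("/var/lib/config-data/ansible-generated/"
        "neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf")
rendered = "8bc979abbe81c2cf3993a225517a7e2483e20443"  # checksum from the log above
print("copy needed" if sha1_of(dest) != rendered else "unchanged, copy skipped")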
Oct  3 09:42:51 compute-0 suspicious_buck[278219]: {
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "osd_id": 1,
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "type": "bluestore"
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:    },
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "osd_id": 2,
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "type": "bluestore"
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:    },
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "osd_id": 0,
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:        "type": "bluestore"
Oct  3 09:42:51 compute-0 suspicious_buck[278219]:    }
Oct  3 09:42:51 compute-0 suspicious_buck[278219]: }
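This second report, from suspicious_buck, is keyed by OSD fsid and resolves each bluestore OSD to its device-mapper node; it matches the shape of `ceph-volume raw list` output (again an assumption, the command itself is not logged). Cross-checking it against the per-OSD listing printed a few seconds earlier is a cheap consistency test:

import json

lvm = json.load(open("lvm_list.json"))  # hypothetical captures of the two
raw = json.load(open("raw_list.json"))  # JSON blobs printed above

for fsid, rec in raw.items():
    lv = lvm[str(rec["osd_id"])][0]
    assert lv["tags"]["ceph.osd_fsid"] == fsid
    # /dev/ceph_vg0/ceph_lv0 and /dev/mapper/ceph_vg0-ceph_lv0 name the same LV
    # (note: dm-name mangling doubles any hyphens in VG/LV names; none here).
    dm_name = lv["lv_path"].removeprefix("/dev/").replace("/", "-")
    assert rec["device"].rsplit("/", 1)[-1] == dm_name
    print(f"osd.{rec['osd_id']}: {rec['device']} consistent")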
Oct  3 09:42:51 compute-0 systemd[1]: libpod-6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a.scope: Deactivated successfully.
Oct  3 09:42:51 compute-0 podman[278157]: 2025-10-03 09:42:51.99479827 +0000 UTC m=+1.240296167 container died 6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:42:51 compute-0 systemd[1]: libpod-6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a.scope: Consumed 1.050s CPU time.
Oct  3 09:42:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7816767b23e4a30a93a53e9cf8feb79d44692bee05239da206ca7d5a19c38187-merged.mount: Deactivated successfully.
Oct  3 09:42:52 compute-0 podman[278157]: 2025-10-03 09:42:52.054854013 +0000 UTC m=+1.300351910 container remove 6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:42:52 compute-0 systemd[1]: libpod-conmon-6af0f8b1ebae66eed4833b97e8078f2d90ffb1ea8472549d0c440c982e7c2f4a.scope: Deactivated successfully.
Oct  3 09:42:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:42:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:42:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:52 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 84c4b8a0-62c7-4d3d-ac2f-fafc1bbacce9 does not exist
Oct  3 09:42:52 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1292ac6b-473d-4148-8d2f-e28cfd6c39a2 does not exist
Oct  3 09:42:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:42:53 compute-0 python3.9[278591]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:53 compute-0 python3.9[278712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484572.6327488-182-51006269682499/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:54 compute-0 python3.9[278862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:54 compute-0 podman[278934]: 2025-10-03 09:42:54.828380551 +0000 UTC m=+0.084424426 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:42:54 compute-0 podman[278933]: 2025-10-03 09:42:54.829158086 +0000 UTC m=+0.085355336 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Oct  3 09:42:54 compute-0 podman[278936]: 2025-10-03 09:42:54.855995996 +0000 UTC m=+0.106941637 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 09:42:54 compute-0 podman[278935]: 2025-10-03 09:42:54.856322427 +0000 UTC m=+0.100813331 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
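The four health_status=healthy records above are podman's healthcheck timers firing for the long-running telemetry containers; each runs the test command embedded in its config_data ('/openstack/healthcheck ...') inside the container and journals the verdict. The latest verdict can also be read back on demand; a sketch, with the caveat that the inspect field is .State.Health on recent podman but was .State.Healthcheck on older releases:

import subprocess

def health(container):
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()  # healthy / unhealthy / starting

for name in ("podman_exporter", "kepler",
             "ceilometer_agent_ipmi", "ceilometer_agent_compute"):
    print(name, health(name))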
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:42:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
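Every pg_autoscaler line above follows one formula: pg target = capacity ratio x bias x PG budget, then quantized to a power of two. The budget itself is not printed, but 300 reproduces every logged target (for example 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337 for '.mgr'), consistent with e.g. 3 OSDs x mon_target_pg_per_osd=100. A sketch of that arithmetic; the real autoscaler's extra floors and hysteresis are only noted in comments:

    # Sketch of the arithmetic behind the pg_autoscaler lines above.
    # The budget of 300 is inferred from the logged numbers, not printed here.
    def pg_target(capacity_ratio: float, bias: float, budget: int = 300) -> float:
        return capacity_ratio * bias * budget

    def quantize(target: float, pg_min: int = 1) -> int:
        """Round up to the nearest power of two, respecting a per-pool floor."""
        n = max(pg_min, 1)
        while n < target:
            n *= 2
        return n

    # Pool '.mgr': 7.185749983720779e-06 of space, bias 1.0 -> quantized to 1
    t = pg_target(7.185749983720779e-06, 1.0)
    print(t, quantize(t))
    # Pool 'cephfs.cephfs.meta': bias 4.0; assuming a pg_num_min of 16 for
    # CephFS metadata pools explains "quantized to 16".
    t = pg_target(5.087256625643029e-07, 4.0)
    print(t, quantize(t, pg_min=16))
    # The live autoscaler also applies per-pool minimums and a change
    # threshold, which is why pools with target 0.0 stay at their current 32.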
Oct  3 09:42:55 compute-0 python3.9[279061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484573.9112046-182-226375609916076/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:42:55 compute-0 python3.9[279211]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:42:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:56 compute-0 python3.9[279365]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:57 compute-0 python3.9[279517]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:42:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6640 writes, 27K keys, 6640 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6640 writes, 1171 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6640 writes, 27K keys, 6640 commit groups, 1.0 writes per commit group, ingest: 19.57 MB, 0.03 MB/s#012Interval WAL: 6640 writes, 1171 syncs, 5.67 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
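The ceph-osd record above is a single RocksDB stats dump in which rsyslog has escaped embedded control characters as '#' plus three octal digits, so every '#012' is a newline (octal 012 = LF); the record is also truncated mid-word at the message-size limit. A small filter that restores the original multi-line layout when reading such logs:

    # Undo rsyslog's control-character escaping ('#' + three octal digits)
    # so dumps like the one above read as the tables they originally were.
    # Usage: journalctl -u ceph-osd@* | python3 unescape.py
    import re
    import sys

    def unescape(line: str) -> str:
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

    for line in sys.stdin:
        sys.stdout.write(unescape(line))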
Oct  3 09:42:57 compute-0 python3.9[279595]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:42:58 compute-0 python3.9[279747]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:42:59 compute-0 python3.9[279825]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:42:59 compute-0 podman[157165]: time="2025-10-03T09:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:42:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:42:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6817 "" "Go-http-client/1.1"
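The podman[157165] entries above are the podman API service logging HTTP requests it served over its UNIX socket (the CONTAINER_HOST setting later in this log points at /run/podman/podman.sock); "200 32819" is the response status and byte count. A sketch that issues the same libpod query from Python over that socket, assuming suitable socket permissions:

    # Replay the GET /v4.9.3/libpod/containers/json?all=true request logged
    # above, speaking plain HTTP/1.1 over podman's AF_UNIX socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])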
Oct  3 09:43:00 compute-0 python3.9[279977]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:43:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:01 compute-0 python3.9[280129]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:43:01 compute-0 openstack_network_exporter[159287]: ERROR   09:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:43:01 compute-0 openstack_network_exporter[159287]: ERROR   09:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:43:01 compute-0 openstack_network_exporter[159287]: ERROR   09:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:43:01 compute-0 openstack_network_exporter[159287]: ERROR   09:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:43:01 compute-0 openstack_network_exporter[159287]: ERROR   09:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
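These openstack_network_exporter errors recur on a 30-second scrape interval (09:43:01, then 09:43:31 below): each appctl-style call first needs a *.ctl control socket, and none exist on this node, so ovsdb-server and ovn-northd queries fail before they start. A sketch of that precondition check; the glob patterns are the conventional socket locations, not paths taken from this log:

    # Check for the ovs-appctl/ovn-appctl control sockets whose absence
    # produces the "no control socket files found" errors above.
    # The patterns below are conventional defaults (assumptions).
    import glob

    for pattern in ("/var/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")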
Oct  3 09:43:01 compute-0 python3.9[280207]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:43:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:02 compute-0 podman[280331]: 2025-10-03 09:43:02.21246051 +0000 UTC m=+0.129642065 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 09:43:02 compute-0 podman[280332]: 2025-10-03 09:43:02.21714623 +0000 UTC m=+0.119008154 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350)
Oct  3 09:43:02 compute-0 python3.9[280399]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:43:02 compute-0 python3.9[280482]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:43:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:43:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5558 writes, 24K keys, 5558 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5558 writes, 826 syncs, 6.73 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5558 writes, 24K keys, 5558 commit groups, 1.0 writes per commit group, ingest: 18.67 MB, 0.03 MB/s#012Interval WAL: 5558 writes, 826 syncs, 6.73 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  3 09:43:03 compute-0 podman[280606]: 2025-10-03 09:43:03.588705691 +0000 UTC m=+0.062553734 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 09:43:03 compute-0 python3.9[280656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:43:03 compute-0 systemd[1]: Reloading.
Oct  3 09:43:04 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:43:04 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:43:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:05 compute-0 python3.9[280844]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:43:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 09:43:05 compute-0 python3.9[280922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:43:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:06 compute-0 python3.9[281074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:43:07 compute-0 python3.9[281152]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:43:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:08 compute-0 python3.9[281304]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:43:08 compute-0 systemd[1]: Reloading.
Oct  3 09:43:08 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:43:08 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:43:08 compute-0 systemd[1]: Starting Create netns directory...
Oct  3 09:43:08 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  3 09:43:08 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  3 09:43:08 compute-0 systemd[1]: Finished Create netns directory.
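netns-placeholder starts, its mount unit deactivates, and the service reports "Deactivated successfully" before "Finished": the signature of a oneshot unit that exits once its work is done. A sketch for verifying that it and edpm-container-shutdown (enabled via the presets above) ended up enabled and ran cleanly, using plain systemctl:

    # Confirm the units handled in this section are enabled and ran cleanly.
    import subprocess

    for unit in ("edpm-container-shutdown.service", "netns-placeholder.service"):
        enabled = subprocess.run(
            ["systemctl", "is-enabled", unit],
            capture_output=True, text=True,
        ).stdout.strip()
        props = subprocess.run(
            ["systemctl", "show", unit, "--property=Result,ActiveState"],
            capture_output=True, text=True,
        ).stdout.strip()
        print(unit, enabled, props)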
Oct  3 09:43:09 compute-0 python3.9[281497]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:43:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:10 compute-0 python3.9[281649]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:43:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:11 compute-0 python3.9[281772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484589.920292-333-131665888683883/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:43:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:12 compute-0 python3.9[281924]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:43:13 compute-0 python3.9[282076]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:43:13 compute-0 python3.9[282199]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484592.6189895-358-175289405479212/.source.json _original_basename=.u6ct2zc0 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
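The stat/copy pairs above are ansible's idempotency handshake: the stat task gathers the destination's SHA-1 (checksum_algorithm=sha1), and the copy only transfers when it differs from the source checksum (here a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc for ovn_metadata_agent.json). The same check in outline:

    # Sketch of the stat -> copy idempotency check the tasks above perform:
    # copy only when the destination's SHA-1 differs from the source's.
    import hashlib
    import shutil

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src: str, dest: str) -> bool:
        try:
            unchanged = sha1_of(dest) == sha1_of(src)
        except FileNotFoundError:
            unchanged = False
        if not unchanged:
            shutil.copy2(src, dest)
        return not unchanged  # True means "changed", as ansible reports it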
Oct  3 09:43:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:14 compute-0 python3.9[282351]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:43:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:43:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:17 compute-0 python3.9[282778]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  3 09:43:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:18 compute-0 python3.9[282930]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
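ansible-container_config_hash computes the per-service config hash that reappears below as EDPM_CONFIG_HASH=0823bd3e... in the ovn_metadata_agent create command; the 64 hex digits are SHA-256-sized, and a changed hash is what forces a container recreate. A sketch of the idea (an assumption about the approach; the real module's exact inputs and ordering may differ):

    # Derive one SHA-256 over a generated config directory so any config
    # change yields a new hash (and hence a new EDPM_CONFIG_HASH value).
    import hashlib
    import pathlib

    def config_hash(config_dir: str) -> str:
        h = hashlib.sha256()
        for p in sorted(pathlib.Path(config_dir).rglob("*")):
            if p.is_file():
                h.update(p.name.encode())
                h.update(p.read_bytes())
        return h.hexdigest()

    print(config_hash(
        "/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent"))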
Oct  3 09:43:19 compute-0 python3.9[283082]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  3 09:43:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:21 compute-0 python3[283261]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:43:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:29 compute-0 podman[157165]: time="2025-10-03T09:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:43:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32819 "" "Go-http-client/1.1"
Oct  3 09:43:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6819 "" "Go-http-client/1.1"
Oct  3 09:43:29 compute-0 podman[283318]: 2025-10-03 09:43:29.954645922 +0000 UTC m=+4.770727532 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:43:29 compute-0 podman[283320]: 2025-10-03 09:43:29.96460602 +0000 UTC m=+4.771576740 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  3 09:43:29 compute-0 podman[283319]: 2025-10-03 09:43:29.966335795 +0000 UTC m=+4.778545252 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 09:43:29 compute-0 podman[283317]: 2025-10-03 09:43:29.991710245 +0000 UTC m=+4.815739520 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, release=1214.1726694543, config_id=edpm, release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Oct  3 09:43:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:31 compute-0 openstack_network_exporter[159287]: ERROR   09:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:43:31 compute-0 openstack_network_exporter[159287]: ERROR   09:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:43:31 compute-0 openstack_network_exporter[159287]: ERROR   09:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:43:31 compute-0 openstack_network_exporter[159287]: ERROR   09:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:43:31 compute-0 openstack_network_exporter[159287]: ERROR   09:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:43:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:33 compute-0 podman[283424]: 2025-10-03 09:43:33.244660833 +0000 UTC m=+0.503983096 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Oct  3 09:43:33 compute-0 podman[283274]: 2025-10-03 09:43:33.27399007 +0000 UTC m=+11.669995432 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 09:43:33 compute-0 podman[283423]: 2025-10-03 09:43:33.309175603 +0000 UTC m=+0.571462021 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:43:33 compute-0 podman[283489]: 2025-10-03 09:43:33.493736917 +0000 UTC m=+0.077465434 container create 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct  3 09:43:33 compute-0 podman[283489]: 2025-10-03 09:43:33.448856784 +0000 UTC m=+0.032585341 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 09:43:33 compute-0 python3[283261]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
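The PODMAN-CONTAINER-DEBUG line above shows how edpm_container_manage renders the config_data dict into a `podman create` invocation. A sketch of that visible mapping — not the module's actual implementation, just the translation for the keys that appear in the log (environment, healthcheck, net, pid, privileged, user, volumes):

```python
import shlex

def podman_create_args(name, cfg):
    """Translate an edpm-style config_data dict into podman create arguments.

    Illustrative only; covers just the keys visible in the log above.
    """
    args = ["podman", "create", "--name", name]
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    if "healthcheck" in cfg:
        args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
    if "net" in cfg:
        args += ["--network", cfg["net"]]
    if "pid" in cfg:
        args += ["--pid", cfg["pid"]]
    if cfg.get("privileged"):
        args += ["--privileged=True"]   # matches the form in the debug line
    if "user" in cfg:
        args += ["--user", cfg["user"]]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    args.append(cfg["image"])
    return args

cfg = {
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    "healthcheck": {"test": "/openstack/healthcheck"},
    "net": "host",
    "pid": "host",
    "privileged": True,
    "user": "root",
    "volumes": ["/run/openvswitch:/run/openvswitch:z"],
    "image": "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
}
print(shlex.join(podman_create_args("ovn_metadata_agent", cfg)))
```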
Oct  3 09:43:33 compute-0 podman[283531]: 2025-10-03 09:43:33.812616621 +0000 UTC m=+0.067239607 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
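The health_status events above (openstack_network_exporter, ovn_controller, node_exporter) are emitted into the journal by podman's healthcheck timers. A small sketch that tallies the latest status per container from plain syslog-format lines like these; the field names (name, health_status, health_failing_streak) are taken from the lines themselves:

```python
import re
import sys

# Matches podman healthcheck events as they appear above, e.g.
# "... container health_status <id> (image=..., name=ovn_controller,
#  health_status=healthy, health_failing_streak=0, ...)"
EVENT = re.compile(
    r"container health_status [0-9a-f]+ \(.*?name=(?P<name>[^,]+),"
    r".*?health_status=(?P<status>[^,]+),"
    r".*?health_failing_streak=(?P<streak>\d+)"
)

def tally(lines):
    """Return {container_name: (last_status, failing_streak)}."""
    state = {}
    for line in lines:
        m = EVENT.search(line)
        if m:
            state[m.group("name")] = (m.group("status"), int(m.group("streak")))
    return state

if __name__ == "__main__":
    for name, (status, streak) in sorted(tally(sys.stdin).items()):
        print(f"{name}: {status} (failing streak {streak})")
```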
Oct  3 09:43:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:34 compute-0 python3.9[283699]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:43:35 compute-0 python3.9[283853]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:43:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:35 compute-0 python3.9[283929]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:43:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:36 compute-0 python3.9[284080]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759484616.0204372-446-243067140459320/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:43:37 compute-0 python3.9[284156]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:43:37 compute-0 systemd[1]: Reloading.
Oct  3 09:43:37 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:43:37 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:43:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:38 compute-0 python3.9[284266]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
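The two ansible-systemd invocations above first reload systemd's unit database (daemon_reload=True, triggering the "Reloading." and generator messages), then restart and enable edpm_ovn_metadata_agent.service. A rough equivalent of that sequence, assuming systemctl is on PATH:

```python
import subprocess

def run(*cmd):
    # check=True mirrors ansible failing the task on a non-zero exit code
    subprocess.run(cmd, check=True)

# daemon_reload=True -> pick up the freshly copied unit file
run("systemctl", "daemon-reload")
# state=restarted, enabled=True on the unit
run("systemctl", "enable", "edpm_ovn_metadata_agent.service")
run("systemctl", "restart", "edpm_ovn_metadata_agent.service")
```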
Oct  3 09:43:38 compute-0 systemd[1]: Reloading.
Oct  3 09:43:38 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:43:38 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:43:38 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.952 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.953 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
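The two DEBUG lines above say that a single worker thread is serving more pollsters than it can run concurrently, so the polling cycle stretches out. A toy ThreadPoolExecutor illustration of that queuing effect:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)          # stand-in for one pollster's work
    return name

start = time.monotonic()
# 26 pollsters but max_workers=1, as in the log: tasks queue behind
# each other instead of running concurrently, lengthening the cycle.
with ThreadPoolExecutor(max_workers=1) as executor:
    list(executor.map(poll, [f"pollster-{i}" for i in range(26)]))
print(f"cycle took {time.monotonic() - start:.1f}s")  # ~2.6s, not ~0.1s
```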
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.953 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.956 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
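Each "Registering pollster" line above hands a stevedore Extension object to the executor. A minimal sketch of how such extensions are loaded from entry points, assuming ceilometer is installed and using its compute-agent namespace (ceilometer.poll.compute):

```python
from stevedore import extension

# Loads every entry point registered under the namespace; each Extension
# wraps a pollster class such as
# ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster.
mgr = extension.ExtensionManager(
    namespace="ceilometer.poll.compute",
    invoke_on_load=True,   # instantiate the pollster, as the agent does
)
for ext in mgr:
    print(ext.name, ext.obj)
```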
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
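The alternating "Executing discovery process" / "Skip pollster" lines show the same pattern for every meter: the local_instances discovery runs, returns an empty list (no guest instances on this compute node yet), and the pollster is skipped for the cycle. A compressed sketch of that discover-then-skip loop, with a hypothetical stand-in for the ceilometer pollster classes:

```python
def poll_one(pollster, discovery_cache):
    """Run one pollster after its discovery step (illustrative names)."""
    method = pollster.discovery_method              # here: "local_instances"
    if method not in discovery_cache:               # discover once per cycle
        discovery_cache[method] = pollster.discover()
    resources = discovery_cache[method]
    if not resources:
        # Matches the "Skip pollster ..." lines above
        print(f"Skip pollster {pollster.name}, no resources found this cycle")
        return []
    return pollster.get_samples(resources)

class NoInstancesPollster:
    """Hypothetical stand-in for the ceilometer compute pollsters."""
    discovery_method = "local_instances"
    def __init__(self, name):
        self.name = name
    def discover(self):
        return []        # the compute node is running no guest instances
    def get_samples(self, resources):
        return []

cache = {}
for name in ("disk.device.read.bytes", "cpu", "memory.usage"):
    poll_one(NoInstancesPollster(name), cache)
```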
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:43:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:43:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0d6267c1569a3c3d6e1767652e19e1ca3438eb5146ed3294aca74a39cb523/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff0d6267c1569a3c3d6e1767652e19e1ca3438eb5146ed3294aca74a39cb523/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
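The kernel warnings above note that this xfs filesystem's inode timestamps top out at 0x7fffffff seconds after the Unix epoch, i.e. the classic 32-bit time_t limit. A one-liner confirming the date:

```python
from datetime import datetime, timezone

# 0x7fffffff seconds after the Unix epoch: the 32-bit time_t limit
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```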
Oct  3 09:43:39 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5.
Oct  3 09:43:39 compute-0 podman[284307]: 2025-10-03 09:43:39.080722297 +0000 UTC m=+0.143276037 container init 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + sudo -E kolla_set_configs
Oct  3 09:43:39 compute-0 podman[284307]: 2025-10-03 09:43:39.11246051 +0000 UTC m=+0.175014220 container start 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 09:43:39 compute-0 edpm-start-podman-container[284307]: ovn_metadata_agent
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Validating config file
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Copying service configuration files
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Writing out command to execute
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: ++ cat /run_command
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + CMD=neutron-ovn-metadata-agent
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + ARGS=
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + sudo kolla_copy_cacerts
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + [[ ! -n '' ]]
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + . kolla_extend_start
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: Running command: 'neutron-ovn-metadata-agent'
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + umask 0022
Oct  3 09:43:39 compute-0 ovn_metadata_agent[284320]: + exec neutron-ovn-metadata-agent
Oct  3 09:43:39 compute-0 edpm-start-podman-container[284306]: Creating additional drop-in dependency for "ovn_metadata_agent" (99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5)
Oct  3 09:43:39 compute-0 podman[284330]: 2025-10-03 09:43:39.229647904 +0000 UTC m=+0.106306926 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:43:39 compute-0 systemd[1]: Reloading.
Oct  3 09:43:39 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:43:39 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:43:39 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct  3 09:43:40 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Oct  3 09:43:40 compute-0 systemd[1]: session-55.scope: Consumed 1min 16.820s CPU time.
Oct  3 09:43:40 compute-0 systemd-logind[798]: Session 55 logged out. Waiting for processes to exit.
Oct  3 09:43:40 compute-0 systemd-logind[798]: Removed session 55.
Oct  3 09:43:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.520 284328 INFO neutron.common.config [-] Logging enabled!
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.520 284328 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.520 284328 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.521 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.521 284328 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.521 284328 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.521 284328 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.521 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.521 284328 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.521 284328 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.522 284328 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.522 284328 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.522 284328 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.522 284328 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.522 284328 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.523 284328 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.523 284328 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.523 284328 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.523 284328 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.523 284328 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.523 284328 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.524 284328 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.524 284328 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.524 284328 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.524 284328 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.524 284328 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.524 284328 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.525 284328 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.525 284328 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.525 284328 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.525 284328 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.525 284328 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.525 284328 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.525 284328 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.526 284328 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.526 284328 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.526 284328 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.526 284328 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.526 284328 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.526 284328 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.527 284328 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.527 284328 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.527 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.527 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.527 284328 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.528 284328 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.528 284328 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.528 284328 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.528 284328 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.528 284328 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.528 284328 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.528 284328 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.529 284328 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.529 284328 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.529 284328 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.529 284328 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.529 284328 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.529 284328 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.529 284328 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.530 284328 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.530 284328 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.530 284328 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.530 284328 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.530 284328 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.531 284328 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.531 284328 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.531 284328 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.531 284328 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.531 284328 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.531 284328 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.531 284328 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.532 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.532 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.532 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.532 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.532 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.533 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.533 284328 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.533 284328 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.533 284328 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.533 284328 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.533 284328 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.533 284328 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.534 284328 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.534 284328 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.534 284328 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.534 284328 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.534 284328 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.535 284328 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.535 284328 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.535 284328 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.535 284328 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.535 284328 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.536 284328 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.537 284328 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.537 284328 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.537 284328 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.537 284328 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.537 284328 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.537 284328 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.537 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.537 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.538 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.538 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.538 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.538 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.538 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.538 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.538 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.538 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.539 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.539 284328 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.539 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.539 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.539 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.540 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.540 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.540 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.540 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.540 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.540 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.540 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.540 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.541 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.541 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.541 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.541 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.541 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.541 284328 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.541 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.542 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.542 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.542 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.542 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.542 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.542 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.542 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.542 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.543 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.543 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.543 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.543 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.543 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.543 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.543 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.544 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.545 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.545 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.545 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.545 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.545 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.545 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.545 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.546 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.546 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.546 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.546 284328 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.546 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.547 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.547 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.547 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.547 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.547 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.547 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.548 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.548 284328 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.548 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.548 284328 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.548 284328 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.548 284328 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.549 284328 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.549 284328 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.549 284328 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.549 284328 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.549 284328 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.550 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.550 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.550 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.550 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.550 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.550 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.551 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.551 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.551 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.551 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.551 284328 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.551 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.551 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.552 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.552 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.552 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.552 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.552 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.553 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.553 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.553 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.553 284328 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.553 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.553 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.554 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.554 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.554 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.554 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.554 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.554 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.554 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.555 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.555 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.555 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.555 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.556 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.556 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.556 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.556 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.556 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.556 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.556 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.556 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.557 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.557 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.557 284328 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.557 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.557 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.557 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.558 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.558 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.558 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.558 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.558 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.558 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.559 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.559 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.559 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.559 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.559 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.559 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.560 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.560 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.560 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.560 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.560 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.560 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.561 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.561 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.561 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.561 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.561 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.561 284328 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.562 284328 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.562 284328 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.562 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.562 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.562 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.562 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.562 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.563 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.563 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.563 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.563 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.563 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.564 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.564 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.564 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.564 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.564 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.564 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.564 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.564 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.565 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.565 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.565 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.565 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.565 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.565 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.566 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.566 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.566 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.566 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.566 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.566 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.566 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.567 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.567 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.567 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.567 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.567 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.567 284328 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.567 284328 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.579 284328 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.579 284328 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.579 284328 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.579 284328 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.580 284328 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.595 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 41fabae1-2dc7-46e2-b697-d9133d158399 (UUID: 41fabae1-2dc7-46e2-b697-d9133d158399) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.622 284328 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.622 284328 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.622 284328 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.622 284328 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.625 284328 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.632 284328 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.637 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '41fabae1-2dc7-46e2-b697-d9133d158399'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], external_ids={}, name=41fabae1-2dc7-46e2-b697-d9133d158399, nb_cfg_timestamp=1759483190481, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.638 284328 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd3e3345e20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.639 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.639 284328 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.639 284328 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.639 284328 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.645 284328 DEBUG oslo_service.service [-] Started child 284434 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.649 284434 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-457204'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.649 284328 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp08tx0xsh/privsep.sock']#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.667 284434 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.668 284434 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.668 284434 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.672 284434 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.678 284434 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  3 09:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:41.683 284434 INFO eventlet.wsgi.server [-] (284434) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  3 09:43:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.331 284328 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.331 284328 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp08tx0xsh/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.184 284439 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.189 284439 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.191 284439 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.191 284439 INFO oslo.privsep.daemon [-] privsep daemon running as pid 284439
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.334 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[5f085286-99e2-4401-9bba-f03997a59df0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.849 284439 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.849 284439 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:43:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:42.850 284439 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.418 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[33ac4c6f-b2de-4771-b566-30081b575fd0]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.420 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, column=external_ids, values=({'neutron:ovn-metadata-id': '7b1fc8c1-d25e-5259-903e-f5ca3cc9301d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.526 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.532 284328 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.532 284328 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.532 284328 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.532 284328 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.532 284328 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.532 284328 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.533 284328 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.533 284328 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.533 284328 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.533 284328 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.533 284328 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.533 284328 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.533 284328 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.534 284328 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.534 284328 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.534 284328 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.534 284328 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.534 284328 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.534 284328 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.534 284328 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.535 284328 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.535 284328 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.535 284328 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.535 284328 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.535 284328 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.535 284328 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.535 284328 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.536 284328 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.536 284328 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.536 284328 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.536 284328 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.536 284328 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.536 284328 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.536 284328 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.537 284328 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.537 284328 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.537 284328 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.537 284328 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.538 284328 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.538 284328 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.538 284328 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.538 284328 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.538 284328 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.538 284328 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.538 284328 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.538 284328 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.539 284328 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.540 284328 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.540 284328 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.540 284328 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.540 284328 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.540 284328 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.540 284328 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.540 284328 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.541 284328 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.541 284328 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.541 284328 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.541 284328 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.541 284328 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.541 284328 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.541 284328 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.541 284328 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.542 284328 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.542 284328 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.542 284328 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.542 284328 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.542 284328 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.542 284328 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.542 284328 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.543 284328 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.543 284328 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.543 284328 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.543 284328 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.543 284328 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.543 284328 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.543 284328 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.543 284328 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.544 284328 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.544 284328 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.544 284328 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.544 284328 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.544 284328 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.544 284328 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.544 284328 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.544 284328 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.545 284328 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.545 284328 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.545 284328 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.545 284328 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.545 284328 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.545 284328 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.545 284328 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.545 284328 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.546 284328 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.546 284328 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.546 284328 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.546 284328 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.546 284328 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.546 284328 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.546 284328 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.547 284328 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.547 284328 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.547 284328 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.547 284328 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.547 284328 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.547 284328 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.548 284328 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.548 284328 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.548 284328 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.548 284328 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.548 284328 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.548 284328 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.548 284328 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.549 284328 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.549 284328 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.549 284328 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.549 284328 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.549 284328 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.549 284328 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.549 284328 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.549 284328 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.550 284328 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.550 284328 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.550 284328 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.550 284328 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.550 284328 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.550 284328 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.550 284328 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.551 284328 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.551 284328 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.551 284328 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.551 284328 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.551 284328 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.551 284328 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.551 284328 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.551 284328 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.552 284328 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.552 284328 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.552 284328 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.552 284328 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.552 284328 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.552 284328 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.552 284328 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.552 284328 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.553 284328 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.554 284328 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.554 284328 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.554 284328 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.554 284328 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.554 284328 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.554 284328 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.554 284328 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.555 284328 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.555 284328 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.555 284328 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.555 284328 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.555 284328 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.555 284328 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.556 284328 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.556 284328 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.556 284328 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.556 284328 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.556 284328 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.556 284328 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.556 284328 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.557 284328 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.557 284328 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.557 284328 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.557 284328 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.557 284328 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.557 284328 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.557 284328 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.558 284328 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.558 284328 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.558 284328 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.558 284328 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.558 284328 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.558 284328 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.559 284328 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.560 284328 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.560 284328 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.560 284328 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.560 284328 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.560 284328 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.560 284328 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.560 284328 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.561 284328 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.561 284328 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.561 284328 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.561 284328 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.561 284328 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.561 284328 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.561 284328 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.561 284328 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.562 284328 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.562 284328 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.562 284328 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.562 284328 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.562 284328 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.562 284328 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.562 284328 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.563 284328 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.563 284328 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.563 284328 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.563 284328 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.563 284328 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.563 284328 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.563 284328 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.564 284328 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.564 284328 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.564 284328 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.564 284328 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.564 284328 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.564 284328 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.564 284328 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.564 284328 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.565 284328 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.566 284328 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.566 284328 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.566 284328 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.566 284328 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.566 284328 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.566 284328 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.566 284328 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.566 284328 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.567 284328 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.567 284328 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.567 284328 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.567 284328 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.567 284328 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.567 284328 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.567 284328 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.567 284328 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.568 284328 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.568 284328 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.568 284328 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.568 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.568 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.568 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.568 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.568 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.569 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.569 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.569 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.569 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.569 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.569 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.569 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.569 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.570 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.570 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.570 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.570 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.570 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.570 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.571 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.571 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.571 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.571 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.571 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.571 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.571 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.571 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.572 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.572 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.572 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.572 284328 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.572 284328 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.572 284328 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.572 284328 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.572 284328 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:43:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:43:43.573 284328 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
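[Editor's note on the block above: this is oslo.config's standard start-of-service option dump. ConfigOpts.log_opt_values() walks every registered option at DEBUG level, masks options declared secret (metadata_proxy_shared_secret and transport_url print as ****), and closes with the row of asterisks logged from cfg.py:2613. A minimal sketch of the same mechanism, assuming oslo.config is installed; the two options below are illustrative stand-ins, not the agent's real option set:

    # Sketch only: reproduce an oslo.config option dump like the one above.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)

    OPTS = [
        cfg.IntOpt('metadata_workers', default=1),
        # secret=True is what makes a value print as **** in the dump
        cfg.StrOpt('metadata_proxy_shared_secret', secret=True),
    ]

    CONF = cfg.CONF
    CONF.register_opts(OPTS)
    CONF(args=[], project='sketch')  # parse an (empty) command line / config
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)

Run as a script, this emits one "name = value log_opt_values ..." line per option, framed by the same asterisk rows seen in the agent log.]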
Oct  3 09:43:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:45 compute-0 systemd-logind[798]: New session 56 of user zuul.
Oct  3 09:43:45 compute-0 systemd[1]: Started Session 56 of User zuul.
Oct  3 09:43:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:43:45
Oct  3 09:43:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:43:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:43:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'images']
Oct  3 09:43:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
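[Editor's note: the ceph-mgr balancer lines above show an upmap-mode optimization pass over the listed pools with a 5% max-misplaced threshold; "prepared 0/10 changes" means no remapping was needed, consistent with all 321 PGs being active+clean. One way to inspect the same state from the host, as a sketch assuming the ceph CLI and an admin keyring are available (output key names may vary by Ceph release):

    # Sketch: query the balancer state reported in the log lines above.
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ['ceph', 'balancer', 'status', '--format', 'json'],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status.get('mode'), status.get('optimize_result'))
]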
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:43:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:46 compute-0 python3.9[284597]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:43:47 compute-0 python3.9[284753]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
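
The Ansible task above is a pure existence check: it lists any container whose name matches ^nova_virtlogd$ and prints just the name. The same probe, standalone (a sketch; podman must be on PATH):

    import subprocess

    result = subprocess.run(
        ['podman', 'ps', '-a',
         '--filter', 'name=^nova_virtlogd$',
         '--format', '{{.Names}}'],
        capture_output=True, text=True, check=True,
    )
    # empty stdout means no such container exists on the host
    print('nova_virtlogd present:', bool(result.stdout.strip()))
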
Oct  3 09:43:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:48 compute-0 python3.9[284918]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:43:48 compute-0 systemd[1]: Reloading.
Oct  3 09:43:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:43:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:43:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:50 compute-0 python3.9[285104]: ansible-ansible.builtin.service_facts Invoked
Oct  3 09:43:50 compute-0 network[285121]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 09:43:50 compute-0 network[285122]: 'network-scripts' will be removed from distribution in near future.
Oct  3 09:43:50 compute-0 network[285123]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 09:43:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 09:43:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 09:43:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:43:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:43:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:43:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:43:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:43:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:43:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4c3522cb-331a-4e3f-8727-ce07f3806a6b does not exist
Oct  3 09:43:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 95831e4f-2e3e-4b8a-b22a-464255b48011 does not exist
Oct  3 09:43:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 177c5fdf-d4c0-4d47-94dc-4de4ddd877bf does not exist
Oct  3 09:43:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:43:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:43:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:43:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:43:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:43:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:43:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 09:43:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:43:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:43:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
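
The handle_command/audit pairs above (and their cluster-log echoes) are the monitor's view of JSON mon commands dispatched by the cephadm mgr module; note that the config-key set entries are logged without their cmd payload. A sketch of issuing the same kind of command through the librados Python binding (assumes python3-rados and a readable admin keyring; the conffile path is the stock one):

    import json

    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        cmd = json.dumps({'prefix': 'config generate-minimal-conf'})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        print(ret, outbuf.decode())
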
Oct  3 09:43:53 compute-0 podman[285505]: 2025-10-03 09:43:53.782114299 +0000 UTC m=+0.065317247 container create f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:43:53 compute-0 podman[285505]: 2025-10-03 09:43:53.744499878 +0000 UTC m=+0.027702836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:43:53 compute-0 systemd[1]: Started libpod-conmon-f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c.scope.
Oct  3 09:43:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:43:53 compute-0 podman[285505]: 2025-10-03 09:43:53.911215252 +0000 UTC m=+0.194418280 container init f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:43:53 compute-0 podman[285505]: 2025-10-03 09:43:53.93213809 +0000 UTC m=+0.215341028 container start f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 09:43:53 compute-0 podman[285505]: 2025-10-03 09:43:53.937660497 +0000 UTC m=+0.220863485 container attach f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:43:53 compute-0 sharp_joliot[285526]: 167 167
Oct  3 09:43:53 compute-0 systemd[1]: libpod-f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c.scope: Deactivated successfully.
Oct  3 09:43:53 compute-0 podman[285505]: 2025-10-03 09:43:53.942862923 +0000 UTC m=+0.226065861 container died f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c44e19125edbe6e7bb1990b287d49daf1991c4ca4de7bb1bafe0e76fe0c2b2a-merged.mount: Deactivated successfully.
Oct  3 09:43:54 compute-0 podman[285505]: 2025-10-03 09:43:54.017739104 +0000 UTC m=+0.300942042 container remove f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 09:43:54 compute-0 systemd[1]: libpod-conmon-f37d0d99b4a738aed9f13a613700e09bd31c905a48addd273779b88f67eb5f9c.scope: Deactivated successfully.
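
The create -> init -> start -> attach -> died -> remove sequence above is cephadm probing the host with a throwaway container from the ceph image; the only output, "167 167", looks like the ceph uid/gid inside the image. A sketch of an equivalent probe (the stat-on-/var/lib/ceph form is an assumption, not taken from this log):

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],  # print owner uid/gid of the ceph dir
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())  # expected on this image: 167 167
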
Oct  3 09:43:54 compute-0 podman[285575]: 2025-10-03 09:43:54.204134657 +0000 UTC m=+0.048303874 container create f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:54 compute-0 systemd[1]: Started libpod-conmon-f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692.scope.
Oct  3 09:43:54 compute-0 podman[285575]: 2025-10-03 09:43:54.186419011 +0000 UTC m=+0.030588258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:43:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f1f1f0c0648807c929fcf0a57b632ba9a0cc00c9bff3c502c713b81cc47192/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f1f1f0c0648807c929fcf0a57b632ba9a0cc00c9bff3c502c713b81cc47192/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f1f1f0c0648807c929fcf0a57b632ba9a0cc00c9bff3c502c713b81cc47192/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f1f1f0c0648807c929fcf0a57b632ba9a0cc00c9bff3c502c713b81cc47192/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f1f1f0c0648807c929fcf0a57b632ba9a0cc00c9bff3c502c713b81cc47192/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:54 compute-0 podman[285575]: 2025-10-03 09:43:54.307044994 +0000 UTC m=+0.151214211 container init f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:43:54 compute-0 podman[285575]: 2025-10-03 09:43:54.324080448 +0000 UTC m=+0.168249675 container start f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:43:54 compute-0 podman[285575]: 2025-10-03 09:43:54.329570733 +0000 UTC m=+0.173739980 container attach f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:43:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
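
Each effective_target_ratio/Pool pair above is one pg_autoscaler evaluation. The figures are internally consistent with target = capacity_ratio * bias * 300, i.e. the default 100 PGs per OSD across this host's three OSDs, followed by power-of-two quantization that shrinks by at most half per pass (which is why cephfs.cephfs.meta goes 32 -> 16 despite a near-zero target). A toy model that reproduces the two quantizations shown; the real module adds thresholds and caps:

    def pg_target(capacity_ratio: float, bias: float,
                  per_osd: int = 100, osds: int = 3) -> float:
        return capacity_ratio * bias * per_osd * osds

    def quantize(target: float, current: int) -> int:
        # round up to a power of two, but never shrink by more than half
        ideal = 1
        while ideal < target:
            ideal *= 2
        return max(ideal, current // 2, 1)

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557 ('.mgr' line)
    print(quantize(0.0021557249951162337, 1))     # 1  (current 1)
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.0006105 (cephfs meta)
    print(quantize(0.0006104707950771635, 32))    # 16 (current 32)
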
Oct  3 09:43:54 compute-0 python3.9[285723]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:43:55 compute-0 cranky_ganguly[285614]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:43:55 compute-0 cranky_ganguly[285614]: --> relative data size: 1.0
Oct  3 09:43:55 compute-0 cranky_ganguly[285614]: --> All data devices are unavailable
Oct  3 09:43:55 compute-0 systemd[1]: libpod-f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692.scope: Deactivated successfully.
Oct  3 09:43:55 compute-0 systemd[1]: libpod-f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692.scope: Consumed 1.074s CPU time.
Oct  3 09:43:55 compute-0 podman[285575]: 2025-10-03 09:43:55.469771817 +0000 UTC m=+1.313941054 container died f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:43:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f1f1f0c0648807c929fcf0a57b632ba9a0cc00c9bff3c502c713b81cc47192-merged.mount: Deactivated successfully.
Oct  3 09:43:55 compute-0 podman[285575]: 2025-10-03 09:43:55.537386416 +0000 UTC m=+1.381555633 container remove f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_ganguly, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 09:43:55 compute-0 systemd[1]: libpod-conmon-f9b2fe433bb79927f1492ce496517194bc21494c3948dda8c3f848f0e8f5b692.scope: Deactivated successfully.
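
The cranky_ganguly run above is a ceph-volume batch dry run: three LVM data devices were passed and all are reported unavailable because each already carries an OSD, so no new OSDs would be created. A sketch of the same report against the LV paths seen later in this log (ceph-volume normally runs inside the ceph container, so paths and availability may differ on the bare host):

    import subprocess

    report = subprocess.run(
        ['ceph-volume', 'lvm', 'batch', '--report', '--format', 'json',
         '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
         '/dev/ceph_vg2/ceph_lv2'],
        capture_output=True, text=True,  # non-zero exit is expected when nothing is deployable
    )
    print(report.stdout or report.stderr)
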
Oct  3 09:43:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:43:55 compute-0 python3.9[285913]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
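
Interleaved with the Ceph probes, Ansible is walking the tripleo_nova_virt* units, disabling and stopping each in turn (virtlogd_wrapper here; virtnodedevd, virtproxyd, virtqemud, virtsecretd and virtstoraged follow below). Each task reduces to a disable-plus-stop, sketched here with systemctl:

    import subprocess

    UNITS = [
        'tripleo_nova_virtlogd_wrapper.service',
        'tripleo_nova_virtnodedevd.service',
        'tripleo_nova_virtproxyd.service',
        'tripleo_nova_virtqemud.service',
        'tripleo_nova_virtsecretd.service',
        'tripleo_nova_virtstoraged.service',
    ]
    for unit in UNITS:
        # 'disable --now' = enabled=False plus state=stopped in the Ansible task
        subprocess.run(['systemctl', 'disable', '--now', unit], check=False)
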
Oct  3 09:43:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:56 compute-0 podman[286149]: 2025-10-03 09:43:56.281519762 +0000 UTC m=+0.048880863 container create b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:43:56 compute-0 systemd[1]: Started libpod-conmon-b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e.scope.
Oct  3 09:43:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:43:56 compute-0 podman[286149]: 2025-10-03 09:43:56.264589931 +0000 UTC m=+0.031951062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:43:56 compute-0 podman[286149]: 2025-10-03 09:43:56.371287669 +0000 UTC m=+0.138648800 container init b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 09:43:56 compute-0 podman[286149]: 2025-10-03 09:43:56.381895987 +0000 UTC m=+0.149257088 container start b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:43:56 compute-0 podman[286149]: 2025-10-03 09:43:56.386035979 +0000 UTC m=+0.153397100 container attach b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:43:56 compute-0 angry_hellman[286191]: 167 167
Oct  3 09:43:56 compute-0 systemd[1]: libpod-b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e.scope: Deactivated successfully.
Oct  3 09:43:56 compute-0 podman[286149]: 2025-10-03 09:43:56.389700936 +0000 UTC m=+0.157062067 container died b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 09:43:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d717f8c993b27a509b19c8df2c44f8e784d828a32ca79cb95305b0302d7939b-merged.mount: Deactivated successfully.
Oct  3 09:43:56 compute-0 podman[286149]: 2025-10-03 09:43:56.438849666 +0000 UTC m=+0.206210767 container remove b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:43:56 compute-0 systemd[1]: libpod-conmon-b2e7431d1ae8334a72f7fc8ebb927d7126ed649a93d0fcd3d556322e992f9c3e.scope: Deactivated successfully.
Oct  3 09:43:56 compute-0 podman[286243]: 2025-10-03 09:43:56.614788275 +0000 UTC m=+0.054161531 container create 19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hoover, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:43:56 compute-0 systemd[1]: Started libpod-conmon-19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc.scope.
Oct  3 09:43:56 compute-0 podman[286243]: 2025-10-03 09:43:56.591040647 +0000 UTC m=+0.030413943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:43:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6b4285d920a8264f3ea3ff87a25ca8c4e6ee7414399852b7123fd6f4e4d449/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6b4285d920a8264f3ea3ff87a25ca8c4e6ee7414399852b7123fd6f4e4d449/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6b4285d920a8264f3ea3ff87a25ca8c4e6ee7414399852b7123fd6f4e4d449/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6b4285d920a8264f3ea3ff87a25ca8c4e6ee7414399852b7123fd6f4e4d449/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:43:56 compute-0 podman[286243]: 2025-10-03 09:43:56.729730846 +0000 UTC m=+0.169104142 container init 19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hoover, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:43:56 compute-0 podman[286243]: 2025-10-03 09:43:56.744206328 +0000 UTC m=+0.183579594 container start 19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  3 09:43:56 compute-0 podman[286243]: 2025-10-03 09:43:56.748359251 +0000 UTC m=+0.187732537 container attach 19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:43:56 compute-0 python3.9[286236]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:43:57 compute-0 exciting_hoover[286259]: {
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:    "0": [
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:        {
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "devices": [
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "/dev/loop3"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            ],
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_name": "ceph_lv0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_size": "21470642176",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "name": "ceph_lv0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "tags": {
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cluster_name": "ceph",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.crush_device_class": "",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.encrypted": "0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osd_id": "0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.type": "block",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.vdo": "0"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            },
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "type": "block",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "vg_name": "ceph_vg0"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:        }
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:    ],
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:    "1": [
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:        {
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "devices": [
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "/dev/loop4"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            ],
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_name": "ceph_lv1",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_size": "21470642176",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "name": "ceph_lv1",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "tags": {
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cluster_name": "ceph",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.crush_device_class": "",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.encrypted": "0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osd_id": "1",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.type": "block",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.vdo": "0"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            },
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "type": "block",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "vg_name": "ceph_vg1"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:        }
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:    ],
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:    "2": [
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:        {
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "devices": [
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "/dev/loop5"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            ],
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_name": "ceph_lv2",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_size": "21470642176",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "name": "ceph_lv2",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "tags": {
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.cluster_name": "ceph",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.crush_device_class": "",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.encrypted": "0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osd_id": "2",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.type": "block",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:                "ceph.vdo": "0"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            },
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "type": "block",
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:            "vg_name": "ceph_vg2"
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:        }
Oct  3 09:43:57 compute-0 exciting_hoover[286259]:    ]
Oct  3 09:43:57 compute-0 exciting_hoover[286259]: }
Oct  3 09:43:57 compute-0 systemd[1]: libpod-19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc.scope: Deactivated successfully.
Oct  3 09:43:57 compute-0 podman[286421]: 2025-10-03 09:43:57.606961011 +0000 UTC m=+0.034269555 container died 19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:43:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e6b4285d920a8264f3ea3ff87a25ca8c4e6ee7414399852b7123fd6f4e4d449-merged.mount: Deactivated successfully.
Oct  3 09:43:57 compute-0 python3.9[286416]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:43:57 compute-0 podman[286421]: 2025-10-03 09:43:57.677321189 +0000 UTC m=+0.104629713 container remove 19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:43:57 compute-0 systemd[1]: libpod-conmon-19d0ff0e1e73c053d07c08015856e03206a87ca6442f56d2620a0a2568088bcc.scope: Deactivated successfully.
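
The exciting_hoover container printed the inventory cephadm uses to refresh its OSD view: a JSON map of OSD id to the logical volume backing it, including cluster/osd fsids and the loop device under each VG. A sketch that reduces the same payload to an osd -> device map (assumes the ceph-volume CLI is available, e.g. inside the ceph container):

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True,
    ).stdout

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv['lv_path'], ','.join(lv['devices']))
    # expected from the payload above:
    #   0 /dev/ceph_vg0/ceph_lv0 /dev/loop3
    #   1 /dev/ceph_vg1/ceph_lv1 /dev/loop4
    #   2 /dev/ceph_vg2/ceph_lv2 /dev/loop5
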
Oct  3 09:43:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:43:58 compute-0 podman[286725]: 2025-10-03 09:43:58.405915238 +0000 UTC m=+0.042735086 container create a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 09:43:58 compute-0 systemd[1]: Started libpod-conmon-a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2.scope.
Oct  3 09:43:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:43:58 compute-0 podman[286725]: 2025-10-03 09:43:58.386992254 +0000 UTC m=+0.023812132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:43:58 compute-0 podman[286725]: 2025-10-03 09:43:58.556973512 +0000 UTC m=+0.193793440 container init a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 09:43:58 compute-0 podman[286725]: 2025-10-03 09:43:58.568225852 +0000 UTC m=+0.205045700 container start a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 09:43:58 compute-0 musing_fermi[286741]: 167 167
Oct  3 09:43:58 compute-0 systemd[1]: libpod-a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2.scope: Deactivated successfully.
Oct  3 09:43:58 compute-0 python3.9[286695]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:43:58 compute-0 podman[286725]: 2025-10-03 09:43:58.653171885 +0000 UTC m=+0.289991753 container attach a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:43:58 compute-0 podman[286725]: 2025-10-03 09:43:58.654121464 +0000 UTC m=+0.290941312 container died a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf269a7e443b7225ccad760bf6051945ef28cdac2966875bb27e279b77a5e48-merged.mount: Deactivated successfully.
Oct  3 09:43:59 compute-0 python3.9[286909]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:43:59 compute-0 podman[157165]: time="2025-10-03T09:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:44:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:00 compute-0 python3.9[287062]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:44:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:01 compute-0 openstack_network_exporter[159287]: ERROR   09:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:44:01 compute-0 openstack_network_exporter[159287]: ERROR   09:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:44:01 compute-0 openstack_network_exporter[159287]: ERROR   09:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:44:01 compute-0 openstack_network_exporter[159287]: ERROR   09:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:44:01 compute-0 openstack_network_exporter[159287]: ERROR   09:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
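The errors above are openstack_network_exporter trying to drive ovs-appctl-style RPCs over the daemons' control sockets and finding none; per its container config further below, it mounts /var/run/openvswitch and /var/lib/openvswitch/ovn from the host. A quick host-side check for those sockets, a sketch assuming the mounted paths:

    import glob

    # ovsdb-server, ovs-vswitchd and the ovn daemons create *.ctl control
    # sockets in their run directories only while they are running.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")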
Oct  3 09:44:01 compute-0 podman[286725]: 2025-10-03 09:44:01.906287508 +0000 UTC m=+3.543107356 container remove a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:44:01 compute-0 python3.9[287215]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:01 compute-0 systemd[1]: libpod-conmon-a37ca297d5033d5f7b89350de68944cf80cec0014224b468f6cce568894506a2.scope: Deactivated successfully.
Oct  3 09:44:01 compute-0 podman[157165]: @ - - [03/Oct/2025:09:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Oct  3 09:44:02 compute-0 podman[157165]: @ - - [03/Oct/2025:09:44:01 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7249 "" "Go-http-client/1.1"
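The two GET lines above are the prometheus-podman-exporter polling podman's libpod REST API over the service socket (its config further below sets CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal stdlib-only sketch of the same containers/json query, assuming that socket path; UnixHTTPConnection is a hypothetical helper, not part of http.client:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client transport over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    # Same endpoint as the "GET /v4.9.3/libpod/containers/json?all=true" above.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))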
Oct  3 09:44:02 compute-0 podman[287247]: 2025-10-03 09:44:02.086454742 +0000 UTC m=+0.025249648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:44:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:02 compute-0 python3.9[287423]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:03 compute-0 podman[287247]: 2025-10-03 09:44:03.070318003 +0000 UTC m=+1.009112889 container create 917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 09:44:03 compute-0 podman[287361]: 2025-10-03 09:44:03.683364952 +0000 UTC m=+1.279919157 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
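Each health_status record in this log comes from podman's healthcheck timer running the 'test' command configured for the container; the current status and failing streak can be read back from podman inspect. A sketch using the ceilometer_agent_compute name from the line above (assumes the container defines a healthcheck, so .State.Health is populated):

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ceilometer_agent_compute"],
        capture_output=True, text=True, check=True)
    health = json.loads(out.stdout)
    # Mirrors the health_status / health_failing_streak fields logged above.
    print(health["Status"], "failing streak:", health["FailingStreak"])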
Oct  3 09:44:03 compute-0 systemd[1]: Started libpod-conmon-917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4.scope.
Oct  3 09:44:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08a7a7f32ca27488b7d9d80dedc911bdd4888a29def796f162b33378f2a5f58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08a7a7f32ca27488b7d9d80dedc911bdd4888a29def796f162b33378f2a5f58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08a7a7f32ca27488b7d9d80dedc911bdd4888a29def796f162b33378f2a5f58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08a7a7f32ca27488b7d9d80dedc911bdd4888a29def796f162b33378f2a5f58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
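The four kernel lines above are xfs noting that the filesystem backing these overlay mounts lacks bigtime, so inode timestamps cap at 0x7fffffff seconds (2038-01-19). Whether a filesystem was formatted with 64-bit timestamps shows up as bigtime=1 in xfs_info; a quick check, assuming /var/lib/containers sits on the xfs mount in question:

    import subprocess

    out = subprocess.run(["xfs_info", "/var/lib/containers"],
                         capture_output=True, text=True, check=True).stdout
    # bigtime=0 matches the 2038 warning above; bigtime=1 means 64-bit stamps.
    print([tok for tok in out.replace(",", " ").split()
           if tok.startswith("bigtime=")])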
Oct  3 09:44:03 compute-0 podman[287558]: 2025-10-03 09:44:03.761026232 +0000 UTC m=+0.360256276 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 09:44:03 compute-0 podman[287359]: 2025-10-03 09:44:03.764609257 +0000 UTC m=+1.367878947 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9)
Oct  3 09:44:03 compute-0 podman[287360]: 2025-10-03 09:44:03.766565489 +0000 UTC m=+1.366820363 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:44:03 compute-0 python3.9[287603]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:03 compute-0 podman[287362]: 2025-10-03 09:44:03.77444526 +0000 UTC m=+1.367700940 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Oct  3 09:44:03 compute-0 podman[287247]: 2025-10-03 09:44:03.877005736 +0000 UTC m=+1.815800642 container init 917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_agnesi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 09:44:03 compute-0 podman[287247]: 2025-10-03 09:44:03.888698529 +0000 UTC m=+1.827493415 container start 917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:44:03 compute-0 podman[287247]: 2025-10-03 09:44:03.954588574 +0000 UTC m=+1.893383490 container attach 917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:44:03 compute-0 podman[287557]: 2025-10-03 09:44:03.992796494 +0000 UTC m=+0.596906455 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 09:44:04 compute-0 podman[287726]: 2025-10-03 09:44:04.140738888 +0000 UTC m=+0.099596011 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
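node_exporter's systemd collector only exports units matching the --collector.systemd.unit-include expression in the command line above, and node_exporter anchors the pattern, so it has to match the whole unit name (an assumption reflected as fullmatch below). A quick check of which units from this log would pass:

    import re

    # Pattern copied from the node_exporter flags logged above.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("virtqemud.service", "ovs-vswitchd.service",
                 "tripleo_nova_virtqemud.service", "certmonger.service"):
        print(unit, "->", bool(unit_include.fullmatch(unit)))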
Oct  3 09:44:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:04 compute-0 python3.9[287838]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]: {
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "osd_id": 1,
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "type": "bluestore"
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:    },
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "osd_id": 2,
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "type": "bluestore"
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:    },
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "osd_id": 0,
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:        "type": "bluestore"
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]:    }
Oct  3 09:44:04 compute-0 wonderful_agnesi[287632]: }
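The JSON printed by the short-lived wonderful_agnesi container above is a ceph-volume-style inventory keyed by OSD uuid; the shape matches `ceph-volume raw list` output, though the exact subcommand cephadm ran inside the container is not logged. Reduced to a scripting-friendly map, using the three entries verbatim:

    import json

    # Verbatim from the container output above, condensed to one line per OSD.
    inventory = json.loads("""{
      "16cef594-0067-4499-9298-5d83edf70190": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561", "device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1, "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190", "type": "bluestore"},
      "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561", "device": "/dev/mapper/ceph_vg2-ceph_lv2", "osd_id": 2, "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0", "type": "bluestore"},
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561", "device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0, "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0", "type": "bluestore"}
    }""")

    for meta in sorted(inventory.values(), key=lambda m: m["osd_id"]):
        print(f"osd.{meta['osd_id']} -> {meta['device']} ({meta['type']})")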
Oct  3 09:44:04 compute-0 systemd[1]: libpod-917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4.scope: Deactivated successfully.
Oct  3 09:44:04 compute-0 podman[287247]: 2025-10-03 09:44:04.977987737 +0000 UTC m=+2.916782623 container died 917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 09:44:04 compute-0 systemd[1]: libpod-917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4.scope: Consumed 1.047s CPU time.
Oct  3 09:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a08a7a7f32ca27488b7d9d80dedc911bdd4888a29def796f162b33378f2a5f58-merged.mount: Deactivated successfully.
Oct  3 09:44:05 compute-0 podman[287247]: 2025-10-03 09:44:05.294916869 +0000 UTC m=+3.233711755 container remove 917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 09:44:05 compute-0 systemd[1]: libpod-conmon-917c18b07d1b05e6150f2c1112d49a68379636b28158399b079d4cf27d01daf4.scope: Deactivated successfully.
Oct  3 09:44:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:44:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:44:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:44:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:44:05 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 99e2bb94-41e1-4780-85b4-1615620e1423 does not exist
Oct  3 09:44:05 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f2a7fc29-7513-449f-ba5e-728826984416 does not exist
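The two mon_command lines above are the cephadm mgr module caching this host's device inventory under the monitors' config-key store. The cached blobs can be read back with the ceph CLI; a sketch using the key names from the audit log:

    import subprocess

    for key in ("mgr/cephadm/host.compute-0.devices.0",
                "mgr/cephadm/host.compute-0"):
        out = subprocess.run(["ceph", "config-key", "get", key],
                             capture_output=True, text=True, check=True)
        print(key, "->", len(out.stdout), "bytes cached")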
Oct  3 09:44:05 compute-0 python3.9[288029]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:44:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:44:06 compute-0 python3.9[288231]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:06 compute-0 python3.9[288383]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:07 compute-0 python3.9[288535]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:08 compute-0 python3.9[288687]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:09 compute-0 python3.9[288839]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:09 compute-0 python3.9[288991]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:09 compute-0 podman[288992]: 2025-10-03 09:44:09.877695267 +0000 UTC m=+0.120763778 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:44:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:10 compute-0 python3.9[289162]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:11 compute-0 python3.9[289314]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:11 compute-0 python3.9[289466]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:12 compute-0 python3.9[289618]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
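`#012` in the line above is the syslog escape for a newline (octal 012), so the _raw_params the playbook passed to the shell decode to this script:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi

That is: stop and disable certmonger if it is running, and mask it unless a local override unit already exists.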
Oct  3 09:44:13 compute-0 python3.9[289770]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 09:44:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:14 compute-0 python3.9[289922]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:44:14 compute-0 systemd[1]: Reloading.
Oct  3 09:44:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:44:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:44:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:15 compute-0 python3.9[290109]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:44:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:16 compute-0 python3.9[290262]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:44:17 compute-0 python3.9[290415]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:44:18 compute-0 python3.9[290568]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:44:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:18 compute-0 python3.9[290721]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:44:19 compute-0 python3.9[290874]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:44:20 compute-0 python3.9[291028]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
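The seven reset-failed commands above clear the tripleo units from systemd's failed-unit list so the names can be garbage-collected now that their unit files are gone. A compact equivalent of that per-unit sequence:

    import subprocess

    # Unit names taken from the reset-failed commands logged above.
    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]

    for unit in UNITS:
        # check=False: reset-failed on an unknown unit is not fatal here.
        subprocess.run(["systemctl", "reset-failed", unit], check=False)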
Oct  3 09:44:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:21 compute-0 python3.9[291181]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  3 09:44:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:22 compute-0 python3.9[291334]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 09:44:23 compute-0 python3.9[291418]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
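With state=present and no version pins, ansible's dnf module here resolves to roughly a plain `dnf -y install` of the listed packages; note the stray trailing whitespace in the first four names exactly as the playbook passed them. A subprocess equivalent with the logged package set (whitespace stripped):

    import subprocess

    packages = ["libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
                "qemu-kvm", "qemu-img", "libguestfs", "libseccomp", "swtpm",
                "swtpm-tools", "edk2-ovmf", "ceph-common", "cyrus-sasl-scram"]
    subprocess.run(["dnf", "-y", "install", *packages], check=True)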
Oct  3 09:44:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:26 compute-0 python3.9[291571]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 09:44:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:27 compute-0 python3.9[291726]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 09:44:27 compute-0 python3.9[291881]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 09:44:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:28 compute-0 python3.9[292036]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 09:44:29 compute-0 podman[157165]: time="2025-10-03T09:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:44:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Oct  3 09:44:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7253 "" "Go-http-client/1.1"
Oct  3 09:44:29 compute-0 python3.9[292191]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:30 compute-0 python3.9[292346]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:31 compute-0 openstack_network_exporter[159287]: ERROR   09:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:44:31 compute-0 openstack_network_exporter[159287]: ERROR   09:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:44:31 compute-0 openstack_network_exporter[159287]: ERROR   09:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:44:31 compute-0 openstack_network_exporter[159287]: ERROR   09:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:44:31 compute-0 openstack_network_exporter[159287]: ERROR   09:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:44:31 compute-0 python3.9[292501]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:32 compute-0 python3.9[292656]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:33 compute-0 python3.9[292811]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
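The systemd invocations from 09:44:26 onward are the switch from the monolithic libvirtd to the modular per-driver daemons: the old daemon and its TCP/TLS sockets are stopped and masked, then virtlogd, virtnodedevd, virtproxyd, virtqemud and virtsecretd are enabled but, per state=None in the log, not started yet. The same sequence with plain systemctl:

    import subprocess

    # Unit names as they appear in the ansible systemd invocations above.
    legacy = ["libvirtd", "libvirtd-tcp.socket",
              "libvirtd-tls.socket", "virtproxyd-tcp.socket"]
    modular = ["virtlogd.service", "virtnodedevd.service", "virtproxyd.service",
               "virtqemud.service", "virtsecretd.service"]

    for unit in legacy:
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)
        subprocess.run(["systemctl", "mask", unit], check=False)
    for unit in modular:
        # Enable only; starting is left to a later step, as in the log.
        subprocess.run(["systemctl", "enable", unit], check=True)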
Oct  3 09:44:33 compute-0 podman[292812]: 2025-10-03 09:44:33.81298632 +0000 UTC m=+0.078673274 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 09:44:33 compute-0 podman[292834]: 2025-10-03 09:44:33.894637967 +0000 UTC m=+0.071827065 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:44:33 compute-0 podman[292832]: 2025-10-03 09:44:33.908092767 +0000 UTC m=+0.087823286 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, version=9.4, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:44:33 compute-0 podman[292836]: 2025-10-03 09:44:33.925070118 +0000 UTC m=+0.094208709 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Oct  3 09:44:33 compute-0 podman[292835]: 2025-10-03 09:44:33.932725653 +0000 UTC m=+0.101744420 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350)
Oct  3 09:44:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Oct  3 09:44:34 compute-0 podman[293032]: 2025-10-03 09:44:34.502037465 +0000 UTC m=+0.061095872 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:44:34 compute-0 podman[293033]: 2025-10-03 09:44:34.565904095 +0000 UTC m=+0.122977808 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  3 09:44:34 compute-0 python3.9[293102]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  3 09:44:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:35 compute-0 python3.9[293262]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Oct  3 09:44:36 compute-0 python3.9[293417]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:37 compute-0 python3.9[293572]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 09:44:38 compute-0 python3.9[293727]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:39 compute-0 python3.9[293882]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:39 compute-0 podman[293886]: 2025-10-03 09:44:39.988212895 +0000 UTC m=+0.073856250 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:44:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 09:44:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:40 compute-0 python3.9[294057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:44:41.569 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:44:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:44:41.570 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:44:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:44:41.570 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:44:41 compute-0 python3.9[294212]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 09:44:42 compute-0 python3.9[294367]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:43 compute-0 python3.9[294522]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 09:44:44 compute-0 python3.9[294677]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:45 compute-0 python3.9[294832]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:44:45
Oct  3 09:44:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:44:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:44:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'volumes', 'default.rgw.control', 'vms']
Oct  3 09:44:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:44:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Oct  3 09:44:46 compute-0 python3.9[294987]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:47 compute-0 python3.9[295142]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 0 B/s wr, 9 op/s
Oct  3 09:44:48 compute-0 python3.9[295297]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  3 09:44:49 compute-0 python3.9[295453]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:44:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:50 compute-0 python3.9[295605]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:44:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:51 compute-0 python3.9[295757]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:44:52 compute-0 python3.9[295909]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:44:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:52 compute-0 python3.9[296061]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:44:53 compute-0 python3.9[296213]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:54 compute-0 python3.9[296365]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:44:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 09:44:55 compute-0 python3.9[296443]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtlogd.conf _original_basename=virtlogd.conf recurse=False state=file path=/etc/libvirt/virtlogd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:44:56 compute-0 python3.9[296595]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:44:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:56 compute-0 python3.9[296673]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtnodedevd.conf _original_basename=virtnodedevd.conf recurse=False state=file path=/etc/libvirt/virtnodedevd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:57 compute-0 python3.9[296825]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:44:57 compute-0 python3.9[296903]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtproxyd.conf _original_basename=virtproxyd.conf recurse=False state=file path=/etc/libvirt/virtproxyd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:44:58 compute-0 python3.9[297055]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:44:59 compute-0 python3.9[297133]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtqemud.conf _original_basename=virtqemud.conf recurse=False state=file path=/etc/libvirt/virtqemud.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:44:59 compute-0 podman[157165]: time="2025-10-03T09:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:44:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Oct  3 09:44:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7252 "" "Go-http-client/1.1"
Oct  3 09:45:00 compute-0 python3.9[297285]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:00 compute-0 python3.9[297363]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/qemu.conf _original_basename=qemu.conf.j2 recurse=False state=file path=/etc/libvirt/qemu.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:01 compute-0 python3.9[297515]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:01 compute-0 openstack_network_exporter[159287]: ERROR   09:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:45:01 compute-0 openstack_network_exporter[159287]: ERROR   09:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:45:01 compute-0 openstack_network_exporter[159287]: ERROR   09:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:45:01 compute-0 openstack_network_exporter[159287]: ERROR   09:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:45:01 compute-0 openstack_network_exporter[159287]: ERROR   09:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:45:01 compute-0 python3.9[297593]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtsecretd.conf _original_basename=virtsecretd.conf recurse=False state=file path=/etc/libvirt/virtsecretd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:02 compute-0 python3.9[297745]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:03 compute-0 python3.9[297823]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0600 owner=libvirt dest=/etc/libvirt/auth.conf _original_basename=auth.conf recurse=False state=file path=/etc/libvirt/auth.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:03 compute-0 python3.9[297975]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:03 compute-0 podman[297976]: 2025-10-03 09:45:03.99465167 +0000 UTC m=+0.080946876 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct  3 09:45:04 compute-0 podman[298014]: 2025-10-03 09:45:04.107427841 +0000 UTC m=+0.079456688 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct  3 09:45:04 compute-0 podman[298020]: 2025-10-03 09:45:04.132189803 +0000 UTC m=+0.100449570 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:45:04 compute-0 podman[298022]: 2025-10-03 09:45:04.136461219 +0000 UTC m=+0.095037707 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 09:45:04 compute-0 podman[298021]: 2025-10-03 09:45:04.14900522 +0000 UTC m=+0.114107046 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 09:45:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:04 compute-0 python3.9[298151]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/sasl2/libvirt.conf _original_basename=sasl_libvirt.conf recurse=False state=file path=/etc/sasl2/libvirt.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:04 compute-0 podman[298199]: 2025-10-03 09:45:04.81627957 +0000 UTC m=+0.076235856 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:45:04 compute-0 podman[298204]: 2025-10-03 09:45:04.885715827 +0000 UTC m=+0.141975655 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller)
Oct  3 09:45:05 compute-0 python3.9[298349]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  3 09:45:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:06 compute-0 python3.9[298602]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:45:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:45:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:45:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:45:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:45:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:45:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b9a9b16f-1ff5-4c88-8cd9-fc422cff5d12 does not exist
Oct  3 09:45:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9cc790d2-2736-4cf6-921d-64e674404062 does not exist
Oct  3 09:45:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a39bc9df-9f48-4ba5-af4c-8b25ceff86f7 does not exist
Oct  3 09:45:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:45:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:45:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:45:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:45:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:45:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:45:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:45:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:45:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:45:06 compute-0 python3.9[298882]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:07 compute-0 podman[298950]: 2025-10-03 09:45:07.11676852 +0000 UTC m=+0.065399230 container create 62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:45:07 compute-0 systemd[1]: Started libpod-conmon-62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e.scope.
Oct  3 09:45:07 compute-0 podman[298950]: 2025-10-03 09:45:07.089138637 +0000 UTC m=+0.037769347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:45:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:45:07 compute-0 podman[298950]: 2025-10-03 09:45:07.223191668 +0000 UTC m=+0.171822438 container init 62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  3 09:45:07 compute-0 podman[298950]: 2025-10-03 09:45:07.241195183 +0000 UTC m=+0.189825863 container start 62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:45:07 compute-0 podman[298950]: 2025-10-03 09:45:07.245913854 +0000 UTC m=+0.194544624 container attach 62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:45:07 compute-0 silly_banach[299015]: 167 167
Oct  3 09:45:07 compute-0 systemd[1]: libpod-62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e.scope: Deactivated successfully.
Oct  3 09:45:07 compute-0 conmon[299015]: conmon 62b74cb833473fae262e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e.scope/container/memory.events
Oct  3 09:45:07 compute-0 podman[298950]: 2025-10-03 09:45:07.253267789 +0000 UTC m=+0.201898469 container died 62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f5425af9bc89b3d32de9c6b115f946921142367c4a54645d5deb3c04d236819-merged.mount: Deactivated successfully.
Oct  3 09:45:07 compute-0 podman[298950]: 2025-10-03 09:45:07.312873622 +0000 UTC m=+0.261504312 container remove 62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_banach, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  3 09:45:07 compute-0 systemd[1]: libpod-conmon-62b74cb833473fae262eaf87a17e868cfe855b5df67a11f0c5d72f347f99559e.scope: Deactivated successfully.
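
[editor's note] The podman lines above trace one complete lifecycle for a short-lived cephadm helper container: create, init, start, attach, died, remove, with systemd deactivating the matching libpod and libpod-conmon scopes. The same pattern repeats for every helper below. To watch the sequence live instead of reconstructing it from the journal, one option (a sketch, not part of this deployment; JSON field capitalization varies across podman versions) is to stream `podman events`:

    import json
    import subprocess

    # Stream podman's container events to observe the
    # create -> init -> start -> attach -> died -> remove
    # sequence the journal shows for each helper container.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "type=container", "--format", "json"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Probe both capitalizations; field names differ by podman version.
        status = ev.get("Status") or ev.get("status")
        name = ev.get("Name") or ev.get("name")
        print(f"{status}: {name}")
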
Oct  3 09:45:07 compute-0 podman[299111]: 2025-10-03 09:45:07.488484791 +0000 UTC m=+0.040779914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:45:07 compute-0 podman[299111]: 2025-10-03 09:45:07.717729492 +0000 UTC m=+0.270024635 container create b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:45:07 compute-0 python3.9[299120]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:07 compute-0 systemd[1]: Started libpod-conmon-b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e.scope.
Oct  3 09:45:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf6ddfc6bdd9a38bb594686d74dfb7a92b88b573110f23ba3effa78e835d34e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf6ddfc6bdd9a38bb594686d74dfb7a92b88b573110f23ba3effa78e835d34e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf6ddfc6bdd9a38bb594686d74dfb7a92b88b573110f23ba3effa78e835d34e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf6ddfc6bdd9a38bb594686d74dfb7a92b88b573110f23ba3effa78e835d34e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf6ddfc6bdd9a38bb594686d74dfb7a92b88b573110f23ba3effa78e835d34e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:08 compute-0 podman[299111]: 2025-10-03 09:45:08.016730031 +0000 UTC m=+0.569025184 container init b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 09:45:08 compute-0 podman[299111]: 2025-10-03 09:45:08.030695327 +0000 UTC m=+0.582990450 container start b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  3 09:45:08 compute-0 podman[299111]: 2025-10-03 09:45:08.124506524 +0000 UTC m=+0.676801667 container attach b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:45:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:08 compute-0 python3.9[299286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:09 compute-0 youthful_feistel[299150]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:45:09 compute-0 youthful_feistel[299150]: --> relative data size: 1.0
Oct  3 09:45:09 compute-0 youthful_feistel[299150]: --> All data devices are unavailable
Oct  3 09:45:09 compute-0 systemd[1]: libpod-b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e.scope: Deactivated successfully.
Oct  3 09:45:09 compute-0 systemd[1]: libpod-b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e.scope: Consumed 1.053s CPU time.
Oct  3 09:45:09 compute-0 podman[299463]: 2025-10-03 09:45:09.202596853 +0000 UTC m=+0.035368270 container died b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:45:09 compute-0 python3.9[299462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf6ddfc6bdd9a38bb594686d74dfb7a92b88b573110f23ba3effa78e835d34e-merged.mount: Deactivated successfully.
Oct  3 09:45:09 compute-0 podman[299463]: 2025-10-03 09:45:09.411099783 +0000 UTC m=+0.243871190 container remove b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:45:09 compute-0 systemd[1]: libpod-conmon-b9f6d25814e94a427fd3e7c69571d455a68518d83da40f7482dce4b243a7bc6e.scope: Deactivated successfully.
Oct  3 09:45:10 compute-0 python3.9[299729]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:10 compute-0 podman[299790]: 2025-10-03 09:45:10.289724853 +0000 UTC m=+0.084913104 container create f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 09:45:10 compute-0 systemd[1]: Started libpod-conmon-f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80.scope.
Oct  3 09:45:10 compute-0 podman[299790]: 2025-10-03 09:45:10.250154159 +0000 UTC m=+0.045342440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:45:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:45:10 compute-0 podman[299790]: 2025-10-03 09:45:10.41050951 +0000 UTC m=+0.205697761 container init f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 09:45:10 compute-0 podman[299790]: 2025-10-03 09:45:10.419965762 +0000 UTC m=+0.215154013 container start f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:45:10 compute-0 relaxed_faraday[299832]: 167 167
Oct  3 09:45:10 compute-0 systemd[1]: libpod-f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80.scope: Deactivated successfully.
Oct  3 09:45:10 compute-0 podman[299790]: 2025-10-03 09:45:10.476974453 +0000 UTC m=+0.272162724 container attach f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:45:10 compute-0 podman[299790]: 2025-10-03 09:45:10.477370075 +0000 UTC m=+0.272558326 container died f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_faraday, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cee42c72a7e4fa0aa28c9263e3cde0553597f3771b6464764bd4360bd38b18cf-merged.mount: Deactivated successfully.
Oct  3 09:45:10 compute-0 podman[299790]: 2025-10-03 09:45:10.614153874 +0000 UTC m=+0.409342125 container remove f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:45:10 compute-0 systemd[1]: libpod-conmon-f1ce0f9f43ab09f44f4532d038b4871712d517e464f126cf6a8141a146be3f80.scope: Deactivated successfully.
Oct  3 09:45:10 compute-0 podman[299827]: 2025-10-03 09:45:10.666792144 +0000 UTC m=+0.315606410 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
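
[editor's note] The health_status record above embeds its config_data label as a Python-literal dict (single quotes, bare True), not JSON, so it parses with ast.literal_eval rather than json.loads. The snippet below demonstrates this on an abbreviated copy of that label (the full value appears in the log line itself):

    import ast

    # Abbreviated copy of the config_data label from the log line above.
    label = ("{'image': 'quay.io/podified-antelope-centos9/"
             "openstack-neutron-metadata-agent-ovn:current-podified', "
             "'net': 'host', 'privileged': True, 'restart': 'always'}")
    config = ast.literal_eval(label)  # safe evaluation of the dict literal
    print(config["image"], config["restart"])
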
Oct  3 09:45:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:10 compute-0 podman[299978]: 2025-10-03 09:45:10.84480406 +0000 UTC m=+0.092848936 container create 5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:45:10 compute-0 podman[299978]: 2025-10-03 09:45:10.786331553 +0000 UTC m=+0.034376459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:45:10 compute-0 python3.9[299971]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:10 compute-0 systemd[1]: Started libpod-conmon-5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b.scope.
Oct  3 09:45:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9917a183ea237a005853b2775cdb4a00a5974ccb7d5cd1ddab64cac023a0e2e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9917a183ea237a005853b2775cdb4a00a5974ccb7d5cd1ddab64cac023a0e2e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9917a183ea237a005853b2775cdb4a00a5974ccb7d5cd1ddab64cac023a0e2e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9917a183ea237a005853b2775cdb4a00a5974ccb7d5cd1ddab64cac023a0e2e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:10 compute-0 podman[299978]: 2025-10-03 09:45:10.993878891 +0000 UTC m=+0.241923797 container init 5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:45:11 compute-0 podman[299978]: 2025-10-03 09:45:11.009524371 +0000 UTC m=+0.257569247 container start 5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:45:11 compute-0 podman[299978]: 2025-10-03 09:45:11.013503978 +0000 UTC m=+0.261548854 container attach 5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct  3 09:45:11 compute-0 python3.9[300150]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:11 compute-0 cranky_curran[299992]: {
Oct  3 09:45:11 compute-0 cranky_curran[299992]:    "0": [
Oct  3 09:45:11 compute-0 cranky_curran[299992]:        {
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "devices": [
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "/dev/loop3"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            ],
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_name": "ceph_lv0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_size": "21470642176",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "name": "ceph_lv0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "tags": {
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cluster_name": "ceph",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.crush_device_class": "",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.encrypted": "0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osd_id": "0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.type": "block",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.vdo": "0"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            },
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "type": "block",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "vg_name": "ceph_vg0"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:        }
Oct  3 09:45:11 compute-0 cranky_curran[299992]:    ],
Oct  3 09:45:11 compute-0 cranky_curran[299992]:    "1": [
Oct  3 09:45:11 compute-0 cranky_curran[299992]:        {
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "devices": [
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "/dev/loop4"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            ],
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_name": "ceph_lv1",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_size": "21470642176",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "name": "ceph_lv1",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "tags": {
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cluster_name": "ceph",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.crush_device_class": "",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.encrypted": "0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osd_id": "1",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.type": "block",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.vdo": "0"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            },
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "type": "block",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "vg_name": "ceph_vg1"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:        }
Oct  3 09:45:11 compute-0 cranky_curran[299992]:    ],
Oct  3 09:45:11 compute-0 cranky_curran[299992]:    "2": [
Oct  3 09:45:11 compute-0 cranky_curran[299992]:        {
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "devices": [
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "/dev/loop5"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            ],
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_name": "ceph_lv2",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_size": "21470642176",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "name": "ceph_lv2",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "tags": {
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.cluster_name": "ceph",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.crush_device_class": "",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.encrypted": "0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osd_id": "2",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.type": "block",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:                "ceph.vdo": "0"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            },
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "type": "block",
Oct  3 09:45:11 compute-0 cranky_curran[299992]:            "vg_name": "ceph_vg2"
Oct  3 09:45:11 compute-0 cranky_curran[299992]:        }
Oct  3 09:45:11 compute-0 cranky_curran[299992]:    ]
Oct  3 09:45:11 compute-0 cranky_curran[299992]: }
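
[editor's note] The JSON the cranky_curran container prints above has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to a list of logical-volume records whose ceph.* tags carry the cluster fsid, OSD fsid, and backing device. A minimal sketch of extracting the OSD-to-device mapping from such a capture (the file name is an assumption):

    import json

    # Hypothetical capture of the JSON printed above, e.g. saved from
    # `cephadm ceph-volume lvm list --format json` (assumption).
    with open("lvm_list.json") as f:
        lvm_list = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each maps to a list
    # of LV records carrying the ceph.* tags seen in the log.
    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(
                f"osd.{osd_id}: {lv['lv_path']} "
                f"on {','.join(lv['devices'])} "
                f"(osd_fsid={tags.get('ceph.osd_fsid', '?')})"
            )
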
Oct  3 09:45:11 compute-0 systemd[1]: libpod-5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b.scope: Deactivated successfully.
Oct  3 09:45:11 compute-0 podman[299978]: 2025-10-03 09:45:11.911744295 +0000 UTC m=+1.159789171 container died 5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 09:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9917a183ea237a005853b2775cdb4a00a5974ccb7d5cd1ddab64cac023a0e2e1-merged.mount: Deactivated successfully.
Oct  3 09:45:11 compute-0 podman[299978]: 2025-10-03 09:45:11.987708261 +0000 UTC m=+1.235753137 container remove 5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:45:12 compute-0 systemd[1]: libpod-conmon-5b6dd398895c6cda7fecdadba36f7efc80c165860688a645b378619b8b1d187b.scope: Deactivated successfully.
Oct  3 09:45:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:12 compute-0 python3.9[300374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:12 compute-0 podman[300506]: 2025-10-03 09:45:12.711117273 +0000 UTC m=+0.060040488 container create 8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:45:12 compute-0 systemd[1]: Started libpod-conmon-8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99.scope.
Oct  3 09:45:12 compute-0 podman[300506]: 2025-10-03 09:45:12.689367889 +0000 UTC m=+0.038291124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:45:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:45:12 compute-0 podman[300506]: 2025-10-03 09:45:12.812983997 +0000 UTC m=+0.161907222 container init 8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_villani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:45:12 compute-0 podman[300506]: 2025-10-03 09:45:12.825739974 +0000 UTC m=+0.174663189 container start 8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:45:12 compute-0 podman[300506]: 2025-10-03 09:45:12.830628931 +0000 UTC m=+0.179552146 container attach 8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_villani, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:45:12 compute-0 epic_villani[300558]: 167 167
Oct  3 09:45:12 compute-0 systemd[1]: libpod-8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99.scope: Deactivated successfully.
Oct  3 09:45:12 compute-0 podman[300506]: 2025-10-03 09:45:12.835191706 +0000 UTC m=+0.184114931 container died 8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_villani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1f06dc56006bc9d595f4cb49a78c149c0e0310702f70963d5a6afaebe5aa436-merged.mount: Deactivated successfully.
Oct  3 09:45:12 compute-0 podman[300506]: 2025-10-03 09:45:12.890465841 +0000 UTC m=+0.239389046 container remove 8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:45:12 compute-0 systemd[1]: libpod-conmon-8f7fa57a418c68d0469ef7c89f593040b10e083e13d1bfc3a635ed0d49079d99.scope: Deactivated successfully.
Oct  3 09:45:13 compute-0 podman[300642]: 2025-10-03 09:45:13.070589664 +0000 UTC m=+0.044877064 container create 492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:45:13 compute-0 systemd[1]: Started libpod-conmon-492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148.scope.
Oct  3 09:45:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:45:13 compute-0 podman[300642]: 2025-10-03 09:45:13.052287159 +0000 UTC m=+0.026574579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060404837f8023609c3bacb03c17fa5413343c32b1e65488b8335a0113594a2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060404837f8023609c3bacb03c17fa5413343c32b1e65488b8335a0113594a2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060404837f8023609c3bacb03c17fa5413343c32b1e65488b8335a0113594a2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060404837f8023609c3bacb03c17fa5413343c32b1e65488b8335a0113594a2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:45:13 compute-0 podman[300642]: 2025-10-03 09:45:13.19977576 +0000 UTC m=+0.174063250 container init 492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 09:45:13 compute-0 podman[300642]: 2025-10-03 09:45:13.210350458 +0000 UTC m=+0.184637888 container start 492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:45:13 compute-0 podman[300642]: 2025-10-03 09:45:13.216855016 +0000 UTC m=+0.191142436 container attach 492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:45:13 compute-0 python3.9[300644]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:13 compute-0 python3.9[300814]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:14 compute-0 goofy_euler[300658]: {
Oct  3 09:45:14 compute-0 goofy_euler[300658]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "osd_id": 1,
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "type": "bluestore"
Oct  3 09:45:14 compute-0 goofy_euler[300658]:    },
Oct  3 09:45:14 compute-0 goofy_euler[300658]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "osd_id": 2,
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "type": "bluestore"
Oct  3 09:45:14 compute-0 goofy_euler[300658]:    },
Oct  3 09:45:14 compute-0 goofy_euler[300658]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "osd_id": 0,
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:45:14 compute-0 goofy_euler[300658]:        "type": "bluestore"
Oct  3 09:45:14 compute-0 goofy_euler[300658]:    }
Oct  3 09:45:14 compute-0 goofy_euler[300658]: }
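
[editor's note] The goofy_euler dump is keyed by OSD UUID instead, with ceph_fsid/device/osd_id/type fields per entry, matching the shape of `ceph-volume raw list` output. A hedged sketch cross-checking it against the LVM listing above (file names again assumed):

    import json

    # Assumed captures of the two dumps shown earlier in the log.
    with open("lvm_list.json") as f:
        lvm_list = json.load(f)   # keyed by OSD id
    with open("raw_list.json") as f:
        raw_list = json.load(f)   # keyed by OSD UUID

    # The lvm listing's ceph.osd_fsid tag should index into the raw
    # listing, and both should agree on the numeric OSD id.
    for osd_id, lvs in lvm_list.items():
        osd_fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        raw = raw_list.get(osd_fsid)
        if raw is None or raw["osd_id"] != int(osd_id):
            print(f"osd.{osd_id}: MISMATCH between lvm and raw listings")
        else:
            print(f"osd.{osd_id}: {raw['device']} ({raw['type']}) ok")
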
Oct  3 09:45:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:14 compute-0 systemd[1]: libpod-492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148.scope: Deactivated successfully.
Oct  3 09:45:14 compute-0 systemd[1]: libpod-492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148.scope: Consumed 1.059s CPU time.
Oct  3 09:45:14 compute-0 podman[300642]: 2025-10-03 09:45:14.284016386 +0000 UTC m=+1.258303776 container died 492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 09:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-060404837f8023609c3bacb03c17fa5413343c32b1e65488b8335a0113594a2f-merged.mount: Deactivated successfully.
Oct  3 09:45:14 compute-0 podman[300642]: 2025-10-03 09:45:14.389406613 +0000 UTC m=+1.363694013 container remove 492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:45:14 compute-0 systemd[1]: libpod-conmon-492e69599db32845d56ce46598fd9eb684292296e02d8090703768a5e6b38148.scope: Deactivated successfully.
Oct  3 09:45:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:45:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:45:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:45:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:45:14 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a75174c3-0420-4672-8fa7-e572bf032d2c does not exist
Oct  3 09:45:14 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 077a3524-a903-4885-b710-e086fa323149 does not exist
Oct  3 09:45:14 compute-0 python3.9[301037]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
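This task, like the virtsecretd-ro and virtsecretd-admin ones that follow, only ensures a systemd drop-in directory exists with root:root 0755; the override.conf files themselves are written by later tasks from the libvirt-socket.unit.j2 template, whose contents are not logged here. Reduced to Python stdlib calls, the directory step amounts to:

    # Ensure the drop-in directory exists with the same ownership and
    # mode as the ansible.builtin.file task above (root:root, 0755).
    import os

    path = "/etc/systemd/system/virtsecretd.socket.d"
    os.makedirs(path, mode=0o755, exist_ok=True)
    os.chmod(path, 0o755)   # makedirs' mode argument is masked by umask
    os.chown(path, 0, 0)    # root:root; requires running as root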
Oct  3 09:45:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:45:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:45:15 compute-0 python3.9[301208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
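The monitor's _set_new_cache_sizes lines report its cache autotuning in raw bytes. Converting the values above to MiB makes them easier to read; this is plain arithmetic on the numbers in the line:

    # Byte values copied from the _set_new_cache_sizes line above.
    vals = {
        "cache_size": 1020054731,
        "inc_alloc": 348127232,
        "full_alloc": 348127232,
        "kv_alloc": 322961408,
    }
    for name, b in vals.items():
        print(f"{name}: {b / 2**20:.0f} MiB")
    # cache_size: 973 MiB; inc/full_alloc: 332 MiB; kv_alloc: 308 MiB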
Oct  3 09:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:45:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:16 compute-0 python3.9[301360]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:17 compute-0 python3.9[301512]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:17 compute-0 python3.9[301590]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:18 compute-0 python3.9[301742]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:18 compute-0 python3.9[301820]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:19 compute-0 python3.9[301973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:20 compute-0 python3.9[302051]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:21 compute-0 python3.9[302203]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:21 compute-0 python3.9[302281]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:22 compute-0 python3.9[302433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:23 compute-0 python3.9[302511]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:24 compute-0 python3.9[302663]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:24 compute-0 python3.9[302741]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:25 compute-0 python3.9[302893]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:25 compute-0 python3.9[302971]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:26 compute-0 python3.9[303123]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:27 compute-0 python3.9[303201]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:27 compute-0 python3.9[303353]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:28 compute-0 python3.9[303431]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:29 compute-0 python3.9[303583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:29 compute-0 python3.9[303661]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:29 compute-0 podman[157165]: time="2025-10-03T09:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:45:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Oct  3 09:45:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7256 "" "Go-http-client/1.1"
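The two GET requests above are another process polling the Podman service API. The same call can be reproduced against the API socket; the socket path here is an assumption taken from the CONTAINER_HOST value (unix:///run/podman/podman.sock) visible in the podman_exporter config further down. HTTP/1.0 is used so the reply arrives unchunked:

    import json
    import socket

    SOCK = "/run/podman/podman.sock"  # assumed from CONTAINER_HOST below

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
                  b"Host: d\r\n\r\n")
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk

    # Body follows the blank line terminating the response headers.
    body = raw.partition(b"\r\n\r\n")[2]
    for c in json.loads(body):
        print(c["Id"][:12], c["State"])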
Oct  3 09:45:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:30 compute-0 python3.9[303813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:30 compute-0 python3.9[303891]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:31 compute-0 openstack_network_exporter[159287]: ERROR   09:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:45:31 compute-0 openstack_network_exporter[159287]: ERROR   09:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:45:31 compute-0 openstack_network_exporter[159287]: ERROR   09:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:45:31 compute-0 openstack_network_exporter[159287]: ERROR   09:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:45:31 compute-0 openstack_network_exporter[159287]: ERROR   09:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
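The errors above mean the exporter cannot find any OVS/OVN control sockets to talk to, which is either a real fault or a path mismatch between the container and the host (ovn-northd, in particular, would not normally run on a compute node). A quick check of the directories the exporter container mounts, with paths taken from its volume list later in this log:

    # Look for OVS/OVN control sockets in the mounted runtime dirs.
    import glob

    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control sockets found")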
Oct  3 09:45:31 compute-0 python3.9[304043]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:32 compute-0 python3.9[304121]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:32 compute-0 python3.9[304273]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:33 compute-0 python3.9[304351]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:34 compute-0 python3.9[304503]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:34 compute-0 podman[304554]: 2025-10-03 09:45:34.49473459 +0000 UTC m=+0.081137162 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 09:45:34 compute-0 podman[304555]: 2025-10-03 09:45:34.517708419 +0000 UTC m=+0.102221471 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 09:45:34 compute-0 podman[304553]: 2025-10-03 09:45:34.517812193 +0000 UTC m=+0.107155820 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:45:34 compute-0 podman[304556]: 2025-10-03 09:45:34.518371301 +0000 UTC m=+0.098191431 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350)
Oct  3 09:45:34 compute-0 podman[304557]: 2025-10-03 09:45:34.519683612 +0000 UTC m=+0.094035296 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 09:45:34 compute-0 python3.9[304661]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:35 compute-0 podman[304800]: 2025-10-03 09:45:35.298562165 +0000 UTC m=+0.075427908 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:45:35 compute-0 podman[304801]: 2025-10-03 09:45:35.360427206 +0000 UTC m=+0.132946999 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:45:35 compute-0 python3.9[304857]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
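The command above verifies that files under /run/libvirt carry container_*_t SELinux types. The same check can be done without shelling out by reading the security.selinux extended attribute directly; this is a stdlib rendition of the ls -lRZ | grep pipeline, not the exact tooling the play uses:

    import os
    import re

    pattern = re.compile(r":container_\S+_t")

    for root, dirs, files in os.walk("/run/libvirt"):
        for name in dirs + files:
            path = os.path.join(root, name)
            try:
                label = os.getxattr(path, "security.selinux")
            except OSError:
                continue  # no label, or permission denied
            text = label.decode().rstrip("\x00")
            if pattern.search(text):
                print(path, text)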
Oct  3 09:45:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:36 compute-0 python3.9[305028]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
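ansible.posix.seboolean with persistent=True flips an SELinux boolean and writes it to the policy store so it survives reboots. The CLI equivalents are getsebool and setsebool -P; a small wrapper mirroring the task above:

    import subprocess

    def ensure_boolean_on(name: str) -> None:
        # getsebool prints e.g. "os_enable_vtpm --> on"
        state = subprocess.run(["getsebool", name],
                               capture_output=True, text=True,
                               check=True).stdout.strip()
        if not state.endswith(" on"):
            # -P makes the change persistent, matching persistent=True
            subprocess.run(["setsebool", "-P", name, "on"], check=True)

    ensure_boolean_on("os_enable_vtpm")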
Oct  3 09:45:38 compute-0 python3.9[305180]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.953 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.953 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.954 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.959 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.961 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.962 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:45:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:45:38.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
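The block above is one complete ceilometer compute polling cycle: for each pollster the agent runs the local_instances discovery, finds no instances on compute-0, and skips the pollster; the "Finished processing" lines then close out each pollster's task. A minimal Python paraphrase of that loop (illustrative names only, not ceilometer's actual internals):

    # Illustrative paraphrase of the cycle logged above: one discovery per
    # pollster, skip when discovery returns nothing.
    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            return []  # a real pollster queries libvirt stats per instance

    def discover_local_instances():
        return []  # no instances run on this host yet, hence every skip

    def run_polling_task(pollsters):
        for pollster in pollsters:
            resources = discover_local_instances()
            if not resources:
                print(f"Skip pollster {pollster.name}, no resources found this cycle")
                continue
            for sample in pollster.get_samples(resources):
                pass  # hand samples to the configured publishers

    run_polling_task([Pollster("power.state"), Pollster("memory.usage")])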
Oct  3 09:45:39 compute-0 python3.9[305332]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:39 compute-0 python3.9[305485]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:40 compute-0 python3.9[305637]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:40 compute-0 podman[305685]: 2025-10-03 09:45:40.811415197 +0000 UTC m=+0.069862179 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:45:41 compute-0 python3.9[305808]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:45:41.570 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:45:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:45:41.571 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:45:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:45:41.571 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
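These three ovn_metadata_agent lines are one uncontended pass through neutron's ProcessMonitor: the "_check_child_processes" lock is acquired after waiting 0.000s and released after being held 0.000s. oslo.concurrency emits exactly this acquire/acquired/released triplet around any callable wrapped with its synchronized decorator; a minimal sketch:

    from oslo_concurrency import lockutils

    # Each call to a function wrapped like this produces the DEBUG triplet
    # seen above (Acquiring / acquired :: waited / released :: held).
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # neutron respawns any monitored child process that died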
Oct  3 09:45:42 compute-0 python3.9[305960]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:42 compute-0 python3.9[306112]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:43 compute-0 python3.9[306264]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:44 compute-0 python3.9[306416]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:45 compute-0 python3.9[306568]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
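Taken together, the nine ansible-ansible.legacy.copy tasks between 09:45:39 and 09:45:45 fan the single issued bundle under /var/lib/openstack/certs/libvirt/default (tls.key, tls.crt, ca.crt) out to the locations libvirt and QEMU expect for TLS, with the owners, groups and modes logged above. The same mapping condensed into Python for reference (shutil/os only approximate the copy module, which additionally handles atomic writes and SELinux contexts):

    import grp
    import os
    import pwd
    import shutil

    SRC = "/var/lib/openstack/certs/libvirt/default"
    # (source file, destination, mode, group) exactly as logged above
    CERT_MAP = [
        ("tls.key", "/etc/pki/libvirt/private/serverkey.pem", 0o600, "root"),
        ("tls.crt", "/etc/pki/libvirt/clientcert.pem", 0o644, "root"),
        ("tls.key", "/etc/pki/libvirt/private/clientkey.pem", 0o644, "root"),
        ("ca.crt", "/etc/pki/CA/cacert.pem", 0o644, "root"),
        ("tls.crt", "/etc/pki/qemu/server-cert.pem", 0o640, "qemu"),
        ("tls.key", "/etc/pki/qemu/server-key.pem", 0o640, "qemu"),
        ("tls.crt", "/etc/pki/qemu/client-cert.pem", 0o640, "qemu"),
        ("tls.key", "/etc/pki/qemu/client-key.pem", 0o640, "qemu"),
        ("ca.crt", "/etc/pki/qemu/ca-cert.pem", 0o640, "qemu"),
    ]

    for name, dest, mode, group in CERT_MAP:
        shutil.copyfile(os.path.join(SRC, name), dest)
        os.chmod(dest, mode)
        os.chown(dest, pwd.getpwnam("root").pw_uid, grp.getgrnam(group).gr_gid)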
Oct  3 09:45:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:45:45
Oct  3 09:45:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:45:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:45:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'images', '.mgr']
Oct  3 09:45:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:45:46 compute-0 python3.9[306720]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:45:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:46 compute-0 python3.9[306872]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 09:45:47 compute-0 python3.9[307024]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
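In the command above, #012 is the syslog escaping of embedded newlines, so the _raw_params actually ran as a three-line shell snippet that prints the cluster name and then extracts the fsid from the ceph.conf located by the preceding find task (xargs only trims the whitespace around the value):

    set -o pipefail;
    echo ceph
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs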
Oct  3 09:45:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:48 compute-0 python3.9[307178]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 09:45:49 compute-0 python3.9[307328]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:50 compute-0 python3.9[307450]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484748.9295-1017-3928114099597/.source.xml follow=False _original_basename=secret.xml.j2 checksum=6870ecbd581f90a388add90a2feca1b9c9900054 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:51 compute-0 python3.9[307602]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 9b4e8c9a-5555-5510-a631-4742a1182561#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
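Decoding the #012 newlines again, the task at 09:45:51 ran two virsh commands:

    virsh secret-undefine 9b4e8c9a-5555-5510-a631-4742a1182561
    virsh secret-define --file /tmp/secret.xml

That is, it drops any stale libvirt secret for the Ceph client key and re-registers it from the secret.xml staged at 09:45:50 (rendered from the secret.xml.j2 template named in the copy task, and removed again at 09:45:52 below). Defining the secret is what triggers systemd to start the libvirt secret daemon on the next two lines; the staged XML is presumably the standard libvirt secret document carrying that UUID with a usage of type ceph.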
Oct  3 09:45:51 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  3 09:45:51 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:51.825385) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484751825432, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2035, "num_deletes": 251, "total_data_size": 3493575, "memory_usage": 3542640, "flush_reason": "Manual Compaction"}
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484751916003, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3428895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9780, "largest_seqno": 11814, "table_properties": {"data_size": 3419633, "index_size": 5884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17730, "raw_average_key_size": 19, "raw_value_size": 3401322, "raw_average_value_size": 3729, "num_data_blocks": 267, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759484514, "oldest_key_time": 1759484514, "file_creation_time": 1759484751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 90746 microseconds, and 14353 cpu microseconds.
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:51.916124) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3428895 bytes OK
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:51.916154) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:51.948931) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:51.948972) EVENT_LOG_v1 {"time_micros": 1759484751948963, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:51.948994) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3485103, prev total WAL file size 3485103, number of live WAL files 2.
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:51.950168) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3348KB)], [26(6318KB)]
Oct  3 09:45:51 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484751950195, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9898969, "oldest_snapshot_seqno": -1}
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3725 keys, 8137038 bytes, temperature: kUnknown
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484752104491, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8137038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8108041, "index_size": 18583, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89530, "raw_average_key_size": 24, "raw_value_size": 8036708, "raw_average_value_size": 2157, "num_data_blocks": 804, "num_entries": 3725, "num_filter_entries": 3725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759484751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:52.104711) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8137038 bytes
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:52.108132) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 64.1 rd, 52.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 4239, records dropped: 514 output_compression: NoCompression
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:52.108166) EVENT_LOG_v1 {"time_micros": 1759484752108149, "job": 10, "event": "compaction_finished", "compaction_time_micros": 154375, "compaction_time_cpu_micros": 25841, "output_level": 6, "num_output_files": 1, "total_output_size": 8137038, "num_input_records": 4239, "num_output_records": 3725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484752111665, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484752114829, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:51.950073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:52.115477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:52.115483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:52.115484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:52.115486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:45:52 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:45:52.115488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
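The rocksdb block from 09:45:51.825 to 09:45:52.115 is one manual flush-plus-compaction on the mon store: job 9 flushes a 2035-entry memtable to the 3.3 MB level-0 table #28, and job 10 merges it with the existing 6.2 MB L6 table #26 into the single 7.8 MB table #29, dropping 514 of 4239 records. The amplification and rate figures in the job 10 summary follow directly from those sizes:

    write-amplify      = output / L0 input           = 7.8 / 3.3               = 2.4
    read-write-amplify = (read + written) / L0 input = (3.3 + 6.2 + 7.8) / 3.3 = 5.3
    rd rate = 9,898,969 B / 154,375 us = 64.1 MB/s
    wr rate = 8,137,038 B / 154,375 us = 52.7 MB/s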
Oct  3 09:45:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:52 compute-0 python3.9[307781]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:45:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
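Every pg_autoscaler pair above applies the same rule: the pool's share of raw capacity (the repeated 64411926528 B, i.e. the 60 GiB cluster) times the pool's bias times the cluster PG budget, quantized to a power of two. The logged targets reproduce exactly with a budget of 300, consistent with the default mon_target_pg_per_osd of 100 across 3 OSDs (an assumption; the OSD count is not shown in this excerpt):

    .mgr:               7.185749983720779e-06  x 1.0 x 300 = 0.0021557249951162337
    cephfs.cephfs.meta: 5.087256625643029e-07  x 4.0 x 300 = 0.0006104707950771635
    .rgw.root:          2.5436283128215145e-07 x 1.0 x 300 = 7.630884938464544e-05

A quantized target only leads to an actual pg_num change when it diverges from the current value by more than the autoscaler's threshold (3x by default), which is why cephfs.cephfs.meta, quantized to 16 against a current 32, stays put.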
Oct  3 09:45:55 compute-0 python3.9[308244]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:45:55 compute-0 python3.9[308396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:56 compute-0 python3.9[308474]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/libvirt.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/libvirt.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:57 compute-0 python3.9[308626]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:58 compute-0 python3.9[308778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:45:58 compute-0 python3.9[308856]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:45:59 compute-0 python3.9[309008]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:45:59 compute-0 podman[157165]: time="2025-10-03T09:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:45:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Oct  3 09:45:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7255 "" "Go-http-client/1.1"
Oct  3 09:45:59 compute-0 python3.9[309086]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ju5wvhn9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:00 compute-0 python3.9[309238]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:01 compute-0 python3.9[309316]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:01 compute-0 openstack_network_exporter[159287]: ERROR   09:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:46:01 compute-0 openstack_network_exporter[159287]: ERROR   09:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:46:01 compute-0 openstack_network_exporter[159287]: ERROR   09:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:46:01 compute-0 openstack_network_exporter[159287]: ERROR   09:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:46:01 compute-0 openstack_network_exporter[159287]: ERROR   09:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:46:02 compute-0 python3.9[309468]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:46:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:03 compute-0 python3[309621]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
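The nft -j invocation at 09:46:02 snapshots the live ruleset as JSON, presumably so the role can check the existing tables and chains before regenerating its rule files; the edpm_nftables_from_files module at 09:46:03 then assembles rules from the YAML fragments under /var/lib/edpm-config/firewall before the edpm-jumps.nft stat on the next line. A minimal Python sketch of consuming that JSON (entry shapes follow nftables' documented JSON schema):

    import json
    import subprocess

    # Same snapshot the task at 09:46:02 takes: the whole ruleset as JSON.
    out = subprocess.run(
        ["nft", "-j", "list", "ruleset"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Top-level shape: {"nftables": [{"table": ...}, {"chain": ...}, ...]}
    entries = json.loads(out)["nftables"]
    tables = [e["table"]["name"] for e in entries if "table" in e]
    print("tables:", tables)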
Oct  3 09:46:04 compute-0 python3.9[309773]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:04 compute-0 podman[309855]: 2025-10-03 09:46:04.638206661 +0000 UTC m=+0.081695830 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 09:46:04 compute-0 podman[309853]: 2025-10-03 09:46:04.641687953 +0000 UTC m=+0.088632564 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Oct  3 09:46:04 compute-0 podman[309862]: 2025-10-03 09:46:04.644915867 +0000 UTC m=+0.087686573 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:46:04 compute-0 podman[309851]: 2025-10-03 09:46:04.652978256 +0000 UTC m=+0.113278536 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:46:04 compute-0 podman[309852]: 2025-10-03 09:46:04.660959553 +0000 UTC m=+0.118319609 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Oct  3 09:46:04 compute-0 python3.9[309854]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:05 compute-0 podman[310070]: 2025-10-03 09:46:05.428509461 +0000 UTC m=+0.059393532 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:46:05 compute-0 podman[310122]: 2025-10-03 09:46:05.581927088 +0000 UTC m=+0.118298018 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  3 09:46:05 compute-0 python3.9[310123]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:06 compute-0 python3.9[310225]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:07 compute-0 python3.9[310377]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:07 compute-0 python3.9[310455]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:08 compute-0 python3.9[310607]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:08 compute-0 python3.9[310685]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:09 compute-0 python3.9[310837]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:10 compute-0 python3.9[310915]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:11 compute-0 podman[311039]: 2025-10-03 09:46:11.024863589 +0000 UTC m=+0.121059466 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  3 09:46:11 compute-0 python3.9[311086]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:46:12 compute-0 python3.9[311241]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:13 compute-0 python3.9[311393]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:46:13 compute-0 python3.9[311546]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:46:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:14 compute-0 python3.9[311698]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:15 compute-0 podman[312016]: 2025-10-03 09:46:15.58395754 +0000 UTC m=+0.099288995 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:46:15 compute-0 python3.9[312015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:15 compute-0 podman[312016]: 2025-10-03 09:46:15.676493338 +0000 UTC m=+0.191824763 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:46:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:46:16 compute-0 python3.9[312134]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt.target _original_basename=edpm_libvirt.target recurse=False state=file path=/etc/systemd/system/edpm_libvirt.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:46:16 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:46:16 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:17 compute-0 python3.9[312446]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:46:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:46:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:46:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:46:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:46:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 277ef91a-67fd-40bc-8ddb-5404bffd82e7 does not exist
Oct  3 09:46:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a09f52db-3dca-49a2-bce3-66e44915bd36 does not exist
Oct  3 09:46:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 075fb36f-09ba-468c-87c7-c6c7e1078f50 does not exist
Oct  3 09:46:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:46:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:46:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:46:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:46:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:46:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:46:17 compute-0 python3.9[312602]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt_guests.service _original_basename=edpm_libvirt_guests.service recurse=False state=file path=/etc/systemd/system/edpm_libvirt_guests.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:46:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:46:18 compute-0 podman[312869]: 2025-10-03 09:46:18.16894076 +0000 UTC m=+0.048010376 container create fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sinoussi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:46:18 compute-0 systemd[1]: Started libpod-conmon-fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033.scope.
Oct  3 09:46:18 compute-0 podman[312869]: 2025-10-03 09:46:18.147745897 +0000 UTC m=+0.026815563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:46:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:46:18 compute-0 podman[312869]: 2025-10-03 09:46:18.266670055 +0000 UTC m=+0.145739701 container init fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sinoussi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:46:18 compute-0 podman[312869]: 2025-10-03 09:46:18.274542098 +0000 UTC m=+0.153611714 container start fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sinoussi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:46:18 compute-0 podman[312869]: 2025-10-03 09:46:18.27831995 +0000 UTC m=+0.157389566 container attach fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sinoussi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:46:18 compute-0 sweet_sinoussi[312909]: 167 167
Oct  3 09:46:18 compute-0 systemd[1]: libpod-fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033.scope: Deactivated successfully.
Oct  3 09:46:18 compute-0 podman[312869]: 2025-10-03 09:46:18.281157541 +0000 UTC m=+0.160227177 container died fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 09:46:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7afa26d9631bad4e9b1d33b0c89d7cf9d2c55903b0dc643806426f23405fa6d7-merged.mount: Deactivated successfully.
Oct  3 09:46:18 compute-0 podman[312869]: 2025-10-03 09:46:18.350292015 +0000 UTC m=+0.229361631 container remove fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 09:46:18 compute-0 systemd[1]: libpod-conmon-fa4c4b35add0dbd57b40d6d7acb71244487ca50f69bf3ea9f18ec281d8f47033.scope: Deactivated successfully.
Oct  3 09:46:18 compute-0 python3.9[312908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:18 compute-0 podman[312936]: 2025-10-03 09:46:18.569570961 +0000 UTC m=+0.082142074 container create b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 09:46:18 compute-0 podman[312936]: 2025-10-03 09:46:18.531187786 +0000 UTC m=+0.043758909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:46:18 compute-0 systemd[1]: Started libpod-conmon-b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6.scope.
Oct  3 09:46:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ee7f9293dcd6dd3553a0ffc01aef42ce365b6ee3cdaf0104af1c3bcdb914a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ee7f9293dcd6dd3553a0ffc01aef42ce365b6ee3cdaf0104af1c3bcdb914a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ee7f9293dcd6dd3553a0ffc01aef42ce365b6ee3cdaf0104af1c3bcdb914a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ee7f9293dcd6dd3553a0ffc01aef42ce365b6ee3cdaf0104af1c3bcdb914a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ee7f9293dcd6dd3553a0ffc01aef42ce365b6ee3cdaf0104af1c3bcdb914a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:18 compute-0 podman[312936]: 2025-10-03 09:46:18.723417502 +0000 UTC m=+0.235988605 container init b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 09:46:18 compute-0 podman[312936]: 2025-10-03 09:46:18.73703299 +0000 UTC m=+0.249604073 container start b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:46:18 compute-0 podman[312936]: 2025-10-03 09:46:18.742132654 +0000 UTC m=+0.254703757 container attach b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 09:46:19 compute-0 python3.9[313032]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/virt-guest-shutdown.target _original_basename=virt-guest-shutdown.target recurse=False state=file path=/etc/systemd/system/virt-guest-shutdown.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:19 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Oct  3 09:46:19 compute-0 systemd[1]: session-56.scope: Consumed 2min 13.376s CPU time.
Oct  3 09:46:19 compute-0 systemd-logind[798]: Session 56 logged out. Waiting for processes to exit.
Oct  3 09:46:19 compute-0 systemd-logind[798]: Removed session 56.
Oct  3 09:46:19 compute-0 focused_matsumoto[312996]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:46:19 compute-0 focused_matsumoto[312996]: --> relative data size: 1.0
Oct  3 09:46:19 compute-0 focused_matsumoto[312996]: --> All data devices are unavailable
Oct  3 09:46:19 compute-0 systemd[1]: libpod-b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6.scope: Deactivated successfully.
Oct  3 09:46:19 compute-0 systemd[1]: libpod-b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6.scope: Consumed 1.071s CPU time.
Oct  3 09:46:19 compute-0 podman[312936]: 2025-10-03 09:46:19.86794654 +0000 UTC m=+1.380517643 container died b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8ee7f9293dcd6dd3553a0ffc01aef42ce365b6ee3cdaf0104af1c3bcdb914a0-merged.mount: Deactivated successfully.
Oct  3 09:46:19 compute-0 podman[312936]: 2025-10-03 09:46:19.94437289 +0000 UTC m=+1.456943993 container remove b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct  3 09:46:19 compute-0 systemd[1]: libpod-conmon-b4b7ebb42abd0d14ad2044b4b56a9d2ef11f7cc25f0b2c313fe1bba98bed62c6.scope: Deactivated successfully.
Oct  3 09:46:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:20 compute-0 podman[313234]: 2025-10-03 09:46:20.721039061 +0000 UTC m=+0.059193666 container create a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:46:20 compute-0 systemd[1]: Started libpod-conmon-a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135.scope.
Oct  3 09:46:20 compute-0 podman[313234]: 2025-10-03 09:46:20.698666191 +0000 UTC m=+0.036820826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:46:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:46:20 compute-0 podman[313234]: 2025-10-03 09:46:20.857085809 +0000 UTC m=+0.195240444 container init a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rubin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:46:20 compute-0 podman[313234]: 2025-10-03 09:46:20.870738068 +0000 UTC m=+0.208892683 container start a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 09:46:20 compute-0 podman[313234]: 2025-10-03 09:46:20.876031468 +0000 UTC m=+0.214186103 container attach a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:46:20 compute-0 adoring_rubin[313251]: 167 167
Oct  3 09:46:20 compute-0 systemd[1]: libpod-a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135.scope: Deactivated successfully.
Oct  3 09:46:20 compute-0 podman[313234]: 2025-10-03 09:46:20.88044928 +0000 UTC m=+0.218603895 container died a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b7039faa031af2083168fa5b701d0be10448666c225d8a1b9066883ae63c8c1-merged.mount: Deactivated successfully.
Oct  3 09:46:20 compute-0 podman[313234]: 2025-10-03 09:46:20.931497213 +0000 UTC m=+0.269651828 container remove a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:46:20 compute-0 systemd[1]: libpod-conmon-a9e92d5d92c23ffdc293f37fbdd6525421a8b18525697d42d357f1e5f625c135.scope: Deactivated successfully.
Oct  3 09:46:21 compute-0 podman[313273]: 2025-10-03 09:46:21.137175542 +0000 UTC m=+0.055449787 container create 3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:46:21 compute-0 systemd[1]: Started libpod-conmon-3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384.scope.
Oct  3 09:46:21 compute-0 podman[313273]: 2025-10-03 09:46:21.119894935 +0000 UTC m=+0.038169200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:46:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745199b5048feedda5d32080f2d9628fbd5b898a375d1052570a0e41c895f1fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745199b5048feedda5d32080f2d9628fbd5b898a375d1052570a0e41c895f1fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745199b5048feedda5d32080f2d9628fbd5b898a375d1052570a0e41c895f1fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745199b5048feedda5d32080f2d9628fbd5b898a375d1052570a0e41c895f1fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:21 compute-0 podman[313273]: 2025-10-03 09:46:21.245339812 +0000 UTC m=+0.163614107 container init 3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:46:21 compute-0 podman[313273]: 2025-10-03 09:46:21.258918598 +0000 UTC m=+0.177192843 container start 3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:46:21 compute-0 podman[313273]: 2025-10-03 09:46:21.263542027 +0000 UTC m=+0.181816302 container attach 3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_almeida, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]: {
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:    "0": [
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:        {
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "devices": [
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "/dev/loop3"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            ],
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_name": "ceph_lv0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_size": "21470642176",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "name": "ceph_lv0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "tags": {
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cluster_name": "ceph",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.crush_device_class": "",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.encrypted": "0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osd_id": "0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.type": "block",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.vdo": "0"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            },
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "type": "block",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "vg_name": "ceph_vg0"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:        }
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:    ],
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:    "1": [
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:        {
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "devices": [
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "/dev/loop4"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            ],
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_name": "ceph_lv1",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_size": "21470642176",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "name": "ceph_lv1",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "tags": {
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cluster_name": "ceph",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.crush_device_class": "",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.encrypted": "0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osd_id": "1",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.type": "block",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.vdo": "0"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            },
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "type": "block",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "vg_name": "ceph_vg1"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:        }
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:    ],
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:    "2": [
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:        {
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "devices": [
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "/dev/loop5"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            ],
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_name": "ceph_lv2",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_size": "21470642176",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "name": "ceph_lv2",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "tags": {
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.cluster_name": "ceph",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.crush_device_class": "",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.encrypted": "0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osd_id": "2",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.type": "block",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:                "ceph.vdo": "0"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            },
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "type": "block",
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:            "vg_name": "ceph_vg2"
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:        }
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]:    ]
Oct  3 09:46:22 compute-0 vigorous_almeida[313289]: }
Oct  3 09:46:22 compute-0 systemd[1]: libpod-3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384.scope: Deactivated successfully.
Oct  3 09:46:22 compute-0 podman[313273]: 2025-10-03 09:46:22.091086006 +0000 UTC m=+1.009360251 container died 3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_almeida, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-745199b5048feedda5d32080f2d9628fbd5b898a375d1052570a0e41c895f1fa-merged.mount: Deactivated successfully.
Oct  3 09:46:22 compute-0 podman[313273]: 2025-10-03 09:46:22.155706275 +0000 UTC m=+1.073980520 container remove 3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:46:22 compute-0 systemd[1]: libpod-conmon-3153a4268e199deb7e2a15392f39b78ce4e224b4d93333690bae209152775384.scope: Deactivated successfully.
Oct  3 09:46:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:23 compute-0 podman[313449]: 2025-10-03 09:46:23.040146364 +0000 UTC m=+0.068570607 container create a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:46:23 compute-0 podman[313449]: 2025-10-03 09:46:23.000512819 +0000 UTC m=+0.028937082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:46:23 compute-0 systemd[1]: Started libpod-conmon-a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331.scope.
Oct  3 09:46:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:46:23 compute-0 podman[313449]: 2025-10-03 09:46:23.210653391 +0000 UTC m=+0.239077714 container init a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:46:23 compute-0 podman[313449]: 2025-10-03 09:46:23.222841404 +0000 UTC m=+0.251265677 container start a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:46:23 compute-0 vibrant_booth[313465]: 167 167
Oct  3 09:46:23 compute-0 systemd[1]: libpod-a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331.scope: Deactivated successfully.
Oct  3 09:46:23 compute-0 podman[313449]: 2025-10-03 09:46:23.24356254 +0000 UTC m=+0.271986893 container attach a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:46:23 compute-0 podman[313449]: 2025-10-03 09:46:23.244074027 +0000 UTC m=+0.272498340 container died a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 09:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-45247afd88cc72f9d7c771dc9c53acf48062c98e1510d0d998284fa44aaf8d85-merged.mount: Deactivated successfully.
Oct  3 09:46:23 compute-0 podman[313449]: 2025-10-03 09:46:23.307741315 +0000 UTC m=+0.336165558 container remove a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:46:23 compute-0 systemd[1]: libpod-conmon-a1292ee32be750f5f634043b1e24fd94ceff0adf542ee5fbbb3c51c3f7bf9331.scope: Deactivated successfully.
Oct  3 09:46:23 compute-0 podman[313488]: 2025-10-03 09:46:23.506557663 +0000 UTC m=+0.053901795 container create 5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:46:23 compute-0 systemd[1]: Started libpod-conmon-5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086.scope.
Oct  3 09:46:23 compute-0 podman[313488]: 2025-10-03 09:46:23.486167617 +0000 UTC m=+0.033511829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:46:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2c29f74fd3ce809bfcbb03e8344560c197cb463fd80e029793d7223a5bb2f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2c29f74fd3ce809bfcbb03e8344560c197cb463fd80e029793d7223a5bb2f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2c29f74fd3ce809bfcbb03e8344560c197cb463fd80e029793d7223a5bb2f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2c29f74fd3ce809bfcbb03e8344560c197cb463fd80e029793d7223a5bb2f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:46:23 compute-0 podman[313488]: 2025-10-03 09:46:23.614488076 +0000 UTC m=+0.161832238 container init 5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 09:46:23 compute-0 podman[313488]: 2025-10-03 09:46:23.630427709 +0000 UTC m=+0.177771841 container start 5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 09:46:23 compute-0 podman[313488]: 2025-10-03 09:46:23.635025806 +0000 UTC m=+0.182369958 container attach 5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 09:46:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]: {
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "osd_id": 1,
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "type": "bluestore"
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:    },
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "osd_id": 2,
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "type": "bluestore"
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:    },
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "osd_id": 0,
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:        "type": "bluestore"
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]:    }
Oct  3 09:46:24 compute-0 thirsty_almeida[313503]: }
Oct  3 09:46:24 compute-0 systemd[1]: libpod-5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086.scope: Deactivated successfully.
Oct  3 09:46:24 compute-0 podman[313488]: 2025-10-03 09:46:24.713318273 +0000 UTC m=+1.260662425 container died 5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 09:46:24 compute-0 systemd[1]: libpod-5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086.scope: Consumed 1.077s CPU time.
Oct  3 09:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d2c29f74fd3ce809bfcbb03e8344560c197cb463fd80e029793d7223a5bb2f2-merged.mount: Deactivated successfully.
Oct  3 09:46:24 compute-0 podman[313488]: 2025-10-03 09:46:24.782109358 +0000 UTC m=+1.329453480 container remove 5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:46:24 compute-0 systemd[1]: libpod-conmon-5067c8a3a320e6fffc67308d14ecc5441d648846f85ff0a193403b2b6b7a1086.scope: Deactivated successfully.
Oct  3 09:46:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:46:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:46:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1fb3d6de-72ef-419c-af62-d1b34d6f66f1 does not exist
Oct  3 09:46:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4f6723bf-b37a-4c54-b47f-5a14e1a32aa0 does not exist
Oct  3 09:46:24 compute-0 systemd-logind[798]: New session 57 of user zuul.
Oct  3 09:46:24 compute-0 systemd[1]: Started Session 57 of User zuul.
Oct  3 09:46:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:46:26 compute-0 python3.9[313750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:46:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:27 compute-0 python3.9[313906]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:46:28 compute-0 python3.9[314058]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:46:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:28 compute-0 python3.9[314210]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:46:29 compute-0 python3.9[314362]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  3 09:46:29 compute-0 podman[157165]: time="2025-10-03T09:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:46:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Oct  3 09:46:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7253 "" "Go-http-client/1.1"
Oct  3 09:46:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:30 compute-0 python3.9[314514]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:46:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:31 compute-0 openstack_network_exporter[159287]: ERROR   09:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:46:31 compute-0 openstack_network_exporter[159287]: ERROR   09:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:46:31 compute-0 openstack_network_exporter[159287]: ERROR   09:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:46:31 compute-0 openstack_network_exporter[159287]: ERROR   09:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:46:31 compute-0 openstack_network_exporter[159287]: ERROR   09:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:46:31 compute-0 python3.9[314666]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:46:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:32 compute-0 python3.9[314820]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:46:32 compute-0 systemd[1]: Reloading.
Oct  3 09:46:33 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:46:33 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:46:34 compute-0 python3.9[315009]: ansible-ansible.builtin.service_facts Invoked
Oct  3 09:46:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:34 compute-0 network[315026]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 09:46:34 compute-0 network[315027]: 'network-scripts' will be removed from distribution in near future.
Oct  3 09:46:34 compute-0 network[315028]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 09:46:35 compute-0 podman[315042]: 2025-10-03 09:46:35.391929888 +0000 UTC m=+0.078297460 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true)
Oct  3 09:46:35 compute-0 podman[315038]: 2025-10-03 09:46:35.399656086 +0000 UTC m=+0.091167794 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 09:46:35 compute-0 podman[315037]: 2025-10-03 09:46:35.404766661 +0000 UTC m=+0.098464700 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm)
Oct  3 09:46:35 compute-0 podman[315034]: 2025-10-03 09:46:35.406093694 +0000 UTC m=+0.112060497 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, version=9.4, container_name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Oct  3 09:46:35 compute-0 podman[315036]: 2025-10-03 09:46:35.422534593 +0000 UTC m=+0.124900080 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:46:35 compute-0 podman[315140]: 2025-10-03 09:46:35.592550883 +0000 UTC m=+0.101656881 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 09:46:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:35 compute-0 podman[315170]: 2025-10-03 09:46:35.72514265 +0000 UTC m=+0.103850333 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  3 09:46:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:39 compute-0 python3.9[315448]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:46:39 compute-0 systemd[1]: Reloading.
Oct  3 09:46:39 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:46:39 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:46:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:40 compute-0 python3.9[315634]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:46:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:41 compute-0 podman[315758]: 2025-10-03 09:46:41.4698042 +0000 UTC m=+0.093753078 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  3 09:46:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:46:41.571 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:46:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:46:41.571 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:46:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:46:41.571 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:46:41 compute-0 python3.9[315802]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  3 09:46:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:42 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 09:46:43 compute-0 podman[315814]: 2025-10-03 09:46:43.717977412 +0000 UTC m=+1.958491532 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  3 09:46:43 compute-0 podman[315870]: 2025-10-03 09:46:43.917842422 +0000 UTC m=+0.064315770 container create 399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:46:43 compute-0 NetworkManager[45015]: <info>  [1759484803.9760] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct  3 09:46:43 compute-0 podman[315870]: 2025-10-03 09:46:43.891812515 +0000 UTC m=+0.038285863 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  3 09:46:43 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct  3 09:46:43 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct  3 09:46:43 compute-0 kernel: veth0: entered allmulticast mode
Oct  3 09:46:43 compute-0 kernel: veth0: entered promiscuous mode
Oct  3 09:46:44 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct  3 09:46:44 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0025] device (veth0): carrier: link connected
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0029] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0033] device (podman0): carrier: link connected
Oct  3 09:46:44 compute-0 systemd-udevd[315897]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 09:46:44 compute-0 systemd-udevd[315894]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0333] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0343] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0354] device (podman0): Activation: starting connection 'podman0' (d456e272-40ad-43a4-bb59-6852eb739898)
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0356] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0360] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0366] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0369] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  3 09:46:44 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  3 09:46:44 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0688] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0691] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.0698] device (podman0): Activation: successful, device activated.
Oct  3 09:46:44 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  3 09:46:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:44 compute-0 systemd[1]: Started libpod-conmon-399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003.scope.
Oct  3 09:46:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:46:44 compute-0 podman[315870]: 2025-10-03 09:46:44.407120657 +0000 UTC m=+0.553594075 container init 399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:46:44 compute-0 podman[315870]: 2025-10-03 09:46:44.425161047 +0000 UTC m=+0.571634385 container start 399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 09:46:44 compute-0 podman[315870]: 2025-10-03 09:46:44.429766986 +0000 UTC m=+0.576240374 container attach 399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 09:46:44 compute-0 iscsid_config[316027]: iqn.1994-05.com.redhat:41d070fd3edf
Oct  3 09:46:44 compute-0 systemd[1]: libpod-399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003.scope: Deactivated successfully.
Oct  3 09:46:44 compute-0 conmon[316027]: conmon 399d8c42cdea3e250abe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003.scope/container/memory.events
Oct  3 09:46:44 compute-0 podman[315870]: 2025-10-03 09:46:44.43397001 +0000 UTC m=+0.580443358 container died 399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  3 09:46:44 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct  3 09:46:44 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct  3 09:46:44 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct  3 09:46:44 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct  3 09:46:44 compute-0 NetworkManager[45015]: <info>  [1759484804.5265] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 09:46:44 compute-0 systemd[1]: run-netns-netns\x2de6843bf4\x2d908b\x2d1d04\x2d4e45\x2d31a83df40979.mount: Deactivated successfully.
Oct  3 09:46:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003-userdata-shm.mount: Deactivated successfully.
Oct  3 09:46:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ef1986beadd14d7452c6a12d55d552e8f6219e19c6dfa8a8428d82c243faf1-merged.mount: Deactivated successfully.
Oct  3 09:46:44 compute-0 podman[315870]: 2025-10-03 09:46:44.928515173 +0000 UTC m=+1.074988511 container remove 399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:46:44 compute-0 systemd[1]: libpod-conmon-399d8c42cdea3e250abe913ba009792c06553121ef1f803e4a7fc28892d35003.scope: Deactivated successfully.
Oct  3 09:46:44 compute-0 python3.9[315802]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct  3 09:46:45 compute-0 python3.9[315802]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: DEPRECATED command: It is recommended to use Quadlets for running containers and pods under systemd. Please refer to podman-systemd.unit(5) for details. Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  3 09:46:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:45 compute-0 python3.9[316263]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:46:45
Oct  3 09:46:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:46:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:46:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Oct  3 09:46:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:46:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:46 compute-0 python3.9[316386]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484805.2659895-119-119604309583938/.source.iscsi _original_basename=.pk4dyjsk follow=False checksum=4180b55a0648d379ab892bb8f6ff5e103e3c2a1d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:47 compute-0 python3.9[316538]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:48 compute-0 python3.9[316688]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:46:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:49 compute-0 python3.9[316842]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:50 compute-0 python3.9[316995]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:46:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:50 compute-0 python3.9[317147]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:51 compute-0 python3.9[317225]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:46:52 compute-0 python3.9[317377]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:52 compute-0 python3.9[317455]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:46:53 compute-0 python3.9[317607]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:54 compute-0 python3.9[317759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:54 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  3 09:46:54 compute-0 python3.9[317837]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:46:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 09:46:55 compute-0 python3.9[317989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:46:56 compute-0 python3.9[318067]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:56 compute-0 python3.9[318219]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:46:56 compute-0 systemd[1]: Reloading.
Oct  3 09:46:57 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:46:57 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:46:58 compute-0 python3.9[318408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:46:58 compute-0 python3.9[318486]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:46:59 compute-0 python3.9[318638]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:46:59 compute-0 podman[157165]: time="2025-10-03T09:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:46:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35731 "" "Go-http-client/1.1"
Oct  3 09:46:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7254 "" "Go-http-client/1.1"
Oct  3 09:46:59 compute-0 python3.9[318716]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:00 compute-0 python3.9[318868]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:47:00 compute-0 systemd[1]: Reloading.
Oct  3 09:47:00 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:47:00 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:47:01 compute-0 systemd[1]: Starting Create netns directory...
Oct  3 09:47:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  3 09:47:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  3 09:47:01 compute-0 systemd[1]: Finished Create netns directory.
Oct  3 09:47:01 compute-0 openstack_network_exporter[159287]: ERROR   09:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:47:01 compute-0 openstack_network_exporter[159287]: ERROR   09:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:47:01 compute-0 openstack_network_exporter[159287]: ERROR   09:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:47:01 compute-0 openstack_network_exporter[159287]: ERROR   09:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:47:01 compute-0 openstack_network_exporter[159287]: ERROR   09:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:47:02 compute-0 python3.9[319061]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:02 compute-0 python3.9[319213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:03 compute-0 python3.9[319336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484822.350821-273-24784385872415/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:04 compute-0 python3.9[319488]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:05 compute-0 python3.9[319640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:05 compute-0 podman[319736]: 2025-10-03 09:47:05.683851415 +0000 UTC m=+0.076796373 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:47:05 compute-0 podman[319735]: 2025-10-03 09:47:05.693780114 +0000 UTC m=+0.086722912 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, version=9.4, architecture=x86_64)
Oct  3 09:47:05 compute-0 podman[319737]: 2025-10-03 09:47:05.704789138 +0000 UTC m=+0.093657624 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 09:47:05 compute-0 podman[319739]: 2025-10-03 09:47:05.719988668 +0000 UTC m=+0.105470506 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 09:47:05 compute-0 podman[319738]: 2025-10-03 09:47:05.723795639 +0000 UTC m=+0.105742523 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41)
Oct  3 09:47:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:05 compute-0 podman[319852]: 2025-10-03 09:47:05.78129594 +0000 UTC m=+0.061391926 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:47:05 compute-0 podman[319868]: 2025-10-03 09:47:05.859427574 +0000 UTC m=+0.096855627 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:47:05 compute-0 python3.9[319858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484824.6196804-298-71651950277228/.source.json _original_basename=.8ol0y4nn follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:06 compute-0 python3.9[320058]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:09 compute-0 python3.9[320485]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  3 09:47:10 compute-0 python3.9[320637]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:47:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:11 compute-0 python3.9[320789]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  3 09:47:11 compute-0 podman[320839]: 2025-10-03 09:47:11.816042785 +0000 UTC m=+0.076336267 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  3 09:47:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:12 compute-0 python3[320985]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:47:13 compute-0 podman[321019]: 2025-10-03 09:47:13.204032457 +0000 UTC m=+0.085006766 container create ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  3 09:47:13 compute-0 podman[321019]: 2025-10-03 09:47:13.15594207 +0000 UTC m=+0.036916419 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  3 09:47:13 compute-0 python3[320985]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  3 09:47:14 compute-0 python3.9[321207]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:47:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:15 compute-0 python3.9[321361]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:15 compute-0 python3.9[321437]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:47:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:47:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:16 compute-0 python3.9[321588]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759484835.6913517-386-8651521078206/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:17 compute-0 python3.9[321664]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:47:17 compute-0 systemd[1]: Reloading.
Oct  3 09:47:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:47:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:47:18 compute-0 python3.9[321776]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:47:18 compute-0 systemd[1]: Reloading.
Oct  3 09:47:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:18 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:47:18 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
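The three tasks logged above (unit file copied to /etc/systemd/system/edpm_iscsid.service, daemon_reload=True, then state=restarted with enabled=True) are the usual sequence for installing a systemd-managed container unit. A condensed sketch of that sequence, assuming root and an already-rendered unit file at a hypothetical source path:

    # Sketch of the install -> reload -> enable -> restart sequence above.
    import shutil, subprocess

    def deploy_unit(src: str, name: str) -> None:
        shutil.copy(src, f"/etc/systemd/system/{name}")     # install unit
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", name], check=True)
        subprocess.run(["systemctl", "restart", name], check=True)

    # deploy_unit("/tmp/edpm_iscsid.service", "edpm_iscsid.service")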
Oct  3 09:47:18 compute-0 systemd[1]: Starting iscsid container...
Oct  3 09:47:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946bfd2b6e52fd742e742972f814141ed26f99d9e5c0473bdce6478eba82d51c/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946bfd2b6e52fd742e742972f814141ed26f99d9e5c0473bdce6478eba82d51c/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/946bfd2b6e52fd742e742972f814141ed26f99d9e5c0473bdce6478eba82d51c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0.
Oct  3 09:47:18 compute-0 podman[321815]: 2025-10-03 09:47:18.881044422 +0000 UTC m=+0.132719672 container init ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 09:47:18 compute-0 iscsid[321830]: + sudo -E kolla_set_configs
Oct  3 09:47:18 compute-0 podman[321815]: 2025-10-03 09:47:18.915321734 +0000 UTC m=+0.166996994 container start ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 09:47:18 compute-0 podman[321815]: iscsid
Oct  3 09:47:18 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct  3 09:47:18 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  3 09:47:18 compute-0 systemd[1]: Started iscsid container.
Oct  3 09:47:18 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  3 09:47:18 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct  3 09:47:19 compute-0 podman[321837]: 2025-10-03 09:47:19.032312319 +0000 UTC m=+0.105790576 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  3 09:47:19 compute-0 systemd[1]: ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0-685618ab2c5154d2.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 09:47:19 compute-0 systemd[1]: ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0-685618ab2c5154d2.service: Failed with result 'exit-code'.
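The failing transient unit ed858ff0b32f...-685618ab2c5154d2.service is podman's per-run healthcheck scope: the first probe fires while the container is still coming up (health_status=starting, health_failing_streak=1), so one non-zero exit immediately after start is expected rather than fatal. A sketch that polls health until the container leaves the starting state, using podman inspect's Go-template output; the retry budget is an assumption:

    # Sketch: wait for a podman container's healthcheck to settle.
    import json, subprocess, time

    def wait_healthy(name: str, attempts: int = 10, delay: float = 2.0) -> str:
        for _ in range(attempts):
            out = subprocess.run(
                ["podman", "inspect", "--format", "{{json .State.Health}}", name],
                capture_output=True, text=True, check=True).stdout
            status = (json.loads(out) or {}).get("Status", "unknown")
            if status not in ("starting", ""):
                return status              # "healthy" or "unhealthy"
            time.sleep(delay)
        return "starting"                  # never settled within the budget

    # print(wait_healthy("iscsid"))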
Oct  3 09:47:19 compute-0 systemd[321850]: Queued start job for default target Main User Target.
Oct  3 09:47:19 compute-0 systemd[321850]: Created slice User Application Slice.
Oct  3 09:47:19 compute-0 systemd[321850]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  3 09:47:19 compute-0 systemd[321850]: Started Daily Cleanup of User's Temporary Directories.
Oct  3 09:47:19 compute-0 systemd[321850]: Reached target Paths.
Oct  3 09:47:19 compute-0 systemd[321850]: Reached target Timers.
Oct  3 09:47:19 compute-0 systemd[321850]: Starting D-Bus User Message Bus Socket...
Oct  3 09:47:19 compute-0 systemd[321850]: Starting Create User's Volatile Files and Directories...
Oct  3 09:47:19 compute-0 systemd[321850]: Listening on D-Bus User Message Bus Socket.
Oct  3 09:47:19 compute-0 systemd[321850]: Reached target Sockets.
Oct  3 09:47:19 compute-0 systemd[321850]: Finished Create User's Volatile Files and Directories.
Oct  3 09:47:19 compute-0 systemd[321850]: Reached target Basic System.
Oct  3 09:47:19 compute-0 systemd[321850]: Reached target Main User Target.
Oct  3 09:47:19 compute-0 systemd[321850]: Startup finished in 171ms.
Oct  3 09:47:19 compute-0 systemd[1]: Started User Manager for UID 0.
Oct  3 09:47:19 compute-0 systemd[1]: Started Session c3 of User root.
Oct  3 09:47:19 compute-0 iscsid[321830]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 09:47:19 compute-0 iscsid[321830]: INFO:__main__:Validating config file
Oct  3 09:47:19 compute-0 iscsid[321830]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 09:47:19 compute-0 iscsid[321830]: INFO:__main__:Writing out command to execute
Oct  3 09:47:19 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  3 09:47:19 compute-0 iscsid[321830]: ++ cat /run_command
Oct  3 09:47:19 compute-0 iscsid[321830]: + CMD='/usr/sbin/iscsid -f'
Oct  3 09:47:19 compute-0 iscsid[321830]: + ARGS=
Oct  3 09:47:19 compute-0 iscsid[321830]: + sudo kolla_copy_cacerts
Oct  3 09:47:19 compute-0 systemd[1]: Started Session c4 of User root.
Oct  3 09:47:19 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  3 09:47:19 compute-0 iscsid[321830]: + [[ ! -n '' ]]
Oct  3 09:47:19 compute-0 iscsid[321830]: + . kolla_extend_start
Oct  3 09:47:19 compute-0 iscsid[321830]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  3 09:47:19 compute-0 iscsid[321830]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  3 09:47:19 compute-0 iscsid[321830]: Running command: '/usr/sbin/iscsid -f'
Oct  3 09:47:19 compute-0 iscsid[321830]: + umask 0022
Oct  3 09:47:19 compute-0 iscsid[321830]: + exec /usr/sbin/iscsid -f
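The '+'-prefixed lines are bash xtrace from the Kolla entrypoint: kolla_set_configs copies files according to config.json and writes the command to /run_command, the wrapper reads it back, and then execs it so /usr/sbin/iscsid -f becomes the container's main process. A rough Python rendering of that flow, with file locations taken from the trace; the real entrypoint is the kolla_start shell script, so this is a sketch of the logic, not the implementation:

    # Sketch of the traced startup flow (COPY_ALWAYS strategy).
    import json, os, shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"

    def kolla_like_start() -> None:
        with open(CONFIG) as fh:
            cfg = json.load(fh)
        # COPY_ALWAYS: refresh every listed config file on each start.
        for entry in cfg.get("config_files", []):
            shutil.copy(entry["source"], entry["dest"])
        with open("/run_command") as fh:
            cmd = fh.read().strip()        # e.g. "/usr/sbin/iscsid -f"
        print(f"Running command: '{cmd}'")
        os.umask(0o022)
        argv = cmd.split()
        os.execv(argv[0], argv)            # replace this process, like `exec`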
Oct  3 09:47:19 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct  3 09:47:19 compute-0 python3.9[322034]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:47:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:20 compute-0 python3.9[322186]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:21 compute-0 python3.9[322338]: ansible-ansible.builtin.service_facts Invoked
Oct  3 09:47:21 compute-0 network[322355]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 09:47:21 compute-0 network[322356]: 'network-scripts' will be removed from distribution in near future.
Oct  3 09:47:21 compute-0 network[322357]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 09:47:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:25 compute-0 python3.9[322745]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  3 09:47:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:47:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:47:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:47:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:47:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:47:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:47:25 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1f7fab03-c656-42e5-8d2b-583dcec5933a does not exist
Oct  3 09:47:25 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 25ff32c1-7e2f-4e1c-ad43-3f9d3339bb0f does not exist
Oct  3 09:47:25 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ee0d5949-0e20-4679-b992-72526eafb939 does not exist
Oct  3 09:47:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:47:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:47:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:47:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:47:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:47:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
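The mon_command dispatches above come from the cephadm mgr module (mgr.compute-0.vtkhde) refreshing its view of the cluster. The same calls can be reproduced from any host holding the admin keyring via the ceph CLI; a small wrapper sketch:

    # Sketch: replay the audited mon commands through the ceph CLI.
    # Requires /etc/ceph/ceph.conf and the client.admin keyring.
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    minimal_conf   = ceph("config", "generate-minimal-conf")
    admin_key      = ceph("auth", "get", "client.admin")
    destroyed_osds = ceph("osd", "tree", "destroyed", "--format", "json")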
Oct  3 09:47:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:47:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:47:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:47:26 compute-0 podman[323051]: 2025-10-03 09:47:26.667812702 +0000 UTC m=+0.071874803 container create 949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:47:26 compute-0 python3.9[323037]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  3 09:47:26 compute-0 systemd[1]: Started libpod-conmon-949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78.scope.
Oct  3 09:47:26 compute-0 podman[323051]: 2025-10-03 09:47:26.646120554 +0000 UTC m=+0.050182675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:47:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:47:26 compute-0 podman[323051]: 2025-10-03 09:47:26.783984491 +0000 UTC m=+0.188046642 container init 949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  3 09:47:26 compute-0 podman[323051]: 2025-10-03 09:47:26.795531823 +0000 UTC m=+0.199593924 container start 949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:47:26 compute-0 podman[323051]: 2025-10-03 09:47:26.800227343 +0000 UTC m=+0.204289454 container attach 949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:47:26 compute-0 suspicious_banach[323068]: 167 167
Oct  3 09:47:26 compute-0 systemd[1]: libpod-949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78.scope: Deactivated successfully.
Oct  3 09:47:26 compute-0 podman[323051]: 2025-10-03 09:47:26.803897482 +0000 UTC m=+0.207959583 container died 949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcba66b1e5d2029e623a716847032dfd8b098b47fff3d0d50296a3c9938369b7-merged.mount: Deactivated successfully.
Oct  3 09:47:26 compute-0 podman[323051]: 2025-10-03 09:47:26.865618378 +0000 UTC m=+0.269680489 container remove 949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_banach, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:47:26 compute-0 systemd[1]: libpod-conmon-949f78c4653221ba98b9f3f0f39c23fdc60bb96dc4e15a2ff9dc85aa1f731c78.scope: Deactivated successfully.
Oct  3 09:47:27 compute-0 podman[323140]: 2025-10-03 09:47:27.04316199 +0000 UTC m=+0.049623037 container create 7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pascal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:47:27 compute-0 systemd[1]: Started libpod-conmon-7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3.scope.
Oct  3 09:47:27 compute-0 podman[323140]: 2025-10-03 09:47:27.024964855 +0000 UTC m=+0.031425932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:47:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce60e259f3bd6cc15e831da2136bd36b3f293d4de092d3afc53fd396f43ba52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce60e259f3bd6cc15e831da2136bd36b3f293d4de092d3afc53fd396f43ba52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce60e259f3bd6cc15e831da2136bd36b3f293d4de092d3afc53fd396f43ba52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce60e259f3bd6cc15e831da2136bd36b3f293d4de092d3afc53fd396f43ba52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ce60e259f3bd6cc15e831da2136bd36b3f293d4de092d3afc53fd396f43ba52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:27 compute-0 podman[323140]: 2025-10-03 09:47:27.153431849 +0000 UTC m=+0.159892916 container init 7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:47:27 compute-0 podman[323140]: 2025-10-03 09:47:27.170138676 +0000 UTC m=+0.176599723 container start 7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:47:27 compute-0 podman[323140]: 2025-10-03 09:47:27.174395623 +0000 UTC m=+0.180856690 container attach 7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:47:27 compute-0 python3.9[323266]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:28 compute-0 python3.9[323399]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484846.9844935-460-28067211492597/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:28 compute-0 sharp_pascal[323186]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:47:28 compute-0 sharp_pascal[323186]: --> relative data size: 1.0
Oct  3 09:47:28 compute-0 sharp_pascal[323186]: --> All data devices are unavailable
Oct  3 09:47:28 compute-0 systemd[1]: libpod-7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3.scope: Deactivated successfully.
Oct  3 09:47:28 compute-0 systemd[1]: libpod-7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3.scope: Consumed 1.059s CPU time.
Oct  3 09:47:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:28 compute-0 podman[323416]: 2025-10-03 09:47:28.355520389 +0000 UTC m=+0.046519078 container died 7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 09:47:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ce60e259f3bd6cc15e831da2136bd36b3f293d4de092d3afc53fd396f43ba52-merged.mount: Deactivated successfully.
Oct  3 09:47:28 compute-0 podman[323416]: 2025-10-03 09:47:28.436287048 +0000 UTC m=+0.127285727 container remove 7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_pascal, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:47:28 compute-0 systemd[1]: libpod-conmon-7d8d832a8c056ac410dab46aab412961c5546733d5e20ad2554fda324921b4b3.scope: Deactivated successfully.
Oct  3 09:47:29 compute-0 python3.9[323678]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:29 compute-0 podman[323743]: 2025-10-03 09:47:29.268507087 +0000 UTC m=+0.105184475 container create 1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:47:29 compute-0 podman[323743]: 2025-10-03 09:47:29.192733198 +0000 UTC m=+0.029410616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:47:29 compute-0 systemd[1]: Started libpod-conmon-1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d.scope.
Oct  3 09:47:29 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct  3 09:47:29 compute-0 systemd[321850]: Activating special unit Exit the Session...
Oct  3 09:47:29 compute-0 systemd[321850]: Stopped target Main User Target.
Oct  3 09:47:29 compute-0 systemd[321850]: Stopped target Basic System.
Oct  3 09:47:29 compute-0 systemd[321850]: Stopped target Paths.
Oct  3 09:47:29 compute-0 systemd[321850]: Stopped target Sockets.
Oct  3 09:47:29 compute-0 systemd[321850]: Stopped target Timers.
Oct  3 09:47:29 compute-0 systemd[321850]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  3 09:47:29 compute-0 systemd[321850]: Closed D-Bus User Message Bus Socket.
Oct  3 09:47:29 compute-0 systemd[321850]: Stopped Create User's Volatile Files and Directories.
Oct  3 09:47:29 compute-0 systemd[321850]: Removed slice User Application Slice.
Oct  3 09:47:29 compute-0 systemd[321850]: Reached target Shutdown.
Oct  3 09:47:29 compute-0 systemd[321850]: Finished Exit the Session.
Oct  3 09:47:29 compute-0 systemd[321850]: Reached target Exit the Session.
Oct  3 09:47:29 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct  3 09:47:29 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct  3 09:47:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:47:29 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  3 09:47:29 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  3 09:47:29 compute-0 podman[323743]: 2025-10-03 09:47:29.376466171 +0000 UTC m=+0.213143589 container init 1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:47:29 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  3 09:47:29 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  3 09:47:29 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct  3 09:47:29 compute-0 podman[323743]: 2025-10-03 09:47:29.387278909 +0000 UTC m=+0.223956317 container start 1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_faraday, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:47:29 compute-0 podman[323743]: 2025-10-03 09:47:29.392029662 +0000 UTC m=+0.228707080 container attach 1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:47:29 compute-0 nice_faraday[323807]: 167 167
Oct  3 09:47:29 compute-0 systemd[1]: libpod-1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d.scope: Deactivated successfully.
Oct  3 09:47:29 compute-0 podman[323743]: 2025-10-03 09:47:29.404571836 +0000 UTC m=+0.241249254 container died 1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:47:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e4cb458657c623a565967b17503cf2c06b2e90b09b4a70d0bf0beddd37728c-merged.mount: Deactivated successfully.
Oct  3 09:47:29 compute-0 podman[323743]: 2025-10-03 09:47:29.449421439 +0000 UTC m=+0.286098837 container remove 1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:47:29 compute-0 systemd[1]: libpod-conmon-1d9b25d803e49af47be1b9fb800e5f672148b42d6dce8db39d5ef899aa182f0d.scope: Deactivated successfully.
Oct  3 09:47:29 compute-0 podman[323882]: 2025-10-03 09:47:29.631471767 +0000 UTC m=+0.059176536 container create 15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 09:47:29 compute-0 systemd[1]: Started libpod-conmon-15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742.scope.
Oct  3 09:47:29 compute-0 podman[323882]: 2025-10-03 09:47:29.611838695 +0000 UTC m=+0.039543494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:47:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc8e22d3ae9c32755d59d9bce2e19ee8e0e4d52b22b52de16e4ecf149924a6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc8e22d3ae9c32755d59d9bce2e19ee8e0e4d52b22b52de16e4ecf149924a6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc8e22d3ae9c32755d59d9bce2e19ee8e0e4d52b22b52de16e4ecf149924a6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cc8e22d3ae9c32755d59d9bce2e19ee8e0e4d52b22b52de16e4ecf149924a6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:29 compute-0 podman[323882]: 2025-10-03 09:47:29.739459131 +0000 UTC m=+0.167163950 container init 15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:47:29 compute-0 podman[323882]: 2025-10-03 09:47:29.763792074 +0000 UTC m=+0.191496843 container start 15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:47:29 compute-0 podman[157165]: time="2025-10-03T09:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:47:29 compute-0 podman[323882]: 2025-10-03 09:47:29.768568528 +0000 UTC m=+0.196273307 container attach 15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct  3 09:47:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 39771 "" "Go-http-client/1.1"
Oct  3 09:47:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8079 "" "Go-http-client/1.1"
Oct  3 09:47:29 compute-0 python3.9[323926]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:47:30 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  3 09:47:30 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct  3 09:47:30 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct  3 09:47:30 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct  3 09:47:30 compute-0 systemd[1]: Finished Load Kernel Modules.
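Taken together, the modprobe task, the /etc/modules-load.d/dm-multipath.conf drop-in, the dm-multipath line in /etc/modules, and the systemd-modules-load restart make the module load both immediately and on every boot. A sketch of that persist-then-load pattern, with the module name parameterized; it must run as root:

    # Sketch: persist a kernel module across boots, then load it now.
    import pathlib, subprocess

    def persist_and_load(module: str) -> None:
        conf = pathlib.Path(f"/etc/modules-load.d/{module}.conf")
        conf.write_text(module + "\n")     # picked up at every boot
        conf.chmod(0o644)
        subprocess.run(["modprobe", module], check=True)   # load right now
        subprocess.run(["systemctl", "restart",
                        "systemd-modules-load.service"], check=True)

    # persist_and_load("dm-multipath")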
Oct  3 09:47:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:30 compute-0 friendly_williams[323927]: {
Oct  3 09:47:30 compute-0 friendly_williams[323927]:    "0": [
Oct  3 09:47:30 compute-0 friendly_williams[323927]:        {
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "devices": [
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "/dev/loop3"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            ],
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_name": "ceph_lv0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_size": "21470642176",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "name": "ceph_lv0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "tags": {
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cluster_name": "ceph",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.crush_device_class": "",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.encrypted": "0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osd_id": "0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.type": "block",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.vdo": "0"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            },
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "type": "block",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "vg_name": "ceph_vg0"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:        }
Oct  3 09:47:30 compute-0 friendly_williams[323927]:    ],
Oct  3 09:47:30 compute-0 friendly_williams[323927]:    "1": [
Oct  3 09:47:30 compute-0 friendly_williams[323927]:        {
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "devices": [
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "/dev/loop4"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            ],
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_name": "ceph_lv1",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_size": "21470642176",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "name": "ceph_lv1",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "tags": {
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cluster_name": "ceph",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.crush_device_class": "",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.encrypted": "0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osd_id": "1",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.type": "block",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.vdo": "0"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            },
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "type": "block",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "vg_name": "ceph_vg1"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:        }
Oct  3 09:47:30 compute-0 friendly_williams[323927]:    ],
Oct  3 09:47:30 compute-0 friendly_williams[323927]:    "2": [
Oct  3 09:47:30 compute-0 friendly_williams[323927]:        {
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "devices": [
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "/dev/loop5"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            ],
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_name": "ceph_lv2",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_size": "21470642176",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "name": "ceph_lv2",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "tags": {
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.cluster_name": "ceph",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.crush_device_class": "",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.encrypted": "0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osd_id": "2",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.type": "block",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:                "ceph.vdo": "0"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            },
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "type": "block",
Oct  3 09:47:30 compute-0 friendly_williams[323927]:            "vg_name": "ceph_vg2"
Oct  3 09:47:30 compute-0 friendly_williams[323927]:        }
Oct  3 09:47:30 compute-0 friendly_williams[323927]:    ]
Oct  3 09:47:30 compute-0 friendly_williams[323927]: }
Oct  3 09:47:30 compute-0 systemd[1]: libpod-15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742.scope: Deactivated successfully.
Oct  3 09:47:30 compute-0 podman[323882]: 2025-10-03 09:47:30.589030909 +0000 UTC m=+1.016735678 container died 15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:47:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cc8e22d3ae9c32755d59d9bce2e19ee8e0e4d52b22b52de16e4ecf149924a6c-merged.mount: Deactivated successfully.
Oct  3 09:47:30 compute-0 podman[323882]: 2025-10-03 09:47:30.699381579 +0000 UTC m=+1.127086348 container remove 15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 09:47:30 compute-0 systemd[1]: libpod-conmon-15c477ead325878711ab5e46a4389d1552ac578153205d679d5803437c314742.scope: Deactivated successfully.
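The container named friendly_williams above printed the per-OSD LVM inventory that `ceph-volume lvm list --format json` produces: a map of OSD ids to logical-volume records whose "tags" dict mirrors the ceph.* LVM tags written at prepare time (the same data also appears flattened in "lv_tags"). A minimal sketch of consuming that output, assuming direct access to ceph-volume (on this host it actually runs inside a cephadm-managed podman container, as the surrounding lines show):

    import json
    import subprocess

    # Collect the same inventory the log shows; on a cephadm host the command
    # would normally be wrapped, e.g. `cephadm shell -- ceph-volume lvm list`.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")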
Oct  3 09:47:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:30 compute-0 python3.9[324091]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:31 compute-0 openstack_network_exporter[159287]: ERROR   09:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:47:31 compute-0 openstack_network_exporter[159287]: ERROR   09:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:47:31 compute-0 openstack_network_exporter[159287]: ERROR   09:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:47:31 compute-0 openstack_network_exporter[159287]: ERROR   09:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:47:31 compute-0 openstack_network_exporter[159287]: ERROR   09:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:47:31 compute-0 podman[324393]: 2025-10-03 09:47:31.458266429 +0000 UTC m=+0.056757248 container create 3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:47:31 compute-0 systemd[1]: Started libpod-conmon-3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161.scope.
Oct  3 09:47:31 compute-0 podman[324393]: 2025-10-03 09:47:31.436457727 +0000 UTC m=+0.034948566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:47:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:47:31 compute-0 podman[324393]: 2025-10-03 09:47:31.560026473 +0000 UTC m=+0.158517312 container init 3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 09:47:31 compute-0 podman[324393]: 2025-10-03 09:47:31.571601496 +0000 UTC m=+0.170092315 container start 3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:47:31 compute-0 podman[324393]: 2025-10-03 09:47:31.57547029 +0000 UTC m=+0.173961139 container attach 3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:47:31 compute-0 stoic_liskov[324410]: 167 167
Oct  3 09:47:31 compute-0 systemd[1]: libpod-3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161.scope: Deactivated successfully.
Oct  3 09:47:31 compute-0 podman[324393]: 2025-10-03 09:47:31.581858956 +0000 UTC m=+0.180349775 container died 3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:47:31 compute-0 python3.9[324405]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:47:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed0733b1753a7f44de5030c66af2c7d04e42b0819de0a4f88553762f90ed7a9f-merged.mount: Deactivated successfully.
Oct  3 09:47:31 compute-0 podman[324393]: 2025-10-03 09:47:31.638633583 +0000 UTC m=+0.237124392 container remove 3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 09:47:31 compute-0 systemd[1]: libpod-conmon-3522193a37d28ac6b03806fe43435760ba519bfd448311dab3297d48e4f6f161.scope: Deactivated successfully.
Oct  3 09:47:31 compute-0 podman[324460]: 2025-10-03 09:47:31.843588077 +0000 UTC m=+0.061113637 container create 36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_merkle, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:47:31 compute-0 systemd[1]: Started libpod-conmon-36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9.scope.
Oct  3 09:47:31 compute-0 podman[324460]: 2025-10-03 09:47:31.819039137 +0000 UTC m=+0.036564717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:47:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a716e6aaedbfcc8a6399bac776344959d0c7163f44eaa028e46b04e557542d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a716e6aaedbfcc8a6399bac776344959d0c7163f44eaa028e46b04e557542d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a716e6aaedbfcc8a6399bac776344959d0c7163f44eaa028e46b04e557542d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a716e6aaedbfcc8a6399bac776344959d0c7163f44eaa028e46b04e557542d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:47:31 compute-0 podman[324460]: 2025-10-03 09:47:31.982204017 +0000 UTC m=+0.199729587 container init 36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_merkle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 09:47:31 compute-0 podman[324460]: 2025-10-03 09:47:31.992283901 +0000 UTC m=+0.209809461 container start 36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:47:32 compute-0 podman[324460]: 2025-10-03 09:47:32.001529689 +0000 UTC m=+0.219055279 container attach 36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_merkle, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 09:47:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:32 compute-0 python3.9[324608]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:47:32 compute-0 recursing_merkle[324499]: {
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "osd_id": 1,
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "type": "bluestore"
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:    },
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "osd_id": 2,
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "type": "bluestore"
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:    },
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "osd_id": 0,
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:        "type": "bluestore"
Oct  3 09:47:32 compute-0 recursing_merkle[324499]:    }
Oct  3 09:47:32 compute-0 recursing_merkle[324499]: }
Oct  3 09:47:33 compute-0 systemd[1]: libpod-36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9.scope: Deactivated successfully.
Oct  3 09:47:33 compute-0 systemd[1]: libpod-36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9.scope: Consumed 1.032s CPU time.
Oct  3 09:47:33 compute-0 podman[324713]: 2025-10-03 09:47:33.071977434 +0000 UTC m=+0.034161790 container died 36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a716e6aaedbfcc8a6399bac776344959d0c7163f44eaa028e46b04e557542d9-merged.mount: Deactivated successfully.
Oct  3 09:47:33 compute-0 podman[324713]: 2025-10-03 09:47:33.142475072 +0000 UTC m=+0.104659408 container remove 36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_merkle, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 09:47:33 compute-0 systemd[1]: libpod-conmon-36a503d5bdcd4ecd589e29be369214caf9624a2aeb1289d20d9dd87636bce9c9.scope: Deactivated successfully.
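The recursing_merkle container printed a second inventory shape: a map keyed by OSD uuid with ceph_fsid, device, osd_id, and type=bluestore. This matches the output of `ceph-volume raw list` (an inference; the log does not show the command line itself). A self-contained parsing sketch using one record copied from the blob above:

    import json

    # One record from the output above, trimmed to a single OSD.
    raw_list = json.loads("""{
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
        "type": "bluestore"
      }
    }""")

    for osd_uuid, rec in raw_list.items():
        print(f"osd.{rec['osd_id']} -> {rec['device']} ({rec['type']})")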
Oct  3 09:47:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:47:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:47:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:47:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
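The two handle_command/audit pairs above are the cephadm mgr module persisting the device scan it just ran into the monitor's config-key store (keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). A hedged sketch of reading one of those keys back with the standard CLI; the assumption that the stored value is JSON is mine, not stated in the log:

    import json
    import subprocess

    # Key name copied from the audit line above.
    key = "mgr/cephadm/host.compute-0.devices.0"
    val = subprocess.run(
        ["ceph", "config-key", "get", key],
        check=True, capture_output=True, text=True,
    ).stdout
    devices = json.loads(val)  # assumption: cephadm serializes this as JSON
    print(type(devices).__name__, len(val), "bytes")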
Oct  3 09:47:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 51caf8db-e1e4-4acd-a302-3cbe95bcd5d6 does not exist
Oct  3 09:47:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 896d827e-856c-4921-b637-0588a6b78362 does not exist
Oct  3 09:47:33 compute-0 python3.9[324853]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:47:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:47:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:34 compute-0 python3.9[324976]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484852.7080388-518-51932871204947/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:35 compute-0 podman[325060]: 2025-10-03 09:47:35.833165477 +0000 UTC m=+0.080438359 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.component=ubi9-minimal-container)
Oct  3 09:47:35 compute-0 podman[325054]: 2025-10-03 09:47:35.828938121 +0000 UTC m=+0.087016170 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:47:35 compute-0 podman[325068]: 2025-10-03 09:47:35.842825318 +0000 UTC m=+0.084753468 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 09:47:35 compute-0 podman[325053]: 2025-10-03 09:47:35.843722937 +0000 UTC m=+0.103730488 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-container, container_name=kepler, name=ubi9, vcs-type=git, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=)
Oct  3 09:47:35 compute-0 podman[325055]: 2025-10-03 09:47:35.857020415 +0000 UTC m=+0.110967111 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:47:35 compute-0 podman[325150]: 2025-10-03 09:47:35.914198213 +0000 UTC m=+0.059786913 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:47:36 compute-0 podman[325174]: 2025-10-03 09:47:36.070958636 +0000 UTC m=+0.126346564 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
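The burst of health_status=healthy events above is podman's healthcheck timers firing for the telemetry containers; each event corresponds to one run of the container's configured test command (for example /openstack/healthcheck node_exporter). The same check can be driven and inspected by hand; a sketch, with the caveat that the inspect template path shown is the Docker-compatible one and may be .State.Healthcheck.Status on older podman releases:

    import subprocess

    name = "node_exporter"  # container name taken from the events above

    # Execute the container's configured healthcheck once; exit code 0 means
    # healthy, so check=True raises on an unhealthy result.
    subprocess.run(["podman", "healthcheck", "run", name], check=True)

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(name, status)  # expected: healthy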
Oct  3 09:47:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:36 compute-0 python3.9[325275]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:47:37 compute-0 python3.9[325428]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:38 compute-0 python3.9[325580]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
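The three Ansible tasks above implement a guarded edit of /etc/multipath.conf: the grep checks whether a line matching ^blacklist\s*{ already exists, lineinfile appends "blacklist {" when it does not, and the replace step turns that line into "blacklist {" followed by a closing "}" on the next line, leaving an empty stanza. A Python equivalent of that logic, illustrative only (the deployment does it with the logged Ansible modules):

    import re
    from pathlib import Path

    conf = Path("/etc/multipath.conf")
    text = conf.read_text()

    # Guard, mirroring `grep -q '^blacklist\s*{'`: only edit when no
    # blacklist stanza exists yet. The end result in the file is:
    #   blacklist {
    #   }
    if not re.search(r"^blacklist\s*\{", text, flags=re.M):
        text += "\nblacklist {\n"                                      # lineinfile
        text = re.sub(r"^(blacklist \{)", r"\1\n}", text, flags=re.M)  # replace
        conf.write_text(text)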
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.954 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.955 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.956 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.957 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.961 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.961 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.962 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.962 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.963 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.963 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.968 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.969 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.970 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.971 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:47:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:47:38.972 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
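The ceilometer debug lines above trace one full polling cycle: each pollster triggers local_instances discovery, is skipped because the discovery cache holds no instances ({'local_instances': []}), and is then marked finished, while the "Registering pollster" lines show the per-meter history being built up. A minimal sketch of that control flow, with illustrative names only (the real logic lives in ceilometer/polling/manager.py and differs in detail):

    from concurrent.futures import ThreadPoolExecutor

    # Hedged sketch of the cycle logged above, not ceilometer's actual code:
    # discover resources once per discovery method, skip pollsters that get
    # no resources, and keep a history entry per meter. Attribute names
    # (discovery_method, name, get_samples) are illustrative.
    def run_polling_cycle(pollsters, discover):
        discovery_cache = {}   # e.g. {'local_instances': []} in the log
        history = {}           # the "pollster history" shown above
        with ThreadPoolExecutor() as executor:
            for p in pollsters:
                method = p.discovery_method        # 'local_instances'
                if method not in discovery_cache:
                    discovery_cache[method] = discover(method)
                resources = discovery_cache[method]
                history.setdefault(p.name, [])
                if not resources:
                    print(f"Skip pollster {p.name}, no resources found this cycle")
                    continue
                executor.submit(p.get_samples, resources)
        return history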
Oct  3 09:47:39 compute-0 python3.9[325733]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
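The replace task above strips the blanket devnode ".*" entry from the blacklist section of /etc/multipath.conf. A rough Python equivalent of that regexp rewrite (ansible's replace module applies the pattern with re.MULTILINE; path and pattern taken from the log line):

    import re

    # Rough equivalent of the ansible.builtin.replace call logged above:
    # collapse
    #   blacklist {
    #           devnode ".*"
    # down to just "blacklist {", dropping the catch-all devnode entry.
    with open("/etc/multipath.conf") as f:
        conf = f.read()
    conf = re.sub(r'^blacklist\s*{\n\s+devnode "\.\*"', "blacklist {", conf,
                  flags=re.MULTILINE)
    with open("/etc/multipath.conf", "w") as f:
        f.write(conf)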
Oct  3 09:47:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:40 compute-0 python3.9[325885]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:41 compute-0 python3.9[326037]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:47:41.572 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:47:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:47:41.573 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:47:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:47:41.573 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:47:41 compute-0 python3.9[326189]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:42 compute-0 podman[326313]: 2025-10-03 09:47:42.605478053 +0000 UTC m=+0.087155545 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
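The health_status records above come from podman's healthcheck timer: per the healthcheck mount/test keys in config_data, each EDPM container bind-mounts /var/lib/openstack/healthchecks/<name> read-only at /openstack and the check command is /openstack/healthcheck. A hedged way to reproduce one check by hand:

    import subprocess

    # Run the same check podman's timer runs for ovn_metadata_agent; the
    # container name and /openstack/healthcheck path come from the
    # config_data fields in the log line above.
    result = subprocess.run(
        ["podman", "exec", "ovn_metadata_agent", "/openstack/healthcheck"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy", result.stdout)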
Oct  3 09:47:42 compute-0 python3.9[326359]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
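Taken together, the four lineinfile tasks above pin four options in the defaults section of /etc/multipath.conf: find_multipaths yes, recheck_wwid yes, skip_kpartx yes, and user_friendly_names no. A compact sketch of the module behavior as invoked here (firstmatch=True, insertafter=^defaults): replace the first line matching regexp, otherwise insert the line after the first match of insertafter.

    import re

    # Sketch of ansible.builtin.lineinfile as used above, not the module
    # itself: normalize an existing line in place, else insert after the
    # "defaults" section header.
    def lineinfile(text, regexp, line, insertafter=r"^defaults"):
        lines = text.splitlines()
        for i, l in enumerate(lines):
            if re.search(regexp, l):
                lines[i] = line
                return "\n".join(lines) + "\n"
        for i, l in enumerate(lines):
            if re.search(insertafter, l):
                lines.insert(i + 1, line)
                break
        return "\n".join(lines) + "\n"

    conf = open("/etc/multipath.conf").read()
    for regexp, line in [
        (r"^\s+find_multipaths",     "        find_multipaths yes"),
        (r"^\s+recheck_wwid",        "        recheck_wwid yes"),
        (r"^\s+skip_kpartx",         "        skip_kpartx yes"),
        (r"^\s+user_friendly_names", "        user_friendly_names no"),
    ]:
        conf = lineinfile(conf, regexp, line)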
Oct  3 09:47:43 compute-0 python3.9[326512]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:47:44 compute-0 python3.9[326666]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
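The touch above creates a restart marker rather than changing any config. A hedged sketch of the marker-file pattern this sets up; the consumer of the flag is not shown in this log, so its behavior below is an assumption:

    import os

    # Assumed consumer of the flag touched above: restart multipathd only
    # when the marker exists, then clear it so the restart is one-shot.
    flag = "/etc/multipath/.multipath_restart_required"
    if os.path.exists(flag):
        # restart multipathd here (e.g. via the container manager), then:
        os.remove(flag)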
Oct  3 09:47:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:45 compute-0 python3.9[326818]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:47:45
Oct  3 09:47:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:47:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:47:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'images', 'default.rgw.log']
Oct  3 09:47:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:47:46 compute-0 python3.9[326970]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:47:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:46 compute-0 python3.9[327048]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:47 compute-0 python3.9[327200]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:47 compute-0 python3.9[327278]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:48 compute-0 python3.9[327430]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
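One detail worth noting in the task above: mode=420 is a decimal integer, and 420 decimal is exactly 0o644 octal, the usual sign that the playbook wrote an unquoted YAML mode: 0644, which the parser read as an octal literal. A quick check:

    # mode=420 in the ansible-file log above is the decimal form of the
    # intended octal 0644 (unquoted YAML `mode: 0644` parses as octal).
    assert 420 == 0o644
    print(oct(420))  # 0o644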
Oct  3 09:47:49 compute-0 podman[327554]: 2025-10-03 09:47:49.342625327 +0000 UTC m=+0.079891762 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 09:47:49 compute-0 python3.9[327601]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:50 compute-0 python3.9[327681]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:51 compute-0 python3.9[327833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:51 compute-0 python3.9[327911]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:52 compute-0 python3.9[328063]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:47:52 compute-0 systemd[1]: Reloading.
Oct  3 09:47:52 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:47:52 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:47:53 compute-0 python3.9[328253]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:54 compute-0 python3.9[328331]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:54 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:54 compute-0 python3.9[328484]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:47:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
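The pg_autoscaler figures above are internally consistent: each pool's pg target is its share of raw capacity times its bias times a cluster-wide PG budget, which from these numbers works out to 300 (plausibly 3 OSDs times mon_target_pg_per_osd=100, though that split is an assumption; only the product is visible in the log). Checking two rows:

    # Verify two pg_autoscaler rows from the log:
    #   pg_target = capacity_share * bias * 300
    # The 300 PG budget is inferred from the logged numbers.
    rows = [
        # (capacity share, bias, pg target logged)
        (7.185749983720779e-06, 1.0, 0.0021557249951162337),  # pool '.mgr'
        (5.087256625643029e-07, 4.0, 0.0006104707950771635),  # 'cephfs.cephfs.meta'
    ]
    for usage, bias, target in rows:
        assert abs(usage * bias * 300 - target) < 1e-12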
Oct  3 09:47:55 compute-0 python3.9[328562]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:47:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:47:56 compute-0 python3.9[328714]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:47:56 compute-0 systemd[1]: Reloading.
Oct  3 09:47:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:47:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:47:56 compute-0 systemd[1]: Starting Create netns directory...
Oct  3 09:47:56 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  3 09:47:56 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  3 09:47:56 compute-0 systemd[1]: Finished Create netns directory.
Oct  3 09:47:57 compute-0 python3.9[328908]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:47:58 compute-0 python3.9[329060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:47:59 compute-0 python3.9[329183]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484877.9099436-725-168571264761772/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:47:59 compute-0 podman[157165]: time="2025-10-03T09:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:47:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38198 "" "Go-http-client/1.1"
Oct  3 09:47:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7670 "" "Go-http-client/1.1"
Oct  3 09:48:00 compute-0 python3.9[329335]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:48:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:00 compute-0 python3.9[329487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:48:01 compute-0 python3.9[329610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484880.2590714-750-141353274804558/.source.json _original_basename=.zx_br127 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
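The copy above drops multipathd.json into /var/lib/kolla/config_files, the directory that kolla-based containers mount as /var/lib/kolla/config_files/config.json (the iscsid volume list earlier shows the same pattern). The file follows kolla's config.json schema of a "command" plus "config_files" copy rules; its actual contents are not in the log, so the example below is hypothetical:

    import json

    # Hypothetical multipathd.json in kolla's config.json schema; the real
    # file's command and copy rules are not logged above.
    multipathd_json = {
        "command": "/usr/sbin/multipathd -d",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/src/etc/multipath.conf",
                "dest": "/etc/multipath.conf",
                "owner": "root",
                "perm": "0600",
            }
        ],
    }
    print(json.dumps(multipathd_json, indent=2))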
Oct  3 09:48:01 compute-0 openstack_network_exporter[159287]: ERROR   09:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:48:01 compute-0 openstack_network_exporter[159287]: ERROR   09:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:48:01 compute-0 openstack_network_exporter[159287]: ERROR   09:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:48:01 compute-0 openstack_network_exporter[159287]: ERROR   09:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:48:01 compute-0 openstack_network_exporter[159287]: ERROR   09:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
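The exporter errors above share one cause: openstack_network_exporter probes OVS/OVN daemons through their appctl control sockets, and on this compute node neither ovn-northd nor a local standalone ovsdb-server is running, so no socket files exist to connect to. A hedged illustration of the probe; the socket path pattern is an assumption:

    import glob

    # ovn-northd exposes an appctl socket like /var/run/ovn/ovn-northd.<pid>.ctl
    # while running; an empty glob here is what yields the "no control socket
    # files found" errors above. The path pattern is an assumption.
    print(glob.glob("/var/run/ovn/ovn-northd.*.ctl"))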
Oct  3 09:48:02 compute-0 python3.9[329762]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:04 compute-0 python3.9[330189]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  3 09:48:05 compute-0 python3.9[330341]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:48:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:06 compute-0 podman[330494]: 2025-10-03 09:48:06.010060334 +0000 UTC m=+0.081539534 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:48:06 compute-0 podman[330497]: 2025-10-03 09:48:06.01116199 +0000 UTC m=+0.071229073 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:48:06 compute-0 podman[330495]: 2025-10-03 09:48:06.014560169 +0000 UTC m=+0.083183537 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 09:48:06 compute-0 podman[330493]: 2025-10-03 09:48:06.016631226 +0000 UTC m=+0.090598306 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=edpm, vcs-type=git, container_name=kepler)
Oct  3 09:48:06 compute-0 podman[330496]: 2025-10-03 09:48:06.039558414 +0000 UTC m=+0.101403074 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 09:48:06 compute-0 podman[330590]: 2025-10-03 09:48:06.114020058 +0000 UTC m=+0.078604279 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:48:06 compute-0 python3.9[330511]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  3 09:48:06 compute-0 podman[330616]: 2025-10-03 09:48:06.252894506 +0000 UTC m=+0.106557498 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:48:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:07 compute-0 python3[330816]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:48:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:09 compute-0 podman[330828]: 2025-10-03 09:48:09.536087112 +0000 UTC m=+1.633258931 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  3 09:48:09 compute-0 podman[330880]: 2025-10-03 09:48:09.748886847 +0000 UTC m=+0.063993169 container create b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:48:09 compute-0 podman[330880]: 2025-10-03 09:48:09.717068463 +0000 UTC m=+0.032174805 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  3 09:48:09 compute-0 python3[330816]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  3 09:48:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:10 compute-0 python3.9[331065]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:48:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:11 compute-0 python3.9[331219]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:12 compute-0 python3.9[331295]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:48:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:12 compute-0 podman[331446]: 2025-10-03 09:48:12.757878992 +0000 UTC m=+0.070407666 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:48:12 compute-0 python3.9[331447]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759484892.1902652-838-83798944659711/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:13 compute-0 python3.9[331541]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:48:13 compute-0 systemd[1]: Reloading.
Oct  3 09:48:13 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:48:13 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:48:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:14 compute-0 python3.9[331653]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:14 compute-0 systemd[1]: Reloading.
Oct  3 09:48:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:48:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:48:15 compute-0 systemd[1]: Starting multipathd container...
Oct  3 09:48:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2deb05bbbea1748a7d558c9a3c05c53c1feca4a7ab3a2115cd815475b343d7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2deb05bbbea1748a7d558c9a3c05c53c1feca4a7ab3a2115cd815475b343d7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92.
Oct  3 09:48:15 compute-0 podman[331692]: 2025-10-03 09:48:15.431468246 +0000 UTC m=+0.274605374 container init b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  3 09:48:15 compute-0 multipathd[331706]: + sudo -E kolla_set_configs
Oct  3 09:48:15 compute-0 podman[331692]: 2025-10-03 09:48:15.477289281 +0000 UTC m=+0.320426419 container start b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  3 09:48:15 compute-0 podman[331692]: multipathd
Oct  3 09:48:15 compute-0 multipathd[331706]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 09:48:15 compute-0 multipathd[331706]: INFO:__main__:Validating config file
Oct  3 09:48:15 compute-0 multipathd[331706]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 09:48:15 compute-0 multipathd[331706]: INFO:__main__:Writing out command to execute
Oct  3 09:48:15 compute-0 systemd[1]: Started multipathd container.
Oct  3 09:48:15 compute-0 multipathd[331706]: ++ cat /run_command
Oct  3 09:48:15 compute-0 multipathd[331706]: + CMD='/usr/sbin/multipathd -d'
Oct  3 09:48:15 compute-0 multipathd[331706]: + ARGS=
Oct  3 09:48:15 compute-0 multipathd[331706]: + sudo kolla_copy_cacerts
Oct  3 09:48:15 compute-0 multipathd[331706]: + [[ ! -n '' ]]
Oct  3 09:48:15 compute-0 multipathd[331706]: + . kolla_extend_start
Oct  3 09:48:15 compute-0 multipathd[331706]: Running command: '/usr/sbin/multipathd -d'
Oct  3 09:48:15 compute-0 multipathd[331706]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  3 09:48:15 compute-0 multipathd[331706]: + umask 0022
Oct  3 09:48:15 compute-0 multipathd[331706]: + exec /usr/sbin/multipathd -d
Oct  3 09:48:15 compute-0 podman[331713]: 2025-10-03 09:48:15.584596592 +0000 UTC m=+0.089662365 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 09:48:15 compute-0 multipathd[331706]: 4078.255246 | --------start up--------
Oct  3 09:48:15 compute-0 multipathd[331706]: 4078.255285 | read /etc/multipath.conf
Oct  3 09:48:15 compute-0 systemd[1]: b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92-4f73b5d8397540b0.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 09:48:15 compute-0 systemd[1]: b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92-4f73b5d8397540b0.service: Failed with result 'exit-code'.
Oct  3 09:48:15 compute-0 multipathd[331706]: 4078.264610 | path checkers start up
Oct  3 09:48:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:48:16 compute-0 python3.9[331897]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:48:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:17 compute-0 python3.9[332051]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:48:18 compute-0 python3.9[332215]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:48:18 compute-0 systemd[1]: Stopping multipathd container...
Oct  3 09:48:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:18 compute-0 multipathd[331706]: 4081.067172 | exit (signal)
Oct  3 09:48:18 compute-0 multipathd[331706]: 4081.068084 | --------shut down-------
Oct  3 09:48:18 compute-0 systemd[1]: libpod-b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92.scope: Deactivated successfully.
Oct  3 09:48:18 compute-0 podman[332219]: 2025-10-03 09:48:18.444741059 +0000 UTC m=+0.133128594 container died b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, config_id=multipathd)
Oct  3 09:48:18 compute-0 systemd[1]: b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92-4f73b5d8397540b0.timer: Deactivated successfully.
Oct  3 09:48:18 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92.
Oct  3 09:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92-userdata-shm.mount: Deactivated successfully.
Oct  3 09:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c2deb05bbbea1748a7d558c9a3c05c53c1feca4a7ab3a2115cd815475b343d7-merged.mount: Deactivated successfully.
Oct  3 09:48:18 compute-0 podman[332219]: 2025-10-03 09:48:18.957103461 +0000 UTC m=+0.645490956 container cleanup b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:48:18 compute-0 podman[332219]: multipathd
Oct  3 09:48:19 compute-0 podman[332246]: multipathd
Oct  3 09:48:19 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  3 09:48:19 compute-0 systemd[1]: Stopped multipathd container.
Oct  3 09:48:19 compute-0 systemd[1]: Starting multipathd container...
Oct  3 09:48:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2deb05bbbea1748a7d558c9a3c05c53c1feca4a7ab3a2115cd815475b343d7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c2deb05bbbea1748a7d558c9a3c05c53c1feca4a7ab3a2115cd815475b343d7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:19 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92.
Oct  3 09:48:19 compute-0 podman[332257]: 2025-10-03 09:48:19.220288747 +0000 UTC m=+0.149892693 container init b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:48:19 compute-0 multipathd[332270]: + sudo -E kolla_set_configs
Oct  3 09:48:19 compute-0 podman[332257]: 2025-10-03 09:48:19.248066661 +0000 UTC m=+0.177670587 container start b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 09:48:19 compute-0 podman[332257]: multipathd
Oct  3 09:48:19 compute-0 systemd[1]: Started multipathd container.
Oct  3 09:48:19 compute-0 multipathd[332270]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 09:48:19 compute-0 multipathd[332270]: INFO:__main__:Validating config file
Oct  3 09:48:19 compute-0 multipathd[332270]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 09:48:19 compute-0 multipathd[332270]: INFO:__main__:Writing out command to execute
Oct  3 09:48:19 compute-0 multipathd[332270]: ++ cat /run_command
Oct  3 09:48:19 compute-0 multipathd[332270]: + CMD='/usr/sbin/multipathd -d'
Oct  3 09:48:19 compute-0 multipathd[332270]: + ARGS=
Oct  3 09:48:19 compute-0 multipathd[332270]: + sudo kolla_copy_cacerts
Oct  3 09:48:19 compute-0 multipathd[332270]: + [[ ! -n '' ]]
Oct  3 09:48:19 compute-0 multipathd[332270]: + . kolla_extend_start
Oct  3 09:48:19 compute-0 multipathd[332270]: Running command: '/usr/sbin/multipathd -d'
Oct  3 09:48:19 compute-0 multipathd[332270]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  3 09:48:19 compute-0 multipathd[332270]: + umask 0022
Oct  3 09:48:19 compute-0 multipathd[332270]: + exec /usr/sbin/multipathd -d
Oct  3 09:48:19 compute-0 podman[332277]: 2025-10-03 09:48:19.351708425 +0000 UTC m=+0.093213810 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 09:48:19 compute-0 systemd[1]: b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92-3c7237491d2cb6c7.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 09:48:19 compute-0 systemd[1]: b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92-3c7237491d2cb6c7.service: Failed with result 'exit-code'.
Oct  3 09:48:19 compute-0 multipathd[332270]: 4082.034489 | --------start up--------
Oct  3 09:48:19 compute-0 multipathd[332270]: 4082.034536 | read /etc/multipath.conf
Oct  3 09:48:19 compute-0 multipathd[332270]: 4082.045766 | path checkers start up
Oct  3 09:48:19 compute-0 podman[332322]: 2025-10-03 09:48:19.486421328 +0000 UTC m=+0.093345594 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct  3 09:48:20 compute-0 python3.9[332482]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:21 compute-0 python3.9[332634]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  3 09:48:21 compute-0 python3.9[332786]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  3 09:48:21 compute-0 kernel: Key type psk registered
Oct  3 09:48:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:22 compute-0 python3.9[332949]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:48:23 compute-0 python3.9[333072]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759484902.1920712-918-20608032418239/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:24 compute-0 python3.9[333224]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:25 compute-0 python3.9[333376]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:48:25 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  3 09:48:25 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct  3 09:48:25 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct  3 09:48:25 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct  3 09:48:25 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct  3 09:48:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:26 compute-0 python3.9[333532]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  3 09:48:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:27 compute-0 python3.9[333616]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  3 09:48:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:29 compute-0 podman[157165]: time="2025-10-03T09:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:48:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40789 "" "Go-http-client/1.1"
Oct  3 09:48:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8081 "" "Go-http-client/1.1"
Oct  3 09:48:30 compute-0 systemd[1]: Reloading.
Oct  3 09:48:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:48:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:48:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:30 compute-0 systemd[1]: Reloading.
Oct  3 09:48:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:48:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:48:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:31 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  3 09:48:31 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  3 09:48:31 compute-0 lvm[333733]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  3 09:48:31 compute-0 lvm[333733]: VG ceph_vg0 finished
Oct  3 09:48:31 compute-0 lvm[333731]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  3 09:48:31 compute-0 lvm[333731]: VG ceph_vg1 finished
Oct  3 09:48:31 compute-0 lvm[333734]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  3 09:48:31 compute-0 lvm[333734]: VG ceph_vg2 finished
Oct  3 09:48:31 compute-0 openstack_network_exporter[159287]: ERROR   09:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:48:31 compute-0 openstack_network_exporter[159287]: ERROR   09:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:48:31 compute-0 openstack_network_exporter[159287]: ERROR   09:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:48:31 compute-0 openstack_network_exporter[159287]: ERROR   09:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:48:31 compute-0 openstack_network_exporter[159287]: ERROR   09:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:48:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  3 09:48:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct  3 09:48:31 compute-0 systemd[1]: Reloading.
Oct  3 09:48:31 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:48:31 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:48:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  3 09:48:32 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct  3 09:48:32 compute-0 systemd[1]: Started PackageKit Daemon.
Oct  3 09:48:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:33 compute-0 python3.9[334876]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  3 09:48:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct  3 09:48:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.966s CPU time.
Oct  3 09:48:33 compute-0 systemd[1]: run-rdf52080cc8764dde979fe98cb0cd083f.service: Deactivated successfully.
Oct  3 09:48:34 compute-0 python3.9[335331]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:48:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:48:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:48:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:48:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:48:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:48:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:48:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 28115e53-1c7e-4d0b-aab5-beaa519e4fef does not exist
Oct  3 09:48:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 74d9f0f1-7ee3-4297-911f-68a6ea1f5225 does not exist
Oct  3 09:48:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e0879105-63a7-4a95-96c3-3d6e12042c29 does not exist
Oct  3 09:48:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:48:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:48:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:48:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:48:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:48:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:48:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:48:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:48:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:48:35 compute-0 podman[335601]: 2025-10-03 09:48:35.065029709 +0000 UTC m=+0.054481574 container create 663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:48:35 compute-0 systemd[1]: Started libpod-conmon-663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79.scope.
Oct  3 09:48:35 compute-0 podman[335601]: 2025-10-03 09:48:35.042701161 +0000 UTC m=+0.032153046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:48:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:48:35 compute-0 podman[335601]: 2025-10-03 09:48:35.182464427 +0000 UTC m=+0.171916322 container init 663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:48:35 compute-0 podman[335601]: 2025-10-03 09:48:35.194166093 +0000 UTC m=+0.183617948 container start 663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:48:35 compute-0 podman[335601]: 2025-10-03 09:48:35.199491524 +0000 UTC m=+0.188943399 container attach 663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 09:48:35 compute-0 busy_ptolemy[335641]: 167 167
Oct  3 09:48:35 compute-0 systemd[1]: libpod-663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79.scope: Deactivated successfully.
Oct  3 09:48:35 compute-0 podman[335601]: 2025-10-03 09:48:35.201566311 +0000 UTC m=+0.191018186 container died 663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:48:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2df5a27cb1328f7dcd58e619e381d0c09df5ddf739deb77d0cb842c00ec635af-merged.mount: Deactivated successfully.
Oct  3 09:48:35 compute-0 podman[335601]: 2025-10-03 09:48:35.263949438 +0000 UTC m=+0.253401323 container remove 663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:48:35 compute-0 systemd[1]: libpod-conmon-663a56b1248c422e5be49bfadeb7964380cec05b1b49c0bfea2bc2bb91c3af79.scope: Deactivated successfully.
Oct  3 09:48:35 compute-0 python3.9[335684]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:35 compute-0 podman[335692]: 2025-10-03 09:48:35.425814945 +0000 UTC m=+0.034946275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:48:35 compute-0 podman[335692]: 2025-10-03 09:48:35.519582382 +0000 UTC m=+0.128713612 container create 3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct  3 09:48:35 compute-0 systemd[1]: Started libpod-conmon-3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de.scope.
Oct  3 09:48:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af245cc05726faf52648ecc9a5adfa941338300f2f88b458afc91585d6770007/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af245cc05726faf52648ecc9a5adfa941338300f2f88b458afc91585d6770007/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af245cc05726faf52648ecc9a5adfa941338300f2f88b458afc91585d6770007/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af245cc05726faf52648ecc9a5adfa941338300f2f88b458afc91585d6770007/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af245cc05726faf52648ecc9a5adfa941338300f2f88b458afc91585d6770007/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:35 compute-0 podman[335692]: 2025-10-03 09:48:35.64295299 +0000 UTC m=+0.252084240 container init 3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:48:35 compute-0 podman[335692]: 2025-10-03 09:48:35.654486651 +0000 UTC m=+0.263617921 container start 3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:48:35 compute-0 podman[335692]: 2025-10-03 09:48:35.661045112 +0000 UTC m=+0.270176372 container attach 3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 09:48:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:36 compute-0 podman[335843]: 2025-10-03 09:48:36.601425693 +0000 UTC m=+0.121427997 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 09:48:36 compute-0 podman[335858]: 2025-10-03 09:48:36.607832119 +0000 UTC m=+0.117856032 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  3 09:48:36 compute-0 podman[335849]: 2025-10-03 09:48:36.608540222 +0000 UTC m=+0.119324920 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, name=ubi9-minimal, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 09:48:36 compute-0 podman[335838]: 2025-10-03 09:48:36.624757983 +0000 UTC m=+0.153190778 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, container_name=kepler, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=base rhel9)
Oct  3 09:48:36 compute-0 podman[335840]: 2025-10-03 09:48:36.623026577 +0000 UTC m=+0.150946967 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:48:36 compute-0 podman[335842]: 2025-10-03 09:48:36.634784456 +0000 UTC m=+0.154904464 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:48:36 compute-0 podman[335844]: 2025-10-03 09:48:36.664361247 +0000 UTC m=+0.180253899 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:48:36 compute-0 nostalgic_kilby[335731]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:48:36 compute-0 nostalgic_kilby[335731]: --> relative data size: 1.0
Oct  3 09:48:36 compute-0 nostalgic_kilby[335731]: --> All data devices are unavailable
Oct  3 09:48:36 compute-0 systemd[1]: libpod-3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de.scope: Deactivated successfully.
Oct  3 09:48:36 compute-0 systemd[1]: libpod-3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de.scope: Consumed 1.120s CPU time.
Oct  3 09:48:36 compute-0 podman[335692]: 2025-10-03 09:48:36.84938965 +0000 UTC m=+1.458520900 container died 3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 09:48:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-af245cc05726faf52648ecc9a5adfa941338300f2f88b458afc91585d6770007-merged.mount: Deactivated successfully.
Oct  3 09:48:36 compute-0 python3.9[335984]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:48:36 compute-0 podman[335692]: 2025-10-03 09:48:36.925452666 +0000 UTC m=+1.534583896 container remove 3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:48:36 compute-0 systemd[1]: libpod-conmon-3bbe2f78d7c514f7022a84e5a16910cd5e4660f26c45895bfd2ef3aa79af73de.scope: Deactivated successfully.
Oct  3 09:48:36 compute-0 systemd[1]: Reloading.
Oct  3 09:48:37 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:48:37 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:48:38 compute-0 python3.9[336348]: ansible-ansible.builtin.service_facts Invoked
Oct  3 09:48:38 compute-0 podman[336360]: 2025-10-03 09:48:38.116002164 +0000 UTC m=+0.114144192 container create 982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 09:48:38 compute-0 podman[336360]: 2025-10-03 09:48:38.031494766 +0000 UTC m=+0.029636814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:48:38 compute-0 network[336392]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 09:48:38 compute-0 network[336393]: 'network-scripts' will be removed from distribution in near future.
Oct  3 09:48:38 compute-0 network[336394]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 09:48:38 compute-0 systemd[1]: Started libpod-conmon-982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373.scope.
Oct  3 09:48:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:48:38 compute-0 podman[336360]: 2025-10-03 09:48:38.228804553 +0000 UTC m=+0.226946601 container init 982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:48:38 compute-0 podman[336360]: 2025-10-03 09:48:38.245271493 +0000 UTC m=+0.243413521 container start 982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:48:38 compute-0 podman[336360]: 2025-10-03 09:48:38.250046166 +0000 UTC m=+0.248188194 container attach 982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:48:38 compute-0 mystifying_kapitsa[336398]: 167 167
Oct  3 09:48:38 compute-0 systemd[1]: libpod-982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373.scope: Deactivated successfully.
Oct  3 09:48:38 compute-0 podman[336360]: 2025-10-03 09:48:38.255219682 +0000 UTC m=+0.253361720 container died 982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:48:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fadd612ec087bc878492cc15b0c24a6f5a82a7cfa1ddcb15de7f68d4ee9561fa-merged.mount: Deactivated successfully.
Oct  3 09:48:39 compute-0 podman[336360]: 2025-10-03 09:48:39.260853102 +0000 UTC m=+1.258995130 container remove 982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct  3 09:48:39 compute-0 systemd[1]: libpod-conmon-982f6a4d16a6c832919bbc28466744b8b5abb1f2131622e09613609839fa8373.scope: Deactivated successfully.
Oct  3 09:48:39 compute-0 podman[336432]: 2025-10-03 09:48:39.437158614 +0000 UTC m=+0.049202333 container create deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:48:39 compute-0 systemd[1]: Started libpod-conmon-deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b.scope.
Oct  3 09:48:39 compute-0 podman[336432]: 2025-10-03 09:48:39.418751762 +0000 UTC m=+0.030795501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:48:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f5e708a8d8885d42b1b4e117f8dc568992e76d1cfeb0120a055d2ba9db7386/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f5e708a8d8885d42b1b4e117f8dc568992e76d1cfeb0120a055d2ba9db7386/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f5e708a8d8885d42b1b4e117f8dc568992e76d1cfeb0120a055d2ba9db7386/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f5e708a8d8885d42b1b4e117f8dc568992e76d1cfeb0120a055d2ba9db7386/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:39 compute-0 podman[336432]: 2025-10-03 09:48:39.553072893 +0000 UTC m=+0.165116712 container init deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 09:48:39 compute-0 podman[336432]: 2025-10-03 09:48:39.565831683 +0000 UTC m=+0.177875402 container start deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:48:39 compute-0 podman[336432]: 2025-10-03 09:48:39.570815423 +0000 UTC m=+0.182859172 container attach deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 09:48:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]: {
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:    "0": [
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:        {
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "devices": [
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "/dev/loop3"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            ],
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_name": "ceph_lv0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_size": "21470642176",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "name": "ceph_lv0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "tags": {
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cluster_name": "ceph",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.crush_device_class": "",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.encrypted": "0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osd_id": "0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.type": "block",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.vdo": "0"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            },
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "type": "block",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "vg_name": "ceph_vg0"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:        }
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:    ],
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:    "1": [
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:        {
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "devices": [
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "/dev/loop4"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            ],
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_name": "ceph_lv1",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_size": "21470642176",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "name": "ceph_lv1",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "tags": {
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cluster_name": "ceph",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.crush_device_class": "",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.encrypted": "0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osd_id": "1",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.type": "block",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.vdo": "0"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            },
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "type": "block",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "vg_name": "ceph_vg1"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:        }
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:    ],
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:    "2": [
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:        {
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "devices": [
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "/dev/loop5"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            ],
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_name": "ceph_lv2",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_size": "21470642176",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "name": "ceph_lv2",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "tags": {
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.cluster_name": "ceph",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.crush_device_class": "",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.encrypted": "0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osd_id": "2",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.type": "block",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:                "ceph.vdo": "0"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            },
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "type": "block",
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:            "vg_name": "ceph_vg2"
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:        }
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]:    ]
Oct  3 09:48:40 compute-0 elastic_varahamihira[336452]: }
Oct  3 09:48:40 compute-0 systemd[1]: libpod-deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b.scope: Deactivated successfully.
Oct  3 09:48:40 compute-0 podman[336432]: 2025-10-03 09:48:40.439686404 +0000 UTC m=+1.051730123 container died deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 09:48:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4f5e708a8d8885d42b1b4e117f8dc568992e76d1cfeb0120a055d2ba9db7386-merged.mount: Deactivated successfully.
Oct  3 09:48:40 compute-0 podman[336432]: 2025-10-03 09:48:40.510018946 +0000 UTC m=+1.122062665 container remove deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_varahamihira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 09:48:40 compute-0 systemd[1]: libpod-conmon-deba94a555daa6a6b1fc073ba6274b7425cea73b009580fa86a6eb271e88ff7b.scope: Deactivated successfully.
Oct  3 09:48:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:41 compute-0 podman[336690]: 2025-10-03 09:48:41.254309169 +0000 UTC m=+0.035953587 container create 03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:48:41 compute-0 systemd[1]: Started libpod-conmon-03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a.scope.
Oct  3 09:48:41 compute-0 podman[336690]: 2025-10-03 09:48:41.239573545 +0000 UTC m=+0.021217993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:48:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:48:41 compute-0 podman[336690]: 2025-10-03 09:48:41.354315246 +0000 UTC m=+0.135959684 container init 03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_stonebraker, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:48:41 compute-0 podman[336690]: 2025-10-03 09:48:41.370799997 +0000 UTC m=+0.152444445 container start 03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  3 09:48:41 compute-0 peaceful_stonebraker[336711]: 167 167
Oct  3 09:48:41 compute-0 podman[336690]: 2025-10-03 09:48:41.377386258 +0000 UTC m=+0.159030676 container attach 03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_stonebraker, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:48:41 compute-0 podman[336690]: 2025-10-03 09:48:41.3783473 +0000 UTC m=+0.159991728 container died 03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 09:48:41 compute-0 systemd[1]: libpod-03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a.scope: Deactivated successfully.
Oct  3 09:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1719d10a343060e06c56b005b233d1dfcfebcfb8bc8e525951d3ad1f812a4f2f-merged.mount: Deactivated successfully.
Oct  3 09:48:41 compute-0 podman[336690]: 2025-10-03 09:48:41.430062213 +0000 UTC m=+0.211706631 container remove 03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_stonebraker, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:48:41 compute-0 systemd[1]: libpod-conmon-03c65a85ba3a2c1ce1dcc9d904d2e822c1729288bae1006c44ef6c4aad3f1a8a.scope: Deactivated successfully.
Oct  3 09:48:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:48:41.573 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:48:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:48:41.574 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:48:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:48:41.574 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:48:41 compute-0 podman[336746]: 2025-10-03 09:48:41.644066317 +0000 UTC m=+0.057134659 container create 4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ardinghelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 09:48:41 compute-0 systemd[1]: Started libpod-conmon-4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29.scope.
Oct  3 09:48:41 compute-0 podman[336746]: 2025-10-03 09:48:41.623156274 +0000 UTC m=+0.036224646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:48:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eb75a8954a30cf527b3341152a1e3bc4184e950931806d6fb4c42822cf0641/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eb75a8954a30cf527b3341152a1e3bc4184e950931806d6fb4c42822cf0641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eb75a8954a30cf527b3341152a1e3bc4184e950931806d6fb4c42822cf0641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7eb75a8954a30cf527b3341152a1e3bc4184e950931806d6fb4c42822cf0641/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:48:41 compute-0 podman[336746]: 2025-10-03 09:48:41.762791987 +0000 UTC m=+0.175860349 container init 4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ardinghelli, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:48:41 compute-0 podman[336746]: 2025-10-03 09:48:41.780587289 +0000 UTC m=+0.193655631 container start 4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:48:41 compute-0 podman[336746]: 2025-10-03 09:48:41.7868411 +0000 UTC m=+0.199909472 container attach 4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:48:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]: {
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "osd_id": 1,
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "type": "bluestore"
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:    },
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "osd_id": 2,
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "type": "bluestore"
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:    },
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "osd_id": 0,
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:        "type": "bluestore"
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]:    }
Oct  3 09:48:42 compute-0 funny_ardinghelli[336766]: }
Oct  3 09:48:42 compute-0 systemd[1]: libpod-4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29.scope: Deactivated successfully.
Oct  3 09:48:42 compute-0 systemd[1]: libpod-4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29.scope: Consumed 1.047s CPU time.
Oct  3 09:48:42 compute-0 conmon[336766]: conmon 4bc595c4bef3b1ed1b18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29.scope/container/memory.events
Oct  3 09:48:42 compute-0 podman[336746]: 2025-10-03 09:48:42.832747595 +0000 UTC m=+1.245815937 container died 4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:48:42 compute-0 python3.9[336929]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7eb75a8954a30cf527b3341152a1e3bc4184e950931806d6fb4c42822cf0641-merged.mount: Deactivated successfully.
Oct  3 09:48:42 compute-0 podman[336746]: 2025-10-03 09:48:42.922924756 +0000 UTC m=+1.335993098 container remove 4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ardinghelli, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:48:42 compute-0 systemd[1]: libpod-conmon-4bc595c4bef3b1ed1b184c251c5b6cacfd9a673c2ad3101087b121ad84279b29.scope: Deactivated successfully.
Oct  3 09:48:42 compute-0 podman[336956]: 2025-10-03 09:48:42.95850062 +0000 UTC m=+0.103085487 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  3 09:48:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:48:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:48:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:48:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:48:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c85cda09-457c-4095-a475-fbb5afcf6a49 does not exist
Oct  3 09:48:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 480102e6-f162-4bfe-981b-ab7c3c2c49ee does not exist
Oct  3 09:48:43 compute-0 python3.9[337186]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:48:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:48:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:44 compute-0 python3.9[337339]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:45 compute-0 python3.9[337492]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:48:45
Oct  3 09:48:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:48:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:48:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.meta', 'vms', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'volumes', 'cephfs.cephfs.meta']
Oct  3 09:48:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:48:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:46 compute-0 python3.9[337645]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:47 compute-0 python3.9[337798]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:48 compute-0 python3.9[337951]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:49 compute-0 python3.9[338104]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:48:49 compute-0 podman[338231]: 2025-10-03 09:48:49.818374181 +0000 UTC m=+0.078499566 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 09:48:49 compute-0 podman[338230]: 2025-10-03 09:48:49.821451871 +0000 UTC m=+0.081223924 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  3 09:48:50 compute-0 python3.9[338297]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:50 compute-0 python3.9[338449]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:51 compute-0 python3.9[338601]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:52 compute-0 python3.9[338753]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:52 compute-0 python3.9[338905]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:53 compute-0 python3.9[339057]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:54 compute-0 python3.9[339209]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:48:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 09:48:55 compute-0 python3.9[339361]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:48:55 compute-0 python3.9[339513]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:56 compute-0 python3.9[339665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:57 compute-0 python3.9[339817]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:48:58 compute-0 python3.9[339969]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:59 compute-0 python3.9[340121]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:48:59 compute-0 podman[157165]: time="2025-10-03T09:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:48:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40788 "" "Go-http-client/1.1"
Oct  3 09:48:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8086 "" "Go-http-client/1.1"
Oct  3 09:48:59 compute-0 python3.9[340273]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:49:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:00 compute-0 python3.9[340425]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:49:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.764054) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484940764115, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1699, "num_deletes": 252, "total_data_size": 2853093, "memory_usage": 2898136, "flush_reason": "Manual Compaction"}
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484940775508, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1626003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11815, "largest_seqno": 13513, "table_properties": {"data_size": 1620341, "index_size": 2801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14109, "raw_average_key_size": 20, "raw_value_size": 1607926, "raw_average_value_size": 2290, "num_data_blocks": 130, "num_entries": 702, "num_filter_entries": 702, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759484752, "oldest_key_time": 1759484752, "file_creation_time": 1759484940, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11511 microseconds, and 6227 cpu microseconds.
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.775563) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1626003 bytes OK
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.775584) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.778027) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.778041) EVENT_LOG_v1 {"time_micros": 1759484940778036, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.778061) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2845820, prev total WAL file size 2845820, number of live WAL files 2.
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.778971) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1587KB)], [29(7946KB)]
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484940779044, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9763041, "oldest_snapshot_seqno": -1}
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4000 keys, 7607175 bytes, temperature: kUnknown
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484940826514, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7607175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7578375, "index_size": 17682, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95361, "raw_average_key_size": 23, "raw_value_size": 7504179, "raw_average_value_size": 1876, "num_data_blocks": 769, "num_entries": 4000, "num_filter_entries": 4000, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759484940, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.827029) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7607175 bytes
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.829455) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.1 rd, 159.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.7) write-amplify(4.7) OK, records in: 4427, records dropped: 427 output_compression: NoCompression
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.829476) EVENT_LOG_v1 {"time_micros": 1759484940829467, "job": 12, "event": "compaction_finished", "compaction_time_micros": 47836, "compaction_time_cpu_micros": 19021, "output_level": 6, "num_output_files": 1, "total_output_size": 7607175, "num_input_records": 4427, "num_output_records": 4000, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484940830890, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759484940833064, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.778803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.833433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.833438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.833439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.833440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:49:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:49:00.833442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:49:01 compute-0 python3.9[340577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:49:01 compute-0 openstack_network_exporter[159287]: ERROR   09:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:49:01 compute-0 openstack_network_exporter[159287]: ERROR   09:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:49:01 compute-0 openstack_network_exporter[159287]: ERROR   09:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:49:01 compute-0 openstack_network_exporter[159287]: ERROR   09:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:49:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:49:01 compute-0 openstack_network_exporter[159287]: ERROR   09:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:49:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:49:02 compute-0 python3.9[340731]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:03 compute-0 python3.9[340883]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 09:49:04 compute-0 python3.9[341035]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:49:04 compute-0 systemd[1]: Reloading.
Oct  3 09:49:04 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:49:04 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:49:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:05 compute-0 python3.9[341222]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:06 compute-0 python3.9[341375]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:06 compute-0 podman[341502]: 2025-10-03 09:49:06.782826803 +0000 UTC m=+0.085762809 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:49:06 compute-0 podman[341501]: 2025-10-03 09:49:06.808124997 +0000 UTC m=+0.112065746 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:49:06 compute-0 podman[341504]: 2025-10-03 09:49:06.81351914 +0000 UTC m=+0.103414277 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:49:06 compute-0 podman[341503]: 2025-10-03 09:49:06.817706306 +0000 UTC m=+0.116731716 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true)
Oct  3 09:49:06 compute-0 podman[341500]: 2025-10-03 09:49:06.828468342 +0000 UTC m=+0.135550371 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc.)
Oct  3 09:49:06 compute-0 podman[341510]: 2025-10-03 09:49:06.83741461 +0000 UTC m=+0.125391975 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 09:49:06 compute-0 podman[341522]: 2025-10-03 09:49:06.854264922 +0000 UTC m=+0.139077985 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:49:06 compute-0 python3.9[341645]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:07 compute-0 python3.9[341817]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:08 compute-0 python3.9[341970]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:09 compute-0 python3.9[342123]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:10 compute-0 python3.9[342276]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:11 compute-0 python3.9[342429]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:49:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:12 compute-0 python3.9[342582]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:13 compute-0 podman[342734]: 2025-10-03 09:49:13.130801379 +0000 UTC m=+0.067580595 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  3 09:49:13 compute-0 python3.9[342735]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:14 compute-0 python3.9[342905]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:14 compute-0 python3.9[343057]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:15 compute-0 python3.9[343209]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:49:16 compute-0 python3.9[343361]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:17 compute-0 python3.9[343513]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:17 compute-0 python3.9[343665]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:18 compute-0 python3.9[343817]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:19 compute-0 python3.9[343969]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:20 compute-0 podman[344094]: 2025-10-03 09:49:20.174062809 +0000 UTC m=+0.103894673 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Oct  3 09:49:20 compute-0 podman[344095]: 2025-10-03 09:49:20.195614942 +0000 UTC m=+0.116986014 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 09:49:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:20 compute-0 python3.9[344160]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:21 compute-0 python3.9[344312]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:26 compute-0 python3.9[344464]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  3 09:49:27 compute-0 python3.9[344617]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  3 09:49:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:28 compute-0 python3.9[344775]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  3 09:49:29 compute-0 podman[157165]: time="2025-10-03T09:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:49:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 40788 "" "Go-http-client/1.1"
Oct  3 09:49:29 compute-0 systemd-logind[798]: New session 59 of user zuul.
Oct  3 09:49:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8087 "" "Go-http-client/1.1"
Oct  3 09:49:29 compute-0 systemd[1]: Started Session 59 of User zuul.
Oct  3 09:49:29 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Oct  3 09:49:29 compute-0 systemd-logind[798]: Session 59 logged out. Waiting for processes to exit.
Oct  3 09:49:30 compute-0 systemd-logind[798]: Removed session 59.
Oct  3 09:49:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:30 compute-0 python3.9[344961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:49:31 compute-0 python3.9[345082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484970.1685598-1555-20463686700983/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:31 compute-0 openstack_network_exporter[159287]: ERROR   09:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:49:31 compute-0 openstack_network_exporter[159287]: ERROR   09:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:49:31 compute-0 openstack_network_exporter[159287]: ERROR   09:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:49:31 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:49:31 compute-0 openstack_network_exporter[159287]: ERROR   09:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:49:31 compute-0 openstack_network_exporter[159287]: ERROR   09:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:49:31 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:49:32 compute-0 python3.9[345232]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:49:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:32 compute-0 python3.9[345308]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:33 compute-0 python3.9[345458]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:49:33 compute-0 python3.9[345579]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484972.811809-1555-225142431555846/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:34 compute-0 python3.9[345729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:49:35 compute-0 python3.9[345850]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484974.1746223-1555-254285563559198/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:35 compute-0 auditd[710]: Audit daemon rotating log files
Oct  3 09:49:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:36 compute-0 python3.9[346000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:49:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:36 compute-0 python3.9[346121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484975.4837945-1555-94278588689372/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:37 compute-0 podman[346282]: 2025-10-03 09:49:37.339826317 +0000 UTC m=+0.105980141 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 09:49:37 compute-0 podman[346269]: 2025-10-03 09:49:37.34115441 +0000 UTC m=+0.105128883 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 09:49:37 compute-0 podman[346252]: 2025-10-03 09:49:37.341983527 +0000 UTC m=+0.128133693 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:49:37 compute-0 podman[346254]: 2025-10-03 09:49:37.344786167 +0000 UTC m=+0.126573463 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:49:37 compute-0 podman[346257]: 2025-10-03 09:49:37.34736536 +0000 UTC m=+0.124307051 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 09:49:37 compute-0 podman[346246]: 2025-10-03 09:49:37.354906402 +0000 UTC m=+0.143488487 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Oct  3 09:49:37 compute-0 podman[346258]: 2025-10-03 09:49:37.402010307 +0000 UTC m=+0.173864054 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:49:37 compute-0 python3.9[346339]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:49:38 compute-0 python3.9[346566]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:49:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.954 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.955 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.955 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f70b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fa35c9f7170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b8940b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.957 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.958 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35df74380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35b894380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e566ba0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.959 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f73b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.960 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.960 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fa35b894080>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fa35c9f71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.965 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fa35c9f7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fa35c9f7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fa35c9f72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fa35c9f7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fa35c955970>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fa35b894350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fa35c92f7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.966 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fa35c9f7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fa35c9f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.967 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.965 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7bf0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.971 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35e6b9400>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fa35c9f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.972 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fa35e6e6180>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.973 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.972 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.974 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7c50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.975 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7d10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.976 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.977 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.978 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fa35c9f7fe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fa35b3b7500>] with cache [{}], pollster history [{'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'cpu': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fa35c9f73e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fa35c9f7c20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fa35c9f7440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.978 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fa35c9f7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fa35c9f7ce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fa35c9f7d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fa35c9f7e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fa35c9f7650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fa35c9f7e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fa35c9f76b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.979 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fa35c9f7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fa35c9f7fb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fa35c9b79b0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.980 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.987 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:49:38 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:49:38.988 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
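Every pollster in the cycle above followed the same path: run the local_instances discovery once, cache the empty result ({'local_instances': []}), record the meter in the pollster history, and skip polling because no instances are running on this compute node yet. A compact sketch of that discovery-cache/skip logic, assuming a trivial discovery callable; the names mirror the log, the control flow is illustrative rather than ceilometer's actual code:

    def run_cycle(pollsters, discover):
        discovery_cache = {}  # e.g. {'local_instances': []} above
        history = {}          # pollster history keyed by meter name

        for meter, method in pollsters:
            if method not in discovery_cache:
                discovery_cache[method] = discover(method)  # once per cycle
            resources = discovery_cache[method]
            history[meter] = list(resources)
            if not resources:
                print(f"Skip pollster {meter}, no resources found this cycle")
                continue
            # poll each resource and emit samples here

    run_cycle([("cpu", "local_instances"), ("memory.usage", "local_instances")],
              discover=lambda method: [])  # no local instances yet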
Oct  3 09:49:39 compute-0 python3.9[346718]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:49:39 compute-0 python3.9[346871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
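Both stat invocations request a SHA-1 checksum of /var/lib/nova/compute_id so the copy that follows can be skipped when the target content already matches. The equivalent check in plain Python, as a sketch:

    import hashlib
    import os
    from typing import Optional

    def sha1_of(path: str) -> Optional[str]:
        """SHA-1 hex digest of a file, or None if it does not exist."""
        if not os.path.exists(path):
            return None
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # The copy at 09:49:40 reports checksum=58e49370...; an equal digest here
    # would mean the file needs no rewrite.
    print(sha1_of("/var/lib/nova/compute_id"))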
Oct  3 09:49:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:40 compute-0 python3.9[346994]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759484979.2561486-1648-164386839381085/.source _original_basename=.apo8i4cv follow=False checksum=58e49370ef01e57ab05d4f04e3bae8e4bc016040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
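Note the attributes=+i on this copy: after writing compute_id as a 0400 file owned by nova, Ansible also sets the filesystem immutable flag, so the compute ID cannot be modified or unlinked (even by root) until the flag is cleared. That attribute is applied with chattr under the hood; a sketch, assuming e2fsprogs' chattr/lsattr are installed and the filesystem supports the flag:

    import subprocess

    PATH = "/var/lib/nova/compute_id"

    subprocess.run(["chattr", "+i", PATH], check=True)  # what attributes=+i applies
    flags = subprocess.run(["lsattr", PATH], capture_output=True,
                           text=True, check=True).stdout
    print(flags)  # an 'i' in the flags column confirms the immutable bit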
Oct  3 09:49:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:41 compute-0 python3.9[347146]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:49:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:49:41.573 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:49:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:49:41.574 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:49:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:49:41.574 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
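The three lockutils lines show the metadata agent serializing its periodic child-process check behind an in-process oslo.concurrency lock, acquired and released within a millisecond. A small usage sketch of the same primitive, assuming oslo.concurrency is installed; the lock name is taken from the log, the body is illustrative:

    from oslo_concurrency import lockutils

    def check_child_processes():
        # Internal (non-file) lock like the agent's: concurrent callers queue here.
        with lockutils.lock("_check_child_processes"):
            pass  # inspect monitored child processes

    check_child_processes()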
Oct  3 09:49:42 compute-0 python3.9[347298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:49:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:42 compute-0 python3.9[347419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484981.4953-1674-117426594632961/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:43 compute-0 podman[347556]: 2025-10-03 09:49:43.307412536 +0000 UTC m=+0.089050836 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 09:49:43 compute-0 python3.9[347617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:49:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:49:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:49:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:49:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:49:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:49:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:49:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 63259077-0e91-4f4f-b796-c1ab2a804c25 does not exist
Oct  3 09:49:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6fd1f8b3-2da7-432a-8ada-688cc1128df6 does not exist
Oct  3 09:49:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f22cd3d7-c71b-410c-b21d-b85cea0e9874 does not exist
Oct  3 09:49:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:49:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:49:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:49:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:49:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:49:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
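This burst of mon_commands is cephadm (running inside ceph-mgr) refreshing its state: config generate-minimal-conf renders the minimal ceph.conf it distributes to managed hosts, and the auth get calls fetch keyrings such as client.admin and client.bootstrap-osd. The same commands can be replayed by hand; a sketch assuming the ceph CLI and a usable admin keyring on this node:

    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    print(ceph("config", "generate-minimal-conf"))      # minimal ceph.conf body
    print(ceph("auth", "get", "client.bootstrap-osd"))  # keyring, as dispatched above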
Oct  3 09:49:44 compute-0 python3.9[347827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759484982.9195156-1689-265662581303847/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:49:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:49:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:49:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:49:44 compute-0 podman[348111]: 2025-10-03 09:49:44.795539536 +0000 UTC m=+0.054555155 container create 8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:49:44 compute-0 podman[348111]: 2025-10-03 09:49:44.773091264 +0000 UTC m=+0.032106913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:49:44 compute-0 systemd[1]: Started libpod-conmon-8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453.scope.
Oct  3 09:49:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:49:44 compute-0 podman[348111]: 2025-10-03 09:49:44.951632338 +0000 UTC m=+0.210647987 container init 8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 09:49:44 compute-0 podman[348111]: 2025-10-03 09:49:44.963504699 +0000 UTC m=+0.222520328 container start 8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_panini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:49:44 compute-0 podman[348111]: 2025-10-03 09:49:44.968908243 +0000 UTC m=+0.227923872 container attach 8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:49:44 compute-0 trusting_panini[348144]: 167 167
Oct  3 09:49:44 compute-0 podman[348111]: 2025-10-03 09:49:44.973057737 +0000 UTC m=+0.232073366 container died 8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_panini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 09:49:44 compute-0 systemd[1]: libpod-8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453.scope: Deactivated successfully.
Oct  3 09:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b95af51af128ec67cddf7e1986edfd96028d9ae6dd2ccea53055037ed68dd47-merged.mount: Deactivated successfully.
Oct  3 09:49:45 compute-0 python3.9[348141]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  3 09:49:45 compute-0 podman[348111]: 2025-10-03 09:49:45.046867052 +0000 UTC m=+0.305882681 container remove 8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:49:45 compute-0 systemd[1]: libpod-conmon-8197e5bab32b2078a016cc5dee0d833b55d450804d8cbedb634cd74d9c1a5453.scope: Deactivated successfully.
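The create, init, start, attach, died, and remove events above are journald's record of one short-lived container run. The single output line "167 167" is consistent with the uid/gid probe cephadm performs against /var/lib/ceph inside the image (167 is the ceph user and group on CentOS-based Ceph images). A sketch of reproducing it, with the command assumed rather than read from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm removes the container on exit, which produces the trailing
    # "container remove" event seen above.
    subprocess.run(["podman", "run", "--rm", IMAGE,
                    "stat", "-c", "%u %g", "/var/lib/ceph"], check=True)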
Oct  3 09:49:45 compute-0 podman[348190]: 2025-10-03 09:49:45.2560157 +0000 UTC m=+0.060959852 container create d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_meninsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:49:45 compute-0 systemd[1]: Started libpod-conmon-d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6.scope.
Oct  3 09:49:45 compute-0 podman[348190]: 2025-10-03 09:49:45.231600305 +0000 UTC m=+0.036544487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:49:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57fe2aeaa14f70ddc99cc25237bcd50591285d1061a5428852675786ca02a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57fe2aeaa14f70ddc99cc25237bcd50591285d1061a5428852675786ca02a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57fe2aeaa14f70ddc99cc25237bcd50591285d1061a5428852675786ca02a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57fe2aeaa14f70ddc99cc25237bcd50591285d1061a5428852675786ca02a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf57fe2aeaa14f70ddc99cc25237bcd50591285d1061a5428852675786ca02a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:45 compute-0 podman[348190]: 2025-10-03 09:49:45.379178902 +0000 UTC m=+0.184123074 container init d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:49:45 compute-0 podman[348190]: 2025-10-03 09:49:45.39871197 +0000 UTC m=+0.203656122 container start d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_meninsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 09:49:45 compute-0 podman[348190]: 2025-10-03 09:49:45.404144355 +0000 UTC m=+0.209088537 container attach d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_meninsky, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:49:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:45 compute-0 python3.9[348338]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:49:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:49:45
Oct  3 09:49:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:49:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:49:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'vms', 'backups', 'volumes', 'images', '.rgw.root']
Oct  3 09:49:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
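The balancer pass above ran in upmap mode with a 0.05 max-misplaced threshold over the eleven listed pools and prepared 0 of the up-to-10 changes allowed per pass, i.e. no upmap adjustments were needed. A sketch of querying that state (the JSON keys shown are the usual ones ceph balancer status returns):

    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status["mode"], status["active"])   # e.g. "upmap True"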
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
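The load_schedules lines come from the mgr rbd_support module refreshing its schedules for each rbd pool (vms, volumes, backups, images); the trash-purge and mirror-snapshot handlers each scan the same pools, hence the doubled entries. A sketch of listing what was loaded, using the matching rbd CLI subcommands (-R recurses across pools and images):

    import subprocess

    subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "-R"],
                   check=True)
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "-R"],
                   check=True)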
Oct  3 09:49:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:46 compute-0 eager_meninsky[348229]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:49:46 compute-0 eager_meninsky[348229]: --> relative data size: 1.0
Oct  3 09:49:46 compute-0 eager_meninsky[348229]: --> All data devices are unavailable
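The three "-->" lines match the report phase of ceph-volume's lvm batch mode: the drive group passed three LVM logical volumes and no physical disks, and all of them are "unavailable" because each LV already carries an OSD (see the lv_tags in the listing below), so nothing new would be created. A sketch of the same probe, with the LV paths taken from the JSON later in this log:

    import subprocess

    subprocess.run(["ceph-volume", "lvm", "batch", "--report",
                    "/dev/ceph_vg0/ceph_lv0",
                    "/dev/ceph_vg1/ceph_lv1",
                    "/dev/ceph_vg2/ceph_lv2"], check=True)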
Oct  3 09:49:46 compute-0 systemd[1]: libpod-d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6.scope: Deactivated successfully.
Oct  3 09:49:46 compute-0 systemd[1]: libpod-d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6.scope: Consumed 1.211s CPU time.
Oct  3 09:49:46 compute-0 podman[348190]: 2025-10-03 09:49:46.70091988 +0000 UTC m=+1.505864032 container died d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 09:49:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf57fe2aeaa14f70ddc99cc25237bcd50591285d1061a5428852675786ca02a6-merged.mount: Deactivated successfully.
Oct  3 09:49:46 compute-0 podman[348190]: 2025-10-03 09:49:46.816647812 +0000 UTC m=+1.621591964 container remove d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:49:46 compute-0 systemd[1]: libpod-conmon-d063b13f706669e9fd06aabbac82add46cc4dd24649770c9156c8e8458d6acd6.scope: Deactivated successfully.
Oct  3 09:49:46 compute-0 python3[348514]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
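ansible-edpm_container_manage walks config_dir for files matching config_patterns and manages one container per matched JSON file, logging container stdout under log_base_path. A sketch of just the matching step (the module itself lives in edpm-ansible; this only mirrors its file glob):

    import fnmatch
    import json
    import os

    config_dir = "/var/lib/openstack/config/containers"
    pattern = "nova_compute_init.json"

    for name in sorted(os.listdir(config_dir)):
        if fnmatch.fnmatch(name, pattern):
            with open(os.path.join(config_dir, name)) as f:
                print(name, list(json.load(f)))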
Oct  3 09:49:47 compute-0 podman[348689]: 2025-10-03 09:49:47.583591054 +0000 UTC m=+0.048788000 container create 26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:49:47 compute-0 systemd[1]: Started libpod-conmon-26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac.scope.
Oct  3 09:49:47 compute-0 podman[348689]: 2025-10-03 09:49:47.565525963 +0000 UTC m=+0.030722929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:49:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:49:47 compute-0 podman[348689]: 2025-10-03 09:49:47.693305813 +0000 UTC m=+0.158502789 container init 26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:49:47 compute-0 podman[348689]: 2025-10-03 09:49:47.702598292 +0000 UTC m=+0.167795238 container start 26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:49:47 compute-0 focused_shamir[348704]: 167 167
Oct  3 09:49:47 compute-0 systemd[1]: libpod-26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac.scope: Deactivated successfully.
Oct  3 09:49:47 compute-0 podman[348689]: 2025-10-03 09:49:47.753071466 +0000 UTC m=+0.218268482 container attach 26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 09:49:47 compute-0 podman[348689]: 2025-10-03 09:49:47.75475451 +0000 UTC m=+0.219951476 container died 26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 09:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f9444b323b164ab4ea440facabb205669fd18ca306d36a949962abeb7bf5bb4-merged.mount: Deactivated successfully.
Oct  3 09:49:47 compute-0 podman[348709]: 2025-10-03 09:49:47.847334438 +0000 UTC m=+0.125648423 container remove 26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 09:49:47 compute-0 systemd[1]: libpod-conmon-26a2bfc3937f29bbe7652257974145754e6dc08f020d580bf7ac1bb801e8aeac.scope: Deactivated successfully.
Oct  3 09:49:48 compute-0 podman[348741]: 2025-10-03 09:49:48.071751107 +0000 UTC m=+0.076078308 container create 0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_buck, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:49:48 compute-0 podman[348741]: 2025-10-03 09:49:48.032996131 +0000 UTC m=+0.037323342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:49:48 compute-0 systemd[1]: Started libpod-conmon-0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf.scope.
Oct  3 09:49:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cb9bb392ca471f42ccb3c776d2fa28cd992dd639c7e06937b9e79e27f617de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cb9bb392ca471f42ccb3c776d2fa28cd992dd639c7e06937b9e79e27f617de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cb9bb392ca471f42ccb3c776d2fa28cd992dd639c7e06937b9e79e27f617de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cb9bb392ca471f42ccb3c776d2fa28cd992dd639c7e06937b9e79e27f617de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:48 compute-0 podman[348741]: 2025-10-03 09:49:48.184939949 +0000 UTC m=+0.189267180 container init 0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_buck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:49:48 compute-0 podman[348741]: 2025-10-03 09:49:48.201866633 +0000 UTC m=+0.206193824 container start 0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_buck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:49:48 compute-0 podman[348741]: 2025-10-03 09:49:48.206945907 +0000 UTC m=+0.211273118 container attach 0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:49:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:49 compute-0 dazzling_buck[348757]: {
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:    "0": [
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:        {
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "devices": [
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "/dev/loop3"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            ],
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_name": "ceph_lv0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_size": "21470642176",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "name": "ceph_lv0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "tags": {
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cluster_name": "ceph",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.crush_device_class": "",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.encrypted": "0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osd_id": "0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.type": "block",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.vdo": "0"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            },
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "type": "block",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "vg_name": "ceph_vg0"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:        }
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:    ],
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:    "1": [
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:        {
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "devices": [
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "/dev/loop4"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            ],
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_name": "ceph_lv1",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_size": "21470642176",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "name": "ceph_lv1",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "tags": {
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cluster_name": "ceph",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.crush_device_class": "",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.encrypted": "0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osd_id": "1",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.type": "block",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.vdo": "0"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            },
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "type": "block",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "vg_name": "ceph_vg1"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:        }
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:    ],
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:    "2": [
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:        {
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "devices": [
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "/dev/loop5"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            ],
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_name": "ceph_lv2",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_size": "21470642176",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "name": "ceph_lv2",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "tags": {
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.cluster_name": "ceph",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.crush_device_class": "",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.encrypted": "0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osd_id": "2",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.type": "block",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:                "ceph.vdo": "0"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            },
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "type": "block",
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:            "vg_name": "ceph_vg2"
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:        }
Oct  3 09:49:49 compute-0 dazzling_buck[348757]:    ]
Oct  3 09:49:49 compute-0 dazzling_buck[348757]: }
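The JSON block above is an OSD inventory keyed by OSD id, in the shape ceph-volume lvm list --format json emits: one logical volume per OSD, with the binding recorded in the ceph.* LV tags. A minimal sketch that parses it into an osd_id, LV path, OSD fsid table:

    import json
    import subprocess

    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 25b10821-47d4-4e0b-9b6d-d16a0463c4d0
    # 1 /dev/ceph_vg1/ceph_lv1 16cef594-0067-4499-9298-5d83edf70190
    # 2 /dev/ceph_vg2/ceph_lv2 19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0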
Oct  3 09:49:49 compute-0 systemd[1]: libpod-0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf.scope: Deactivated successfully.
Oct  3 09:49:49 compute-0 podman[348741]: 2025-10-03 09:49:49.100499591 +0000 UTC m=+1.104826792 container died 0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 09:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-50cb9bb392ca471f42ccb3c776d2fa28cd992dd639c7e06937b9e79e27f617de-merged.mount: Deactivated successfully.
Oct  3 09:49:49 compute-0 podman[348741]: 2025-10-03 09:49:49.341725221 +0000 UTC m=+1.346052412 container remove 0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 09:49:49 compute-0 systemd[1]: libpod-conmon-0229b75a20b1db77615fa634ac18a3457134ef65034716fac90c4ed086e32caf.scope: Deactivated successfully.
Oct  3 09:49:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:50 compute-0 podman[348915]: 2025-10-03 09:49:50.526369658 +0000 UTC m=+0.052439257 container create 9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:49:50 compute-0 systemd[1]: Started libpod-conmon-9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8.scope.
Oct  3 09:49:50 compute-0 podman[348915]: 2025-10-03 09:49:50.50775812 +0000 UTC m=+0.033827729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:49:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:49:50 compute-0 podman[348915]: 2025-10-03 09:49:50.640297354 +0000 UTC m=+0.166366973 container init 9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 09:49:50 compute-0 podman[348915]: 2025-10-03 09:49:50.652400913 +0000 UTC m=+0.178470512 container start 9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 09:49:50 compute-0 agitated_lalande[348937]: 167 167
Oct  3 09:49:50 compute-0 systemd[1]: libpod-9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8.scope: Deactivated successfully.
Oct  3 09:49:50 compute-0 podman[348915]: 2025-10-03 09:49:50.659368037 +0000 UTC m=+0.185437646 container attach 9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:49:50 compute-0 podman[348915]: 2025-10-03 09:49:50.660544085 +0000 UTC m=+0.186613684 container died 9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 09:49:50 compute-0 podman[348927]: 2025-10-03 09:49:50.694306251 +0000 UTC m=+0.112546181 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 09:49:50 compute-0 podman[348931]: 2025-10-03 09:49:50.696322446 +0000 UTC m=+0.112116637 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
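The two health_status events come from podman's healthcheck timers executing the test configured in config_data (the mounted /openstack/healthcheck script) inside the multipathd and iscsid containers; both report healthy with a failing streak of 0. A sketch of running the same checks by hand:

    import subprocess

    for name in ("multipathd", "iscsid"):
        # `podman healthcheck run` executes the container's configured test
        # and exits 0 when it passes.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")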
Oct  3 09:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d59533ea405015cde5bd02ab57e05ad610f537caa2caa8d319852a974c628989-merged.mount: Deactivated successfully.
Oct  3 09:49:50 compute-0 podman[348915]: 2025-10-03 09:49:50.746654585 +0000 UTC m=+0.272724184 container remove 9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lalande, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 09:49:50 compute-0 systemd[1]: libpod-conmon-9668452e243a06703a88e812157d938e0e51677433f82cb962816d1e88a237d8.scope: Deactivated successfully.
Oct  3 09:49:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:51 compute-0 podman[348995]: 2025-10-03 09:49:51.017846489 +0000 UTC m=+0.094540562 container create 2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chandrasekhar, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:49:51 compute-0 podman[348995]: 2025-10-03 09:49:50.981398547 +0000 UTC m=+0.058092650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:49:51 compute-0 systemd[1]: Started libpod-conmon-2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc.scope.
Oct  3 09:49:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd166b78f73d365d5d54ca2ef19b2d9688a20eb9c32351bce068e05a7db926c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd166b78f73d365d5d54ca2ef19b2d9688a20eb9c32351bce068e05a7db926c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd166b78f73d365d5d54ca2ef19b2d9688a20eb9c32351bce068e05a7db926c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd166b78f73d365d5d54ca2ef19b2d9688a20eb9c32351bce068e05a7db926c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:49:51 compute-0 podman[348995]: 2025-10-03 09:49:51.170734677 +0000 UTC m=+0.247428770 container init 2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chandrasekhar, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 09:49:51 compute-0 podman[348995]: 2025-10-03 09:49:51.193133537 +0000 UTC m=+0.269827610 container start 2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 09:49:51 compute-0 podman[348995]: 2025-10-03 09:49:51.201519227 +0000 UTC m=+0.278213300 container attach 2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]: {
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "osd_id": 1,
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "type": "bluestore"
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:    },
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "osd_id": 2,
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "type": "bluestore"
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:    },
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "osd_id": 0,
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:        "type": "bluestore"
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]:    }
Oct  3 09:49:52 compute-0 exciting_chandrasekhar[349010]: }
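
The JSON above, printed by the short-lived ceph container, is a per-host OSD inventory in the shape of `ceph-volume raw list` output: a map keyed by osd_uuid, each entry carrying the cluster fsid, the backing /dev/mapper LV, the osd_id, and the objectstore type. A minimal sketch of consuming that map (the osd_list.json capture file is an assumption for illustration):

    import json

    # Parse the OSD inventory logged above (assumed saved to osd_list.json).
    with open("osd_list.json") as f:
        osds = json.load(f)

    # Keys are osd_uuids; each value repeats the uuid next to its device.
    for uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}: {info['device']} "
              f"({info['type']}, cluster {info['ceph_fsid']})")

Run against the three entries above, this prints osd.0 on ceph_vg0-ceph_lv0, osd.1 on ceph_vg1-ceph_lv1, and osd.2 on ceph_vg2-ceph_lv2, all bluestore in cluster 9b4e8c9a-5555-5510-a631-4742a1182561.
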
Oct  3 09:49:52 compute-0 systemd[1]: libpod-2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc.scope: Deactivated successfully.
Oct  3 09:49:52 compute-0 podman[348995]: 2025-10-03 09:49:52.284882698 +0000 UTC m=+1.361576771 container died 2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chandrasekhar, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:49:52 compute-0 systemd[1]: libpod-2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc.scope: Consumed 1.074s CPU time.
Oct  3 09:49:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:49:54 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
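
Each pg_autoscaler pass above recomputes, per pool, a raw PG target of capacity_ratio x bias x (target PGs per OSD x OSD count), then quantizes to a power of two. Assuming the default mon_target_pg_per_osd of 100 (not shown in the log) and the three OSDs on this host, the multiplier is 300, which reproduces the logged figures:

    # Sketch of the raw pg target arithmetic behind the log lines above.
    TARGET_PG_PER_OSD = 100  # mon_target_pg_per_osd default (assumed)
    NUM_OSDS = 3             # osd.0/1/2 from the inventory earlier

    def raw_pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    print(raw_pg_target(7.185749983720779e-06, 1.0))  # '.mgr' -> 0.0021557249951162337
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta' -> 0.0006104707950771635

The quantization and pool-minimum steps (0.0021... -> 1 PG, 0.0006... -> 16) are omitted here; by default the module also only resizes a pool when the target diverges from the current count by 3x or more, which is why the "current 32" pools stay put.
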
Oct  3 09:49:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:49:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:49:59 compute-0 podman[157165]: time="2025-10-03T09:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:50:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:01 compute-0 openstack_network_exporter[159287]: ERROR   09:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:50:01 compute-0 openstack_network_exporter[159287]: ERROR   09:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:50:01 compute-0 openstack_network_exporter[159287]: ERROR   09:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:50:01 compute-0 openstack_network_exporter[159287]: ERROR   09:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:50:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd166b78f73d365d5d54ca2ef19b2d9688a20eb9c32351bce068e05a7db926c2-merged.mount: Deactivated successfully.
Oct  3 09:50:03 compute-0 podman[348995]: 2025-10-03 09:50:03.238777669 +0000 UTC m=+12.315471742 container remove 2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 09:50:03 compute-0 podman[157165]: @ - - [03/Oct/2025:09:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42356 "" "Go-http-client/1.1"
Oct  3 09:50:03 compute-0 podman[157165]: @ - - [03/Oct/2025:09:50:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8070 "" "Go-http-client/1.1"
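
These two GETs are the podman_exporter polling the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock per its config_data below). The same endpoints can be queried directly; a minimal stdlib-only sketch:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over the podman unix socket instead of TCP.
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
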
Oct  3 09:50:03 compute-0 systemd[1]: libpod-conmon-2648a14f1656cf61993f2838c556431f839472bd47178f2d76f4f5cd6b4369dc.scope: Deactivated successfully.
Oct  3 09:50:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:50:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:50:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:50:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:50:03 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ba002876-bd17-48bb-bd26-bb8f09c989e3 does not exist
Oct  3 09:50:03 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ae914806-26df-4a10-9a58-6afebd53692b does not exist
Oct  3 09:50:03 compute-0 podman[348590]: 2025-10-03 09:50:03.325185309 +0000 UTC m=+16.257803431 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  3 09:50:03 compute-0 podman[349139]: 2025-10-03 09:50:03.513603049 +0000 UTC m=+0.071269983 container create 4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, container_name=nova_compute_init, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  3 09:50:03 compute-0 podman[349139]: 2025-10-03 09:50:03.473421967 +0000 UTC m=+0.031088921 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  3 09:50:03 compute-0 python3[348514]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  3 09:50:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:50:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:50:04 compute-0 python3.9[349347]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:50:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:05 compute-0 python3.9[349501]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  3 09:50:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:06 compute-0 python3.9[349653]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:50:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:07 compute-0 python3[349805]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:50:07 compute-0 podman[349838]: 2025-10-03 09:50:07.39412383 +0000 UTC m=+0.068448073 container create ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=nova_compute, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Oct  3 09:50:07 compute-0 podman[349838]: 2025-10-03 09:50:07.354206626 +0000 UTC m=+0.028530889 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  3 09:50:07 compute-0 python3[349805]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
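
The PODMAN-CONTAINER-DEBUG lines show how ansible-edpm_container_manage flattens each container's config_data dict into a podman create invocation: environment entries become --env flags, volumes become --volume flags, net/user/privileged map one-to-one, and the whole config_data dict is echoed back as a label. A rough sketch of that mapping (a hypothetical helper, not the module's actual code):

    def podman_create_args(name: str, cfg: dict) -> list[str]:
        # Mirror the flag expansion visible in the debug lines above.
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        args += ["--log-driver", "journald", "--log-level", "info",
                 "--network", cfg["net"],
                 f"--privileged={cfg.get('privileged', False)}",
                 "--user", cfg.get("user", "root")]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if cfg.get("command"):
            args.append(cfg["command"])
        return args
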
Oct  3 09:50:07 compute-0 podman[349913]: 2025-10-03 09:50:07.869466291 +0000 UTC m=+0.100169533 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct  3 09:50:07 compute-0 podman[349906]: 2025-10-03 09:50:07.874185373 +0000 UTC m=+0.126077647 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:50:07 compute-0 podman[349908]: 2025-10-03 09:50:07.882556232 +0000 UTC m=+0.133389512 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:50:07 compute-0 podman[349900]: 2025-10-03 09:50:07.886374465 +0000 UTC m=+0.143011872 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Oct  3 09:50:07 compute-0 podman[349936]: 2025-10-03 09:50:07.922216288 +0000 UTC m=+0.140214131 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=edpm)
Oct  3 09:50:07 compute-0 podman[349931]: 2025-10-03 09:50:07.930700921 +0000 UTC m=+0.161150555 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct  3 09:50:07 compute-0 podman[349922]: 2025-10-03 09:50:07.971138892 +0000 UTC m=+0.205657986 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct  3 09:50:08 compute-0 python3.9[350164]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:50:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:09 compute-0 python3.9[350318]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:50:10 compute-0 python3.9[350469]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759485009.3315516-1781-211741432394531/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:50:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:10 compute-0 python3.9[350545]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:50:10 compute-0 systemd[1]: Reloading.
Oct  3 09:50:10 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:50:10 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:50:11 compute-0 python3.9[350655]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:50:11 compute-0 systemd[1]: Reloading.
Oct  3 09:50:12 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:50:12 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:50:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:12 compute-0 systemd[1]: Starting nova_compute container...
Oct  3 09:50:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
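
The recurring "supports timestamps until 2038" warnings note that these XFS overlays were formatted without the bigtime feature, so inode timestamps are 32-bit and top out at 0x7fffffff seconds past the Unix epoch. A one-liner confirms where that limit lands:

    from datetime import datetime, timezone

    # 0x7fffffff is the classic signed 32-bit time_t ceiling.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00
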
Oct  3 09:50:12 compute-0 podman[350695]: 2025-10-03 09:50:12.603989054 +0000 UTC m=+0.138562468 container init ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 09:50:12 compute-0 podman[350695]: 2025-10-03 09:50:12.622040405 +0000 UTC m=+0.156613789 container start ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 09:50:12 compute-0 podman[350695]: nova_compute
Oct  3 09:50:12 compute-0 nova_compute[350711]: + sudo -E kolla_set_configs
Oct  3 09:50:12 compute-0 systemd[1]: Started nova_compute container.
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Validating config file
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying service configuration files
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Deleting /etc/ceph
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Creating directory /etc/ceph
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/ceph
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Writing out command to execute
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  3 09:50:12 compute-0 nova_compute[350711]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  3 09:50:12 compute-0 nova_compute[350711]: ++ cat /run_command
Oct  3 09:50:12 compute-0 nova_compute[350711]: + CMD=nova-compute
Oct  3 09:50:12 compute-0 nova_compute[350711]: + ARGS=
Oct  3 09:50:12 compute-0 nova_compute[350711]: + sudo kolla_copy_cacerts
Oct  3 09:50:12 compute-0 nova_compute[350711]: + [[ ! -n '' ]]
Oct  3 09:50:12 compute-0 nova_compute[350711]: + . kolla_extend_start
Oct  3 09:50:12 compute-0 nova_compute[350711]: Running command: 'nova-compute'
Oct  3 09:50:12 compute-0 nova_compute[350711]: + echo 'Running command: '\''nova-compute'\'''
Oct  3 09:50:12 compute-0 nova_compute[350711]: + umask 0022
Oct  3 09:50:12 compute-0 nova_compute[350711]: + exec nova-compute
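
The lines from "sudo -E kolla_set_configs" through "exec nova-compute" trace kolla's standard container entrypoint: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy, the command to run is written to /run_command, and the wrapper execs it. A condensed, illustrative analogue of that copy-and-exec flow (not kolla's actual source):

    import json, os, shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        config = json.load(f)

    # COPY_ALWAYS: unconditionally copy every declared config file and
    # apply the requested permissions, as the INFO lines above record.
    for item in config.get("config_files", []):
        src, dest = item["source"], item["dest"]
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        if os.path.isdir(src):
            shutil.copytree(src, dest, dirs_exist_ok=True)
        else:
            shutil.copy(src, dest)
        if "perm" in item:
            os.chmod(dest, int(item["perm"], 8))

    # The wrapper then reads the command and replaces itself with it.
    cmd = open("/run_command").read().strip().split()
    os.execvp(cmd[0], cmd)
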
Oct  3 09:50:13 compute-0 podman[350846]: 2025-10-03 09:50:13.533063991 +0000 UTC m=+0.093522729 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 09:50:13 compute-0 python3.9[350887]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:50:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:14 compute-0 python3.9[351040]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:50:14 compute-0 nova_compute[350711]: 2025-10-03 09:50:14.981 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  3 09:50:14 compute-0 nova_compute[350711]: 2025-10-03 09:50:14.981 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  3 09:50:14 compute-0 nova_compute[350711]: 2025-10-03 09:50:14.982 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  3 09:50:14 compute-0 nova_compute[350711]: 2025-10-03 09:50:14.982 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.133 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.151 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 09:50:15 compute-0 python3.9[351194]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.754 2 INFO nova.virt.driver [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct  3 09:50:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.889 2 INFO nova.compute.provider_config [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.903 2 DEBUG oslo_concurrency.lockutils [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.904 2 DEBUG oslo_concurrency.lockutils [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.904 2 DEBUG oslo_concurrency.lockutils [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.905 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.905 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.905 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.905 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.905 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.905 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.906 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.906 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.906 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.906 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.906 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.906 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.906 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.907 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.907 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.907 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.907 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.907 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.907 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.908 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.908 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.908 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.908 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.908 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.908 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.908 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.909 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.909 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.909 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.909 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.909 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.909 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.909 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.910 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.910 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.910 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.910 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.910 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.910 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.911 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.911 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.911 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.911 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.911 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.911 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.912 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.912 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.912 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.912 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.912 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.912 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.912 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.913 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.913 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.913 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.913 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.913 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.913 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.913 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.914 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.914 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.914 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.914 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.914 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.914 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.914 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.915 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.915 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.915 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.915 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.915 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.916 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.916 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.916 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.916 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.916 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.916 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.917 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.917 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.917 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.917 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.917 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.918 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.918 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.918 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.918 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.918 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.918 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.918 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.919 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.919 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.919 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.919 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.919 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.919 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.919 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.920 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.920 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.920 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.920 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.920 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.920 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.920 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.921 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.921 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.921 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.921 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.921 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.921 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.921 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.922 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.922 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.922 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.922 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.922 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.922 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.922 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.923 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.923 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.923 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.923 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.923 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.923 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.923 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.924 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.924 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.924 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.924 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.924 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.924 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.924 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.924 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.925 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.925 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.925 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.925 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.925 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.925 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.925 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.926 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.926 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.926 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.926 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.926 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.926 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.926 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.927 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.927 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.927 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.927 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.927 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.928 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.928 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.928 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.928 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.928 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.929 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.929 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.929 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.929 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.929 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.930 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.930 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.930 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.930 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.930 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.931 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.931 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.931 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.931 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.932 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.932 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.932 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.933 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.933 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.933 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.933 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.933 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.934 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.934 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.934 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.934 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.934 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.935 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.935 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.935 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.935 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.935 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.935 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.936 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.936 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.936 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.936 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.936 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.936 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.937 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.937 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.937 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.937 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.937 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.937 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.938 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.938 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.938 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.938 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.938 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.939 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.939 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.939 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.939 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.939 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.939 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.940 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.940 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.940 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.940 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.940 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.940 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.941 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.941 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.941 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.941 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.941 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.941 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.941 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.942 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.942 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.942 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.942 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.942 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.942 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.943 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.943 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.943 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.943 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.943 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.943 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.944 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.944 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.944 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.944 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.944 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.945 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.945 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.945 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.945 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.945 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.945 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.946 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.946 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.946 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.946 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.946 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.946 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.946 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.947 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.947 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.947 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.947 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.947 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.947 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.948 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.948 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.948 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.948 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.948 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.948 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.949 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.949 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.949 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.949 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.949 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.949 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.950 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.950 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.950 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.950 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.950 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.950 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.950 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.951 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.951 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.951 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.951 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.951 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.951 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.952 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.952 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.952 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.952 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.952 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.952 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.953 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.953 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.953 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.953 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.953 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.954 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.954 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.955 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.955 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.955 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.955 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.955 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.955 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.956 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.956 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.956 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.956 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.956 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.957 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.957 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.957 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.957 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.957 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.957 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.957 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.958 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.958 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.958 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.958 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.958 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.958 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.959 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.959 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.959 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.959 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.959 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.959 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.960 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.960 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.960 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.960 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.960 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.960 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.961 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.961 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.961 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.961 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.961 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.962 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.962 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.962 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.962 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.962 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.963 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.963 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.963 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.963 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.963 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.963 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.964 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.964 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.964 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.964 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.965 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.965 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.965 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.965 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.965 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.965 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.966 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.966 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.966 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.966 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.967 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.967 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.967 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.967 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.967 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.967 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.967 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.968 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.968 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.968 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.968 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.968 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.968 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.968 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.969 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.969 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.969 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.969 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.969 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.969 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.969 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.970 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.970 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.970 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.970 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.970 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.971 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.971 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.971 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.971 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.971 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.971 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.972 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.972 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.972 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.972 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.972 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.972 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.972 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.973 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.973 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.973 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.973 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.973 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.973 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.973 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.974 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.974 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.974 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.974 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.974 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
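
The barbican.* and barbican_service_user.* lines above are not events: they are oslo.config dumping every registered option at startup through log_opt_values, the method named at the end of each line. A minimal sketch of that mechanism, assuming only oslo.config and the stdlib logger; the two options registered here are a tiny subset of nova's real schema:

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.IntOpt('retry_delay', default=1),
         cfg.BoolOpt('verify_ssl', default=True)],
        group='barbican')

    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # real services pass sys.argv plus --config-file arguments
    # Emits one "group.option = value" DEBUG line per registered option,
    # which is exactly what each line above is; secret options print as ****.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
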
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.974 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.974 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.975 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.975 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.975 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.975 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.975 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.975 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.976 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.976 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.976 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.976 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.976 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.976 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.976 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.977 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
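
The vault.* group above belongs to Castellan's Vault key-manager backend. With the logged defaults it would reach a dev-style Vault at http://127.0.0.1:8200 over plain HTTP (use_ssl = False), KV secrets engine v2 mounted at "secret", and no credentials at all (root_token_id and the approle pair are None), so key-manager calls would fail until one of those is set. A rough equivalent of what that configuration points at, sketched with the hvac client (hvac is an assumption here, not something nova imports; the token and secret path are made up):

    import hvac

    # Mirrors vault.vault_url, vault.kv_mountpoint and vault.kv_version above;
    # the token is hypothetical -- the dump shows none configured.
    client = hvac.Client(url='http://127.0.0.1:8200', token='s.example')
    secret = client.secrets.kv.v2.read_secret_version(
        path='nova/key', mount_point='secret')
    print(secret['data']['data'])
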
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.977 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.977 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.977 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.977 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.977 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.977 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.978 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.978 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.978 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.978 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.978 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.978 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.978 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.979 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.979 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.979 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.979 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.979 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.979 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
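
The keystone.* block carries keystoneauth1 session and adapter options (service_type = identity, valid_interfaces = ['internal', 'public']) rather than credentials; nova builds its identity-service client from them with keystoneauth1's conf-loading helpers. A condensed sketch of that path, assuming an auth plugin is configured elsewhere in the same file:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF  # assume nova.conf has already been parsed into this
    auth = ks_loading.load_auth_from_conf_options(CONF, 'keystone')
    sess = ks_loading.load_session_from_conf_options(CONF, 'keystone',
                                                     auth=auth)
    # Honors the service_type, valid_interfaces, region_name and
    # min/max_version values dumped above.
    identity = ks_loading.load_adapter_from_conf_options(
        CONF, 'keystone', session=sess)
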
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.979 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.980 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.980 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.980 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.980 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.980 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.980 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.981 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.981 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.981 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.981 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.981 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.982 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.982 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.982 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.982 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.982 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.982 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.983 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.983 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.983 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.983 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.983 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.984 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.984 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.984 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.984 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.984 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.984 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.984 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.985 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.985 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.985 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.985 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.985 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.985 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.985 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.986 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.986 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.986 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.986 2 WARNING oslo_config.cfg [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  3 09:50:15 compute-0 nova_compute[350711]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  3 09:50:15 compute-0 nova_compute[350711]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Oct  3 09:50:15 compute-0 nova_compute[350711]: and ``live_migration_inbound_addr`` respectively.
Oct  3 09:50:15 compute-0 nova_compute[350711]: ).  Its value may be silently ignored in the future.#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.986 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
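
The warning above fires because live_migration_uri is still set explicitly; the %s in the logged template is where nova substitutes the migration target host. Under the replacement options, the same qemu+tls URI follows from live_migration_scheme = tls plus the target's live_migration_inbound_addr. The substitution itself is just string formatting (the host name below is made up for illustration):

    # Illustrative only: relates the deprecated template to its replacements.
    def migration_uri(template: str, target_host: str) -> str:
        # template as logged above: 'qemu+tls://%s/system'
        return template % target_host

    assert migration_uri('qemu+tls://%s/system', 'compute-1') == \
        'qemu+tls://compute-1/system'
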
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.986 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.987 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.987 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.987 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.987 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.987 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.988 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.988 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.988 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.988 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.988 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.988 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.989 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.989 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.989 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.989 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.989 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.989 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rbd_secret_uuid        = 9b4e8c9a-5555-5510-a631-4742a1182561 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.989 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.990 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.990 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.990 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.990 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.990 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.990 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.990 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.991 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.991 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.991 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.991 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.991 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.991 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.992 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.992 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.992 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.992 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.992 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.992 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.992 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.993 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.993 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.993 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.993 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.993 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.993 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.993 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.994 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.994 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.994 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.994 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.994 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
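
Taken together, libvirt.images_type = rbd, images_rbd_pool = vms, rbd_user = openstack and images_rbd_ceph_conf = /etc/ceph/ceph.conf above mean ephemeral disks live in the Ceph pool "vms" rather than on local storage. A quick way to verify that pool is reachable with the same identity, using python-rados directly (an assumption for illustration; nova itself goes through its own rbd driver wrapper):

    import rados

    # Same identity the dump shows: client.openstack via /etc/ceph/ceph.conf
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        print('vms pool exists:', cluster.pool_exists('vms'))
    finally:
        cluster.shutdown()
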
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.995 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.995 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.995 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.995 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.995 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.995 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.996 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.996 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.996 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.996 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.996 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.996 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.997 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.997 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.997 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.997 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.997 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.997 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.997 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.998 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.998 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.998 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.998 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.998 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.998 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.998 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.999 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.999 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
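
With neutron.service_metadata_proxy = True, the metadata API trusts requests relayed by neutron's proxy only if the X-Instance-ID-Signature header matches an HMAC-SHA256 of the instance UUID keyed with metadata_proxy_shared_secret (masked as **** above). Stripped of the request plumbing, the check reduces to:

    import hashlib
    import hmac

    def signature_valid(shared_secret: str, instance_id: str,
                        header_signature: str) -> bool:
        expected = hmac.new(shared_secret.encode(), instance_id.encode(),
                            hashlib.sha256).hexdigest()
        # Constant-time compare, so the secret cannot be probed byte by byte.
        return hmac.compare_digest(expected, header_signature)
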
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.999 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:15 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.999 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.999 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:15.999 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.000 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
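
notifications.notification_format = unversioned above means this service emits only the legacy notification payloads; the 'versioned_notifications' topic is configured but will stay quiet. A minimal oslo.messaging emitter along those lines (publisher id and payload are placeholders, not values from this host):

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    notifier = oslo_messaging.Notifier(
        transport, publisher_id='compute.compute-0', driver='messagingv2',
        topics=['versioned_notifications'])
    # Legacy-style event; versioned payloads use dedicated payload objects.
    notifier.info({}, 'compute.instance.create.end',
                  {'instance_id': '00000000-0000-0000-0000-000000000000'})
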
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.000 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.000 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.000 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.000 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.000 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.000 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.001 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.001 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.001 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.001 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.001 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.001 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.001 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.001 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.002 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.002 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.002 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.002 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.002 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.002 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.002 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.003 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.003 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.003 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.003 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.003 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.003 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.004 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.004 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.004 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.004 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.004 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.004 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.004 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.005 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.005 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.005 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.005 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.005 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.006 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.006 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.006 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.006 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.007 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.007 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.007 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.007 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.008 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.008 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.008 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.008 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.008 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.009 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.009 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.009 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.009 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.010 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.010 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.010 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.010 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.011 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.011 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.011 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.011 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.011 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.012 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.012 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.012 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.012 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.013 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.013 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.013 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.013 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.013 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.014 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.014 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.014 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.014 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.015 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.015 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.015 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.015 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.016 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.016 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.016 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.016 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.016 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.017 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.017 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.017 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.017 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.018 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.018 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.018 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.018 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.019 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.019 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.019 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.019 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.019 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.020 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.020 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.020 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.020 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.021 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.021 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.021 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.021 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.021 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.022 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.022 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.022 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.022 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.023 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.023 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.023 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.024 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.024 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.024 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.024 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.025 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.025 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.025 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.025 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.025 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.026 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.026 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.026 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.026 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.027 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.027 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.027 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.027 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.028 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.028 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.028 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.028 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.028 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.029 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.029 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.029 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.029 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.029 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.030 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.030 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.030 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.030 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.031 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.031 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.031 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.031 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.032 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.032 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.032 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.032 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.032 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.033 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.033 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.033 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.033 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.033 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.034 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.034 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.034 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.035 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.035 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.035 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.035 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.036 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.036 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.036 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.036 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.037 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.037 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.037 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.037 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.038 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.038 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.038 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.038 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.038 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.039 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.039 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.039 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.039 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.039 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.040 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.040 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.040 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.040 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.041 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.041 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.041 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.041 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.041 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.042 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.042 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.042 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.042 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.043 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.043 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.043 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.043 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.044 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.044 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.044 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.045 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.045 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.045 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.045 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.046 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.046 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.046 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.046 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.047 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.047 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.047 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.047 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.048 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.048 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.048 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.048 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.048 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.049 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.049 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.049 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.049 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.050 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.050 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.050 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.050 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.050 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.051 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.051 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.051 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.051 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.052 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.052 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.052 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.052 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.052 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.053 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.053 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.053 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.054 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.054 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.054 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.055 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.055 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.055 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.055 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.056 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.056 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.056 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.056 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.057 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.057 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.057 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.057 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.058 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.058 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.058 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.058 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.058 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.059 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.059 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.059 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.059 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.059 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.060 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.060 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.060 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.060 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.061 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.061 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.061 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.061 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.061 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.062 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.062 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.062 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.062 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.062 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.063 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.063 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.063 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.063 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.063 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.064 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.064 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.064 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.064 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.065 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.065 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.065 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.065 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.065 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.066 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.066 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.066 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.066 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.066 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.067 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.067 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.067 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.067 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.068 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.068 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.068 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.068 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.068 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.069 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.069 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.069 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.069 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.070 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.070 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.070 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.070 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.070 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.071 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.071 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.071 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.071 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.071 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.072 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.072 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.072 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.072 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.073 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.073 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.073 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.073 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.073 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.074 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.074 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.074 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.074 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.074 2 DEBUG oslo_service.service [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
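Editor's note: the block of DEBUG lines above is oslo.config's startup option dump. nova-compute calls ConfigOpts.log_opt_values() (the cfg.py:2609 frames in each line), which emits one line per registered option, masks secret values such as passwords and transport URLs as ****, and closes with the row of asterisks logged just above. A minimal, self-contained sketch of the same mechanism, using stock oslo.config with illustrative option names rather than Nova's real registry:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # Illustrative options only; secret=True is what masks a value as ****.
    opts = [
        cfg.IntOpt('conn_pool_min_size', default=2),
        cfg.StrOpt('password', secret=True, default='s3cret'),
    ]
    conf = cfg.ConfigOpts()
    conf.register_opts(opts, group='demo')
    conf([])  # parse an empty argv

    # Emits one DEBUG line per option, then the '****...' terminator banner.
    conf.log_opt_values(LOG, logging.DEBUG)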
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.076 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.089 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.091 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.092 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.092 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.109 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f13a15a49a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.115 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f13a15a49a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.117 2 INFO nova.virt.libvirt.driver [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.136 2 WARNING nova.virt.libvirt.driver [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  3 09:50:16 compute-0 nova_compute[350711]: 2025-10-03 09:50:16.136 2 DEBUG nova.virt.libvirt.volume.mount [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  3 09:50:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
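Editor's note: the ceph-mgr DBG line above is the periodic pgmap summary, here reporting all 321 placement groups active+clean plus data/used/available totals. If the same numbers are needed programmatically, one option is the ceph CLI's JSON output; a sketch, assuming a working client keyring on the host:

    import json
    import subprocess

    # 'ceph pg stat' prints the same pgmap summary; '-f json' structures it.
    result = subprocess.run(
        ["ceph", "pg", "stat", "-f", "json"],
        check=True, capture_output=True, text=True,
    )
    print(json.loads(result.stdout))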
Oct  3 09:50:16 compute-0 python3.9[351379]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
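Editor's note: the Ansible record above shows containers.podman.podman_container invoked with state=absent and force_delete=True, i.e. "remove the nova_nvme_cleaner container if present". A rough CLI equivalent, sketched from Python rather than how the module itself is implemented:

    import subprocess

    # --force removes even a running container; --ignore makes a missing one
    # a no-op, matching the module's idempotent state=absent behaviour.
    subprocess.run(
        ["podman", "rm", "--force", "--ignore", "nova_nvme_cleaner"],
        check=True,
    )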
Oct  3 09:50:16 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.199 2 INFO nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Libvirt host capabilities <capabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]: 
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <host>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <uuid>1cc15826-d1a9-4f5b-875a-64915f5c099d</uuid>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <arch>x86_64</arch>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model>EPYC-Rome-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <vendor>AMD</vendor>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <microcode version='16777317'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <signature family='23' model='49' stepping='0'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='x2apic'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='tsc-deadline'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='osxsave'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='hypervisor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='tsc_adjust'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='spec-ctrl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='stibp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='arch-capabilities'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='cmp_legacy'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='topoext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='virt-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='lbrv'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='tsc-scale'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='vmcb-clean'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='pause-filter'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='pfthreshold'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='svme-addr-chk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='rdctl-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='skip-l1dfl-vmentry'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='mds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature name='pschange-mc-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <pages unit='KiB' size='4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <pages unit='KiB' size='2048'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <pages unit='KiB' size='1048576'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <power_management>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <suspend_mem/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </power_management>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <iommu support='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <migration_features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <live/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <uri_transports>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <uri_transport>tcp</uri_transport>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <uri_transport>rdma</uri_transport>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </uri_transports>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </migration_features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <topology>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <cells num='1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <cell id='0'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:          <memory unit='KiB'>7864100</memory>
Oct  3 09:50:17 compute-0 nova_compute[350711]:          <pages unit='KiB' size='4'>1966025</pages>
Oct  3 09:50:17 compute-0 nova_compute[350711]:          <pages unit='KiB' size='2048'>0</pages>
Oct  3 09:50:17 compute-0 nova_compute[350711]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  3 09:50:17 compute-0 nova_compute[350711]:          <distances>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <sibling id='0' value='10'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:          </distances>
Oct  3 09:50:17 compute-0 nova_compute[350711]:          <cpus num='8'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:          </cpus>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        </cell>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </cells>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </topology>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <cache>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </cache>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <secmodel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model>selinux</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <doi>0</doi>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </secmodel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <secmodel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model>dac</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <doi>0</doi>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </secmodel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </host>
Oct  3 09:50:17 compute-0 nova_compute[350711]: 
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <guest>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <os_type>hvm</os_type>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <arch name='i686'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <wordsize>32</wordsize>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <domain type='qemu'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <domain type='kvm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </arch>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <pae/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <nonpae/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <acpi default='on' toggle='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <apic default='on' toggle='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <cpuselection/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <deviceboot/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <disksnapshot default='on' toggle='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <externalSnapshot/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </guest>
Oct  3 09:50:17 compute-0 nova_compute[350711]: 
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <guest>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <os_type>hvm</os_type>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <arch name='x86_64'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <wordsize>64</wordsize>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <domain type='qemu'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <domain type='kvm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </arch>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <acpi default='on' toggle='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <apic default='on' toggle='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <cpuselection/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <deviceboot/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <disksnapshot default='on' toggle='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <externalSnapshot/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </guest>
Oct  3 09:50:17 compute-0 nova_compute[350711]: 
Oct  3 09:50:17 compute-0 nova_compute[350711]: </capabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]: #033[00m
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.207 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.238 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  3 09:50:17 compute-0 nova_compute[350711]: <domainCapabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <path>/usr/libexec/qemu-kvm</path>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <domain>kvm</domain>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <arch>i686</arch>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <vcpu max='4096'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <iothreads supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <os supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <enum name='firmware'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <loader supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>rom</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pflash</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='readonly'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>yes</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>no</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='secure'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>no</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </loader>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </os>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='host-passthrough' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='hostPassthroughMigratable'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>on</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>off</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='maximum' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='maximumMigratable'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>on</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>off</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='host-model' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <vendor>AMD</vendor>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='x2apic'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc-deadline'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='hypervisor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc_adjust'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='spec-ctrl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='stibp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='arch-capabilities'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='cmp_legacy'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='overflow-recov'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='succor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='amd-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='virt-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='lbrv'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc-scale'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='vmcb-clean'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='flushbyasid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pause-filter'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pfthreshold'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='svme-addr-chk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='rdctl-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='mds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pschange-mc-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='gds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='rfds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='disable' name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='custom' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Dhyana-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Genoa'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='auto-ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Genoa-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='auto-ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-128'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-256'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-512'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v6'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v7'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='KnightsMill'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512er'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512pf'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='KnightsMill-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512er'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512pf'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G4-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tbm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G5-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tbm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SierraForest'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cmpccxadd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SierraForest-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cmpccxadd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='athlon'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='athlon-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='core2duo'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='core2duo-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='coreduo'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='coreduo-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='n270'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='n270-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='phenom'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='phenom-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <memoryBacking supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <enum name='sourceType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>file</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>anonymous</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>memfd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </memoryBacking>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <devices>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <disk supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='diskDevice'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>disk</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>cdrom</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>floppy</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>lun</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='bus'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>fdc</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>scsi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>sata</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-non-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </disk>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <graphics supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vnc</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>egl-headless</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>dbus</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </graphics>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <video supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='modelType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vga</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>cirrus</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>none</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>bochs</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ramfb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </video>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <hostdev supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='mode'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>subsystem</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='startupPolicy'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>default</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>mandatory</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>requisite</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>optional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='subsysType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pci</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>scsi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='capsType'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='pciBackend'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </hostdev>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <rng supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-non-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>random</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>egd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>builtin</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </rng>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <filesystem supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='driverType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>path</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>handle</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtiofs</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </filesystem>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <tpm supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tpm-tis</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tpm-crb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>emulator</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>external</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendVersion'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>2.0</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </tpm>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <redirdev supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='bus'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </redirdev>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <channel supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pty</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>unix</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </channel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <crypto supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>qemu</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>builtin</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </crypto>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <interface supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>default</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>passt</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </interface>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <panic supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>isa</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>hyperv</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </panic>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </devices>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <gic supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <vmcoreinfo supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <genid supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <backingStoreInput supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <backup supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <async-teardown supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <ps2 supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <sev supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <sgx supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <hyperv supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='features'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>relaxed</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vapic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>spinlocks</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vpindex</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>runtime</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>synic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>stimer</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>reset</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vendor_id</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>frequencies</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>reenlightenment</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tlbflush</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ipi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>avic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>emsr_bitmap</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>xmm_input</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </hyperv>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <launchSecurity supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </features>
Oct  3 09:50:17 compute-0 nova_compute[350711]: </domainCapabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
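[editor's note] The <domainCapabilities> document above is what nova's _get_domain_capabilities helper retrieves from libvirt for each (arch, machine type) pair. A minimal sketch of how the same XML can be fetched with libvirt-python is shown below; this is not nova's exact code, and the connection URI, emulator path, arch, and machine type are assumptions taken from the surrounding log lines.

    import libvirt

    # Sketch only: reproduce the domainCapabilities query seen in the log.
    # 'qemu:///system' is an assumed local URI; the emulator path matches
    # the <path>/usr/libexec/qemu-kvm</path> element in the dumps.
    conn = libvirt.open('qemu:///system')
    try:
        xml = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm',  # emulator binary, as in <path>
            'i686',                   # arch queried by the next log entry
            'pc',                     # machine type alias (pc-i440fx-*)
            'kvm',                    # virt type, as in <domain>kvm</domain>
            0,                        # flags: must be 0
        )
        print(xml)  # same <domainCapabilities> XML as logged here
    finally:
        conn.close()

The equivalent one-off query from a shell is `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc --virttype kvm`.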
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.244 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  3 09:50:17 compute-0 nova_compute[350711]: <domainCapabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <path>/usr/libexec/qemu-kvm</path>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <domain>kvm</domain>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <arch>i686</arch>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <vcpu max='240'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <iothreads supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <os supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <enum name='firmware'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <loader supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>rom</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pflash</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='readonly'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>yes</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>no</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='secure'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>no</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </loader>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </os>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='host-passthrough' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='hostPassthroughMigratable'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>on</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>off</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='maximum' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='maximumMigratable'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>on</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>off</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='host-model' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <vendor>AMD</vendor>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='x2apic'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc-deadline'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='hypervisor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc_adjust'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='spec-ctrl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='stibp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='arch-capabilities'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='cmp_legacy'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='overflow-recov'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='succor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='amd-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='virt-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='lbrv'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc-scale'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='vmcb-clean'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='flushbyasid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pause-filter'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pfthreshold'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='svme-addr-chk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='rdctl-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='mds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pschange-mc-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='gds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='rfds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='disable' name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='custom' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Dhyana-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Genoa'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='auto-ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Genoa-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='auto-ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-128'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-256'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-512'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v6'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v7'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='KnightsMill'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512er'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512pf'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='KnightsMill-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512er'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512pf'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G4-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tbm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G5-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tbm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SierraForest'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cmpccxadd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SierraForest-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cmpccxadd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='athlon'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='athlon-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='core2duo'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='core2duo-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='coreduo'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='coreduo-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='n270'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='n270-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='phenom'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='phenom-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <memoryBacking supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <enum name='sourceType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>file</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>anonymous</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>memfd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </memoryBacking>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <devices>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <disk supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='diskDevice'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>disk</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>cdrom</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>floppy</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>lun</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='bus'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ide</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>fdc</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>scsi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>sata</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-non-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </disk>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <graphics supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vnc</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>egl-headless</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>dbus</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </graphics>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <video supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='modelType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vga</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>cirrus</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>none</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>bochs</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ramfb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </video>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <hostdev supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='mode'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>subsystem</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='startupPolicy'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>default</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>mandatory</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>requisite</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>optional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='subsysType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pci</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>scsi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='capsType'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='pciBackend'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </hostdev>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <rng supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-non-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>random</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>egd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>builtin</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </rng>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <filesystem supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='driverType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>path</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>handle</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtiofs</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </filesystem>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <tpm supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tpm-tis</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tpm-crb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>emulator</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>external</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendVersion'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>2.0</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </tpm>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <redirdev supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='bus'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </redirdev>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <channel supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pty</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>unix</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </channel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <crypto supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>qemu</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>builtin</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </crypto>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <interface supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>default</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>passt</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </interface>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <panic supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>isa</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>hyperv</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </panic>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </devices>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <gic supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <vmcoreinfo supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <genid supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <backingStoreInput supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <backup supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <async-teardown supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <ps2 supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <sev supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <sgx supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <hyperv supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='features'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>relaxed</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vapic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>spinlocks</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vpindex</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>runtime</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>synic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>stimer</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>reset</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vendor_id</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>frequencies</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>reenlightenment</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tlbflush</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ipi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>avic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>emsr_bitmap</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>xmm_input</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </hyperv>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <launchSecurity supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </features>
Oct  3 09:50:17 compute-0 nova_compute[350711]: </domainCapabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.301 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.307 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  3 09:50:17 compute-0 nova_compute[350711]: <domainCapabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <path>/usr/libexec/qemu-kvm</path>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <domain>kvm</domain>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <arch>x86_64</arch>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <vcpu max='4096'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <iothreads supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <os supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <enum name='firmware'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>efi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <loader supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>rom</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pflash</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='readonly'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>yes</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>no</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='secure'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>yes</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>no</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </loader>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </os>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='host-passthrough' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='hostPassthroughMigratable'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>on</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>off</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='maximum' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='maximumMigratable'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>on</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>off</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='host-model' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <vendor>AMD</vendor>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='x2apic'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc-deadline'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='hypervisor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc_adjust'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='spec-ctrl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='stibp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='arch-capabilities'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='cmp_legacy'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='overflow-recov'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='succor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='amd-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='virt-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='lbrv'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc-scale'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='vmcb-clean'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='flushbyasid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pause-filter'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pfthreshold'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='svme-addr-chk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='rdctl-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='mds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pschange-mc-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='gds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='rfds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='disable' name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='custom' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Dhyana-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Genoa'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='auto-ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Genoa-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='auto-ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-128'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-256'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-512'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v6'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v7'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='KnightsMill'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512er'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512pf'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='KnightsMill-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512er'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512pf'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G4-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tbm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G5-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tbm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SierraForest'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cmpccxadd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SierraForest-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cmpccxadd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='athlon'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='athlon-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='core2duo'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='core2duo-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='coreduo'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='coreduo-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='n270'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='n270-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='phenom'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='phenom-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <memoryBacking supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <enum name='sourceType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>file</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>anonymous</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>memfd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </memoryBacking>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <devices>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <disk supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='diskDevice'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>disk</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>cdrom</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>floppy</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>lun</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='bus'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>fdc</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>scsi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>sata</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-non-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </disk>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <graphics supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vnc</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>egl-headless</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>dbus</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </graphics>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <video supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='modelType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vga</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>cirrus</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>none</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>bochs</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ramfb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </video>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <hostdev supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='mode'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>subsystem</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='startupPolicy'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>default</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>mandatory</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>requisite</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>optional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='subsysType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pci</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>scsi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='capsType'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='pciBackend'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </hostdev>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <rng supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-non-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>random</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>egd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>builtin</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </rng>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <filesystem supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='driverType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>path</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>handle</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtiofs</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </filesystem>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <tpm supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tpm-tis</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tpm-crb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>emulator</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>external</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendVersion'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>2.0</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </tpm>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <redirdev supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='bus'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </redirdev>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <channel supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pty</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>unix</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </channel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <crypto supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>qemu</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>builtin</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </crypto>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <interface supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>default</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>passt</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </interface>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <panic supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>isa</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>hyperv</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </panic>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </devices>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <gic supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <vmcoreinfo supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <genid supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <backingStoreInput supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <backup supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <async-teardown supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <ps2 supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <sev supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <sgx supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <hyperv supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='features'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>relaxed</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vapic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>spinlocks</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vpindex</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>runtime</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>synic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>stimer</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>reset</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vendor_id</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>frequencies</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>reenlightenment</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tlbflush</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ipi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>avic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>emsr_bitmap</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>xmm_input</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </hyperv>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <launchSecurity supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </features>
Oct  3 09:50:17 compute-0 nova_compute[350711]: </domainCapabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
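The domainCapabilities document logged above is what libvirt returns from its getDomainCapabilities call: every <model usable='no'> entry is paired with a <blockers> element naming the CPU features the host lacks for that model. A minimal sketch of reproducing the query outside Nova, assuming the libvirt-python bindings and a local qemu:///system connection (the emulator path and machine type are taken from the log records themselves; this is illustrative, not Nova's code):

    import libvirt
    import xml.etree.ElementTree as ET

    # Fetch the same XML nova-compute logged above; the emulator path
    # (/usr/libexec/qemu-kvm) and machine type (pc) appear in the log.
    # qemu:///system is an assumed local connection URI.
    conn = libvirt.open('qemu:///system')
    caps = conn.getDomainCapabilities('/usr/libexec/qemu-kvm',
                                      'x86_64', 'pc', 'kvm')
    root = ET.fromstring(caps)

    # Walk the custom-mode model list and report unusable models together
    # with the features libvirt says block them on this host.
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'no':
            feats = root.findall(".//blockers[@model='%s']/feature" % model.text)
            names = sorted(f.get('name') for f in feats)
            print('%s: blocked by %s' % (model.text, ', '.join(names)))

    conn.close()

On the EPYC-Rome host shown in this log, such a query would report, for example, SapphireRapids as blocked by the AMX and AVX-512 feature set, matching the <blockers> lists above.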
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.443 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  3 09:50:17 compute-0 nova_compute[350711]: <domainCapabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <path>/usr/libexec/qemu-kvm</path>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <domain>kvm</domain>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <arch>x86_64</arch>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <vcpu max='240'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <iothreads supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <os supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <enum name='firmware'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <loader supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>rom</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pflash</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='readonly'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>yes</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>no</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='secure'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>no</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </loader>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </os>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='host-passthrough' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='hostPassthroughMigratable'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>on</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>off</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='maximum' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='maximumMigratable'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>on</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>off</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='host-model' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <vendor>AMD</vendor>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='x2apic'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc-deadline'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='hypervisor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc_adjust'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='spec-ctrl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='stibp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='arch-capabilities'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='cmp_legacy'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='overflow-recov'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='succor'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='amd-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='virt-ssbd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='lbrv'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='tsc-scale'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='vmcb-clean'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='flushbyasid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pause-filter'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pfthreshold'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='svme-addr-chk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='rdctl-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='mds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='pschange-mc-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='gds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='require' name='rfds-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <feature policy='disable' name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <mode name='custom' supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Broadwell-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cascadelake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Cooperlake-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Denverton-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Dhyana-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Genoa'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='auto-ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Genoa-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='auto-ibrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Milan-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amd-psfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='stibp-always-on'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-Rome-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='EPYC-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='GraniteRapids-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-128'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-256'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx10-512'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='prefetchiti'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Haswell-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-noTSX'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v6'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Icelake-Server-v7'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='IvyBridge-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='KnightsMill'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512er'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512pf'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='KnightsMill-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512er'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512pf'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G4-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tbm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Opteron_G5-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fma4'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tbm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xop'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SapphireRapids-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='amx-tile'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-bf16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-fp16'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bitalg'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrc'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fzrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='la57'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='taa-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xfd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SierraForest'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cmpccxadd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='SierraForest-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ifma'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cmpccxadd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fbsdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='fsrs'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ibrs-all'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mcdt-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pbrsb-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='psdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='serialize'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vaes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Client-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='hle'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='rtm'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Skylake-Server-v5'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512bw'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512cd'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512dq'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512f'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='avx512vl'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='invpcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pcid'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='pku'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='mpx'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v2'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v3'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='core-capability'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='split-lock-detect'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='Snowridge-v4'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='cldemote'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='erms'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='gfni'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdir64b'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='movdiri'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='xsaves'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='athlon'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='athlon-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='core2duo'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='core2duo-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='coreduo'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='coreduo-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='n270'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='n270-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='ss'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='phenom'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <blockers model='phenom-v1'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnow'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <feature name='3dnowext'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </blockers>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </mode>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </cpu>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <memoryBacking supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <enum name='sourceType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>file</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>anonymous</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <value>memfd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </memoryBacking>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <devices>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <disk supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='diskDevice'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>disk</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>cdrom</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>floppy</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>lun</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='bus'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ide</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>fdc</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>scsi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>sata</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-non-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </disk>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <graphics supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vnc</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>egl-headless</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>dbus</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </graphics>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <video supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='modelType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vga</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>cirrus</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>none</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>bochs</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ramfb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </video>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <hostdev supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='mode'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>subsystem</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='startupPolicy'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>default</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>mandatory</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>requisite</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>optional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='subsysType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pci</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>scsi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='capsType'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='pciBackend'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </hostdev>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <rng supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtio-non-transitional</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>random</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>egd</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>builtin</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </rng>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <filesystem supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='driverType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>path</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>handle</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>virtiofs</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </filesystem>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <tpm supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tpm-tis</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tpm-crb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>emulator</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>external</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendVersion'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>2.0</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </tpm>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <redirdev supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='bus'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>usb</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </redirdev>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <channel supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>pty</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>unix</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </channel>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <crypto supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='type'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>qemu</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendModel'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>builtin</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </crypto>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <interface supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='backendType'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>default</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>passt</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </interface>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <panic supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='model'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>isa</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>hyperv</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </panic>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </devices>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  <features>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <gic supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <vmcoreinfo supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <genid supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <backingStoreInput supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <backup supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <async-teardown supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <ps2 supported='yes'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <sev supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <sgx supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <hyperv supported='yes'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      <enum name='features'>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>relaxed</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vapic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>spinlocks</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vpindex</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>runtime</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>synic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>stimer</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>reset</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>vendor_id</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>frequencies</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>reenlightenment</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>tlbflush</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>ipi</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>avic</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>emsr_bitmap</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:        <value>xmm_input</value>
Oct  3 09:50:17 compute-0 nova_compute[350711]:      </enum>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    </hyperv>
Oct  3 09:50:17 compute-0 nova_compute[350711]:    <launchSecurity supported='no'/>
Oct  3 09:50:17 compute-0 nova_compute[350711]:  </features>
Oct  3 09:50:17 compute-0 nova_compute[350711]: </domainCapabilities>
Oct  3 09:50:17 compute-0 nova_compute[350711]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.569 2 DEBUG nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.569 2 INFO nova.virt.libvirt.host [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Secure Boot support detected
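
The domainCapabilities dump that ends above enumerates every CPU model libvirt knows for this host, marking each as usable or not and, for the unusable ones, listing the host-missing features in a sibling <blockers> element (e.g. Snowridge-v1 is blocked on cldemote, gfni, movdir64b and friends, while Westmere is usable). A minimal sketch of pulling that structure out with Python's standard xml.etree.ElementTree; the function and variable names are illustrative, not nova's actual code:

```python
import xml.etree.ElementTree as ET

def summarize_cpu_models(caps_xml: str) -> dict:
    """Map each CPU model in a <domainCapabilities> dump to its blockers."""
    root = ET.fromstring(caps_xml)
    summary = {}
    # <model> and <blockers> are siblings under <cpu><mode ...>.
    for mode in root.iter('mode'):
        for model in mode.findall('model'):
            summary[model.text] = {
                'usable': model.get('usable') == 'yes',
                'blockers': [],
            }
        for blockers in mode.findall('blockers'):
            feats = [f.get('name') for f in blockers.findall('feature')]
            if blockers.get('model') in summary:
                summary[blockers.get('model')]['blockers'] = feats
    return summary

# e.g. summary['Snowridge-v1'] ->
#   {'usable': False, 'blockers': ['cldemote', ..., 'split-lock-detect']}
```
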
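
The "Secure Boot support detected" line is nova's supports_secure_boot inspecting the same domain capabilities document. A rough sketch of the kind of check involved, assuming the XML carries the standard libvirt <os> element with a 'firmware' enum and a loader 'secure' enum (those elements sit earlier in the capabilities document and are not part of the excerpt above; nova's real implementation differs in detail):

```python
import xml.etree.ElementTree as ET

def supports_secure_boot(caps_xml: str) -> bool:
    """True if the host advertises EFI firmware with a secure-boot loader."""
    root = ET.fromstring(caps_xml)
    os_elem = root.find('os')
    if os_elem is None or os_elem.get('supported') != 'yes':
        return False
    firmwares = {v.text for e in os_elem.findall("enum[@name='firmware']")
                 for v in e.findall('value')}
    loader = os_elem.find('loader')
    secure = {v.text for e in (loader.findall("enum[@name='secure']")
                               if loader is not None else [])
              for v in e.findall('value')}
    return 'efi' in firmwares and 'yes' in secure
```
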
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.572 2 INFO nova.virt.libvirt.driver [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.586 2 DEBUG nova.virt.libvirt.driver [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.621 2 INFO nova.virt.node [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Determined node identity fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from /var/lib/nova/compute_id
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.650 2 WARNING nova.compute.manager [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Compute nodes ['fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.691 2 INFO nova.compute.manager [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.731 2 WARNING nova.compute.manager [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.731 2 DEBUG oslo_concurrency.lockutils [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.732 2 DEBUG oslo_concurrency.lockutils [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.733 2 DEBUG oslo_concurrency.lockutils [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.733 2 DEBUG nova.compute.resource_tracker [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.733 2 DEBUG oslo_concurrency.processutils [None req-4e3181cf-e9d1-49d8-b752-13ba3ae9385a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
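
The resource audit shells out to ceph df as client.openstack (the monitor acknowledges the dispatch a moment later) and parses the JSON to size the RBD-backed storage. A standalone approximation of that call using the standard library; the 'stats' keys are ceph's documented df output, error handling is deliberately minimal:

```python
import json
import subprocess

def ceph_df(client_id='openstack', conf='/etc/ceph/ceph.conf'):
    """Run `ceph df --format=json` and return (total_bytes, avail_bytes)."""
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', client_id, '--conf', conf],
        capture_output=True, check=True, text=True).stdout
    stats = json.loads(out)['stats']
    return stats['total_bytes'], stats['total_avail_bytes']
```
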
Oct  3 09:50:17 compute-0 python3.9[351564]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:50:17 compute-0 systemd[1]: Stopping nova_compute container...
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.966 2 DEBUG oslo_concurrency.lockutils [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.967 2 DEBUG oslo_concurrency.lockutils [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 09:50:17 compute-0 nova_compute[350711]: 2025-10-03 09:50:17.967 2 DEBUG oslo_concurrency.lockutils [None req-6a377efd-bba3-45ec-9e28-89875ebc9d87 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
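
The Acquiring/Acquired/Releasing triplets around "compute_resources" and "singleton_lock" come from oslo.concurrency's lockutils, which nova uses to serialize resource-tracker updates within the service. A minimal usage sketch; the guarded function is illustrative:

```python
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def update_usage():
    # Callers decorated with the same lock name never run concurrently
    # in this process; passing external=True would extend the lock
    # across processes via a lock file.
    pass
```
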
Oct  3 09:50:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:50:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1641196190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:50:18 compute-0 virtqemud[137656]: End of file while reading data: Input/output error
Oct  3 09:50:18 compute-0 systemd[1]: libpod-ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a.scope: Deactivated successfully.
Oct  3 09:50:18 compute-0 systemd[1]: libpod-ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a.scope: Consumed 3.714s CPU time.
Oct  3 09:50:18 compute-0 podman[351588]: 2025-10-03 09:50:18.398955949 +0000 UTC m=+0.487138352 container died ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  3 09:50:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a-userdata-shm.mount: Deactivated successfully.
Oct  3 09:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214-merged.mount: Deactivated successfully.
Oct  3 09:50:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:20 compute-0 podman[351621]: 2025-10-03 09:50:20.831965585 +0000 UTC m=+0.083625241 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct  3 09:50:20 compute-0 podman[351620]: 2025-10-03 09:50:20.864391769 +0000 UTC m=+0.120392444 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd)
Oct  3 09:50:20 compute-0 podman[351588]: 2025-10-03 09:50:20.99439648 +0000 UTC m=+3.082578913 container cleanup ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:50:20 compute-0 podman[351588]: nova_compute
Oct  3 09:50:21 compute-0 podman[351660]: nova_compute
Oct  3 09:50:21 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  3 09:50:21 compute-0 systemd[1]: Stopped nova_compute container.
Oct  3 09:50:21 compute-0 systemd[1]: edpm_nova_compute.service: Consumed 1.018s CPU time, 20.7M memory peak, read 0B from disk, written 107.5K to disk.
Oct  3 09:50:21 compute-0 systemd[1]: Starting nova_compute container...
Oct  3 09:50:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9d0e72f91b595917e54e9fd07b21146a36a1b8c9b4fc46455770e24e5175214/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:21 compute-0 podman[351671]: 2025-10-03 09:50:21.31738257 +0000 UTC m=+0.176061124 container init ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm)
Oct  3 09:50:21 compute-0 podman[351671]: 2025-10-03 09:50:21.335035979 +0000 UTC m=+0.193714523 container start ef5bf4c945117048702d65b1b6a237cd82e78a257456814f47d6c81fe966b71a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Oct  3 09:50:21 compute-0 nova_compute[351685]: + sudo -E kolla_set_configs
Oct  3 09:50:21 compute-0 podman[351671]: nova_compute
Oct  3 09:50:21 compute-0 systemd[1]: Started nova_compute container.
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Validating config file
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying service configuration files
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /etc/ceph
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Creating directory /etc/ceph
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/ceph
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Writing out command to execute
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  3 09:50:21 compute-0 nova_compute[351685]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  3 09:50:21 compute-0 nova_compute[351685]: ++ cat /run_command
Oct  3 09:50:21 compute-0 nova_compute[351685]: + CMD=nova-compute
Oct  3 09:50:21 compute-0 nova_compute[351685]: + ARGS=
Oct  3 09:50:21 compute-0 nova_compute[351685]: + sudo kolla_copy_cacerts
Oct  3 09:50:21 compute-0 nova_compute[351685]: + [[ ! -n '' ]]
Oct  3 09:50:21 compute-0 nova_compute[351685]: + . kolla_extend_start
Oct  3 09:50:21 compute-0 nova_compute[351685]: Running command: 'nova-compute'
Oct  3 09:50:21 compute-0 nova_compute[351685]: + echo 'Running command: '\''nova-compute'\'''
Oct  3 09:50:21 compute-0 nova_compute[351685]: + umask 0022
Oct  3 09:50:21 compute-0 nova_compute[351685]: + exec nova-compute
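
The kolla_set_configs run above reads /var/lib/kolla/config_files/config.json and, because the strategy is COPY_ALWAYS, re-copies every listed file into place on each start before the wrapper execs the command read from /run_command. A simplified sketch of that copy loop; the 'source'/'dest'/'perm' field names follow kolla's config.json convention, and directory handling (e.g. the /etc/ceph recreation above) is omitted:

```python
import json
import os
import shutil

def set_configs(config_path='/var/lib/kolla/config_files/config.json'):
    """Copy service config files into place, kolla COPY_ALWAYS style."""
    with open(config_path) as f:
        config = json.load(f)
    for entry in config.get('config_files', []):
        dest = entry['dest']
        if os.path.exists(dest):
            os.remove(dest)                      # "Deleting /etc/nova/nova.conf"
        shutil.copy(entry['source'], dest)       # "Copying ... to ..."
        if 'perm' in entry:
            os.chmod(dest, int(entry['perm'], 8))  # "Setting permission for ..."
    return config['command']                      # here: 'nova-compute'
```
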
Oct  3 09:50:22 compute-0 python3.9[351848]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  3 09:50:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:22 compute-0 systemd[1]: Started libpod-conmon-4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab.scope.
Oct  3 09:50:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7109f181e84eda861174981129958e9c9cfd71efcb1f4872541370f93ffd9913/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7109f181e84eda861174981129958e9c9cfd71efcb1f4872541370f93ffd9913/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7109f181e84eda861174981129958e9c9cfd71efcb1f4872541370f93ffd9913/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  3 09:50:22 compute-0 podman[351873]: 2025-10-03 09:50:22.52507488 +0000 UTC m=+0.182016676 container init 4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:50:22 compute-0 podman[351873]: 2025-10-03 09:50:22.537173779 +0000 UTC m=+0.194115545 container start 4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3)
Oct  3 09:50:22 compute-0 python3.9[351848]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Applying nova statedir ownership
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  3 09:50:22 compute-0 nova_compute_init[351895]: INFO:nova_statedir:Nova statedir ownership complete
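
The nova_compute_init messages above trace nova_statedir_ownership.py walking /var/lib/nova, chowning anything still owned by the old uid/gid (1000:1000) to nova's in-container identity (42436:42436) and resetting the SELinux context, while skipping paths named in NOVA_STATEDIR_OWNERSHIP_SKIP (here /var/lib/nova/compute_id). A condensed sketch of that walk, assuming a colon-separated skip list; the SELinux step is reduced to a comment since the real script uses the libselinux bindings:

```python
import os

TARGET_UID = TARGET_GID = 42436
SKIP = set(os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '').split(':'))

def fix_ownership(root='/var/lib/nova'):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in ['.'] + dirnames + filenames:
            path = os.path.normpath(os.path.join(dirpath, name))
            if path in SKIP:
                continue
            st = os.stat(path, follow_symlinks=False)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                os.chown(path, TARGET_UID, TARGET_GID,
                         follow_symlinks=False)
            # The real script also applies the context
            # system_u:object_r:container_file_t:s0 here.
```
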
Oct  3 09:50:22 compute-0 systemd[1]: libpod-4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab.scope: Deactivated successfully.
Oct  3 09:50:22 compute-0 conmon[351889]: conmon 4d9e651ecb2288dc8b6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab.scope/container/memory.events
Oct  3 09:50:22 compute-0 podman[351896]: 2025-10-03 09:50:22.630303385 +0000 UTC m=+0.055949371 container died 4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab-userdata-shm.mount: Deactivated successfully.
Oct  3 09:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7109f181e84eda861174981129958e9c9cfd71efcb1f4872541370f93ffd9913-merged.mount: Deactivated successfully.
Oct  3 09:50:22 compute-0 podman[351906]: 2025-10-03 09:50:22.702870619 +0000 UTC m=+0.087267858 container cleanup 4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Oct  3 09:50:22 compute-0 systemd[1]: libpod-conmon-4d9e651ecb2288dc8b6a4f4da818ebd37aba2296a95e5ed1c0122f2f1c49e2ab.scope: Deactivated successfully.
Oct  3 09:50:23 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Oct  3 09:50:23 compute-0 systemd-logind[798]: Session 57 logged out. Waiting for processes to exit.
Oct  3 09:50:23 compute-0 systemd[1]: session-57.scope: Consumed 3min 47.930s CPU time.
Oct  3 09:50:23 compute-0 systemd-logind[798]: Removed session 57.
Oct  3 09:50:23 compute-0 nova_compute[351685]: 2025-10-03 09:50:23.553 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  3 09:50:23 compute-0 nova_compute[351685]: 2025-10-03 09:50:23.555 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  3 09:50:23 compute-0 nova_compute[351685]: 2025-10-03 09:50:23.555 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct  3 09:50:23 compute-0 nova_compute[351685]: 2025-10-03 09:50:23.556 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
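
Those "Loaded VIF plugin class" lines are os_vif discovering its plug/unplug backends through entry points; nova calls os_vif.initialize() once at startup and afterwards dispatches by VIF type. A minimal sketch of that entry point; building real VIF and InstanceInfo objects takes data from neutron and nova, so plugging is shown only as comments:

```python
import os_vif

# Discovers and loads every plugin registered under the os_vif
# entry-point namespace (here: linux_bridge, noop, ovs).
os_vif.initialize()

# With populated objects, plugging then dispatches on the VIF type:
#   os_vif.plug(vif, instance_info)
#   os_vif.unplug(vif, instance_info)
```
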
Oct  3 09:50:23 compute-0 nova_compute[351685]: 2025-10-03 09:50:23.696 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 09:50:23 compute-0 nova_compute[351685]: 2025-10-03 09:50:23.719 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.156 2 INFO nova.virt.driver [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.251 2 INFO nova.compute.provider_config [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.265 2 DEBUG oslo_concurrency.lockutils [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.265 2 DEBUG oslo_concurrency.lockutils [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.266 2 DEBUG oslo_concurrency.lockutils [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.266 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.266 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.267 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.267 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.267 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.268 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.268 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.268 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.269 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.269 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.269 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.269 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.270 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.270 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.270 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.271 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.271 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.271 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.271 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.272 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.272 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.273 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.273 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.273 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.274 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.274 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.274 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.275 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.275 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.275 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.276 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.276 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.276 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.277 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.277 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.277 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.277 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.278 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.278 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.278 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.279 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.279 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.279 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.280 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.280 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.280 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.280 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.281 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.281 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.281 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.282 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.282 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.282 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.283 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.283 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.283 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.283 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.284 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.284 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.284 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.285 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.285 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.285 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.285 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.286 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.286 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.286 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.287 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.287 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.287 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.288 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.288 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.288 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.288 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.289 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.289 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.289 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.290 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.290 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.290 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.291 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.291 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.291 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.292 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.292 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.292 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.293 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.293 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.293 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.294 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.294 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.294 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.295 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.295 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.295 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.295 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.296 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.296 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.296 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.297 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.297 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.298 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.298 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.298 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.299 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.299 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.299 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.300 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.300 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.300 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.301 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.301 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.301 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.302 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.302 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.302 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.302 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.303 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.303 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.303 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.304 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.304 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.304 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.305 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.305 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.306 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.306 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.306 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.307 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.307 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.307 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.308 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.308 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.308 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.309 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.309 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.309 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.310 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.310 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.310 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.311 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.311 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.311 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.312 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.312 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.312 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.313 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.313 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.314 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.314 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.314 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.315 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.315 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.315 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.316 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.316 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.316 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.317 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.317 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.318 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.318 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.318 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.319 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.319 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.319 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.320 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.320 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.320 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.321 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.321 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.321 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.322 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.322 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.322 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.323 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.323 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.323 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.324 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.324 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.324 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.325 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.325 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.325 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.326 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.326 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.327 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.327 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.327 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.328 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.328 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.329 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.329 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.329 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.330 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.330 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.330 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.331 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.331 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.331 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.332 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.332 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.332 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.333 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.333 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.333 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.334 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.334 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.334 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.335 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.335 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.336 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.336 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.336 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.337 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.337 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.337 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.337 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.338 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.338 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.339 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.339 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.340 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.340 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.341 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.341 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.341 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.341 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.341 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.342 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.342 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.342 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.342 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.343 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.343 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.343 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.343 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.343 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.344 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.344 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.344 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.344 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.344 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.345 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.345 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.345 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.345 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.345 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.346 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.346 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.346 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.346 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.346 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.346 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.347 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.347 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.347 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.347 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.347 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.347 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.348 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.348 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.348 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.348 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.348 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.348 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.348 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.349 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.349 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.349 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.349 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.349 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.349 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.349 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.350 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.350 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.350 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.350 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.350 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.350 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.351 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.351 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.351 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.351 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.351 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.352 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.352 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.352 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.352 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.352 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.352 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.353 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.353 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.353 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.353 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.353 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.354 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.354 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.354 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.354 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.354 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.355 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.355 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.355 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.355 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.355 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.355 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.356 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.356 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.356 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.356 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.356 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.356 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.357 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.357 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.357 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.357 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.357 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.358 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.358 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.358 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.358 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.358 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.358 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.359 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.359 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.359 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.359 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.359 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.360 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.360 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.360 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.360 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.360 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.360 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.361 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.361 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.361 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.361 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.361 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.362 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.362 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.362 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.362 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.362 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.362 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.363 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.363 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.363 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.363 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.364 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.364 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.364 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.364 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.364 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.365 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.365 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.366 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.366 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.366 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.366 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.366 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.367 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.367 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.367 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.367 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.367 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.368 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.368 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.368 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.368 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.368 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.369 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.369 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.369 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.369 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.369 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.370 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.370 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.370 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.370 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.370 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.371 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.371 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.371 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.371 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.371 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.372 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.372 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.372 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.372 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.372 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.373 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.373 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.373 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.373 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.374 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.374 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.374 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.374 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.374 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.375 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.375 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.375 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.375 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.375 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.376 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.376 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.376 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.376 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.376 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.377 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.377 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.377 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.377 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.377 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.378 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.378 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.378 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.378 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.378 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.379 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.379 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.379 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.379 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.379 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.380 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.380 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.380 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.380 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.380 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.381 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.381 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.381 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.381 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.382 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.382 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.382 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.382 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.382 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.382 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.383 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.383 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.383 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.383 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.383 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.384 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.384 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.384 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.384 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.384 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.385 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.385 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.385 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.385 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.385 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.386 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.386 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.386 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.386 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.386 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.387 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.387 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.387 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.387 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.387 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.388 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.388 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.388 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.388 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.388 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.389 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.389 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.389 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.389 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.389 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.390 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.390 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.390 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.390 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.390 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.391 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.391 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.391 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.391 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.391 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.391 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.392 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.392 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.392 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.392 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.393 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.393 2 WARNING oslo_config.cfg [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  3 09:50:24 compute-0 nova_compute[351685]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  3 09:50:24 compute-0 nova_compute[351685]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  3 09:50:24 compute-0 nova_compute[351685]: and ``live_migration_inbound_addr`` respectively.
Oct  3 09:50:24 compute-0 nova_compute[351685]: ).  Its value may be silently ignored in the future.#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.393 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
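The deprecation warning above means the qemu+tls://%s/system value just logged should instead be expressed through the two replacement options it names. A minimal nova.conf sketch of that migration, assuming the "tls" scheme implied by the logged qemu+tls:// URI; the inbound address below is a hypothetical example, not taken from this log:

    [libvirt]
    # "tls" reproduces the qemu+tls:// scheme of the deprecated live_migration_uri.
    live_migration_scheme = tls
    # Assumed example address; use the migration network address of the target host.
    live_migration_inbound_addr = compute-0.internalapi.example.org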
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.393 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.394 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.394 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.394 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.394 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.395 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.395 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.395 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.395 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.395 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.396 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.396 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.396 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.396 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.396 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.397 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.397 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.397 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rbd_secret_uuid        = 9b4e8c9a-5555-5510-a631-4742a1182561 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.397 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.397 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.397 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.398 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.398 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.398 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.398 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.398 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.399 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.399 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.399 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.399 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.400 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.400 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.400 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.400 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.400 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.401 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.401 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.401 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.401 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.401 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.402 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.402 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.402 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.402 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.403 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.403 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.403 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.403 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.403 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.404 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.404 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.404 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.404 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.404 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.405 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.405 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.405 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.405 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.405 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.406 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.406 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.406 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.406 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.406 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.407 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.407 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.407 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.407 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.407 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.408 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.408 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.408 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.408 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.409 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.409 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.409 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.409 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.409 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.410 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.410 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.410 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.410 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.411 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.411 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.411 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.411 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.412 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.412 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.412 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.412 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.412 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.413 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.413 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.413 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.413 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.413 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.414 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.414 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.414 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.414 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.414 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.415 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.415 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.415 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.416 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.416 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.416 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.416 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.417 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.417 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.417 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.417 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.417 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.418 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.418 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.418 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.418 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.418 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.419 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.419 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.419 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.419 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.419 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.420 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.420 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.420 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.420 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.421 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.421 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.421 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.421 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.421 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.422 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.422 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.422 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.422 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.423 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.423 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.423 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.423 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.424 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.424 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.424 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.424 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.424 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.425 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.425 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.425 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.425 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.426 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.426 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.426 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.426 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.427 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.427 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.427 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.427 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.427 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.428 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.428 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.428 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.428 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.429 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.429 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.429 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.429 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.429 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.430 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.430 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.430 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.430 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.431 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.431 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.431 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.431 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.431 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.432 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.432 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.432 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.432 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.433 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.433 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.433 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.433 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.433 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.434 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.434 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.434 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.434 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.434 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.435 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.435 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.435 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.435 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.436 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.436 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.436 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.436 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.437 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.437 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.437 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.437 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.437 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.438 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.438 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.438 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.438 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.438 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.439 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.439 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.439 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.439 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.439 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.440 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.440 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.440 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.440 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.440 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.441 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.441 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.441 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.441 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.442 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.442 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.442 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.442 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.442 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.443 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.443 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.443 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.443 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.443 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.444 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.444 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.444 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.444 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.444 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.445 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.445 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.445 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.445 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.446 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.446 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.446 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.446 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.446 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.447 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.447 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.447 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.448 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.448 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.448 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.448 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.448 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.449 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.449 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.449 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.449 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.450 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.450 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.450 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.450 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.450 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.451 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.451 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.451 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.451 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.451 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.452 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.452 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.452 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.452 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.452 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.453 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.453 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.453 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.453 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.454 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.454 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.454 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.454 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.454 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.455 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.455 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.455 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.455 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.456 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.456 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.456 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.456 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.456 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.457 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.457 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.457 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.457 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.458 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.458 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.458 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.458 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.458 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.459 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.459 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.459 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.459 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.459 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.460 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.460 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.460 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.460 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.461 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.461 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.461 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.461 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.461 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.462 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.462 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.462 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.462 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.462 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.463 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.463 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.463 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.463 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.464 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.464 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.464 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.464 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.464 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.465 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.465 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.465 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.465 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.466 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.466 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.466 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.466 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.467 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.467 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.467 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.467 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.467 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.468 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.468 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.468 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.468 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.468 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.469 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.469 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.469 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.469 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.469 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.470 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.470 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.470 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.470 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.470 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.471 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.471 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.471 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.471 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.472 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.472 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.472 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.472 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.472 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.473 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.473 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.473 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.473 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.473 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.474 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.474 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.474 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.474 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.475 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.475 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.475 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.475 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.475 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.476 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.476 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.476 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.476 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.476 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.477 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.477 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.477 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.477 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.477 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.478 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.478 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.478 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.478 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.479 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.479 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.479 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.479 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.479 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.480 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.480 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.480 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.480 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.480 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.481 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.481 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.481 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.481 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.481 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.482 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.482 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.482 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.482 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.482 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.483 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.483 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.483 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.483 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.484 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.484 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.484 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.484 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.484 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.485 2 DEBUG oslo_service.service [None req-56968d04-bf86-42fb-95b3-2684f3f7acac - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.486 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.501 2 INFO nova.virt.node [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Determined node identity fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from /var/lib/nova/compute_id#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.502 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.502 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.503 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.503 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.525 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3536f3c820> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.528 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3536f3c820> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.529 2 INFO nova.virt.libvirt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.537 2 INFO nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Libvirt host capabilities <capabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]: 
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <host>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <uuid>1cc15826-d1a9-4f5b-875a-64915f5c099d</uuid>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <arch>x86_64</arch>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model>EPYC-Rome-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <vendor>AMD</vendor>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <microcode version='16777317'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <signature family='23' model='49' stepping='0'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='x2apic'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='tsc-deadline'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='osxsave'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='hypervisor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='tsc_adjust'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='spec-ctrl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='stibp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='arch-capabilities'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='cmp_legacy'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='topoext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='virt-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='lbrv'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='tsc-scale'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='vmcb-clean'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='pause-filter'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='pfthreshold'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='svme-addr-chk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='rdctl-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='skip-l1dfl-vmentry'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='mds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature name='pschange-mc-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <pages unit='KiB' size='4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <pages unit='KiB' size='2048'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <pages unit='KiB' size='1048576'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <power_management>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <suspend_mem/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </power_management>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <iommu support='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <migration_features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <live/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <uri_transports>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <uri_transport>tcp</uri_transport>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <uri_transport>rdma</uri_transport>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </uri_transports>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </migration_features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <topology>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <cells num='1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <cell id='0'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:          <memory unit='KiB'>7864100</memory>
Oct  3 09:50:24 compute-0 nova_compute[351685]:          <pages unit='KiB' size='4'>1966025</pages>
Oct  3 09:50:24 compute-0 nova_compute[351685]:          <pages unit='KiB' size='2048'>0</pages>
Oct  3 09:50:24 compute-0 nova_compute[351685]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  3 09:50:24 compute-0 nova_compute[351685]:          <distances>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <sibling id='0' value='10'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:          </distances>
Oct  3 09:50:24 compute-0 nova_compute[351685]:          <cpus num='8'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:          </cpus>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        </cell>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </cells>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </topology>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <cache>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </cache>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <secmodel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model>selinux</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <doi>0</doi>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </secmodel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <secmodel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model>dac</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <doi>0</doi>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </secmodel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </host>
Oct  3 09:50:24 compute-0 nova_compute[351685]: 
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <guest>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <os_type>hvm</os_type>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <arch name='i686'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <wordsize>32</wordsize>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <domain type='qemu'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <domain type='kvm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </arch>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <pae/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <nonpae/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <acpi default='on' toggle='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <apic default='on' toggle='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <cpuselection/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <deviceboot/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <disksnapshot default='on' toggle='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <externalSnapshot/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </guest>
Oct  3 09:50:24 compute-0 nova_compute[351685]: 
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <guest>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <os_type>hvm</os_type>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <arch name='x86_64'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <wordsize>64</wordsize>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <domain type='qemu'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <domain type='kvm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </arch>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <acpi default='on' toggle='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <apic default='on' toggle='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <cpuselection/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <deviceboot/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <disksnapshot default='on' toggle='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <externalSnapshot/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </guest>
Oct  3 09:50:24 compute-0 nova_compute[351685]: 
Oct  3 09:50:24 compute-0 nova_compute[351685]: </capabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]: 
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.544 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.548 2 DEBUG nova.virt.libvirt.volume.mount [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.550 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  3 09:50:24 compute-0 nova_compute[351685]: <domainCapabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <path>/usr/libexec/qemu-kvm</path>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <domain>kvm</domain>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <arch>i686</arch>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <vcpu max='240'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <iothreads supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <os supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <enum name='firmware'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <loader supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>rom</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pflash</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='readonly'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>yes</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>no</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='secure'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>no</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </loader>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </os>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='host-passthrough' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='hostPassthroughMigratable'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>on</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>off</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='maximum' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='maximumMigratable'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>on</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>off</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='host-model' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <vendor>AMD</vendor>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='x2apic'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc-deadline'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='hypervisor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc_adjust'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='spec-ctrl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='stibp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='arch-capabilities'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='cmp_legacy'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='overflow-recov'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='succor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='amd-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='virt-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='lbrv'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc-scale'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='vmcb-clean'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='flushbyasid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pause-filter'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pfthreshold'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='svme-addr-chk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='rdctl-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='mds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pschange-mc-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='gds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='rfds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='disable' name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='custom' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
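
Each <model> in the custom mode carries usable='yes' or 'no'; when a model is not usable, the <blockers> element that follows it names exactly the features the model definition requires but this host/QEMU combination cannot provide — here Broadwell is blocked on erms, hle, invpcid, pcid and rtm, as expected on an AMD host. A sketch of pulling that mapping out of the document, assuming it was saved to a hypothetical file domcaps.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()  # hypothetical saved copy
    custom = root.find("./cpu/mode[@name='custom']")

    # Map every non-usable model name to the feature names blocking it.
    blocked = {
        b.get("model"): [f.get("name") for f in b.findall("feature")]
        for b in custom.findall("blockers")
    }
    print(blocked.get("Broadwell"))  # ['erms', 'hle', 'invpcid', 'pcid', 'rtm']
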
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Dhyana-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Genoa'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='auto-ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Genoa-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='auto-ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
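
Note how this listing is consistent with the host-model section earlier in the document: the host resolves to EPYC-Rome with xsaves explicitly disabled, and correspondingly every custom EPYC-Rome variant through v3 is blocked solely on xsaves, while EPYC-Rome-v4 (the variant without that feature) is usable. A sketch that extracts the usable, non-deprecated model names — the set from which a cpu_mode=custom model (for example Nova's [libvirt]/cpu_models option) would normally be chosen — again assuming the hypothetical saved copy domcaps.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()  # hypothetical saved copy
    custom = root.find("./cpu/mode[@name='custom']")

    # Keep only models this host can actually run, skipping deprecated ones.
    usable = sorted(
        m.text for m in custom.findall("model")
        if m.get("usable") == "yes" and m.get("deprecated") != "yes"
    )
    print(usable)  # e.g. ['Dhyana', 'Dhyana-v1', 'EPYC', 'EPYC-IBPB', 'EPYC-Rome-v4', ...]
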
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-128'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-256'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-512'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v6'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v7'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='KnightsMill'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512er'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512pf'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='KnightsMill-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512er'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512pf'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G4-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tbm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G5-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tbm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SierraForest'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cmpccxadd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SierraForest-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cmpccxadd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='athlon'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='athlon-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='core2duo'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='core2duo-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='coreduo'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='coreduo-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='n270'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='n270-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='phenom'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='phenom-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <memoryBacking supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <enum name='sourceType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>file</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>anonymous</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>memfd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </memoryBacking>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <devices>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <disk supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='diskDevice'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>disk</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>cdrom</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>floppy</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>lun</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='bus'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>ide</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>fdc</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>scsi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>sata</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-non-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </disk>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <graphics supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vnc</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>egl-headless</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>dbus</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </graphics>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <video supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='modelType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vga</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>cirrus</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>none</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>bochs</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>ramfb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </video>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <hostdev supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='mode'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>subsystem</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='startupPolicy'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>default</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>mandatory</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>requisite</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>optional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='subsysType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pci</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>scsi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='capsType'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='pciBackend'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </hostdev>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <rng supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-non-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>random</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>egd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>builtin</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </rng>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <filesystem supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='driverType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>path</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>handle</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtiofs</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </filesystem>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <tpm supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tpm-tis</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tpm-crb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>emulator</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>external</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendVersion'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>2.0</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </tpm>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <redirdev supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='bus'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </redirdev>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <channel supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pty</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>unix</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </channel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <crypto supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>qemu</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>builtin</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </crypto>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <interface supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>default</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>passt</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </interface>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <panic supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>isa</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>hyperv</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </panic>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </devices>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <gic supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <vmcoreinfo supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <genid supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <backingStoreInput supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <backup supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <async-teardown supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <ps2 supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <sev supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <sgx supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <hyperv supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='features'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>relaxed</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vapic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>spinlocks</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vpindex</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>runtime</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>synic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>stimer</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>reset</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vendor_id</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>frequencies</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>reenlightenment</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tlbflush</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>ipi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>avic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>emsr_bitmap</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>xmm_input</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </hyperv>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <launchSecurity supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </features>
Oct  3 09:50:24 compute-0 nova_compute[351685]: </domainCapabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
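[Editor's note: the domain-capabilities XML above can be reproduced outside Nova. The sketch below is a minimal illustration, not Nova's actual code in host.py: it queries libvirt for the same XML via the libvirt-python getDomainCapabilities() call and lists which named CPU models are usable, using the per-model <blockers> elements to report missing host features. The emulator path, machine type, virt type, and i686 arch are taken from the log record that follows; the read-only qemu:///system connection is an assumption about the local setup.]

    # Minimal sketch, assuming libvirt-python is installed and libvirtd is
    # reachable at qemu:///system. Values mirror the log: emulator
    # /usr/libexec/qemu-kvm, machine pc-q35-rhel9.6.0, domain kvm, arch i686.
    import xml.etree.ElementTree as ET

    import libvirt  # pip install libvirt-python

    conn = libvirt.openReadOnly('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary, as reported in <path>
        'i686',                   # arch, as reported in <arch>
        'pc-q35-rhel9.6.0',       # machine type, as reported in <machine>
        'kvm',                    # virt type, as reported in <domain>
        0,
    )
    conn.close()

    root = ET.fromstring(caps_xml)
    custom = root.find("./cpu/mode[@name='custom']")

    # <blockers> elements are keyed by model name; index them first so each
    # unusable model can be reported with the host features it is missing.
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in custom.findall('blockers')
    }

    for model in custom.findall('model'):
        name = model.text
        if model.get('usable') == 'yes':
            print(f'{name}: usable')
        else:
            missing = ', '.join(blockers.get(name, []))
            print(f'{name}: blocked by [{missing}]')

[This mirrors what the log shows: for example, EPYC-Rome is reported as blocked only by 'xsaves', while EPYC-Rome-v4 is usable on this host.]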
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.565 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  3 09:50:24 compute-0 nova_compute[351685]: <domainCapabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <path>/usr/libexec/qemu-kvm</path>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <domain>kvm</domain>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <arch>i686</arch>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <vcpu max='4096'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <iothreads supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <os supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <enum name='firmware'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <loader supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>rom</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pflash</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='readonly'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>yes</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>no</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='secure'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>no</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </loader>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </os>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='host-passthrough' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='hostPassthroughMigratable'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>on</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>off</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='maximum' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='maximumMigratable'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>on</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>off</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='host-model' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <vendor>AMD</vendor>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='x2apic'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc-deadline'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='hypervisor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc_adjust'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='spec-ctrl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='stibp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='arch-capabilities'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='cmp_legacy'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='overflow-recov'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='succor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='amd-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='virt-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='lbrv'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc-scale'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='vmcb-clean'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='flushbyasid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pause-filter'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pfthreshold'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='svme-addr-chk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='rdctl-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='mds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pschange-mc-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='gds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='rfds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='disable' name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='custom' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Dhyana-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Genoa'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='auto-ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Genoa-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='auto-ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-128'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-256'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-512'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v6'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v7'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='KnightsMill'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512er'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512pf'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='KnightsMill-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512er'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512pf'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G4-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tbm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G5-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tbm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SierraForest'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cmpccxadd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SierraForest-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cmpccxadd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='athlon'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='athlon-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='core2duo'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='core2duo-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='coreduo'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='coreduo-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='n270'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='n270-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='phenom'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='phenom-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <memoryBacking supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <enum name='sourceType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>file</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>anonymous</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>memfd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </memoryBacking>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <devices>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <disk supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='diskDevice'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>disk</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>cdrom</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>floppy</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>lun</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='bus'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>fdc</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>scsi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>sata</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-non-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </disk>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <graphics supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vnc</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>egl-headless</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>dbus</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </graphics>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <video supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='modelType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vga</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>cirrus</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>none</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>bochs</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>ramfb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </video>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <hostdev supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='mode'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>subsystem</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='startupPolicy'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>default</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>mandatory</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>requisite</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>optional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='subsysType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pci</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>scsi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='capsType'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='pciBackend'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </hostdev>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <rng supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-non-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>random</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>egd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>builtin</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </rng>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <filesystem supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='driverType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>path</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>handle</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtiofs</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </filesystem>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <tpm supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tpm-tis</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tpm-crb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>emulator</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>external</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendVersion'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>2.0</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </tpm>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <redirdev supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='bus'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </redirdev>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <channel supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pty</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>unix</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </channel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <crypto supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>qemu</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>builtin</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </crypto>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <interface supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>default</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>passt</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </interface>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <panic supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>isa</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>hyperv</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </panic>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </devices>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <gic supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <vmcoreinfo supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <genid supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <backingStoreInput supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <backup supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <async-teardown supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <ps2 supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <sev supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <sgx supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <hyperv supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='features'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>relaxed</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vapic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>spinlocks</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vpindex</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>runtime</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>synic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>stimer</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>reset</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vendor_id</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>frequencies</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>reenlightenment</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tlbflush</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>ipi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>avic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>emsr_bitmap</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>xmm_input</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </hyperv>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <launchSecurity supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </features>
Oct  3 09:50:24 compute-0 nova_compute[351685]: </domainCapabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
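[editor's note] In the <domainCapabilities> document above, each <model usable='no'> entry is paired with a <blockers model='...'> element naming the features the host lacks for that model. A minimal parsing sketch, using only Python's standard library; the helper name unusable_models is illustrative, not nova's own code:

import xml.etree.ElementTree as ET

def unusable_models(domcaps_xml):
    # Map each usable='no' CPU model under <mode name='custom'> to the
    # feature names that block it on this host.
    root = ET.fromstring(domcaps_xml)
    custom = root.find("./cpu/mode[@name='custom']")
    result = {}
    if custom is None:
        return result
    for model in custom.findall('model'):
        if model.get('usable') != 'no':
            continue
        blk = custom.find("blockers[@model='%s']" % model.text)
        names = [f.get('name') for f in blk.findall('feature')] if blk is not None else []
        result[model.text] = names
    return result

Run against the document logged above, result['Skylake-Server'] would hold the avx512bw/avx512cd/avx512dq/avx512f/avx512vl/erms/hle/invpcid/pcid/pku/rtm list shown earlier, i.e. why that model is unusable on this EPYC-Rome host.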
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.624 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
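[editor's note] Nova fetches one such document per (arch, machine type) pair, which is why a second dump for machine_type=pc follows. The same query can be reproduced outside nova with the libvirt-python bindings; a sketch, assuming read-only access to qemu:///system (virsh domcapabilities is the CLI equivalent):

import libvirt

conn = libvirt.openReadOnly('qemu:///system')
for machine in ('pc', 'q35'):
    # Same parameters nova logs: emulator path, arch, machine type, virt type.
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', machine, 'kvm')
    print(xml[:200])  # each call returns a full <domainCapabilities> document
conn.close()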
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.629 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  3 09:50:24 compute-0 nova_compute[351685]: <domainCapabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <path>/usr/libexec/qemu-kvm</path>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <domain>kvm</domain>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <arch>x86_64</arch>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <vcpu max='240'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <iothreads supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <os supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <enum name='firmware'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <loader supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>rom</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pflash</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='readonly'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>yes</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>no</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='secure'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>no</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </loader>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </os>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='host-passthrough' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='hostPassthroughMigratable'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>on</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>off</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='maximum' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='maximumMigratable'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>on</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>off</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='host-model' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <vendor>AMD</vendor>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='x2apic'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc-deadline'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='hypervisor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc_adjust'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='spec-ctrl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='stibp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='arch-capabilities'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='cmp_legacy'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='overflow-recov'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='succor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='amd-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='virt-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='lbrv'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc-scale'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='vmcb-clean'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='flushbyasid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pause-filter'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pfthreshold'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='svme-addr-chk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='rdctl-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='mds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pschange-mc-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='gds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='rfds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='disable' name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='custom' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Dhyana-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Genoa'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='auto-ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Genoa-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='auto-ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-128'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-256'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-512'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v6'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v7'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='KnightsMill'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512er'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512pf'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='KnightsMill-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512er'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512pf'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G4-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tbm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Opteron_G5-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tbm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SierraForest'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cmpccxadd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='SierraForest-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cmpccxadd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='athlon'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='athlon-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='core2duo'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='core2duo-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='coreduo'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='coreduo-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='n270'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='n270-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='phenom'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='phenom-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <memoryBacking supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <enum name='sourceType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>file</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>anonymous</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>memfd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </memoryBacking>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <devices>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <disk supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='diskDevice'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>disk</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>cdrom</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>floppy</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>lun</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='bus'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>ide</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>fdc</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>scsi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>sata</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-non-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </disk>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <graphics supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vnc</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>egl-headless</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>dbus</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </graphics>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <video supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='modelType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vga</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>cirrus</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>none</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>bochs</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>ramfb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </video>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <hostdev supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='mode'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>subsystem</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='startupPolicy'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>default</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>mandatory</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>requisite</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>optional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='subsysType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pci</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>scsi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='capsType'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='pciBackend'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </hostdev>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <rng supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtio-non-transitional</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>random</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>egd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>builtin</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </rng>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <filesystem supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='driverType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>path</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>handle</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>virtiofs</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </filesystem>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <tpm supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tpm-tis</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tpm-crb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>emulator</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>external</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendVersion'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>2.0</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </tpm>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <redirdev supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='bus'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </redirdev>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <channel supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pty</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>unix</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </channel>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <crypto supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>qemu</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>builtin</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </crypto>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <interface supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='backendType'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>default</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>passt</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </interface>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <panic supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>isa</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>hyperv</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </panic>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </devices>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <features>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <gic supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <vmcoreinfo supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <genid supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <backingStoreInput supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <backup supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <async-teardown supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <ps2 supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <sev supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <sgx supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <hyperv supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='features'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>relaxed</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vapic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>spinlocks</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vpindex</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>runtime</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>synic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>stimer</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>reset</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>vendor_id</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>frequencies</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>reenlightenment</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>tlbflush</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>ipi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>avic</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>emsr_bitmap</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>xmm_input</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </hyperv>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <launchSecurity supported='no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </features>
Oct  3 09:50:24 compute-0 nova_compute[351685]: </domainCapabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  3 09:50:24 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.764 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  3 09:50:24 compute-0 nova_compute[351685]: <domainCapabilities>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <path>/usr/libexec/qemu-kvm</path>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <domain>kvm</domain>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <arch>x86_64</arch>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <vcpu max='4096'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <iothreads supported='yes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <os supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <enum name='firmware'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>efi</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <loader supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>rom</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>pflash</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='readonly'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>yes</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>no</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='secure'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>yes</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>no</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </loader>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  </os>
Oct  3 09:50:24 compute-0 nova_compute[351685]:  <cpu>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='host-passthrough' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='hostPassthroughMigratable'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>on</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>off</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='maximum' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <enum name='maximumMigratable'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>on</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <value>off</value>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='host-model' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <vendor>AMD</vendor>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='x2apic'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc-deadline'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='hypervisor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc_adjust'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='spec-ctrl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='stibp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='arch-capabilities'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='cmp_legacy'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='overflow-recov'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='succor'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='amd-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='virt-ssbd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='lbrv'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='tsc-scale'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='vmcb-clean'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='flushbyasid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pause-filter'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pfthreshold'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='svme-addr-chk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='rdctl-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='mds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='pschange-mc-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='gds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='require' name='rfds-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <feature policy='disable' name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:24 compute-0 nova_compute[351685]:    <mode name='custom' supported='yes'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Broadwell-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cascadelake-Server-v5'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Cooperlake-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Denverton-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Dhyana-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Genoa'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='auto-ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Genoa-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='auto-ibrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Milan-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amd-psfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='no-nested-data-bp'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='null-sel-clr-base'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='stibp-always-on'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-Rome-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='EPYC-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='GraniteRapids-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-128'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-256'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx10-512'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='prefetchiti'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Haswell-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-noTSX'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v1'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v2'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v3'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  3 09:50:24 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v4'>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:24 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v5'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v6'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Icelake-Server-v7'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='IvyBridge'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-IBRS'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='IvyBridge-v2'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='KnightsMill'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512er'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512pf'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='KnightsMill-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-4fmaps'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-4vnniw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512er'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512pf'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Opteron_G4'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Opteron_G4-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Opteron_G5'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='tbm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Opteron_G5-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fma4'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='tbm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xop'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v2'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='SapphireRapids-v3'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-bf16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-int8'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='amx-tile'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-bf16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-fp16'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512-vpopcntdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bitalg'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vbmi2'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrc'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fzrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='la57'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='taa-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='tsx-ldtrk'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xfd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='SierraForest'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='cmpccxadd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='SierraForest-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-ifma'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-ne-convert'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-vnni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx-vnni-int8'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='bus-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='cmpccxadd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fbsdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='fsrs'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ibrs-all'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='mcdt-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pbrsb-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='psdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='sbdr-ssdp-no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='serialize'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vaes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='vpclmulqdq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-IBRS'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v2'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v3'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Client-v4'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-IBRS'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v2'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='hle'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='rtm'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v3'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v4'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Skylake-Server-v5'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512bw'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512cd'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512dq'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512f'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='avx512vl'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='invpcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pcid'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='pku'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Snowridge'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='mpx'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v2'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v3'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='core-capability'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='split-lock-detect'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='Snowridge-v4'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='cldemote'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='erms'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='gfni'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdir64b'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='movdiri'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='xsaves'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='athlon'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='athlon-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='core2duo'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='core2duo-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='coreduo'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='coreduo-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='n270'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='n270-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='ss'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='phenom'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <blockers model='phenom-v1'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='3dnow'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <feature name='3dnowext'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </blockers>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </mode>
Oct  3 09:50:25 compute-0 nova_compute[351685]:  </cpu>
Oct  3 09:50:25 compute-0 nova_compute[351685]:  <memoryBacking supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <enum name='sourceType'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <value>file</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <value>anonymous</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <value>memfd</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:  </memoryBacking>
Oct  3 09:50:25 compute-0 nova_compute[351685]:  <devices>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <disk supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='diskDevice'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>disk</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>cdrom</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>floppy</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>lun</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='bus'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>fdc</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>scsi</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>sata</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtio-transitional</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtio-non-transitional</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </disk>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <graphics supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>vnc</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>egl-headless</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>dbus</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </graphics>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <video supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='modelType'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>vga</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>cirrus</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>none</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>bochs</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>ramfb</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </video>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <hostdev supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='mode'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>subsystem</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='startupPolicy'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>default</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>mandatory</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>requisite</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>optional</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='subsysType'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>pci</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>scsi</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='capsType'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='pciBackend'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </hostdev>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <rng supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtio</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtio-transitional</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtio-non-transitional</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>random</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>egd</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>builtin</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </rng>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <filesystem supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='driverType'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>path</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>handle</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>virtiofs</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </filesystem>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <tpm supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>tpm-tis</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>tpm-crb</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>emulator</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>external</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='backendVersion'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>2.0</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </tpm>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <redirdev supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='bus'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>usb</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </redirdev>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <channel supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>pty</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>unix</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </channel>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <crypto supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='model'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='type'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>qemu</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='backendModel'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>builtin</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </crypto>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <interface supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='backendType'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>default</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>passt</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </interface>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <panic supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='model'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>isa</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>hyperv</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </panic>
Oct  3 09:50:25 compute-0 nova_compute[351685]:  </devices>
Oct  3 09:50:25 compute-0 nova_compute[351685]:  <features>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <gic supported='no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <vmcoreinfo supported='yes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <genid supported='yes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <backingStoreInput supported='yes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <backup supported='yes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <async-teardown supported='yes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <ps2 supported='yes'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <sev supported='no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <sgx supported='no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <hyperv supported='yes'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      <enum name='features'>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>relaxed</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>vapic</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>spinlocks</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>vpindex</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>runtime</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>synic</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>stimer</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>reset</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>vendor_id</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>frequencies</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>reenlightenment</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>tlbflush</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>ipi</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>avic</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>emsr_bitmap</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:        <value>xmm_input</value>
Oct  3 09:50:25 compute-0 nova_compute[351685]:      </enum>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    </hyperv>
Oct  3 09:50:25 compute-0 nova_compute[351685]:    <launchSecurity supported='no'/>
Oct  3 09:50:25 compute-0 nova_compute[351685]:  </features>
Oct  3 09:50:25 compute-0 nova_compute[351685]: </domainCapabilities>
Oct  3 09:50:25 compute-0 nova_compute[351685]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.872 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.873 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.873 2 INFO nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Secure Boot support detected#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.876 2 INFO nova.virt.libvirt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.889 2 DEBUG nova.virt.libvirt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.918 2 INFO nova.virt.node [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Determined node identity fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from /var/lib/nova/compute_id#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.936 2 WARNING nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Compute nodes ['fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.956 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.971 2 WARNING nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.971 2 DEBUG oslo_concurrency.lockutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.971 2 DEBUG oslo_concurrency.lockutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.972 2 DEBUG oslo_concurrency.lockutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.972 2 DEBUG nova.compute.resource_tracker [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:24.972 2 DEBUG oslo_concurrency.processutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 09:50:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:50:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186952320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:50:25 compute-0 nova_compute[351685]: 2025-10-03 09:50:25.493 2 DEBUG oslo_concurrency.processutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 09:50:25 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct  3 09:50:25 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct  3 09:50:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.023 2 WARNING nova.virt.libvirt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.025 2 DEBUG nova.compute.resource_tracker [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4572MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.025 2 DEBUG oslo_concurrency.lockutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.025 2 DEBUG oslo_concurrency.lockutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.051 2 WARNING nova.compute.resource_tracker [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] No compute node record for compute-0.ctlplane.example.com:fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a could not be found.#033[00m
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.074 2 INFO nova.compute.resource_tracker [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a#033[00m
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.133 2 DEBUG nova.compute.resource_tracker [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.134 2 DEBUG nova.compute.resource_tracker [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 09:50:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:26 compute-0 nova_compute[351685]: 2025-10-03 09:50:26.991 2 INFO nova.scheduler.client.report [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [req-efc11b0d-2947-4686-b04b-a547fd213742] Created resource provider record via placement API for resource provider with UUID fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a and name compute-0.ctlplane.example.com.#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.333 2 DEBUG oslo_concurrency.processutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 09:50:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:50:27 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511742620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.786 2 DEBUG oslo_concurrency.processutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.794 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  3 09:50:27 compute-0 nova_compute[351685]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.795 2 INFO nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.795 2 DEBUG nova.compute.provider_tree [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.796 2 DEBUG nova.virt.libvirt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.844 2 DEBUG nova.scheduler.client.report [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Updated inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.845 2 DEBUG nova.compute.provider_tree [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Updating resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.845 2 DEBUG nova.compute.provider_tree [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.940 2 DEBUG nova.compute.provider_tree [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Updating resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.976 2 DEBUG nova.compute.resource_tracker [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.977 2 DEBUG oslo_concurrency.lockutils [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:50:27 compute-0 nova_compute[351685]: 2025-10-03 09:50:27.977 2 DEBUG nova.service [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  3 09:50:28 compute-0 nova_compute[351685]: 2025-10-03 09:50:28.053 2 DEBUG nova.service [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  3 09:50:28 compute-0 nova_compute[351685]: 2025-10-03 09:50:28.053 2 DEBUG nova.servicegroup.drivers.db [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  3 09:50:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:28 compute-0 systemd-logind[798]: New session 60 of user zuul.
Oct  3 09:50:28 compute-0 systemd[1]: Started Session 60 of User zuul.
Oct  3 09:50:29 compute-0 podman[157165]: time="2025-10-03T09:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:50:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 09:50:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8545 "" "Go-http-client/1.1"
Oct  3 09:50:30 compute-0 python3.9[352201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  3 09:50:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:30 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 09:50:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:31 compute-0 openstack_network_exporter[159287]: ERROR   09:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:50:31 compute-0 openstack_network_exporter[159287]: ERROR   09:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:50:31 compute-0 openstack_network_exporter[159287]: ERROR   09:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:50:31 compute-0 openstack_network_exporter[159287]: ERROR   09:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:50:31 compute-0 openstack_network_exporter[159287]: ERROR   09:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:50:32 compute-0 python3.9[352358]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:50:32 compute-0 systemd[1]: Reloading.
Oct  3 09:50:32 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:50:32 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:50:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:33 compute-0 python3.9[352544]: ansible-ansible.builtin.service_facts Invoked
Oct  3 09:50:33 compute-0 network[352561]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  3 09:50:33 compute-0 network[352562]: 'network-scripts' will be removed from distribution in near future.
Oct  3 09:50:33 compute-0 network[352563]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 09:50:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:38 compute-0 podman[352736]: 2025-10-03 09:50:38.861643663 +0000 UTC m=+0.125835939 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Oct  3 09:50:38 compute-0 podman[352739]: 2025-10-03 09:50:38.86806098 +0000 UTC m=+0.114081131 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct  3 09:50:38 compute-0 podman[352738]: 2025-10-03 09:50:38.877746881 +0000 UTC m=+0.133744743 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:50:38 compute-0 podman[352737]: 2025-10-03 09:50:38.879464737 +0000 UTC m=+0.141513464 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:50:38 compute-0 podman[352756]: 2025-10-03 09:50:38.891611667 +0000 UTC m=+0.133819946 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:50:38 compute-0 podman[352750]: 2025-10-03 09:50:38.891617927 +0000 UTC m=+0.137690140 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible)
Oct  3 09:50:38 compute-0 podman[352740]: 2025-10-03 09:50:38.915772615 +0000 UTC m=+0.163031826 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:50:39 compute-0 python3.9[352981]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:50:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:41 compute-0 python3.9[353134]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:50:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:50:41.574 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:50:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:50:41.575 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:50:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:50:41.575 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:50:42 compute-0 python3.9[353286]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:50:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:43 compute-0 python3.9[353438]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:50:43 compute-0 podman[353540]: 2025-10-03 09:50:43.793319339 +0000 UTC m=+0.057305235 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:50:44 compute-0 python3.9[353608]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 09:50:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:45 compute-0 python3.9[353760]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:50:45 compute-0 systemd[1]: Reloading.
Oct  3 09:50:45 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:50:45 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:50:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:50:45
Oct  3 09:50:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:50:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:50:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', '.rgw.root']
Oct  3 09:50:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:50:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:46 compute-0 python3.9[353947]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:50:47 compute-0 python3.9[354100]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:50:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:48 compute-0 python3.9[354250]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:50:49 compute-0 python3.9[354402]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:50:49 compute-0 python3.9[354479]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:50:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:51 compute-0 podman[354604]: 2025-10-03 09:50:51.254439907 +0000 UTC m=+0.083891629 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 09:50:51 compute-0 podman[354603]: 2025-10-03 09:50:51.256506965 +0000 UTC m=+0.084887872 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 09:50:51 compute-0 python3.9[354666]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct  3 09:50:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:53 compute-0 python3.9[354818]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct  3 09:50:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:54 compute-0 python3.9[354969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 09:50:55 compute-0 nova_compute[351685]: 2025-10-03 09:50:55.055 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:50:55 compute-0 nova_compute[351685]: 2025-10-03 09:50:55.078 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:50:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:50:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3273 writes, 14K keys, 3273 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3273 writes, 3273 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1278 writes, 5546 keys, 1278 commit groups, 1.0 writes per commit group, ingest: 8.44 MB, 0.01 MB/s#012Interval WAL: 1278 writes, 1278 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     67.9      0.21              0.06         6    0.035       0      0       0.0       0.0#012  L6      1/0    7.25 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    101.9     84.9      0.41              0.09         5    0.082     19K   2188       0.0       0.0#012 Sum      1/0    7.25 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     67.4     79.1      0.62              0.16        11    0.056     19K   2188       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     67.3     68.9      0.39              0.09         6    0.065     12K   1455       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    101.9     84.9      0.41              0.09         5    0.082     19K   2188       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     70.7      0.20              0.06         5    0.040       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.03 MB/s read, 0.6 seconds#012Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 308.00 MB usage: 1.37 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(81,1.18 MB,0.382773%) FilterBlock(12,63.23 KB,0.0200495%) IndexBlock(12,130.14 KB,0.0412631%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  3 09:50:55 compute-0 python3.9[355045]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:50:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:50:56 compute-0 python3.9[355195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:50:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:56 compute-0 python3.9[355271]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:50:57 compute-0 python3.9[355421]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:50:58 compute-0 python3.9[355497]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:50:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:50:59 compute-0 python3.9[355647]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:50:59 compute-0 podman[157165]: time="2025-10-03T09:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:50:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 09:50:59 compute-0 python3.9[355799]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:50:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8539 "" "Go-http-client/1.1"
Oct  3 09:51:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:00 compute-0 python3.9[355951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:01 compute-0 python3.9[356027]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json _original_basename=ceilometer-agent-compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:01 compute-0 openstack_network_exporter[159287]: ERROR   09:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:51:01 compute-0 openstack_network_exporter[159287]: ERROR   09:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:51:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:51:01 compute-0 openstack_network_exporter[159287]: ERROR   09:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:51:01 compute-0 openstack_network_exporter[159287]: ERROR   09:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:51:01 compute-0 openstack_network_exporter[159287]: ERROR   09:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:51:01 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:51:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:02 compute-0 python3.9[356177]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:03 compute-0 python3.9[356253]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:51:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:51:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:51:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:51:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:51:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:51:04 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 72dd8814-01f9-41c2-855f-4eb82f02560f does not exist
Oct  3 09:51:04 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ae0bc025-bf5b-4313-81c7-30120acfc148 does not exist
Oct  3 09:51:04 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ba7617a8-5682-4c65-91ef-ca248e344a27 does not exist
Oct  3 09:51:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:51:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:51:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:51:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:51:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:51:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:51:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:51:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:51:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:51:04 compute-0 python3.9[356557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:05 compute-0 podman[356749]: 2025-10-03 09:51:05.159021579 +0000 UTC m=+0.061661944 container create 61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:51:05 compute-0 python3.9[356734]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json _original_basename=ceilometer_agent_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:05 compute-0 systemd[1]: Started libpod-conmon-61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a.scope.
Oct  3 09:51:05 compute-0 podman[356749]: 2025-10-03 09:51:05.132867177 +0000 UTC m=+0.035507552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:51:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:05 compute-0 podman[356749]: 2025-10-03 09:51:05.264182332 +0000 UTC m=+0.166822707 container init 61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 09:51:05 compute-0 podman[356749]: 2025-10-03 09:51:05.273690818 +0000 UTC m=+0.176331173 container start 61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 09:51:05 compute-0 podman[356749]: 2025-10-03 09:51:05.280868458 +0000 UTC m=+0.183508833 container attach 61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 09:51:05 compute-0 eloquent_haslett[356764]: 167 167
Oct  3 09:51:05 compute-0 systemd[1]: libpod-61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a.scope: Deactivated successfully.
Oct  3 09:51:05 compute-0 podman[356749]: 2025-10-03 09:51:05.296392348 +0000 UTC m=+0.199032703 container died 61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6a4bc3fe0d89a9bc28f33f33a23fd1db79188cddda9fcbee0683ec3199b1abb-merged.mount: Deactivated successfully.
Oct  3 09:51:05 compute-0 podman[356749]: 2025-10-03 09:51:05.347302946 +0000 UTC m=+0.249943301 container remove 61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:51:05 compute-0 systemd[1]: libpod-conmon-61f50549680ba21840c294c8a1e12522e4bd0fa105a5160869cf0bbfdb34901a.scope: Deactivated successfully.
Oct  3 09:51:05 compute-0 podman[356862]: 2025-10-03 09:51:05.549753488 +0000 UTC m=+0.073031370 container create f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:51:05 compute-0 systemd[1]: Started libpod-conmon-f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f.scope.
Oct  3 09:51:05 compute-0 podman[356862]: 2025-10-03 09:51:05.5268152 +0000 UTC m=+0.050093122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:51:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef9b5a07dd9aab24f8c2d55dd8be9a6576188b70320e9ecffe685f8892105b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef9b5a07dd9aab24f8c2d55dd8be9a6576188b70320e9ecffe685f8892105b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef9b5a07dd9aab24f8c2d55dd8be9a6576188b70320e9ecffe685f8892105b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef9b5a07dd9aab24f8c2d55dd8be9a6576188b70320e9ecffe685f8892105b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef9b5a07dd9aab24f8c2d55dd8be9a6576188b70320e9ecffe685f8892105b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:05 compute-0 podman[356862]: 2025-10-03 09:51:05.677010552 +0000 UTC m=+0.200288434 container init f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:51:05 compute-0 podman[356862]: 2025-10-03 09:51:05.69156027 +0000 UTC m=+0.214838142 container start f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shaw, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 09:51:05 compute-0 podman[356862]: 2025-10-03 09:51:05.695131415 +0000 UTC m=+0.218409307 container attach f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 09:51:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.795637) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485065795682, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1463, "num_deletes": 507, "total_data_size": 1810722, "memory_usage": 1848416, "flush_reason": "Manual Compaction"}
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485065807315, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1792813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13514, "largest_seqno": 14976, "table_properties": {"data_size": 1786463, "index_size": 3105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 15741, "raw_average_key_size": 18, "raw_value_size": 1771707, "raw_average_value_size": 2038, "num_data_blocks": 142, "num_entries": 869, "num_filter_entries": 869, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759484941, "oldest_key_time": 1759484941, "file_creation_time": 1759485065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11728 microseconds, and 4797 cpu microseconds.
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.807371) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1792813 bytes OK
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.807390) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.809528) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.809539) EVENT_LOG_v1 {"time_micros": 1759485065809535, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.809554) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1803214, prev total WAL file size 1803214, number of live WAL files 2.
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.810301) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1750KB)], [32(7428KB)]
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485065810343, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9399988, "oldest_snapshot_seqno": -1}
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3842 keys, 7397812 bytes, temperature: kUnknown
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485065867288, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7397812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7369927, "index_size": 17151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 94125, "raw_average_key_size": 24, "raw_value_size": 7298122, "raw_average_value_size": 1899, "num_data_blocks": 727, "num_entries": 3842, "num_filter_entries": 3842, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759485065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.868117) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7397812 bytes
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.870619) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.2 rd, 128.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.3 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(9.4) write-amplify(4.1) OK, records in: 4869, records dropped: 1027 output_compression: NoCompression
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.870653) EVENT_LOG_v1 {"time_micros": 1759485065870641, "job": 14, "event": "compaction_finished", "compaction_time_micros": 57592, "compaction_time_cpu_micros": 23386, "output_level": 6, "num_output_files": 1, "total_output_size": 7397812, "num_input_records": 4869, "num_output_records": 3842, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485065872818, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485065874748, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.810132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.875206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.875209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.875211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.875212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:51:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:51:05.875214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
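The rocksdb lines above embed a machine-readable JSON payload after the fixed "EVENT_LOG_v1 " marker. A minimal Python sketch for pulling compaction statistics out of a journal stream (reads log text on stdin; illustrative only, not part of any Ceph or RocksDB tooling):

import json
import sys

MARKER = "EVENT_LOG_v1 "

for line in sys.stdin:
    idx = line.find(MARKER)
    if idx == -1:
        continue
    try:
        # everything after the marker is a single JSON object
        event = json.loads(line[idx + len(MARKER):])
    except json.JSONDecodeError:
        continue  # truncated or non-JSON payload
    if event.get("event") == "compaction_finished":
        print(event["job"], event["output_level"],
              event["total_output_size"], event["lsm_state"])

Run against this section, it would report job 14 compacting into level 6 with 7397812 output bytes and an LSM state of [0, 0, 0, 0, 0, 0, 1], matching the compaction summary lines above.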
Oct  3 09:51:05 compute-0 python3.9[356960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:06 compute-0 python3.9[357036]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:06 compute-0 gallant_shaw[356905]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:51:06 compute-0 gallant_shaw[356905]: --> relative data size: 1.0
Oct  3 09:51:06 compute-0 gallant_shaw[356905]: --> All data devices are unavailable
Oct  3 09:51:06 compute-0 systemd[1]: libpod-f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f.scope: Deactivated successfully.
Oct  3 09:51:06 compute-0 podman[356862]: 2025-10-03 09:51:06.85138034 +0000 UTC m=+1.374658232 container died f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:51:06 compute-0 systemd[1]: libpod-f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f.scope: Consumed 1.060s CPU time.
Oct  3 09:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef9b5a07dd9aab24f8c2d55dd8be9a6576188b70320e9ecffe685f8892105b5-merged.mount: Deactivated successfully.
Oct  3 09:51:06 compute-0 podman[356862]: 2025-10-03 09:51:06.92443391 +0000 UTC m=+1.447711782 container remove f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:51:06 compute-0 systemd[1]: libpod-conmon-f2e56227c176f1067079a84eb73a1cb98d48c7874bfce9de4f61541157a0810f.scope: Deactivated successfully.
Oct  3 09:51:07 compute-0 python3.9[357243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:07 compute-0 podman[357439]: 2025-10-03 09:51:07.653494143 +0000 UTC m=+0.067493242 container create e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 09:51:07 compute-0 python3.9[357423]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:07 compute-0 podman[357439]: 2025-10-03 09:51:07.631732133 +0000 UTC m=+0.045731252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:51:07 compute-0 systemd[1]: Started libpod-conmon-e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce.scope.
Oct  3 09:51:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:07 compute-0 podman[357439]: 2025-10-03 09:51:07.761558319 +0000 UTC m=+0.175557458 container init e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:51:07 compute-0 podman[357439]: 2025-10-03 09:51:07.777512922 +0000 UTC m=+0.191512011 container start e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:51:07 compute-0 nervous_colden[357452]: 167 167
Oct  3 09:51:07 compute-0 systemd[1]: libpod-e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce.scope: Deactivated successfully.
Oct  3 09:51:07 compute-0 podman[357439]: 2025-10-03 09:51:07.786273744 +0000 UTC m=+0.200272843 container attach e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:51:07 compute-0 podman[357439]: 2025-10-03 09:51:07.787181523 +0000 UTC m=+0.201180612 container died e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-921b33c40f835cb5511f0fa56a748e42bd622c32a84b69050599ebc65b725fa9-merged.mount: Deactivated successfully.
Oct  3 09:51:07 compute-0 podman[357439]: 2025-10-03 09:51:07.850390926 +0000 UTC m=+0.264390015 container remove e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_colden, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 09:51:07 compute-0 systemd[1]: libpod-conmon-e711d8f319585ba6e388b761ff56157017782164f0b76f994e1215f5b19146ce.scope: Deactivated successfully.
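Each podman event above carries a monotonic offset (the m=+... field) next to the wall-clock timestamp, which makes container-lifetime arithmetic straightforward. A sketch using the offsets copied from the nervous_colden events in this log:

# monotonic offsets in seconds, from the podman events above
created, started, died, removed = 0.067493242, 0.191512011, 0.201180612, 0.264390015
print(f"ran for {died - started:.3f}s, full lifecycle {removed - created:.3f}s")

The container ran for roughly 10 ms before exiting; like gallant_shaw before it, it appears to be a short-lived ceph-volume probe (its only output was "167 167", which matches the ceph user's uid and gid in these images) rather than a service daemon.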
Oct  3 09:51:08 compute-0 podman[357551]: 2025-10-03 09:51:08.023381602 +0000 UTC m=+0.048545593 container create b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:51:08 compute-0 systemd[1]: Started libpod-conmon-b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182.scope.
Oct  3 09:51:08 compute-0 podman[357551]: 2025-10-03 09:51:08.004796113 +0000 UTC m=+0.029960124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:51:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/733ecefdd8e176d2b5af200bfed123114153b0b7888c0a27e3baf99aa027ff1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/733ecefdd8e176d2b5af200bfed123114153b0b7888c0a27e3baf99aa027ff1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/733ecefdd8e176d2b5af200bfed123114153b0b7888c0a27e3baf99aa027ff1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/733ecefdd8e176d2b5af200bfed123114153b0b7888c0a27e3baf99aa027ff1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:08 compute-0 podman[357551]: 2025-10-03 09:51:08.123480682 +0000 UTC m=+0.148644693 container init b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_edison, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 09:51:08 compute-0 podman[357551]: 2025-10-03 09:51:08.139865098 +0000 UTC m=+0.165029089 container start b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_edison, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 09:51:08 compute-0 podman[357551]: 2025-10-03 09:51:08.14395487 +0000 UTC m=+0.169118871 container attach b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_edison, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 09:51:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:08 compute-0 python3.9[357647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:08 compute-0 confident_edison[357592]: {
Oct  3 09:51:08 compute-0 confident_edison[357592]:    "0": [
Oct  3 09:51:08 compute-0 confident_edison[357592]:        {
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "devices": [
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "/dev/loop3"
Oct  3 09:51:08 compute-0 confident_edison[357592]:            ],
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_name": "ceph_lv0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_size": "21470642176",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "name": "ceph_lv0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "tags": {
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cluster_name": "ceph",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.crush_device_class": "",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.encrypted": "0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osd_id": "0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.type": "block",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.vdo": "0"
Oct  3 09:51:08 compute-0 confident_edison[357592]:            },
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "type": "block",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "vg_name": "ceph_vg0"
Oct  3 09:51:08 compute-0 confident_edison[357592]:        }
Oct  3 09:51:08 compute-0 confident_edison[357592]:    ],
Oct  3 09:51:08 compute-0 confident_edison[357592]:    "1": [
Oct  3 09:51:08 compute-0 confident_edison[357592]:        {
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "devices": [
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "/dev/loop4"
Oct  3 09:51:08 compute-0 confident_edison[357592]:            ],
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_name": "ceph_lv1",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_size": "21470642176",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "name": "ceph_lv1",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "tags": {
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cluster_name": "ceph",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.crush_device_class": "",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.encrypted": "0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osd_id": "1",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.type": "block",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.vdo": "0"
Oct  3 09:51:08 compute-0 confident_edison[357592]:            },
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "type": "block",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "vg_name": "ceph_vg1"
Oct  3 09:51:08 compute-0 confident_edison[357592]:        }
Oct  3 09:51:08 compute-0 confident_edison[357592]:    ],
Oct  3 09:51:08 compute-0 confident_edison[357592]:    "2": [
Oct  3 09:51:08 compute-0 confident_edison[357592]:        {
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "devices": [
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "/dev/loop5"
Oct  3 09:51:08 compute-0 confident_edison[357592]:            ],
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_name": "ceph_lv2",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_size": "21470642176",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "name": "ceph_lv2",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "tags": {
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.cluster_name": "ceph",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.crush_device_class": "",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.encrypted": "0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osd_id": "2",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.type": "block",
Oct  3 09:51:08 compute-0 confident_edison[357592]:                "ceph.vdo": "0"
Oct  3 09:51:08 compute-0 confident_edison[357592]:            },
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "type": "block",
Oct  3 09:51:08 compute-0 confident_edison[357592]:            "vg_name": "ceph_vg2"
Oct  3 09:51:08 compute-0 confident_edison[357592]:        }
Oct  3 09:51:08 compute-0 confident_edison[357592]:    ]
Oct  3 09:51:08 compute-0 confident_edison[357592]: }
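The confident_edison output above looks like the JSON report of ceph-volume lvm list, keyed by OSD id and split across journal lines. A minimal sketch for reassembling it from a journal stream (the prefix regex is illustrative, not canonical):

import json
import re
import sys

PREFIX = re.compile(r"^.*?confident_edison\[\d+\]: ")

fragments = [PREFIX.sub("", line.rstrip("\n"))
             for line in sys.stdin if PREFIX.search(line)]
report = json.loads("\n".join(fragments))
for osd_id, lvs in sorted(report.items()):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
              f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")

Each LV is 21470642176 bytes, i.e. just under 20 GiB; three of them account for the "60 GiB / 60 GiB avail" figure in the ceph-mgr pgmap lines.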
Oct  3 09:51:08 compute-0 systemd[1]: libpod-b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182.scope: Deactivated successfully.
Oct  3 09:51:08 compute-0 podman[357551]: 2025-10-03 09:51:08.940106741 +0000 UTC m=+0.965270732 container died b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_edison, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 09:51:08 compute-0 python3.9[357723]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.json _original_basename=node_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-733ecefdd8e176d2b5af200bfed123114153b0b7888c0a27e3baf99aa027ff1d-merged.mount: Deactivated successfully.
Oct  3 09:51:09 compute-0 podman[357551]: 2025-10-03 09:51:09.088393031 +0000 UTC m=+1.113557032 container remove b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 09:51:09 compute-0 systemd[1]: libpod-conmon-b1944820411dd1e249b4e09e7c51a2be4526a69d32a9352412f8eb77bb018182.scope: Deactivated successfully.
Oct  3 09:51:09 compute-0 podman[357739]: 2025-10-03 09:51:09.185704471 +0000 UTC m=+0.198592629 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct  3 09:51:09 compute-0 podman[357737]: 2025-10-03 09:51:09.199361971 +0000 UTC m=+0.218826070 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:51:09 compute-0 podman[357729]: 2025-10-03 09:51:09.200208108 +0000 UTC m=+0.209183840 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, build-date=2024-09-18T21:23:30, distribution-scope=public, version=9.4, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:51:09 compute-0 podman[357743]: 2025-10-03 09:51:09.204736334 +0000 UTC m=+0.201703579 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 09:51:09 compute-0 podman[357738]: 2025-10-03 09:51:09.206741818 +0000 UTC m=+0.226618951 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:51:09 compute-0 podman[357741]: 2025-10-03 09:51:09.212337758 +0000 UTC m=+0.222294732 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9)
Oct  3 09:51:09 compute-0 podman[357740]: 2025-10-03 09:51:09.223024192 +0000 UTC m=+0.217344032 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
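The burst of health_status events above shows every managed container (the ceilometer agents, node_exporter, kepler, podman_exporter, openstack_network_exporter, ovn_controller) reporting healthy with a zero failing streak. A quick sketch to tabulate container health from such lines (regex is illustrative):

import re
import sys

PAT = re.compile(r"container health_status \S+ \(image=[^,]+, name=([^,]+), "
                 r"health_status=([^,]+)")

for line in sys.stdin:
    m = PAT.search(line)
    if m:
        print(f"{m.group(1)}: {m.group(2)}")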
Oct  3 09:51:09 compute-0 python3.9[358127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:09 compute-0 podman[358170]: 2025-10-03 09:51:09.876660149 +0000 UTC m=+0.054141503 container create 49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:51:09 compute-0 systemd[1]: Started libpod-conmon-49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa.scope.
Oct  3 09:51:09 compute-0 podman[358170]: 2025-10-03 09:51:09.859371323 +0000 UTC m=+0.036852697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:51:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:09 compute-0 podman[358170]: 2025-10-03 09:51:09.976710867 +0000 UTC m=+0.154192251 container init 49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 09:51:09 compute-0 podman[358170]: 2025-10-03 09:51:09.98767119 +0000 UTC m=+0.165152544 container start 49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:51:09 compute-0 podman[358170]: 2025-10-03 09:51:09.992419142 +0000 UTC m=+0.169900516 container attach 49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:51:09 compute-0 silly_booth[358225]: 167 167
Oct  3 09:51:09 compute-0 systemd[1]: libpod-49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa.scope: Deactivated successfully.
Oct  3 09:51:09 compute-0 podman[358170]: 2025-10-03 09:51:09.994863171 +0000 UTC m=+0.172344525 container died 49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_booth, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Oct  3 09:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3a53f40a6faeeb37fde7ee95b292126e42ff4d8291e0f59f136b63e449f411-merged.mount: Deactivated successfully.
Oct  3 09:51:10 compute-0 podman[358170]: 2025-10-03 09:51:10.048787726 +0000 UTC m=+0.226269080 container remove 49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_booth, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:51:10 compute-0 systemd[1]: libpod-conmon-49281efc4367c583233bfdc18c564a66e7ee0521aa5d7c5fd798de9234555afa.scope: Deactivated successfully.
Oct  3 09:51:10 compute-0 python3.9[358270]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:10 compute-0 podman[358279]: 2025-10-03 09:51:10.268737012 +0000 UTC m=+0.083013212 container create fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 09:51:10 compute-0 podman[358279]: 2025-10-03 09:51:10.244902124 +0000 UTC m=+0.059178404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:51:10 compute-0 systemd[1]: Started libpod-conmon-fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055.scope.
Oct  3 09:51:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17fb4454a5baf5e70a1ba7f1319907ebb29e8e33473eae8ae18f073fa97c7d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17fb4454a5baf5e70a1ba7f1319907ebb29e8e33473eae8ae18f073fa97c7d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17fb4454a5baf5e70a1ba7f1319907ebb29e8e33473eae8ae18f073fa97c7d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17fb4454a5baf5e70a1ba7f1319907ebb29e8e33473eae8ae18f073fa97c7d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:10 compute-0 podman[358279]: 2025-10-03 09:51:10.398077462 +0000 UTC m=+0.212353692 container init fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:51:10 compute-0 podman[358279]: 2025-10-03 09:51:10.412726723 +0000 UTC m=+0.227002913 container start fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 09:51:10 compute-0 podman[358279]: 2025-10-03 09:51:10.418400156 +0000 UTC m=+0.232676416 container attach fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:51:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:11 compute-0 python3.9[358449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]: {
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "osd_id": 1,
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "type": "bluestore"
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:    },
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "osd_id": 2,
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "type": "bluestore"
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:    },
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "osd_id": 0,
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:        "type": "bluestore"
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]:    }
Oct  3 09:51:11 compute-0 nostalgic_boyd[358319]: }
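The JSON the nostalgic_boyd container prints is keyed by OSD uuid with ceph_fsid/device/osd_id/type fields, which matches the layout of `ceph-volume raw list --format json` that cephadm runs inside a throwaway ceph container to inventory OSDs (an inference from the field names, not stated in the log). A sketch reducing the blob to an osd_id-to-device listing, with one entry reproduced from the log:

    import json

    # One entry copied from the log above; the real blob carries three OSDs.
    raw_output = """{
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
        "type": "bluestore"
      }
    }"""
    osds = json.loads(raw_output)
    for uuid, osd in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']})")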
Oct  3 09:51:11 compute-0 systemd[1]: libpod-fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055.scope: Deactivated successfully.
Oct  3 09:51:11 compute-0 systemd[1]: libpod-fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055.scope: Consumed 1.163s CPU time.
Oct  3 09:51:11 compute-0 podman[358279]: 2025-10-03 09:51:11.595670537 +0000 UTC m=+1.409946747 container died fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:51:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c17fb4454a5baf5e70a1ba7f1319907ebb29e8e33473eae8ae18f073fa97c7d8-merged.mount: Deactivated successfully.
Oct  3 09:51:11 compute-0 python3.9[358545]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json _original_basename=openstack_network_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:11 compute-0 podman[358279]: 2025-10-03 09:51:11.67719283 +0000 UTC m=+1.491469030 container remove fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_boyd, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:51:11 compute-0 systemd[1]: libpod-conmon-fc948dea6419edef86d502df69a88cfc0f467bcf3916e7688982259610e8d055.scope: Deactivated successfully.
Oct  3 09:51:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:51:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:51:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:51:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:51:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c07c566f-067a-4c2b-8da8-4a2c7cfacc2b does not exist
Oct  3 09:51:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e1a711fd-20c7-4d25-ae26-e5d06cc876bb does not exist
Oct  3 09:51:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:51:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
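The handle_command/audit pairs above show the cephadm mgr module persisting the device inventory it just gathered under mgr/cephadm/* keys in the monitors' config-key store. Reading one key back, assuming the stored value is the JSON document cephadm writes and that an admin keyring is available on this node:

    import json
    import subprocess

    # Key name copied from the mon_command line above.
    raw = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True).stdout
    # Assumption: cephadm stores a JSON blob here; this fails loudly otherwise.
    print(json.loads(raw))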
Oct  3 09:51:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:12 compute-0 python3.9[358764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:13 compute-0 python3.9[358840]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml _original_basename=openstack_network_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:14 compute-0 podman[358964]: 2025-10-03 09:51:14.118871384 +0000 UTC m=+0.071979386 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
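The health_status=healthy record comes from the systemd timer that periodically runs each managed container's configured healthcheck, here the '/openstack/healthcheck' script mounted into ovn_metadata_agent. The same probe can be run by hand; `podman healthcheck run` reports through its exit code (0 healthy, 1 unhealthy):

    import subprocess

    # Executes the container's configured test command and returns its verdict.
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")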
Oct  3 09:51:14 compute-0 python3.9[359006]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:15 compute-0 python3.9[359085]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.json _original_basename=podman_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:51:16 compute-0 python3.9[359235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:16 compute-0 python3.9[359311]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:17 compute-0 python3.9[359461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:18 compute-0 python3.9[359537]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:18 compute-0 python3.9[359687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:19 compute-0 python3.9[359763]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:20 compute-0 python3.9[359914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:20 compute-0 python3.9[359990]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:21 compute-0 podman[360114]: 2025-10-03 09:51:21.470031452 +0000 UTC m=+0.080909015 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 09:51:21 compute-0 podman[360115]: 2025-10-03 09:51:21.490918003 +0000 UTC m=+0.102422655 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 09:51:21 compute-0 python3.9[360178]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:22 compute-0 python3.9[360331]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:23 compute-0 python3.9[360483]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.733 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.733 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.733 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.746 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.747 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.747 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.748 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.748 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.748 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.749 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.749 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.749 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.776 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.776 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.777 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.777 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 09:51:23 compute-0 nova_compute[351685]: 2025-10-03 09:51:23.778 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 09:51:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:51:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268207017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:51:24 compute-0 nova_compute[351685]: 2025-10-03 09:51:24.199 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
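Nova's resource-tracker audit shells out to `ceph df` (the exact command is logged above) to size the RBD pool backing ephemeral disks. A sketch issuing the same call and reading the stats object that `ceph df --format=json` emits (total and available byte counts at the top level):

    import json
    import subprocess

    # Same command nova_compute runs via oslo_concurrency.processutils.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])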
Oct  3 09:51:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:24 compute-0 nova_compute[351685]: 2025-10-03 09:51:24.541 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 09:51:24 compute-0 nova_compute[351685]: 2025-10-03 09:51:24.542 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4516MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 09:51:24 compute-0 nova_compute[351685]: 2025-10-03 09:51:24.542 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:51:24 compute-0 nova_compute[351685]: 2025-10-03 09:51:24.542 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:51:24 compute-0 nova_compute[351685]: 2025-10-03 09:51:24.625 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 09:51:24 compute-0 nova_compute[351685]: 2025-10-03 09:51:24.626 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 09:51:24 compute-0 nova_compute[351685]: 2025-10-03 09:51:24.644 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 09:51:24 compute-0 python3.9[360657]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:51:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:51:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2088036102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:51:25 compute-0 nova_compute[351685]: 2025-10-03 09:51:25.146 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 09:51:25 compute-0 nova_compute[351685]: 2025-10-03 09:51:25.155 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 09:51:25 compute-0 nova_compute[351685]: 2025-10-03 09:51:25.171 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
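The inventory line fixes this host's schedulable capacity: Placement admits allocations up to (total - reserved) * allocation_ratio per resource class, so the node offers 32 vCPUs (8 x 4.0), 7167 MB of RAM ((7679 - 512) x 1.0) and 53.1 GB of disk (59 x 0.9). The same arithmetic, with the values copied from the log:

    # Inventory values copied from the scheduler report line above.
    inv = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        # Placement capacity check: used + requested <= (total - reserved) * ratio
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])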
Oct  3 09:51:25 compute-0 nova_compute[351685]: 2025-10-03 09:51:25.173 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 09:51:25 compute-0 nova_compute[351685]: 2025-10-03 09:51:25.173 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:51:25 compute-0 python3.9[360833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:26 compute-0 python3.9[360911]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:51:27 compute-0 python3.9[360987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:27 compute-0 python3.9[361065]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:51:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:28 compute-0 python3.9[361217]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Oct  3 09:51:29 compute-0 podman[157165]: time="2025-10-03T09:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:51:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 09:51:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8547 "" "Go-http-client/1.1"
Oct  3 09:51:30 compute-0 python3.9[361369]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:51:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:31 compute-0 python3[361521]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:51:31 compute-0 openstack_network_exporter[159287]: ERROR   09:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:51:31 compute-0 openstack_network_exporter[159287]: ERROR   09:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:51:31 compute-0 openstack_network_exporter[159287]: ERROR   09:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:51:31 compute-0 openstack_network_exporter[159287]: ERROR   09:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:51:31 compute-0 openstack_network_exporter[159287]: 
Oct  3 09:51:31 compute-0 openstack_network_exporter[159287]: ERROR   09:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:51:31 compute-0 openstack_network_exporter[159287]: 
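These exporter errors are consistent with a compute-only node: openstack_network_exporter reaches OVS/OVN daemons through their ovs-appctl control sockets, and ovn-northd normally runs only on controllers, so no socket exists here. A sketch that lists whichever control sockets are actually present; the directories are assumptions based on common OVS/OVN packaging:

    import glob

    # Typical control-socket locations (assumption; packaging varies by distro).
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))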
Oct  3 09:51:31 compute-0 python3[361521]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "f679b9c320fc42d5695129dd54be81f43c4c4ec41e2859a2f48785c28a8d8cbc",#012          "Digest": "sha256:eeebd2ed5aae224b8d96af1876b27e6fb66f6a4fb424c44c0cce3beaf0cfab2e",#012          "RepoTags": [#012               "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"#012          ],#012          "RepoDigests": [#012               "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute@sha256:eeebd2ed5aae224b8d96af1876b27e6fb66f6a4fb424c44c0cce3beaf0cfab2e"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-10-03T05:12:17.870393293Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.4",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20250930",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 10 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "254c55e1b6431493ec1ac89f73b53751",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 601353416,#012          "VirtualSize": 601353416,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/a8fbef6de0d2fb3a6c3e2e15bb156c6ca383c2ff7aba9149465ec8a177a066c5/diff:/var/lib/containers/storage/overlay/318aa93b916f9592fbcefb1c25cbf33eebf5096485ac7bfd033b797ea41abece/diff:/var/lib/containers/storage/overlay/d0d7fac02059cf0a8f447a8790ae560ec4756db32ca862aca012b23397a60a58/diff:/var/lib/containers/storage/overlay/3d7063f4eea69b495feea125e59ae13f34b53a14073a991c7a4a030171385d0a/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/d7a1771fb8e44f1cca6b54a57ded62e9df2d1666ec8bfdd518fff0e0d8ada1df/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/d7a1771fb8e44f1cca6b54a57ded62e9df2d1666ec8bfdd518fff0e0d8ada1df/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:3d7063f4eea69b495feea125e59ae13f34b53a14073a991c7a4a030171385d0a",#012                    "sha256:7f9be9b9e6e96dea5bc0d82c37becafcdd2da238f538c542eeb9a6119389c6a9",#012                    "sha256:12d8eaff0dccd213bfc544c46147b5e37c62a3db2e30fe108cd8b30d804e6f17",#012                    "sha256:63454cea95c6b76719d8d8e3cad4e54528c8521d698f35c63dadd4f60004afaf",#012                    "sha256:d92c01dc0fed4471a931f84472a37800382dff6fc6c42400b87fd60d0f235ec0"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.4",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20250930",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 10 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "254c55e1b6431493ec1ac89f73b53751",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-09-30T01:01:28.661608703Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:418939e7e2ccbc31d43c6839107e54b74f045789b3e6192d8110e4430180b37e in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-09-30T01:01:28.661706565Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 10 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20250930\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-09-30T01:01:31.805474404Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-10-03T05:04:07.154700189Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream10",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-03T05:04:07.154769441Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-03T05:04:07.154796482Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-03T05:04:07.154833684Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-03T05:04:07.154862604Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-03T05:04:07.154882655Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-03T05:04:07.548401446Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-03T05:04:08.138795621Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/centos.repo\" ]; then rm -f /etc/yum.repos.d/centos*.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-03T05:04:17.251075027Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && cr
Oct  3 09:51:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:32 compute-0 python3.9[361723]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:51:33 compute-0 python3.9[361877]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:34 compute-0 python3.9[362028]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759485093.6099439-484-67359764065826/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:35 compute-0 python3.9[362104]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:51:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:36 compute-0 python3.9[362258]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:51:36 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:51:37.049 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:51:37.151 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:51:37.151 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:51:37.151 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:51:37.151 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[153651]: 2025-10-03 09:51:37.162 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
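The shutdown sequence above (master catches SIGTERM, forwards it to the workers, waits, then logs "Shutdown finish") is cotyledon's standard ServiceManager behaviour; ceilometer's polling agents are built on that library. A toy sketch of the same lifecycle, using an illustrative service class rather than ceilometer's own:

    import time
    import cotyledon

    class Poller(cotyledon.Service):
        """Toy worker; terminate() runs on the SIGTERM the manager forwards."""
        def run(self):
            while True:
                time.sleep(1)
        def terminate(self):
            # Graceful-exit hook, mirroring "Caught SIGTERM ... graceful exiting".
            raise SystemExit(0)

    if __name__ == "__main__":
        manager = cotyledon.ServiceManager()
        manager.add(Poller, workers=2)
        manager.run()  # SIGTERM -> workers terminate, then "Shutdown finish"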
Oct  3 09:51:37 compute-0 virtqemud[137656]: End of file while reading data: Input/output error
Oct  3 09:51:37 compute-0 virtqemud[137656]: End of file while reading data: Input/output error
Oct  3 09:51:37 compute-0 systemd[1]: libpod-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Deactivated successfully.
Oct  3 09:51:37 compute-0 systemd[1]: libpod-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Consumed 3.265s CPU time.
Oct  3 09:51:37 compute-0 podman[362262]: 2025-10-03 09:51:37.427569442 +0000 UTC m=+0.429660413 container died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Oct  3 09:51:37 compute-0 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-50705040be7aa45f.timer: Deactivated successfully.
Oct  3 09:51:37 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.
Oct  3 09:51:37 compute-0 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-50705040be7aa45f.service: Failed to open /run/systemd/transient/d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-50705040be7aa45f.service: No such file or directory
Oct  3 09:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-userdata-shm.mount: Deactivated successfully.
Oct  3 09:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727-merged.mount: Deactivated successfully.
Oct  3 09:51:37 compute-0 podman[362262]: 2025-10-03 09:51:37.51112827 +0000 UTC m=+0.513219201 container cleanup d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute)
Oct  3 09:51:37 compute-0 podman[362262]: ceilometer_agent_compute
Oct  3 09:51:37 compute-0 podman[362291]: ceilometer_agent_compute
Oct  3 09:51:37 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Oct  3 09:51:37 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Oct  3 09:51:37 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Oct  3 09:51:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c98a55060b58199c4bc42ce3ae28d8cb4d260b5f5bce8470981640f442d1727/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.
Oct  3 09:51:37 compute-0 podman[362303]: 2025-10-03 09:51:37.754772118 +0000 UTC m=+0.140729069 container init d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + sudo -E kolla_set_configs
Oct  3 09:51:37 compute-0 podman[362303]: 2025-10-03 09:51:37.787163789 +0000 UTC m=+0.173120730 container start d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct  3 09:51:37 compute-0 podman[362303]: ceilometer_agent_compute
Oct  3 09:51:37 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: sudo: unable to send audit message: Operation not permitted
Oct  3 09:51:37 compute-0 podman[362324]: 2025-10-03 09:51:37.892908691 +0000 UTC m=+0.096084422 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930)
Oct  3 09:51:37 compute-0 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-1799c1a3b5e46663.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 09:51:37 compute-0 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-1799c1a3b5e46663.service: Failed with result 'exit-code'.
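(Note: podman drives each container healthcheck through a transient systemd timer/service pair whose unit names begin with the full container ID, which is why the failed unit above matches the d1f8d438... container started moments earlier. A minimal sketch for listing those transient units on the host, assuming only that systemctl is on PATH; the ID is copied from the journal lines above:

    # Sketch: list podman's transient healthcheck units for one container.
    # Unit names start with the full container ID, as seen in the journal.
    import subprocess

    cid = "d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47"
    subprocess.run(["systemctl", "list-units", "--all", f"{cid}*"], check=False)

)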
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Validating config file
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Copying service configuration files
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: INFO:__main__:Writing out command to execute
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: ++ cat /run_command
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + ARGS=
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + sudo kolla_copy_cacerts
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: sudo: unable to send audit message: Operation not permitted
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + [[ ! -n '' ]]
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + . kolla_extend_start
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + umask 0022
Oct  3 09:51:37 compute-0 ceilometer_agent_compute[362317]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
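(Note: the kolla_set_configs run above copies four files under COPY_ALWAYS and then execs the command read from /run_command. A minimal reconstruction of the mounted /var/lib/kolla/config_files/config.json that would produce exactly those copy steps, assuming the standard kolla config-file schema; the source/dest pairs and the command are taken from the log lines above, while the owner and perm values are assumptions, since the log does not print them:

    # Hedged reconstruction of the config.json implied by the kolla trace above.
    import json

    config = {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
            {"source": "/var/lib/openstack/config/custom.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
             "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
            {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
             "owner": "ceilometer", "perm": "0600"},   # owner/perm assumed
        ],
    }
    print(json.dumps(config, indent=2))

)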
Oct  3 09:51:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:38 compute-0 python3.9[362498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:39 compute-0 python3.9[362576]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/node_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/node_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:51:39 compute-0 podman[362577]: 2025-10-03 09:51:39.347486743 +0000 UTC m=+0.109060349 container health_status 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:51:39 compute-0 podman[362590]: 2025-10-03 09:51:39.357331809 +0000 UTC m=+0.099424839 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3)
Oct  3 09:51:39 compute-0 podman[362585]: 2025-10-03 09:51:39.362879078 +0000 UTC m=+0.101848487 container health_status e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:51:39 compute-0 podman[362578]: 2025-10-03 09:51:39.364663316 +0000 UTC m=+0.119062262 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Oct  3 09:51:39 compute-0 podman[362579]: 2025-10-03 09:51:39.381466716 +0000 UTC m=+0.125368835 container health_status b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:51:39 compute-0 podman[362592]: 2025-10-03 09:51:39.394863327 +0000 UTC m=+0.129778196 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
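(Note: the health_status entries above are the output of the same "/usr/bin/podman healthcheck run <id>" invocations that systemd logs earlier in this window. A minimal sketch for triggering one such check by hand and reading the result, assuming podman is on PATH and using the container name from the log; exit status 0 means healthy:

    # Sketch: run the container healthcheck manually, as systemd does above.
    import subprocess

    res = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    print("healthy" if res.returncode == 0 else f"unhealthy ({res.returncode})")

)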
Oct  3 09:51:40 compute-0 python3.9[362849]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Oct  3 09:51:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.532 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.533 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.533 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.533 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.533 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.533 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.533 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.533 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.533 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.534 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.534 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.534 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.534 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.534 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.534 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.534 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.535 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.536 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.537 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.538 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.539 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.540 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.541 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.542 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.543 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.544 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.545 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.546 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.547 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.547 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.547 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.547 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.547 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.547 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.547 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.547 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.570 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.570 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.570 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.570 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.571 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.572 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.573 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.574 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.575 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.575 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.575 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.576 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.577 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.578 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.579 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.580 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.581 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.582 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.583 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.584 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.584 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.584 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.584 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.584 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.584 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.584 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.586 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.590 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.591 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.596 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.616 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.616 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.617 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Oct  3 09:51:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.832 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.832 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.832 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.833 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.834 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.835 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.836 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.837 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.838 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
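Annotation: the logging_* values dumped above are ordinary Python-logging format templates that oslo.log wires into its handlers. A minimal sketch (illustrative only, not oslo.log itself) of how logging_default_format_string and log_date_format from this dump map onto the stdlib logging module; the "%(instance)s" placeholder, which oslo.log injects automatically, has to be supplied via `extra` here:

    # Sketch: render a record the way logging_default_format_string above would.
    import logging

    fmt = ("%(asctime)s.%(msecs)03d %(process)d %(levelname)s "
           "%(name)s [-] %(instance)s%(message)s")
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(fmt=fmt, datefmt="%Y-%m-%d %H:%M:%S"))

    log = logging.getLogger("ceilometer.demo")
    log.addHandler(handler)
    log.setLevel(logging.DEBUG)
    # oslo.log fills %(instance)s for us; plain stdlib needs it passed in:
    log.debug("polling started", extra={"instance": ""})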
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.839 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.840 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.841 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
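Annotation: per the [polling] group above, the agent's Prometheus exporter is enabled, listens on [::]:9101, and serves TLS with /etc/ceilometer/tls/tls.crt and tls.key. A hedged sketch of scraping it; the /metrics path, the compute-0 hostname, and the CA bundle location are assumptions, only the port and TLS settings come from the dump:

    # Sketch: scrape the agent's TLS-enabled Prometheus exporter.
    import requests

    resp = requests.get(
        "https://compute-0:9101/metrics",      # host and path are assumed
        verify="/etc/ceilometer/tls/ca.crt",   # assumed CA bundle location
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.text[:400])                     # first few exposition lines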
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.842 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.843 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.844 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
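Annotation: the service_credentials group above is a standard keystoneauth1 password-auth section (auth_type = password, v3 domains). A minimal sketch that rebuilds the same session by hand; the password is masked as **** in the dump, so a placeholder stands in for it:

    # Sketch: reconstruct the agent's Keystone session from the
    # service_credentials values logged above.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000",
        username="ceilometer",
        password="REDACTED",          # logged as **** above
        project_name="service",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    token = sess.get_token()          # endpoint choice follows interface=internalURL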
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.845 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.846 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.847 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.847 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.847 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.847 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.847 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.847 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
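Annotation: the entire banner-delimited block above is a single oslo_config call, ConfigOpts.log_opt_values(), which cotyledon's oslo_config glue runs at service start because log_options = True. A minimal reproduction; the two options registered here are placeholders, not ceilometer's real schema:

    # Sketch: reproduce the banner-delimited option dump seen above.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.StrOpt("log_dir", default="/var/log/ceilometer"),
        cfg.StrOpt("password", secret=True, default="hunter2"),  # printed as ****
    ])
    conf([])                                   # parse an empty command line
    conf.log_opt_values(LOG, logging.DEBUG)    # emits one line per option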
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.847 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.849 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
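Annotation: the parsed dict in the record above corresponds to a polling.yaml with a single source. A sketch that round-trips it; the YAML text is reconstructed from the parsed form, so layout and key order are assumptions:

    # Sketch: a polling.yaml that parses to the dict logged above.
    import yaml

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """

    cfg = yaml.safe_load(POLLING_YAML)
    assert cfg["sources"][0]["interval"] == 120
    assert "network.*" in cfg["sources"][0]["meters"]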
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.876 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.878 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
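Annotation: the warning two records up fires because the single source expands to more pollsters than worker threads; the dump shows polling.threads_to_process_pollsters = 1, so the whole cycle runs serially. A hedged fragment showing the knob that widens the pool (the value 4 is arbitrary; only the section and option names come from the dump):

    # Sketch: the ceilometer.conf fragment that would raise the pollster
    # thread pool, parsed here with configparser for illustration.
    import configparser

    cp = configparser.ConfigParser()
    cp.read_string("""
    [polling]
    threads_to_process_pollsters = 4
    """)
    print(cp["polling"]["threads_to_process_pollsters"])  # -> 4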
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.878 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.879 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.879 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.880 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.880 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.880 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
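Annotation: every pollster in this cycle resolves its resources through the local_instances discovery, which with compute.instance_discovery_method = libvirt_metadata enumerates guests from the local libvirt (the qemu:///system connection a few records up). The cached discovery result ({'local_instances': []}) is empty, so each pollster logs "no resources found this cycle" and is skipped: no instances are running on compute-0. A sketch of the equivalent enumeration at the libvirt level; this mirrors, but is not, ceilometer's discovery code:

    # Sketch: what an empty local_instances discovery looks like via libvirt.
    import libvirt

    conn = libvirt.open("qemu:///system")    # same URI as the log record above
    domains = conn.listAllDomains()
    print(f"{len(domains)} libvirt domain(s) found")  # 0 here -> pollsters skip
    conn.close()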
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.894 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:51:40.899 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:51:41 compute-0 python3.9[363011]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:51:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:51:41.576 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:51:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:51:41.577 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:51:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:51:41.578 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:51:42 compute-0 python3[363166]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:51:42 compute-0 python3[363166]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83",#012          "Digest": "sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80",#012          "RepoTags": [#012               "quay.io/prometheus/node-exporter:v1.5.0"#012          ],#012          "RepoDigests": [#012               "quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c",#012               "quay.io/prometheus/node-exporter@sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2022-11-29T19:06:14.987394068Z",#012          "Config": {#012               "User": "nobody",#012               "ExposedPorts": {#012                    "9100/tcp": {}#012               },#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"#012               ],#012               "Entrypoint": [#012                    "/bin/node_exporter"#012               ],#012               "Labels": {#012                    "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012               }#012          },#012          "Version": "19.03.8",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 23851788,#012          "VirtualSize": 23851788,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c/diff:/var/lib/containers/storage/overlay/0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8",#012                    "sha256:9f2d25037e3e722ca7f4ca9c7a885f19a2ce11140592ee0acb323dec3b26640d",#012                    "sha256:76857a93cd03e12817c36c667cc3263d58886232cad116327e55d79036e5977d"#012               ]#012          },#012          "Labels": {#012               "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "nobody",#012          "History": [#012               {#012                    "created": "2022-10-26T06:30:33.700079457Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:5e991de3200129dc05c3130f7a64bebb5704486b4f773bfcaa6b13165d6c2416 in / "#012               },#012               {#012                    "created": "2022-10-26T06:30:33.794221299Z",#012                    "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-15T10:54:54.845364304Z",#012                    "created_by": "/bin/sh -c #(nop)  MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-15T10:54:55.54866664Z",#012                    "created_by": "/bin/sh -c #(nop) COPY dir:02c961e21531be78a67ed9bad67d03391cfedcead8b0a35cfb9171346636f11a in / ",#012                    "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012               },#012               {#012                    "created": "2022-11-29T19:06:13.622645057Z",#012                    "created_by": "/bin/sh -c #(nop)  LABEL maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:13.810765105Z",#012                    "created_by": "/bin/sh -c #(nop)  ARG ARCH=amd64",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:13.990897895Z",#012                    "created_by": "/bin/sh -c #(nop)  ARG OS=linux",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.358293759Z",#012                    "created_by": "/bin/sh -c #(nop) COPY file:3ef20dd145817033186947b860c3b6f7bb06d4c435257258c0e5df15f6e51eb7 in /bin/node_exporter "#012               },#012               {#012                    "created": "2022-11-29T19:06:14.630644274Z",#012                    "created_by": "/bin/sh -c #(nop)  EXPOSE 9100",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.79596292Z",#012                    "created_by": "/bin/sh -c #(nop)  USER nobody",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.987394068Z",#012                    "created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"/bin/node_exporter\"]",#012                    "empty_layer": true#012               }#012          ],#012          "NamesHistory": [#012               "quay.io/prometheus/node-exporter:v1.5.0"#012          ]#012     }#012]#012: quay.io/prometheus/node-exporter:v1.5.0
Oct  3 09:51:42 compute-0 systemd[1]: libpod-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope: Deactivated successfully.
Oct  3 09:51:42 compute-0 systemd[1]: libpod-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.scope: Consumed 3.830s CPU time.
Oct  3 09:51:42 compute-0 podman[363212]: 2025-10-03 09:51:42.354915308 +0000 UTC m=+0.064350442 container died 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:51:42 compute-0 systemd[1]: 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-8eea1f1b57b4ea3.timer: Deactivated successfully.
Oct  3 09:51:42 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866.
Oct  3 09:51:42 compute-0 systemd[1]: 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-8eea1f1b57b4ea3.service: Failed to open /run/systemd/transient/18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-8eea1f1b57b4ea3.service: No such file or directory
Oct  3 09:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-userdata-shm.mount: Deactivated successfully.
Oct  3 09:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fbe36b3145962233b5e5c8c64a6acbf4327cbc0c9eca94a637632715abf8d06-merged.mount: Deactivated successfully.
Oct  3 09:51:42 compute-0 podman[363212]: 2025-10-03 09:51:42.416299563 +0000 UTC m=+0.125734697 container cleanup 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:51:42 compute-0 python3[363166]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop node_exporter
Oct  3 09:51:42 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 09:51:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:42 compute-0 systemd[1]: 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-8eea1f1b57b4ea3.timer: Failed to open /run/systemd/transient/18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-8eea1f1b57b4ea3.timer: No such file or directory
Oct  3 09:51:42 compute-0 systemd[1]: 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-8eea1f1b57b4ea3.service: Failed to open /run/systemd/transient/18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866-8eea1f1b57b4ea3.service: No such file or directory
Oct  3 09:51:42 compute-0 podman[363237]: 2025-10-03 09:51:42.503870209 +0000 UTC m=+0.064325790 container remove 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:51:42 compute-0 podman[363238]: Error: no container with ID 18e606953215927bd5d6df0ffff1709df925dfa27830c636b46ff2d8a6345866 found in database: no such container
Oct  3 09:51:42 compute-0 systemd[1]: edpm_node_exporter.service: Control process exited, code=exited, status=125/n/a
Oct  3 09:51:42 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Oct  3 09:51:42 compute-0 python3[363166]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force node_exporter
Oct  3 09:51:42 compute-0 podman[363258]: 2025-10-03 09:51:42.580437292 +0000 UTC m=+0.048275114 container create 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:51:42 compute-0 podman[363258]: 2025-10-03 09:51:42.557146883 +0000 UTC m=+0.024984735 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct  3 09:51:42 compute-0 python3[363166]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Oct  3 09:51:42 compute-0 systemd[1]: edpm_node_exporter.service: Scheduled restart job, restart counter is at 1.
Oct  3 09:51:42 compute-0 systemd[1]: Stopped node_exporter container.
Oct  3 09:51:42 compute-0 systemd[1]: Starting node_exporter container...
Oct  3 09:51:42 compute-0 systemd[1]: Started libpod-conmon-343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.scope.
Oct  3 09:51:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e9f527215d76b20cc6b7c263eb6e4f86b28aad2ebd2050dc835230a4cdc95/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e9f527215d76b20cc6b7c263eb6e4f86b28aad2ebd2050dc835230a4cdc95/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.
Oct  3 09:51:42 compute-0 podman[363270]: 2025-10-03 09:51:42.766505327 +0000 UTC m=+0.160030688 container init 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.794Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.795Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.795Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.795Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.795Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.795Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.795Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=arp
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=bcache
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=bonding
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=cpu
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=edac
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=filefd
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=netclass
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=netdev
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=netstat
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=nfs
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=nvme
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=softnet
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=systemd
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=xfs
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.796Z caller=node_exporter.go:117 level=info collector=zfs
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.797Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct  3 09:51:42 compute-0 node_exporter[363289]: ts=2025-10-03T09:51:42.798Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Oct  3 09:51:42 compute-0 podman[363270]: 2025-10-03 09:51:42.801440282 +0000 UTC m=+0.194965623 container start 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:51:42 compute-0 podman[363281]: node_exporter
Oct  3 09:51:42 compute-0 python3[363166]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start node_exporter
Oct  3 09:51:42 compute-0 systemd[1]: Started node_exporter container.
Oct  3 09:51:42 compute-0 podman[363303]: 2025-10-03 09:51:42.893179752 +0000 UTC m=+0.078165765 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:51:43 compute-0 python3.9[363502]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:51:44 compute-0 podman[363628]: 2025-10-03 09:51:44.446573773 +0000 UTC m=+0.089061926 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:51:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:44 compute-0 python3.9[363674]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:45 compute-0 python3.9[363825]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759485104.7664244-562-75053707818747/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:51:45
Oct  3 09:51:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:51:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:51:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'volumes', 'images', 'backups', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control']
Oct  3 09:51:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:51:46 compute-0 python3.9[363901]: ansible-systemd Invoked with state=started name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:51:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:48 compute-0 python3.9[364055]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:51:48 compute-0 systemd[1]: Stopping node_exporter container...
Oct  3 09:51:48 compute-0 systemd[1]: libpod-343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.scope: Deactivated successfully.
Oct  3 09:51:48 compute-0 podman[364059]: 2025-10-03 09:51:48.380396709 +0000 UTC m=+0.072821764 container died 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:51:48 compute-0 systemd[1]: 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2-72c7bbacd89dbf3d.timer: Deactivated successfully.
Oct  3 09:51:48 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.
Oct  3 09:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2-userdata-shm.mount: Deactivated successfully.
Oct  3 09:51:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c4e9f527215d76b20cc6b7c263eb6e4f86b28aad2ebd2050dc835230a4cdc95-merged.mount: Deactivated successfully.
Oct  3 09:51:48 compute-0 podman[364059]: 2025-10-03 09:51:48.443012074 +0000 UTC m=+0.135437119 container cleanup 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:51:48 compute-0 podman[364059]: node_exporter
Oct  3 09:51:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:48 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 09:51:48 compute-0 systemd[1]: libpod-conmon-343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.scope: Deactivated successfully.
Oct  3 09:51:48 compute-0 podman[364084]: node_exporter
Oct  3 09:51:48 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Oct  3 09:51:48 compute-0 systemd[1]: Stopped node_exporter container.
Oct  3 09:51:48 compute-0 systemd[1]: Starting node_exporter container...
Oct  3 09:51:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e9f527215d76b20cc6b7c263eb6e4f86b28aad2ebd2050dc835230a4cdc95/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c4e9f527215d76b20cc6b7c263eb6e4f86b28aad2ebd2050dc835230a4cdc95/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
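Annotation: the two xfs messages above refer to the 32-bit time_t ceiling; 0x7fffffff seconds after the Unix epoch is 2038-01-19 03:14:07 UTC, which one line of Python confirms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit timestamp (from the kernel message).
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00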
Oct  3 09:51:48 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.
Oct  3 09:51:48 compute-0 podman[364097]: 2025-10-03 09:51:48.817834001 +0000 UTC m=+0.228102309 container init 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.848Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.848Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.848Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.849Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.849Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.850Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.851Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.851Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
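Annotation: the "Parsed flag" lines above show the regexes node_exporter compiled for its systemd collector. A quick check of which units survive both filters, assuming Python's re is a close enough stand-in for Go's regexp on these patterns:

    import re

    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    exclude = re.compile(r".+\.(automount|device|mount|scope|slice)")

    # Sample unit names; only the first two pass both filters.
    for unit in ["edpm_node_exporter.service", "virtqemud.service",
                 "sshd.service", "tmp.mount"]:
        keep = bool(include.fullmatch(unit)) and not exclude.fullmatch(unit)
        print(unit, "collected" if keep else "skipped")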
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=arp
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=bcache
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=bonding
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=cpu
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=edac
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=filefd
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  3 09:51:48 compute-0 podman[364097]: 2025-10-03 09:51:48.859186552 +0000 UTC m=+0.269454820 container start 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=netclass
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=netdev
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=netstat
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=nfs
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=nvme
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=softnet
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=systemd
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=xfs
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.852Z caller=node_exporter.go:117 level=info collector=zfs
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.853Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Oct  3 09:51:48 compute-0 node_exporter[364112]: ts=2025-10-03T09:51:48.854Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Oct  3 09:51:48 compute-0 podman[364097]: node_exporter
Oct  3 09:51:48 compute-0 systemd[1]: Started node_exporter container.
Oct  3 09:51:48 compute-0 podman[364121]: 2025-10-03 09:51:48.965571164 +0000 UTC m=+0.090827053 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
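Annotation: node_exporter is now listening with TLS on :9100 and the first podman healthcheck reports healthy. A minimal scrape of the endpoint, with certificate verification disabled purely for illustration since the cert under /etc/node_exporter/tls is deployment-internal:

    import ssl
    import urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # illustration only;
    ctx.verify_mode = ssl.CERT_NONE  # pin or verify the CA in real use

    with urllib.request.urlopen("https://localhost:9100/metrics",
                                context=ctx) as r:
        body = r.read().decode()
    # Show a couple of sample series.
    print("\n".join(line for line in body.splitlines()
                    if line.startswith("node_load")))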
Oct  3 09:51:50 compute-0 python3.9[364297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:51:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:50 compute-0 python3.9[364375]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/podman_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/podman_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:51:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:51 compute-0 podman[364499]: 2025-10-03 09:51:51.620897438 +0000 UTC m=+0.093040503 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 09:51:51 compute-0 podman[364500]: 2025-10-03 09:51:51.628450951 +0000 UTC m=+0.095599766 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 09:51:51 compute-0 python3.9[364561]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct  3 09:51:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:52 compute-0 python3.9[364714]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:51:53 compute-0 python3[364866]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:51:54 compute-0 python3[364866]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815",#012          "Digest": "sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",#012          "RepoTags": [#012               "quay.io/navidys/prometheus-podman-exporter:v1.10.1"#012          ],#012          "RepoDigests": [#012               "quay.io/navidys/prometheus-podman-exporter@sha256:7b7f37816f4a78244e32f90a517fdec0c458a6d3cd132212bb6bc16a9dc4fade",#012               "quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2024-03-17T01:45:00.251170784Z",#012          "Config": {#012               "User": "nobody",#012               "ExposedPorts": {#012                    "9882/tcp": {}#012               },#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"#012               ],#012               "Entrypoint": [#012                    "/bin/podman_exporter"#012               ],#012               "Labels": {#012                    "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"#012               }#012          },#012          "Version": "",#012          "Author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 33863535,#012          "VirtualSize": 33863535,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1/diff:/var/lib/containers/storage/overlay/1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed",#012                    "sha256:6b83872188a9e8912bee1d43add5e9bc518601b02a14a364c0da43b0d59acf33",#012                    "sha256:7a73cdcd46b4e3c3a632bae42ad152935f204b50dd02f0a46070f81446516318"#012               ]#012          },#012          "Labels": {#012               "maintainer": "Navid Yaghoobi <navidys@fedoraproject.org>"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "nobody",#012          "History": [#012               {#012                    "created": "2023-12-05T20:23:06.467739954Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:ee9bb8755ccbdd689b434d9b4ac7518e972699604ecda33e4ddc2a15d2831443 in / "#012               },#012               {#012                    "created": "2023-12-05T20:23:06.550971969Z",#012                    "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2023-12-15T10:54:58.99835989Z",#012                    "created_by": "MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2023-12-15T10:54:58.99835989Z",#012                    "created_by": "COPY /rootfs / # buildkit",#012                    "comment": "buildkit.dockerfile.v0"#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "LABEL maintainer=Navid Yaghoobi <navidys@fedoraproject.org>",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "ARG TARGETPLATFORM",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "ARG TARGETOS",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "ARG TARGETARCH",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "COPY ./bin/remote/prometheus-podman-exporter-amd64 /bin/podman_exporter # buildkit",#012                    "comment": "buildkit.dockerfile.v0"#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "EXPOSE map[9882/tcp:{}]",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "USER nobody",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2024-03-17T01:45:00.251170784Z",#012                    "created_by": "ENTRYPOINT [\"/bin/podman_exporter\"]",#012                    "comment": "buildkit.dockerfile.v0",#012                    "empty_layer": true#012               }#012          ],#012          "NamesHistory": [#012               "quay.io/navidys/prometheus-podman-exporter:v1.10.1"#012          ]#012     }#012]#012: quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct  3 09:51:54 compute-0 podman[157165]: @ - - [03/Oct/2025:09:26:02 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2890026 "" "Go-http-client/1.1"
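Annotation: the PODMAN-CONTAINER-DEBUG dump above is podman image inspect output for the exporter image, with newlines escaped as #012 by journald. The same fields can be pulled programmatically; the values in the comments are taken from the dump itself:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "image", "inspect",
         "quay.io/navidys/prometheus-podman-exporter:v1.10.1"],
        capture_output=True, text=True, check=True,
    )
    img = json.loads(out.stdout)[0]      # inspect returns a JSON array
    print(img["Digest"])                 # sha256:7b7f37816f4a...
    print(img["Config"]["Entrypoint"])   # ['/bin/podman_exporter']
    print(img["Config"]["User"])         # nobody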
Oct  3 09:51:54 compute-0 systemd[1]: libpod-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope: Deactivated successfully.
Oct  3 09:51:54 compute-0 systemd[1]: libpod-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.scope: Consumed 2.758s CPU time.
Oct  3 09:51:54 compute-0 podman[364914]: 2025-10-03 09:51:54.105000176 +0000 UTC m=+0.053632035 container died b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:51:54 compute-0 systemd[1]: b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-1f0512624d83dadb.timer: Deactivated successfully.
Oct  3 09:51:54 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1.
Oct  3 09:51:54 compute-0 systemd[1]: b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-1f0512624d83dadb.service: Failed to open /run/systemd/transient/b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-1f0512624d83dadb.service: No such file or directory
Oct  3 09:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-userdata-shm.mount: Deactivated successfully.
Oct  3 09:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae78b2a9d0131aa74ea1592b94b2e8358ef5f3ee849098d3fd1d608a7d74365d-merged.mount: Deactivated successfully.
Oct  3 09:51:54 compute-0 podman[364914]: 2025-10-03 09:51:54.176516577 +0000 UTC m=+0.125148406 container cleanup b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:51:54 compute-0 python3[364866]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop podman_exporter
Oct  3 09:51:54 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 09:51:54 compute-0 systemd[1]: b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-1f0512624d83dadb.timer: Failed to open /run/systemd/transient/b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-1f0512624d83dadb.timer: No such file or directory
Oct  3 09:51:54 compute-0 systemd[1]: b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-1f0512624d83dadb.service: Failed to open /run/systemd/transient/b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1-1f0512624d83dadb.service: No such file or directory
Oct  3 09:51:54 compute-0 podman[364942]: Error: no container with name or ID "podman_exporter" found: no such container
Oct  3 09:51:54 compute-0 podman[364941]: 2025-10-03 09:51:54.259385262 +0000 UTC m=+0.062234623 container remove b4f029737d2f8b185763dd359e84120089e70c126fca392def0181df4f27e4e1 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:51:54 compute-0 systemd[1]: edpm_podman_exporter.service: Control process exited, code=exited, status=125/n/a
Oct  3 09:51:54 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct  3 09:51:54 compute-0 python3[364866]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force podman_exporter
Oct  3 09:51:54 compute-0 podman[364961]: 2025-10-03 09:51:54.345713238 +0000 UTC m=+0.057387986 container create 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:51:54 compute-0 podman[364961]: 2025-10-03 09:51:54.317920824 +0000 UTC m=+0.029595582 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct  3 09:51:54 compute-0 python3[364866]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
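Annotation: the PODMAN-CONTAINER-DEBUG lines (podman stop, podman rm --force, podman create above, with podman start following below) are the 'recreate': True lifecycle that edpm_container_manage walks through. Condensed as a sketch that keeps only a few of the logged flags:

    import subprocess

    NAME = "podman_exporter"
    IMAGE = "quay.io/navidys/prometheus-podman-exporter:v1.10.1"

    # recreate=True: tear down any existing container; "no such container"
    # is tolerated (the stop above failed exactly that way, exit code 125).
    subprocess.run(["podman", "stop", NAME])
    subprocess.run(["podman", "rm", "--force", NAME])

    # Recreate with a subset of the flags from the logged create command.
    subprocess.run([
        "podman", "create", "--name", NAME,
        "--network", "host", "--privileged=True",
        "--publish", "9882:9882", "--user", "root",
        "--env", "CONTAINER_HOST=unix:///run/podman/podman.sock",
        IMAGE, "--web.config.file=/etc/podman_exporter/podman_exporter.yaml",
    ], check=True)
    subprocess.run(["podman", "start", NAME], check=True)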
Oct  3 09:51:54 compute-0 systemd[1]: edpm_podman_exporter.service: Scheduled restart job, restart counter is at 1.
Oct  3 09:51:54 compute-0 systemd[1]: Stopped podman_exporter container.
Oct  3 09:51:54 compute-0 systemd[1]: Starting podman_exporter container...
Oct  3 09:51:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:54 compute-0 systemd[1]: Started libpod-conmon-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.scope.
Oct  3 09:51:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84884a1b8a66a8e98fb59df2cbc745c25066e34430c4f71e18a2abd89391f03/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84884a1b8a66a8e98fb59df2cbc745c25066e34430c4f71e18a2abd89391f03/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:54 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.
Oct  3 09:51:54 compute-0 podman[364973]: 2025-10-03 09:51:54.55502602 +0000 UTC m=+0.180307090 container init 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 09:51:54 compute-0 podman_exporter[364993]: ts=2025-10-03T09:51:54.579Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct  3 09:51:54 compute-0 podman_exporter[364993]: ts=2025-10-03T09:51:54.579Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct  3 09:51:54 compute-0 podman_exporter[364993]: ts=2025-10-03T09:51:54.580Z caller=handler.go:94 level=info msg="enabled collectors"
Oct  3 09:51:54 compute-0 podman_exporter[364993]: ts=2025-10-03T09:51:54.580Z caller=handler.go:105 level=info collector=container
Oct  3 09:51:54 compute-0 podman[157165]: @ - - [03/Oct/2025:09:51:54 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct  3 09:51:54 compute-0 podman[157165]: time="2025-10-03T09:51:54Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:51:54 compute-0 podman[364973]: 2025-10-03 09:51:54.585893963 +0000 UTC m=+0.211175013 container start 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:51:54 compute-0 podman[364985]: podman_exporter
Oct  3 09:51:54 compute-0 python3[364866]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start podman_exporter
Oct  3 09:51:54 compute-0 systemd[1]: Started podman_exporter container.
Oct  3 09:51:54 compute-0 podman[157165]: @ - - [03/Oct/2025:09:51:54 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 45758 "" "Go-http-client/1.1"
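Annotation: the podman[157165] access-log lines show the freshly started exporter talking to the libpod REST API over /run/podman/podman.sock (its CONTAINER_HOST), first /_ping and then /containers/json. Talking to the same socket from Python, with a hypothetical UnixHTTPConnection helper:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Hypothetical helper: plain HTTP over an AF_UNIX socket.
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/_ping")
    print(conn.getresponse().read())   # b'OK', the 2-byte body logged above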
Oct  3 09:51:54 compute-0 podman_exporter[364993]: ts=2025-10-03T09:51:54.634Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct  3 09:51:54 compute-0 podman_exporter[364993]: ts=2025-10-03T09:51:54.635Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct  3 09:51:54 compute-0 podman_exporter[364993]: ts=2025-10-03T09:51:54.635Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct  3 09:51:54 compute-0 podman[365006]: 2025-10-03 09:51:54.69247528 +0000 UTC m=+0.095132040 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
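Annotation: the pg_autoscaler arithmetic above is reproducible: raw pg target = capacity ratio * bias * (mon_target_pg_per_osd * OSD count), which is then quantized to a power of two subject to each pool's pg_num_min. Assuming 3 OSDs and the default mon_target_pg_per_osd=100 (an assumption, but one that reproduces the logged values):

    # Raw pg targets from the pg_autoscaler lines above.
    PG_BUDGET = 100 * 3   # mon_target_pg_per_osd * OSD count (assumed)

    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0),
    ]:
        print(pool, ratio * bias * PG_BUDGET)
    # Output agrees with the "pg target" values in the log lines above,
    # e.g. .mgr -> 0.0021557249951162337, quantized to 1 there.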
Oct  3 09:51:55 compute-0 python3.9[365202]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:51:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:51:56 compute-0 python3.9[365356]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:57 compute-0 python3.9[365507]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759485116.5042-640-240948623456128/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:51:57 compute-0 python3.9[365583]: ansible-systemd Invoked with state=started name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:51:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:51:59 compute-0 python3.9[365737]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:51:59 compute-0 systemd[1]: Stopping podman_exporter container...
Oct  3 09:51:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:51:54 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2855 "" "Go-http-client/1.1"
Oct  3 09:51:59 compute-0 systemd[1]: libpod-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.scope: Deactivated successfully.
Oct  3 09:51:59 compute-0 conmon[364993]: conmon 629af6886ed1c73248f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.scope/container/memory.events
Oct  3 09:51:59 compute-0 podman[365741]: 2025-10-03 09:51:59.632114279 +0000 UTC m=+0.072798082 container died 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:51:59 compute-0 systemd[1]: 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c-5ff08152aea55a53.timer: Deactivated successfully.
Oct  3 09:51:59 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.
Oct  3 09:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c-userdata-shm.mount: Deactivated successfully.
Oct  3 09:51:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f84884a1b8a66a8e98fb59df2cbc745c25066e34430c4f71e18a2abd89391f03-merged.mount: Deactivated successfully.
Oct  3 09:51:59 compute-0 podman[365741]: 2025-10-03 09:51:59.705927373 +0000 UTC m=+0.146611146 container cleanup 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:51:59 compute-0 podman[365741]: podman_exporter
Oct  3 09:51:59 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 09:51:59 compute-0 systemd[1]: libpod-conmon-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.scope: Deactivated successfully.
Oct  3 09:51:59 compute-0 podman[365768]: podman_exporter
Oct  3 09:51:59 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct  3 09:51:59 compute-0 systemd[1]: Stopped podman_exporter container.
Oct  3 09:51:59 compute-0 systemd[1]: Starting podman_exporter container...
Oct  3 09:51:59 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84884a1b8a66a8e98fb59df2cbc745c25066e34430c4f71e18a2abd89391f03/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f84884a1b8a66a8e98fb59df2cbc745c25066e34430c4f71e18a2abd89391f03/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 09:51:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.
Oct  3 09:51:59 compute-0 podman[365780]: 2025-10-03 09:51:59.961517582 +0000 UTC m=+0.139427284 container init 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:51:59 compute-0 podman_exporter[365794]: ts=2025-10-03T09:51:59.980Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct  3 09:51:59 compute-0 podman_exporter[365794]: ts=2025-10-03T09:51:59.980Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct  3 09:51:59 compute-0 podman_exporter[365794]: ts=2025-10-03T09:51:59.980Z caller=handler.go:94 level=info msg="enabled collectors"
Oct  3 09:51:59 compute-0 podman_exporter[365794]: ts=2025-10-03T09:51:59.980Z caller=handler.go:105 level=info collector=container
Oct  3 09:51:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:51:59 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct  3 09:51:59 compute-0 podman[157165]: time="2025-10-03T09:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:51:59 compute-0 podman[365780]: 2025-10-03 09:51:59.997140919 +0000 UTC m=+0.175050601 container start 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:52:00 compute-0 podman[365780]: podman_exporter
Oct  3 09:52:00 compute-0 systemd[1]: Started podman_exporter container.
Oct  3 09:52:00 compute-0 podman[157165]: @ - - [03/Oct/2025:09:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 45754 "" "Go-http-client/1.1"
Oct  3 09:52:00 compute-0 podman_exporter[365794]: ts=2025-10-03T09:52:00.023Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct  3 09:52:00 compute-0 podman_exporter[365794]: ts=2025-10-03T09:52:00.024Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct  3 09:52:00 compute-0 podman_exporter[365794]: ts=2025-10-03T09:52:00.025Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct  3 09:52:00 compute-0 podman[365805]: 2025-10-03 09:52:00.084932051 +0000 UTC m=+0.075999324 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:52:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:00 compute-0 python3.9[365980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:52:01 compute-0 openstack_network_exporter[159287]: ERROR   09:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:52:01 compute-0 openstack_network_exporter[159287]: ERROR   09:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:52:01 compute-0 openstack_network_exporter[159287]: ERROR   09:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:52:01 compute-0 openstack_network_exporter[159287]: ERROR   09:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:52:01 compute-0 openstack_network_exporter[159287]: ERROR   09:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:52:01 compute-0 python3.9[366058]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/openstack_network_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:52:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:02 compute-0 python3.9[366210]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct  3 09:52:03 compute-0 python3.9[366362]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:52:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:04 compute-0 python3[366514]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:52:04 compute-0 python3[366514]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1",#012          "Digest": "sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7",#012          "RepoTags": [#012               "quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-08-26T15:52:54.446618393Z",#012          "Config": {#012               "ExposedPorts": {#012                    "1981/tcp": {}#012               },#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "container=oci"#012               ],#012               "Cmd": [#012                    "/app/openstack-network-exporter"#012               ],#012               "WorkingDir": "/",#012               "Labels": {#012                    "architecture": "x86_64",#012                    "build-date": "2025-08-20T13:12:41",#012                    "com.redhat.component": "ubi9-minimal-container",#012                    "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",#012                    "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012                    "distribution-scope": "public",#012                    "io.buildah.version": "1.33.7",#012                    "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012                    "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",#012                    "io.openshift.expose-services": "",#012                    "io.openshift.tags": "minimal rhel9",#012                    "maintainer": "Red Hat, Inc.",#012                    "name": "ubi9-minimal",#012                    "release": "1755695350",#012                    "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",#012                    "url": "https://catalog.redhat.com/en/search?searchType=containers",#012                    "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",#012                    "vcs-type": "git",#012                    "vendor": "Red Hat, Inc.",#012                    "version": "9.6"#012               }#012          },#012          "Version": "",#012          "Author": "Red Hat",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 142088877,#012          "VirtualSize": 142088877,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/157961e3a1fe369d02893b19044a0e08e15689974ef810b235cb5ec194c7142c/diff:/var/lib/containers/storage/overlay/778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/cd505d6f54e550fae708d1680b6b8d44753cf72fac8d36345974b92245bc660c/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:778d8c610941586099cac6c507cad2d1156b71b2bb54c42cebedf8808c68edb9",#012                    "sha256:60984b2898b5b4ad1680d36433001b7e2bebb1073775d06b4c2ff80f985caccb",#012                    "sha256:866ed9f0f685cc1d741f560227443a94926fc22494aa7808be751e7247cda421"#012               ]#012          },#012          "Labels": {#012               "architecture": "x86_64",#012               "build-date": "2025-08-20T13:12:41",#012               "com.redhat.component": "ubi9-minimal-container",#012               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",#012               "description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012               "distribution-scope": "public",#012               "io.buildah.version": "1.33.7",#012               "io.k8s.description": "The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012               "io.k8s.display-name": "Red Hat Universal Base Image 9 Minimal",#012               "io.openshift.expose-services": "",#012               "io.openshift.tags": "minimal rhel9",#012               "maintainer": "Red Hat, Inc.",#012               "name": "ubi9-minimal",#012               "release": "1755695350",#012               "summary": "Provides the latest release of the minimal Red Hat Universal Base Image 9.",#012               "url": "https://catalog.redhat.com/en/search?searchType=containers",#012               "vcs-ref": "f4b088292653bbf5ca8188a5e59ffd06a8671d4b",#012               "vcs-type": "git",#012               "vendor": "Red Hat, Inc.",#012               "version": "9.6"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "",#012          "History": [#012               {#012                    "created": "2025-08-20T13:14:24.836114247Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"Red Hat, Inc.\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:24.907067406Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL vendor=\"Red Hat, Inc.\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:24.953912498Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL url=\"https://catalog.redhat.com/en/search?searchType=containers\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:24.99202543Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL com.redhat.component=\"ubi9-minimal-container\"       name=\"ubi9-minimal\"       version=\"9.6\"       distribution-scope=\"public\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:25.033232759Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL com.redhat.license_terms=\"https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-08-20T13:14:25.116880439Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL summary=\"Provides the latest release of the minimal Red Hat Universal Base Image 9.\"",#012                    "empty_layer": true#012               },#012               {#012      
Oct  3 09:52:04 compute-0 systemd[1]: libpod-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope: Deactivated successfully.
Oct  3 09:52:04 compute-0 systemd[1]: libpod-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.scope: Consumed 3.237s CPU time.
Oct  3 09:52:04 compute-0 podman[366558]: 2025-10-03 09:52:04.991981583 +0000 UTC m=+0.072194214 container died e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:52:05 compute-0 systemd[1]: e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-666f6adfaf3294b9.timer: Deactivated successfully.
Oct  3 09:52:05 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3.
Oct  3 09:52:05 compute-0 systemd[1]: e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-666f6adfaf3294b9.service: Failed to open /run/systemd/transient/e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-666f6adfaf3294b9.service: No such file or directory
Oct  3 09:52:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-userdata-shm.mount: Deactivated successfully.
Oct  3 09:52:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-402c235bd6531d096123bc08afe911a8e62d39966c0eafcc0015bdf22273d2fe-merged.mount: Deactivated successfully.
Oct  3 09:52:05 compute-0 podman[366558]: 2025-10-03 09:52:05.085473319 +0000 UTC m=+0.165685950 container cleanup e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 09:52:05 compute-0 python3[366514]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop openstack_network_exporter
Oct  3 09:52:05 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 09:52:05 compute-0 systemd[1]: e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-666f6adfaf3294b9.timer: Failed to open /run/systemd/transient/e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-666f6adfaf3294b9.timer: No such file or directory
Oct  3 09:52:05 compute-0 systemd[1]: e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-666f6adfaf3294b9.service: Failed to open /run/systemd/transient/e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3-666f6adfaf3294b9.service: No such file or directory
Oct  3 09:52:05 compute-0 podman[366583]: 2025-10-03 09:52:05.286418631 +0000 UTC m=+0.175627009 container remove e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 09:52:05 compute-0 podman[366584]: Error: no container with ID e696f2890bea01bd5002dcd5b466086fb3b72831cf84b4bbb600ef44694a2de3 found in database: no such container
Oct  3 09:52:05 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Control process exited, code=exited, status=125/n/a
Oct  3 09:52:05 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct  3 09:52:05 compute-0 python3[366514]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force openstack_network_exporter
Oct  3 09:52:05 compute-0 podman[366607]: 2025-10-03 09:52:05.392945887 +0000 UTC m=+0.075937242 container create 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6)
Oct  3 09:52:05 compute-0 podman[366607]: 2025-10-03 09:52:05.340901324 +0000 UTC m=+0.023892699 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct  3 09:52:05 compute-0 python3[366514]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct  3 09:52:05 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Scheduled restart job, restart counter is at 1.
Oct  3 09:52:05 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Oct  3 09:52:05 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct  3 09:52:05 compute-0 systemd[1]: Started libpod-conmon-795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.scope.
Oct  3 09:52:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fdb797a320ae1e5de92d1ff66ce19c1eb1a2cd0e9fdcb83cadb5a05fcbd2f0/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fdb797a320ae1e5de92d1ff66ce19c1eb1a2cd0e9fdcb83cadb5a05fcbd2f0/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fdb797a320ae1e5de92d1ff66ce19c1eb1a2cd0e9fdcb83cadb5a05fcbd2f0/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.
Oct  3 09:52:05 compute-0 podman[366619]: 2025-10-03 09:52:05.671763575 +0000 UTC m=+0.247601475 container init 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *bridge.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *coverage.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *datapath.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *iface.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *memory.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *ovnnorthd.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *ovn.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *ovsdbserver.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *pmd_perf.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *pmd_rxq.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: INFO    09:52:05 main.go:48: registering *vswitch.Collector
Oct  3 09:52:05 compute-0 openstack_network_exporter[366645]: NOTICE  09:52:05 main.go:76: listening on https://:9105/metrics
Oct  3 09:52:05 compute-0 podman[366619]: 2025-10-03 09:52:05.71017737 +0000 UTC m=+0.286015250 container start 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41)
Oct  3 09:52:05 compute-0 podman[366630]: openstack_network_exporter
Oct  3 09:52:05 compute-0 python3[366514]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start openstack_network_exporter
Oct  3 09:52:05 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct  3 09:52:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:05 compute-0 podman[366650]: 2025-10-03 09:52:05.822010306 +0000 UTC m=+0.100022137 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 09:52:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:06 compute-0 python3.9[366851]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:52:07 compute-0 python3.9[367005]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:08 compute-0 podman[367128]: 2025-10-03 09:52:08.343970773 +0000 UTC m=+0.075842370 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:52:08 compute-0 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-1799c1a3b5e46663.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 09:52:08 compute-0 systemd[1]: d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47-1799c1a3b5e46663.service: Failed with result 'exit-code'.
Oct  3 09:52:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:08 compute-0 python3.9[367173]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759485127.7962208-718-176966668169995/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:09 compute-0 python3.9[367249]: ansible-systemd Invoked with state=started name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:52:09 compute-0 podman[367278]: 2025-10-03 09:52:09.812613415 +0000 UTC m=+0.066635795 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:52:09 compute-0 podman[367276]: 2025-10-03 09:52:09.832813874 +0000 UTC m=+0.092718522 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Oct  3 09:52:09 compute-0 podman[367277]: 2025-10-03 09:52:09.851813055 +0000 UTC m=+0.108965095 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  3 09:52:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:10 compute-0 python3.9[367466]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:52:11 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Oct  3 09:52:11 compute-0 systemd[1]: libpod-795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.scope: Deactivated successfully.
Oct  3 09:52:11 compute-0 podman[367470]: 2025-10-03 09:52:11.092920149 +0000 UTC m=+0.068152493 container died 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Oct  3 09:52:11 compute-0 systemd[1]: 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54-24fe3b85b60b6966.timer: Deactivated successfully.
Oct  3 09:52:11 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.
Oct  3 09:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54-userdata-shm.mount: Deactivated successfully.
Oct  3 09:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-68fdb797a320ae1e5de92d1ff66ce19c1eb1a2cd0e9fdcb83cadb5a05fcbd2f0-merged.mount: Deactivated successfully.
Oct  3 09:52:11 compute-0 podman[367470]: 2025-10-03 09:52:11.169667457 +0000 UTC m=+0.144899801 container cleanup 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, config_id=edpm, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., architecture=x86_64)
Oct  3 09:52:11 compute-0 podman[367470]: openstack_network_exporter
Oct  3 09:52:11 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct  3 09:52:11 compute-0 systemd[1]: libpod-conmon-795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.scope: Deactivated successfully.
Oct  3 09:52:11 compute-0 podman[367495]: openstack_network_exporter
Oct  3 09:52:11 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct  3 09:52:11 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Oct  3 09:52:11 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct  3 09:52:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fdb797a320ae1e5de92d1ff66ce19c1eb1a2cd0e9fdcb83cadb5a05fcbd2f0/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fdb797a320ae1e5de92d1ff66ce19c1eb1a2cd0e9fdcb83cadb5a05fcbd2f0/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fdb797a320ae1e5de92d1ff66ce19c1eb1a2cd0e9fdcb83cadb5a05fcbd2f0/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.
Oct  3 09:52:11 compute-0 podman[367508]: 2025-10-03 09:52:11.426508147 +0000 UTC m=+0.139809517 container init 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *bridge.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *coverage.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *datapath.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *iface.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *memory.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *ovnnorthd.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *ovn.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *ovsdbserver.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *pmd_perf.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *pmd_rxq.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: INFO    09:52:11 main.go:48: registering *vswitch.Collector
Oct  3 09:52:11 compute-0 openstack_network_exporter[367524]: NOTICE  09:52:11 main.go:76: listening on https://:9105/metrics
Oct  3 09:52:11 compute-0 podman[367508]: 2025-10-03 09:52:11.457338268 +0000 UTC m=+0.170639618 container start 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible)
Oct  3 09:52:11 compute-0 podman[367508]: openstack_network_exporter
Oct  3 09:52:11 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct  3 09:52:11 compute-0 podman[367534]: 2025-10-03 09:52:11.571225622 +0000 UTC m=+0.101567378 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:52:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:12 compute-0 python3.9[367817]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 09:52:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:52:12 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:52:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:52:12 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:52:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:52:12 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:52:12 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 70d2a0c3-f5b4-40cb-b4e7-70c49d0942df does not exist
Oct  3 09:52:12 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3bfe13c1-ff54-465e-becb-ad0905a6d5fa does not exist
Oct  3 09:52:12 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 33ad0374-ffb0-4583-8c9b-2e78a7a10239 does not exist
Oct  3 09:52:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:52:12 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:52:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:52:12 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:52:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:52:12 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:52:12 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:52:12 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:52:12 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:52:13 compute-0 podman[368078]: 2025-10-03 09:52:13.535766141 +0000 UTC m=+0.061185498 container create 0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 09:52:13 compute-0 systemd[1]: Started libpod-conmon-0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e.scope.
Oct  3 09:52:13 compute-0 podman[368078]: 2025-10-03 09:52:13.514371162 +0000 UTC m=+0.039790539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:52:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:52:13 compute-0 podman[368078]: 2025-10-03 09:52:13.648562539 +0000 UTC m=+0.173981936 container init 0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:52:13 compute-0 podman[368078]: 2025-10-03 09:52:13.659075507 +0000 UTC m=+0.184494874 container start 0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:52:13 compute-0 podman[368078]: 2025-10-03 09:52:13.665016558 +0000 UTC m=+0.190435925 container attach 0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 09:52:13 compute-0 wonderful_mendel[368141]: 167 167
Oct  3 09:52:13 compute-0 systemd[1]: libpod-0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e.scope: Deactivated successfully.
Oct  3 09:52:13 compute-0 podman[368078]: 2025-10-03 09:52:13.668489539 +0000 UTC m=+0.193908906 container died 0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 09:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-685c0c50640d17c14a19917fb85b6cd0e5b3c92a814352ea0e5340c35ccdb5d0-merged.mount: Deactivated successfully.
Oct  3 09:52:13 compute-0 podman[368078]: 2025-10-03 09:52:13.742548291 +0000 UTC m=+0.267967658 container remove 0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:52:13 compute-0 systemd[1]: libpod-conmon-0c85802a6f5a50977252c6ec78f9a4062531f05c4ee289ee4d53b0f34e8a429e.scope: Deactivated successfully.
Oct  3 09:52:13 compute-0 python3.9[368145]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct  3 09:52:13 compute-0 podman[368168]: 2025-10-03 09:52:13.930432264 +0000 UTC m=+0.055327851 container create e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poincare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 09:52:13 compute-0 systemd[1]: Started libpod-conmon-e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997.scope.
Oct  3 09:52:14 compute-0 podman[368168]: 2025-10-03 09:52:13.91321098 +0000 UTC m=+0.038106587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:52:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec915ad036687dee95388cd68483f898bf58e3d3e204f0b8c78f705a44644ace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec915ad036687dee95388cd68483f898bf58e3d3e204f0b8c78f705a44644ace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec915ad036687dee95388cd68483f898bf58e3d3e204f0b8c78f705a44644ace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec915ad036687dee95388cd68483f898bf58e3d3e204f0b8c78f705a44644ace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec915ad036687dee95388cd68483f898bf58e3d3e204f0b8c78f705a44644ace/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:14 compute-0 podman[368168]: 2025-10-03 09:52:14.06372306 +0000 UTC m=+0.188618677 container init e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  3 09:52:14 compute-0 podman[368168]: 2025-10-03 09:52:14.087117583 +0000 UTC m=+0.212013170 container start e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:52:14 compute-0 podman[368168]: 2025-10-03 09:52:14.093635583 +0000 UTC m=+0.218531250 container attach e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poincare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:52:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:14 compute-0 podman[368324]: 2025-10-03 09:52:14.702042148 +0000 UTC m=+0.077524984 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 09:52:14 compute-0 python3.9[368370]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:15 compute-0 systemd[1]: Started libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope.
Oct  3 09:52:15 compute-0 podman[368384]: 2025-10-03 09:52:15.099542162 +0000 UTC m=+0.107073855 container exec e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller)
Oct  3 09:52:15 compute-0 podman[368384]: 2025-10-03 09:52:15.13492271 +0000 UTC m=+0.142454373 container exec_died e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 09:52:15 compute-0 systemd[1]: libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope: Deactivated successfully.
Oct  3 09:52:15 compute-0 clever_poincare[368208]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:52:15 compute-0 clever_poincare[368208]: --> relative data size: 1.0
Oct  3 09:52:15 compute-0 clever_poincare[368208]: --> All data devices are unavailable
Oct  3 09:52:15 compute-0 systemd[1]: libpod-e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997.scope: Deactivated successfully.
Oct  3 09:52:15 compute-0 systemd[1]: libpod-e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997.scope: Consumed 1.051s CPU time.
Oct  3 09:52:15 compute-0 podman[368432]: 2025-10-03 09:52:15.279755868 +0000 UTC m=+0.035898305 container died e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poincare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec915ad036687dee95388cd68483f898bf58e3d3e204f0b8c78f705a44644ace-merged.mount: Deactivated successfully.
Oct  3 09:52:15 compute-0 podman[368432]: 2025-10-03 09:52:15.362826799 +0000 UTC m=+0.118969236 container remove e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_poincare, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:52:15 compute-0 systemd[1]: libpod-conmon-e409461d7131639053dcfe1c5841d3cd34db7ab1770c8b4ccef1d5a894780997.scope: Deactivated successfully.
Oct  3 09:52:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:15 compute-0 python3.9[368690]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:52:16 compute-0 systemd[1]: Started libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope.
Oct  3 09:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:52:16 compute-0 podman[368715]: 2025-10-03 09:52:16.07772165 +0000 UTC m=+0.106041171 container exec e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:52:16 compute-0 podman[368715]: 2025-10-03 09:52:16.112011283 +0000 UTC m=+0.140330794 container exec_died e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 09:52:16 compute-0 systemd[1]: libpod-conmon-e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4.scope: Deactivated successfully.
Oct  3 09:52:16 compute-0 podman[368754]: 2025-10-03 09:52:16.180187325 +0000 UTC m=+0.053693718 container create 27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:52:16 compute-0 systemd[1]: Started libpod-conmon-27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8.scope.
Oct  3 09:52:16 compute-0 podman[368754]: 2025-10-03 09:52:16.161017409 +0000 UTC m=+0.034523792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:52:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:52:16 compute-0 podman[368754]: 2025-10-03 09:52:16.286118902 +0000 UTC m=+0.159625295 container init 27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 09:52:16 compute-0 podman[368754]: 2025-10-03 09:52:16.297192548 +0000 UTC m=+0.170698921 container start 27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhaskara, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:52:16 compute-0 podman[368754]: 2025-10-03 09:52:16.303181731 +0000 UTC m=+0.176688104 container attach 27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhaskara, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 09:52:16 compute-0 thirsty_bhaskara[368781]: 167 167
Oct  3 09:52:16 compute-0 systemd[1]: libpod-27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8.scope: Deactivated successfully.
Oct  3 09:52:16 compute-0 podman[368754]: 2025-10-03 09:52:16.305487895 +0000 UTC m=+0.178994268 container died 27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhaskara, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-df29e7ed7c26d84f7a0c9657b4a9fa8df9a8481b8f2a432ee379c505b5054022-merged.mount: Deactivated successfully.
Oct  3 09:52:16 compute-0 podman[368754]: 2025-10-03 09:52:16.366586411 +0000 UTC m=+0.240092784 container remove 27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 09:52:16 compute-0 systemd[1]: libpod-conmon-27f137fbdc5fc8d01725773bac487be745d53024459ddf63f972561aa83ab8e8.scope: Deactivated successfully.
Oct  3 09:52:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:16 compute-0 podman[368870]: 2025-10-03 09:52:16.576885844 +0000 UTC m=+0.053622726 container create 22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 09:52:16 compute-0 systemd[1]: Started libpod-conmon-22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6.scope.
Oct  3 09:52:16 compute-0 podman[368870]: 2025-10-03 09:52:16.551073914 +0000 UTC m=+0.027810796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:52:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f07bae31ebc9e7726d73c7eaa2a5447aca0ed53464f2b7bb0bf64a0b73e783/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f07bae31ebc9e7726d73c7eaa2a5447aca0ed53464f2b7bb0bf64a0b73e783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f07bae31ebc9e7726d73c7eaa2a5447aca0ed53464f2b7bb0bf64a0b73e783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37f07bae31ebc9e7726d73c7eaa2a5447aca0ed53464f2b7bb0bf64a0b73e783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:16 compute-0 podman[368870]: 2025-10-03 09:52:16.700461667 +0000 UTC m=+0.177198559 container init 22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:52:16 compute-0 podman[368870]: 2025-10-03 09:52:16.710601384 +0000 UTC m=+0.187338246 container start 22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:52:16 compute-0 podman[368870]: 2025-10-03 09:52:16.718459406 +0000 UTC m=+0.195196308 container attach 22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:52:17 compute-0 python3.9[368966]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:17 compute-0 nice_volhard[368914]: {
Oct  3 09:52:17 compute-0 nice_volhard[368914]:    "0": [
Oct  3 09:52:17 compute-0 nice_volhard[368914]:        {
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "devices": [
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "/dev/loop3"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            ],
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_name": "ceph_lv0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_size": "21470642176",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "name": "ceph_lv0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "tags": {
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cluster_name": "ceph",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.crush_device_class": "",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.encrypted": "0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osd_id": "0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.type": "block",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.vdo": "0"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            },
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "type": "block",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "vg_name": "ceph_vg0"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:        }
Oct  3 09:52:17 compute-0 nice_volhard[368914]:    ],
Oct  3 09:52:17 compute-0 nice_volhard[368914]:    "1": [
Oct  3 09:52:17 compute-0 nice_volhard[368914]:        {
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "devices": [
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "/dev/loop4"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            ],
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_name": "ceph_lv1",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_size": "21470642176",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "name": "ceph_lv1",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "tags": {
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cluster_name": "ceph",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.crush_device_class": "",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.encrypted": "0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osd_id": "1",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.type": "block",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.vdo": "0"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            },
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "type": "block",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "vg_name": "ceph_vg1"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:        }
Oct  3 09:52:17 compute-0 nice_volhard[368914]:    ],
Oct  3 09:52:17 compute-0 nice_volhard[368914]:    "2": [
Oct  3 09:52:17 compute-0 nice_volhard[368914]:        {
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "devices": [
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "/dev/loop5"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            ],
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_name": "ceph_lv2",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_size": "21470642176",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "name": "ceph_lv2",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "tags": {
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.cluster_name": "ceph",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.crush_device_class": "",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.encrypted": "0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osd_id": "2",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.type": "block",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:                "ceph.vdo": "0"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            },
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "type": "block",
Oct  3 09:52:17 compute-0 nice_volhard[368914]:            "vg_name": "ceph_vg2"
Oct  3 09:52:17 compute-0 nice_volhard[368914]:        }
Oct  3 09:52:17 compute-0 nice_volhard[368914]:    ]
Oct  3 09:52:17 compute-0 nice_volhard[368914]: }
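The JSON emitted by the nice_volhard container above is the per-OSD inventory cephadm gathers on this host: top-level keys are OSD ids, each mapping to one LVM logical volume (ceph_vg0-2/ceph_lv0-2, 20 GiB each, backed by /dev/loop3-5) whose tags carry the cluster fsid and OSD fsid. A minimal sketch, assuming the block is the output of `ceph-volume lvm list --format json` (the container name is ephemeral, so the exact invocation is an assumption from the output shape), reducing it to an osd_id-to-device map:

    import json
    import subprocess

    # Assumption: reproduces the listing logged above; `ceph-volume` must be
    # on PATH (cephadm normally runs it inside a ceph container).
    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    osd_map = {}
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            osd_map[int(osd_id)] = {
                "lv_path": lv["lv_path"],                 # /dev/ceph_vg0/ceph_lv0, ...
                "backing": lv["devices"],                 # ["/dev/loop3"], ...
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            }
    print(osd_map)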
Oct  3 09:52:17 compute-0 systemd[1]: libpod-22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6.scope: Deactivated successfully.
Oct  3 09:52:17 compute-0 podman[368870]: 2025-10-03 09:52:17.55969595 +0000 UTC m=+1.036432832 container died 22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 09:52:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-37f07bae31ebc9e7726d73c7eaa2a5447aca0ed53464f2b7bb0bf64a0b73e783-merged.mount: Deactivated successfully.
Oct  3 09:52:17 compute-0 podman[368870]: 2025-10-03 09:52:17.79163721 +0000 UTC m=+1.268374082 container remove 22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_volhard, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 09:52:17 compute-0 systemd[1]: libpod-conmon-22f586f531e17205d21c5b13e45258f197a73417649afee40696a5b05a3ef8f6.scope: Deactivated successfully.
Oct  3 09:52:17 compute-0 python3.9[369131]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Oct  3 09:52:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:18 compute-0 podman[369382]: 2025-10-03 09:52:18.514158907 +0000 UTC m=+0.029270193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:52:18 compute-0 podman[369382]: 2025-10-03 09:52:18.659924454 +0000 UTC m=+0.175035720 container create 7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sanderson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:52:18 compute-0 systemd[1]: Started libpod-conmon-7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e.scope.
Oct  3 09:52:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:52:18 compute-0 python3.9[369449]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:19 compute-0 podman[369382]: 2025-10-03 09:52:19.11407761 +0000 UTC m=+0.629188936 container init 7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sanderson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:52:19 compute-0 podman[369382]: 2025-10-03 09:52:19.124778404 +0000 UTC m=+0.639889670 container start 7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sanderson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 09:52:19 compute-0 epic_sanderson[369452]: 167 167
Oct  3 09:52:19 compute-0 systemd[1]: libpod-7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e.scope: Deactivated successfully.
Oct  3 09:52:19 compute-0 podman[369382]: 2025-10-03 09:52:19.222728374 +0000 UTC m=+0.737839640 container attach 7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sanderson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 09:52:19 compute-0 podman[369382]: 2025-10-03 09:52:19.223634773 +0000 UTC m=+0.738746039 container died 7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:52:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc0eb262c9896033df3b7407134ea97ab329fc2986cc74b0463b038d1e9132ef-merged.mount: Deactivated successfully.
Oct  3 09:52:19 compute-0 podman[369382]: 2025-10-03 09:52:19.956864715 +0000 UTC m=+1.471975991 container remove 7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sanderson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 09:52:19 compute-0 systemd[1]: libpod-conmon-7049d94c5c446d6db6bf3ddd452acbbd9c6a0beedfa19074b458219c0dab1c7e.scope: Deactivated successfully.
Oct  3 09:52:20 compute-0 systemd[1]: Started libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope.
Oct  3 09:52:20 compute-0 podman[369455]: 2025-10-03 09:52:20.185851649 +0000 UTC m=+1.270149230 container exec d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:52:20 compute-0 podman[369498]: 2025-10-03 09:52:20.19955006 +0000 UTC m=+0.104213473 container create c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:52:20 compute-0 podman[369498]: 2025-10-03 09:52:20.134727685 +0000 UTC m=+0.039391118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:52:20 compute-0 podman[369467]: 2025-10-03 09:52:20.280905316 +0000 UTC m=+1.116186829 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 09:52:20 compute-0 systemd[1]: Started libpod-conmon-c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668.scope.
Oct  3 09:52:20 compute-0 podman[369532]: 2025-10-03 09:52:20.356937861 +0000 UTC m=+0.153523998 container exec_died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 09:52:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a45ff9ae2968e2a4ecba2c70f735435ae98e34b812a2241adc44b6d9df84d76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a45ff9ae2968e2a4ecba2c70f735435ae98e34b812a2241adc44b6d9df84d76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a45ff9ae2968e2a4ecba2c70f735435ae98e34b812a2241adc44b6d9df84d76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a45ff9ae2968e2a4ecba2c70f735435ae98e34b812a2241adc44b6d9df84d76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:52:20 compute-0 podman[369455]: 2025-10-03 09:52:20.39172135 +0000 UTC m=+1.476018921 container exec_died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930)
Oct  3 09:52:20 compute-0 systemd[1]: libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Deactivated successfully.
Oct  3 09:52:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:20 compute-0 podman[369498]: 2025-10-03 09:52:20.775575465 +0000 UTC m=+0.680238858 container init c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 09:52:20 compute-0 podman[369498]: 2025-10-03 09:52:20.788604314 +0000 UTC m=+0.693267757 container start c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 09:52:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:20 compute-0 podman[369498]: 2025-10-03 09:52:20.83884834 +0000 UTC m=+0.743511793 container attach c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:52:21 compute-0 python3.9[369702]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:21 compute-0 systemd[1]: Started libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope.
Oct  3 09:52:21 compute-0 podman[369705]: 2025-10-03 09:52:21.641269946 +0000 UTC m=+0.143697332 container exec d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Oct  3 09:52:21 compute-0 podman[369736]: 2025-10-03 09:52:21.716366761 +0000 UTC m=+0.061164888 container exec_died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 09:52:21 compute-0 podman[369705]: 2025-10-03 09:52:21.725476323 +0000 UTC m=+0.227903729 container exec_died d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 09:52:21 compute-0 systemd[1]: libpod-conmon-d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47.scope: Deactivated successfully.
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]: {
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "osd_id": 1,
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "type": "bluestore"
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:    },
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "osd_id": 2,
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "type": "bluestore"
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:    },
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "osd_id": 0,
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:        "type": "bluestore"
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]:    }
Oct  3 09:52:21 compute-0 inspiring_ellis[369546]: }
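The inspiring_ellis block is the same inventory keyed the other way around, OSD fsid to bluestore device, matching the shape of `ceph-volume raw list` output (again an assumption from the shape alone). The two listings should agree; a hedged cross-check that takes both logged JSON blocks as strings:

    import json

    def cross_check(lvm_json: str, raw_json: str) -> None:
        """Verify the fsid-keyed listing agrees with the OSD-id-keyed one."""
        lvm = json.loads(lvm_json)   # block logged by nice_volhard
        raw = json.loads(raw_json)   # block logged by inspiring_ellis
        for fsid, osd in raw.items():
            tags = lvm[str(osd["osd_id"])][0]["tags"]
            assert osd["osd_uuid"] == fsid == tags["ceph.osd_fsid"]
            assert osd["ceph_fsid"] == tags["ceph.cluster_fsid"]
            assert osd["type"] == "bluestore" and tags["ceph.type"] == "block"
        print(f"{len(raw)} OSDs consistent across both listings")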
Oct  3 09:52:21 compute-0 systemd[1]: libpod-c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668.scope: Deactivated successfully.
Oct  3 09:52:21 compute-0 podman[369498]: 2025-10-03 09:52:21.817156022 +0000 UTC m=+1.721819425 container died c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ellis, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 09:52:21 compute-0 systemd[1]: libpod-c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668.scope: Consumed 1.023s CPU time.
Oct  3 09:52:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a45ff9ae2968e2a4ecba2c70f735435ae98e34b812a2241adc44b6d9df84d76-merged.mount: Deactivated successfully.
Oct  3 09:52:22 compute-0 podman[369498]: 2025-10-03 09:52:22.14266904 +0000 UTC m=+2.047332443 container remove c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 09:52:22 compute-0 podman[369755]: 2025-10-03 09:52:22.144446168 +0000 UTC m=+0.393064832 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:52:22 compute-0 systemd[1]: libpod-conmon-c1c1e5c7fa6d32fe831ed4ba6a0fd7ffed15640ae7f901580458fcbdaec5c668.scope: Deactivated successfully.
Oct  3 09:52:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:52:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:52:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:52:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:52:22 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f83fa44c-b29a-43a5-a5de-ee5c9174e89a does not exist
Oct  3 09:52:22 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bd6629fb-0b01-4afe-be0f-332d8490c546 does not exist
Oct  3 09:52:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:52:22 compute-0 podman[369754]: 2025-10-03 09:52:22.274117097 +0000 UTC m=+0.535197862 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  3 09:52:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:52:23 compute-0 python3.9[370009]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:24 compute-0 python3.9[370161]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Oct  3 09:52:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:25 compute-0 python3.9[370326]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:25 compute-0 systemd[1]: Started libpod-conmon-343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.scope.
Oct  3 09:52:25 compute-0 podman[370327]: 2025-10-03 09:52:25.14359704 +0000 UTC m=+0.110799165 container exec 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.164 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.165 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:52:25 compute-0 podman[370327]: 2025-10-03 09:52:25.179712752 +0000 UTC m=+0.146914897 container exec_died 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
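The podman_container_exec calls above (command=id -u here, then id -g at 09:52:26) are how the deployment discovers the uid/gid that each healthcheck directory must belong to; the file task at 09:52:23 (owner=42405 for ceilometer_agent_compute) consumes the result. A hedged equivalent using the podman CLI directly rather than the ansible module:

    import subprocess

    def container_id_of(name: str, flag: str) -> int:
        """Run `id -u` or `id -g` inside a running container, as the
        logged podman_container_exec invocations do."""
        out = subprocess.run(
            ["podman", "exec", name, "id", flag],
            capture_output=True, text=True, check=True,
        ).stdout
        return int(out.strip())

    uid = container_id_of("node_exporter", "-u")
    gid = container_id_of("node_exporter", "-g")
    print(uid, gid)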
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.184 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.184 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.185 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.200 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.201 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.202 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.202 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.203 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.203 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
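The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task runner walking the decorated methods of nova's ComputeManager: each task is logged at DEBUG from run_periodic_tasks, and _reclaim_queued_deletes short-circuits because reclaim_instance_interval <= 0. A minimal sketch of that machinery's shape (the spacing value is illustrative, not nova's configured interval):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        # Methods registered this way are what the DEBUG lines above iterate.
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # nova rebuilds the instance network-info cache here

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(context=None)  # each due task is logged, then invoked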
Oct  3 09:52:25 compute-0 systemd[1]: libpod-conmon-343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.scope: Deactivated successfully.
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.230 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.231 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.231 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.231 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.231 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 09:52:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:52:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532017172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:52:25 compute-0 nova_compute[351685]: 2025-10-03 09:52:25.720 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
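During update_available_resource the resource tracker shells out to the command logged verbatim here, evidently to size the RBD-backed instance-disk pool (the free_disk=59.98828125GB figure reported just below). A hedged reproduction of that probe; the command line is taken from the log, and the "stats" section is the cluster-totals part of `ceph df --format=json` output:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("free: %.2f GiB of %.2f GiB" % (
        stats["total_avail_bytes"] / 2**30,
        stats["total_bytes"] / 2**30,
    ))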
Oct  3 09:52:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.032 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.034 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4603MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.034 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.035 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
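The `Acquiring lock` / `acquired` pair (and the matching `released` line further down) is oslo.concurrency's lockutils serializing the resource tracker update. A sketch of the pattern; the function body is an illustrative placeholder:

```python
# Sketch of the "compute_resources" locking pattern from the log.
# lockutils.lock() is the real oslo.concurrency context manager;
# the body stands in for the resource-tracker update.
from oslo_concurrency import lockutils

def _update_available_resource():
    with lockutils.lock('compute_resources'):
        # held for the whole inventory rewrite; this iteration logs
        # a ~0.58s hold time on release
        pass
```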
Oct  3 09:52:26 compute-0 python3.9[370531]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.112 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.112 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.128 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 09:52:26 compute-0 systemd[1]: Started libpod-conmon-343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.scope.
Oct  3 09:52:26 compute-0 podman[370532]: 2025-10-03 09:52:26.180882709 +0000 UTC m=+0.095285216 container exec 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:52:26 compute-0 podman[370532]: 2025-10-03 09:52:26.2234998 +0000 UTC m=+0.137902307 container exec_died 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:52:26 compute-0 systemd[1]: libpod-conmon-343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2.scope: Deactivated successfully.
Oct  3 09:52:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:52:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907522833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.589 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.596 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.613 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.615 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 09:52:26 compute-0 nova_compute[351685]: 2025-10-03 09:52:26.615 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
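The inventory dict nova reports to placement (logged just above) determines schedulable capacity: placement allows usage up to `(total - reserved) * allocation_ratio` per resource class. A worked check against the logged numbers:

```python
# Worked check of the schedulable capacity implied by the inventory
# logged above; placement's capacity rule is
# (total - reserved) * allocation_ratio.
inventory = {
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {cap:g} schedulable")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 53.1
```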
Oct  3 09:52:26 compute-0 python3.9[370736]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:27 compute-0 nova_compute[351685]: 2025-10-03 09:52:27.142 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:52:27 compute-0 nova_compute[351685]: 2025-10-03 09:52:27.143 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:52:27 compute-0 nova_compute[351685]: 2025-10-03 09:52:27.143 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
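These `Running periodic task` lines are oslo.service iterating the ComputeManager's decorated methods. A condensed sketch of how such tasks are declared; the method bodies here are placeholders, not nova's actual logic:

```python
# Condensed sketch of the declarations behind the "Running periodic
# task ComputeManager._poll_*" lines; the decorator and base class
# are the real oslo.service API.
from oslo_service import periodic_task

class ComputeManager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task
    def _poll_rebooting_instances(self, context):
        pass  # placeholder: handle instances stuck in a reboot

    @periodic_task.periodic_task
    def _poll_unconfirmed_resizes(self, context):
        pass  # placeholder: auto-confirm aged resizes
```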
Oct  3 09:52:27 compute-0 python3.9[370888]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct  3 09:52:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:28 compute-0 python3.9[371052]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:28 compute-0 systemd[1]: Started libpod-conmon-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.scope.
Oct  3 09:52:28 compute-0 podman[371053]: 2025-10-03 09:52:28.743110651 +0000 UTC m=+0.107047214 container exec 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:52:28 compute-0 podman[371071]: 2025-10-03 09:52:28.816883102 +0000 UTC m=+0.060888928 container exec_died 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:52:28 compute-0 podman[371053]: 2025-10-03 09:52:28.82953555 +0000 UTC m=+0.193472193 container exec_died 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:52:28 compute-0 systemd[1]: libpod-conmon-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.scope: Deactivated successfully.
Oct  3 09:52:29 compute-0 python3.9[371234]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:29 compute-0 podman[157165]: time="2025-10-03T09:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:52:29 compute-0 systemd[1]: Started libpod-conmon-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.scope.
Oct  3 09:52:29 compute-0 podman[371235]: 2025-10-03 09:52:29.81928592 +0000 UTC m=+0.112761567 container exec 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:52:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45035 "" "Go-http-client/1.1"
Oct  3 09:52:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8502 "" "Go-http-client/1.1"
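The two access-log lines above show the podman exporter hitting the libpod REST API over the service socket that is bind-mounted into its container. A stdlib-only sketch of the same query, assuming the default root socket path seen in the container's volume list:

```python
# Stdlib sketch of the libpod REST query from the access log above,
# sent over podman's service socket (path as mounted into the
# exporter container).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__('localhost')
        self.socket_path = socket_path

    def connect(self):
        # swap the TCP connect for an AF_UNIX one
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print(len(containers), 'containers')
```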
Oct  3 09:52:29 compute-0 podman[371235]: 2025-10-03 09:52:29.850745212 +0000 UTC m=+0.144220839 container exec_died 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 09:52:29 compute-0 systemd[1]: libpod-conmon-629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c.scope: Deactivated successfully.
Oct  3 09:52:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:30 compute-0 podman[371389]: 2025-10-03 09:52:30.833918191 +0000 UTC m=+0.096405602 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
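The `health_status=healthy` events above are podman periodically executing each container's configured healthcheck (the `/openstack/healthcheck ...` test in config_data). The same check can be driven by hand; `podman healthcheck run` exits 0 when the check passes:

```python
# Manual version of the periodic health_status events above:
# `podman healthcheck run NAME` exits 0 when the container's
# configured healthcheck passes, non-zero otherwise.
import subprocess

def is_healthy(name: str) -> bool:
    res = subprocess.run(['podman', 'healthcheck', 'run', name],
                         capture_output=True)
    return res.returncode == 0

print(is_healthy('podman_exporter'))
```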
Oct  3 09:52:31 compute-0 python3.9[371440]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:31 compute-0 openstack_network_exporter[367524]: ERROR   09:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:52:31 compute-0 openstack_network_exporter[367524]: ERROR   09:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:52:31 compute-0 openstack_network_exporter[367524]: ERROR   09:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:52:31 compute-0 openstack_network_exporter[367524]: ERROR   09:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:52:31 compute-0 openstack_network_exporter[367524]: ERROR   09:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
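The exporter errors above all reduce to socket discovery: appctl-style control of an OVS/OVN daemon goes over a per-daemon `<name>.<pid>.ctl` socket in the daemon's run directory, and none were found. A quick reproduction of that check; the paths are the usual defaults and are assumptions here:

```python
# Reproduce the exporter's discovery step: appctl-style calls need a
# <daemon>.<pid>.ctl control socket in the daemon's run directory.
# The patterns below assume default run directories.
import glob

for pattern in ('/var/run/openvswitch/ovsdb-server.*.ctl',
                '/var/run/openvswitch/ovs-vswitchd.*.ctl',
                '/run/ovn/ovn-northd.*.ctl'):
    matches = glob.glob(pattern)
    print(pattern, '->', matches or 'no control socket files found')
```

The ovn-northd failures are expected on a compute node, since northd runs on the controllers; the ovsdb-server lookup also failing here points at an empty or differently mounted OVS run directory inside the exporter container.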
Oct  3 09:52:32 compute-0 python3.9[371596]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct  3 09:52:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:33 compute-0 python3.9[371760]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:33 compute-0 systemd[1]: Started libpod-conmon-795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.scope.
Oct  3 09:52:33 compute-0 podman[371761]: 2025-10-03 09:52:33.853697237 +0000 UTC m=+0.114027558 container exec 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, architecture=x86_64, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, container_name=openstack_network_exporter)
Oct  3 09:52:33 compute-0 podman[371761]: 2025-10-03 09:52:33.893771576 +0000 UTC m=+0.154101877 container exec_died 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350)
Oct  3 09:52:33 compute-0 systemd[1]: libpod-conmon-795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.scope: Deactivated successfully.
Oct  3 09:52:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:34 compute-0 python3.9[371943]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:34 compute-0 systemd[1]: Started libpod-conmon-795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.scope.
Oct  3 09:52:34 compute-0 podman[371944]: 2025-10-03 09:52:34.913590224 +0000 UTC m=+0.103423257 container exec 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=)
Oct  3 09:52:34 compute-0 podman[371944]: 2025-10-03 09:52:34.958515638 +0000 UTC m=+0.148348661 container exec_died 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350)
Oct  3 09:52:35 compute-0 systemd[1]: libpod-conmon-795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54.scope: Deactivated successfully.
Oct  3 09:52:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:35 compute-0 python3.9[372126]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:36 compute-0 python3.9[372278]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Oct  3 09:52:37 compute-0 python3.9[372441]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:37 compute-0 systemd[1]: Started libpod-conmon-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope.
Oct  3 09:52:37 compute-0 podman[372442]: 2025-10-03 09:52:37.91067038 +0000 UTC m=+0.095285996 container exec e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:52:37 compute-0 podman[372442]: 2025-10-03 09:52:37.942669899 +0000 UTC m=+0.127285545 container exec_died e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Oct  3 09:52:37 compute-0 systemd[1]: libpod-conmon-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope: Deactivated successfully.
Oct  3 09:52:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:38 compute-0 podman[372596]: 2025-10-03 09:52:38.635485029 +0000 UTC m=+0.102408324 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_id=edpm)
Oct  3 09:52:38 compute-0 python3.9[372643]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:38 compute-0 systemd[1]: Started libpod-conmon-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope.
Oct  3 09:52:39 compute-0 podman[372644]: 2025-10-03 09:52:39.009828659 +0000 UTC m=+0.136328366 container exec e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 09:52:39 compute-0 podman[372663]: 2025-10-03 09:52:39.08790173 +0000 UTC m=+0.061123878 container exec_died e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 09:52:39 compute-0 podman[372644]: 2025-10-03 09:52:39.126583493 +0000 UTC m=+0.253083200 container exec_died e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 09:52:39 compute-0 systemd[1]: libpod-conmon-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope: Deactivated successfully.
Oct  3 09:52:39 compute-0 python3.9[372827]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
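This `file` task closes the per-container pattern that repeats throughout this section: exec `id -u` and `id -g` inside the container, then apply that ownership to the host-side healthcheck directory (42405 here being the ceilometer user). A condensed sketch of the same flow, shelling out to podman; the helper name is ours:

```python
# Condensed sketch of the recurring pattern above: read the container
# user's uid/gid with `podman exec`, then own the host-side
# healthcheck directory accordingly. Requires root on the host.
import os
import subprocess

def container_id(name: str, flag: str) -> int:
    out = subprocess.run(['podman', 'exec', name, 'id', flag],
                         capture_output=True, text=True, check=True)
    return int(out.stdout)

name = 'ceilometer_agent_ipmi'
uid, gid = container_id(name, '-u'), container_id(name, '-g')
path = f'/var/lib/openstack/healthchecks/{name}'
os.makedirs(path, mode=0o700, exist_ok=True)
os.chown(path, uid, gid)  # matches owner=42405 group=42405 in the task
```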
Oct  3 09:52:40 compute-0 podman[372828]: 2025-10-03 09:52:40.120061084 +0000 UTC m=+0.083852718 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, container_name=kepler, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., architecture=x86_64, release-0.7.12=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Oct  3 09:52:40 compute-0 podman[372830]: 2025-10-03 09:52:40.132216365 +0000 UTC m=+0.090518002 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 09:52:40 compute-0 podman[372829]: 2025-10-03 09:52:40.186722777 +0000 UTC m=+0.149222250 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 09:52:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:40 compute-0 python3.9[373042]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Oct  3 09:52:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:52:41.577 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:52:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:52:41.577 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:52:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:52:41.577 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:52:41 compute-0 podman[373130]: 2025-10-03 09:52:41.797452839 +0000 UTC m=+0.064257118 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, container_name=openstack_network_exporter)
Oct  3 09:52:42 compute-0 python3.9[373227]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:42 compute-0 systemd[1]: Started libpod-conmon-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope.
Oct  3 09:52:42 compute-0 podman[373228]: 2025-10-03 09:52:42.34198089 +0000 UTC m=+0.112923022 container exec 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler)
Oct  3 09:52:42 compute-0 podman[373228]: 2025-10-03 09:52:42.384776457 +0000 UTC m=+0.155718569 container exec_died 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=edpm, architecture=x86_64, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 09:52:42 compute-0 systemd[1]: libpod-conmon-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope: Deactivated successfully.
Oct  3 09:52:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:43 compute-0 python3.9[373412]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:43 compute-0 systemd[1]: Started libpod-conmon-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope.
Oct  3 09:52:43 compute-0 podman[373413]: 2025-10-03 09:52:43.964174601 +0000 UTC m=+0.101447224 container exec 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 09:52:43 compute-0 podman[373413]: 2025-10-03 09:52:43.998138754 +0000 UTC m=+0.135411367 container exec_died 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, vcs-type=git, version=9.4, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:52:44 compute-0 systemd[1]: libpod-conmon-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope: Deactivated successfully.
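The two podman_container_exec tasks above (id -u, then id -g, against the kepler container) reduce to running `id` inside the container and reading its output. A short sketch of the equivalent call, assuming podman is on PATH; the container name is taken from the log:

    # Equivalent of the ansible podman_container_exec tasks above:
    # run `id -u` / `id -g` inside the kepler container and parse the result.
    import subprocess

    def container_id(name: str, flag: str) -> int:
        proc = subprocess.run(
            ['podman', 'exec', name, 'id', flag],
            check=True, capture_output=True, text=True,
        )
        return int(proc.stdout.strip())

    uid = container_id('kepler', '-u')
    gid = container_id('kepler', '-g')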
Oct  3 09:52:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:44 compute-0 podman[373595]: 2025-10-03 09:52:44.846516667 +0000 UTC m=+0.077048949 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 09:52:44 compute-0 python3.9[373594]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
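The ansible-ansible.builtin.file task above recursively enforces owner 0, group 0 and mode 0700 on the kepler healthcheck directory. A rough Python equivalent, assuming root privileges; path and permissions are taken from the log line:

    # Rough equivalent of the recursive file task above (requires root).
    import os

    path = '/var/lib/openstack/healthchecks/kepler'
    os.makedirs(path, mode=0o700, exist_ok=True)
    for dirpath, _dirnames, filenames in os.walk(path):
        for entry in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            os.chown(entry, 0, 0)   # owner=0, group=0 as in the task
            os.chmod(entry, 0o700)  # mode=0700, applied recursively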
Oct  3 09:52:45 compute-0 python3.9[373764]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
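podman_container_info gathers roughly what `podman container inspect` returns. A sketch reading back the same health state that the health_status events in this log report; the JSON key path is an assumption based on podman's inspect schema, which varies slightly across versions:

    # Read the ovn_metadata_agent health state back via podman inspect.
    import json
    import subprocess

    proc = subprocess.run(
        ['podman', 'container', 'inspect', 'ovn_metadata_agent'],
        check=True, capture_output=True, text=True,
    )
    info = json.loads(proc.stdout)[0]
    # 'State' -> 'Health' -> 'Status' is assumed; older podman used 'Healthcheck'.
    print(info['State'].get('Health', {}).get('Status'))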
Oct  3 09:52:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:52:45
Oct  3 09:52:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:52:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:52:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', '.mgr']
Oct  3 09:52:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
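This balancer pass ran in upmap mode with a 0.05 max-misplaced ratio and prepared 0 of an allowed 10 changes, i.e. the eleven pools listed needed no remapping. The same state can be read back from the balancer CLI; a minimal sketch, assuming the ceph client and an admin keyring are available on the host:

    # Read back the balancer state the mgr lines above describe.
    import subprocess

    proc = subprocess.run(['ceph', 'balancer', 'status'],
                          check=True, capture_output=True, text=True)
    print(proc.stdout)  # mode, plans, last optimization result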
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
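The rbd_support lines above show its TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler reloading per-pool schedules for vms, volumes, backups and images. A sketch listing the same schedules from the CLI; the subcommands exist in recent Ceph releases, and the pool names are taken from the log:

    # List the schedules the two rbd_support handlers just reloaded.
    import subprocess

    for pool in ('vms', 'volumes', 'backups', 'images'):
        subprocess.run(['rbd', 'trash', 'purge', 'schedule', 'ls',
                        '--pool', pool], check=False)
        subprocess.run(['rbd', 'mirror', 'snapshot', 'schedule', 'ls',
                        '--pool', pool], check=False)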
Oct  3 09:52:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:46 compute-0 python3.9[373928]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:46 compute-0 systemd[1]: Started libpod-conmon-99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5.scope.
Oct  3 09:52:46 compute-0 podman[373929]: 2025-10-03 09:52:46.799005289 +0000 UTC m=+0.109058808 container exec 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:52:46 compute-0 podman[373929]: 2025-10-03 09:52:46.836595338 +0000 UTC m=+0.146648817 container exec_died 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:52:46 compute-0 systemd[1]: libpod-conmon-99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5.scope: Deactivated successfully.
Oct  3 09:52:47 compute-0 python3.9[374109]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:47 compute-0 systemd[1]: Started libpod-conmon-99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5.scope.
Oct  3 09:52:47 compute-0 podman[374110]: 2025-10-03 09:52:47.894274963 +0000 UTC m=+0.131479409 container exec 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 09:52:48 compute-0 podman[374128]: 2025-10-03 09:52:48.023930103 +0000 UTC m=+0.110662130 container exec_died 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:52:48 compute-0 podman[374110]: 2025-10-03 09:52:48.045003131 +0000 UTC m=+0.282207587 container exec_died 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 09:52:48 compute-0 systemd[1]: libpod-conmon-99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5.scope: Deactivated successfully.
Oct  3 09:52:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:48 compute-0 python3.9[374290]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:49 compute-0 python3.9[374443]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct  3 09:52:50 compute-0 podman[374580]: 2025-10-03 09:52:50.43052699 +0000 UTC m=+0.095103390 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
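The node_exporter container above publishes metrics on port 9100 (per its 'ports' entry) with a long list of disabled collectors. A minimal scrape sketch; plain HTTP is an assumption, since the mounted web.config.file may enforce TLS and client auth, in which case an HTTPS context would be needed:

    # Fetch a few lines of node_exporter metrics from the host port.
    import urllib.request

    with urllib.request.urlopen('http://localhost:9100/metrics', timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)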
Oct  3 09:52:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:50 compute-0 python3.9[374629]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:50 compute-0 systemd[1]: Started libpod-conmon-ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0.scope.
Oct  3 09:52:50 compute-0 podman[374633]: 2025-10-03 09:52:50.774808732 +0000 UTC m=+0.129921539 container exec ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 09:52:50 compute-0 podman[374633]: 2025-10-03 09:52:50.810140377 +0000 UTC m=+0.165253254 container exec_died ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 09:52:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:50 compute-0 systemd[1]: libpod-conmon-ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0.scope: Deactivated successfully.
Oct  3 09:52:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:52:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.2 total, 600.0 interval
Cumulative writes: 5676 writes, 23K keys, 5676 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5676 writes, 887 syncs, 6.40 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561d1bcb0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561d1bcb0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.2 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
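The WAL figures in the dump above are internally consistent: "writes per sync" is simply writes divided by syncs. A quick check of the two reported ratios:

    # Verify the RocksDB WAL "writes per sync" ratios from the dump above.
    cum_writes, cum_syncs = 5676, 887   # cumulative: "6.40 writes per sync"
    int_writes, int_syncs = 212, 106    # interval:   "2.00 writes per sync"
    assert round(cum_writes / cum_syncs, 2) == 6.4
    assert round(int_writes / int_syncs, 2) == 2.0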
Oct  3 09:52:51 compute-0 python3.9[374814]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:51 compute-0 systemd[1]: Started libpod-conmon-ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0.scope.
Oct  3 09:52:51 compute-0 podman[374815]: 2025-10-03 09:52:51.838682726 +0000 UTC m=+0.130974633 container exec ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  3 09:52:51 compute-0 podman[374815]: 2025-10-03 09:52:51.872223014 +0000 UTC m=+0.164514901 container exec_died ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:52:51 compute-0 systemd[1]: libpod-conmon-ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0.scope: Deactivated successfully.
Oct  3 09:52:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:52 compute-0 podman[374891]: 2025-10-03 09:52:52.826516174 +0000 UTC m=+0.082582626 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:52:52 compute-0 podman[374892]: 2025-10-03 09:52:52.826955178 +0000 UTC m=+0.076232393 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:52:53 compute-0 python3.9[375029]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:54 compute-0 python3.9[375181]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct  3 09:52:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
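Each pg_autoscaler line above follows the same arithmetic: pg target = capacity ratio * bias * a cluster-wide PG budget, then quantized. Every line in this pass fits a budget of 300, which would match the default mon_target_pg_per_osd=100 on 3 OSDs; the OSD count is an assumption, not stated in this excerpt:

    # Reproduce the pg_autoscaler targets from the mgr lines above.
    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... ('.mgr', quantized to 1)
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104... ('cephfs.cephfs.meta', quantized to 16)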
Oct  3 09:52:55 compute-0 python3.9[375346]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:55 compute-0 systemd[1]: Started libpod-conmon-b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92.scope.
Oct  3 09:52:55 compute-0 podman[375347]: 2025-10-03 09:52:55.26635523 +0000 UTC m=+0.093201439 container exec b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:52:55 compute-0 podman[375347]: 2025-10-03 09:52:55.303892076 +0000 UTC m=+0.130738285 container exec_died b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct  3 09:52:55 compute-0 systemd[1]: libpod-conmon-b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92.scope: Deactivated successfully.
Oct  3 09:52:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:52:56 compute-0 python3.9[375527]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct  3 09:52:56 compute-0 systemd[1]: Started libpod-conmon-b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92.scope.
Oct  3 09:52:56 compute-0 podman[375528]: 2025-10-03 09:52:56.375017845 +0000 UTC m=+0.120020001 container exec b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 09:52:56 compute-0 podman[375528]: 2025-10-03 09:52:56.415455124 +0000 UTC m=+0.160457290 container exec_died b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 09:52:56 compute-0 systemd[1]: libpod-conmon-b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92.scope: Deactivated successfully.
Oct  3 09:52:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:57 compute-0 python3.9[375707]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:52:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6820 writes, 27K keys, 6820 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6820 writes, 1261 syncs, 5.41 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  3 09:52:58 compute-0 python3.9[375859]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:52:58 compute-0 python3.9[376011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:52:59 compute-0 python3.9[376089]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/telemetry.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/telemetry.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:52:59 compute-0 podman[157165]: time="2025-10-03T09:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:52:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 09:52:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8508 "" "Go-http-client/1.1"
Oct  3 09:53:00 compute-0 python3.9[376241]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:01 compute-0 podman[376393]: 2025-10-03 09:53:01.005975957 +0000 UTC m=+0.086927297 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:53:01 compute-0 python3.9[376394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:01 compute-0 openstack_network_exporter[367524]: ERROR   09:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:53:01 compute-0 openstack_network_exporter[367524]: ERROR   09:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:53:01 compute-0 openstack_network_exporter[367524]: ERROR   09:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:53:01 compute-0 openstack_network_exporter[367524]: ERROR   09:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:53:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 09:53:01 compute-0 openstack_network_exporter[367524]: ERROR   09:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:53:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 09:53:01 compute-0 python3.9[376495]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:02 compute-0 python3.9[376647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:03 compute-0 python3.9[376725]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.s0aw5ijr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 09:53:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5738 writes, 24K keys, 5738 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5738 writes, 916 syncs, 6.26 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Oct  3 09:53:04 compute-0 python3.9[376877]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:04 compute-0 python3.9[376955]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 09:53:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:06 compute-0 python3.9[377107]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:53:07 compute-0 python3[377260]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  3 09:53:08 compute-0 python3.9[377412]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:08 compute-0 podman[377462]: 2025-10-03 09:53:08.837358983 +0000 UTC m=+0.098215669 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct  3 09:53:08 compute-0 python3.9[377509]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:10 compute-0 python3.9[377662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:10 compute-0 podman[377723]: 2025-10-03 09:53:10.363194454 +0000 UTC m=+0.077113870 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:53:10 compute-0 podman[377713]: 2025-10-03 09:53:10.384128848 +0000 UTC m=+0.105751433 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64)
Oct  3 09:53:10 compute-0 podman[377721]: 2025-10-03 09:53:10.400295607 +0000 UTC m=+0.113700007 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 09:53:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:10 compute-0 python3.9[377773]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:11 compute-0 python3.9[377951]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:11 compute-0 python3.9[378029]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:12 compute-0 podman[378153]: 2025-10-03 09:53:12.623863328 +0000 UTC m=+0.104137511 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Oct  3 09:53:12 compute-0 python3.9[378202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:13 compute-0 python3.9[378280]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:14 compute-0 python3.9[378432]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:15 compute-0 podman[378482]: 2025-10-03 09:53:15.280802316 +0000 UTC m=+0.100184324 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:53:15 compute-0 python3.9[378528]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:53:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:16 compute-0 python3.9[378681]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:53:17 compute-0 python3.9[378836]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:18 compute-0 python3.9[378988]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:53:19 compute-0 python3.9[379141]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:53:20 compute-0 python3.9[379294]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:20 compute-0 podman[379319]: 2025-10-03 09:53:20.795767037 +0000 UTC m=+0.058944466 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:53:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:20 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Oct  3 09:53:20 compute-0 systemd[1]: session-60.scope: Consumed 2min 7.008s CPU time.
Oct  3 09:53:20 compute-0 systemd-logind[798]: Session 60 logged out. Waiting for processes to exit.
Oct  3 09:53:20 compute-0 systemd-logind[798]: Removed session 60.
Oct  3 09:53:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:23 compute-0 podman[379457]: 2025-10-03 09:53:23.031818588 +0000 UTC m=+0.091492903 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 09:53:23 compute-0 podman[379458]: 2025-10-03 09:53:23.035924021 +0000 UTC m=+0.089513481 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid)
Oct  3 09:53:23 compute-0 nova_compute[351685]: 2025-10-03 09:53:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:23 compute-0 nova_compute[351685]: 2025-10-03 09:53:23.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:53:23 compute-0 nova_compute[351685]: 2025-10-03 09:53:23.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:53:23 compute-0 nova_compute[351685]: 2025-10-03 09:53:23.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:53:23 compute-0 nova_compute[351685]: 2025-10-03 09:53:23.775 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 09:53:23 compute-0 nova_compute[351685]: 2025-10-03 09:53:23.775 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 09:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 09:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 09:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 32e9d2e5-c4c9-46d0-82b1-3ce9ef623b50 does not exist
Oct  3 09:53:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3fb6ba9a-a6f0-4e55-8e8d-f9417af69d42 does not exist
Oct  3 09:53:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 00c9686c-06d5-4f75-85d3-a347610d79ef does not exist
Oct  3 09:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 09:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 09:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 09:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 09:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 09:53:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 09:53:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 09:53:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:53:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906754967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:53:24 compute-0 nova_compute[351685]: 2025-10-03 09:53:24.253 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 09:53:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:24 compute-0 nova_compute[351685]: 2025-10-03 09:53:24.581 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 09:53:24 compute-0 nova_compute[351685]: 2025-10-03 09:53:24.583 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4575MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 09:53:24 compute-0 nova_compute[351685]: 2025-10-03 09:53:24.583 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:53:24 compute-0 nova_compute[351685]: 2025-10-03 09:53:24.583 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:53:24 compute-0 podman[379791]: 2025-10-03 09:53:24.599804224 +0000 UTC m=+0.050985400 container create a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_germain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 09:53:24 compute-0 nova_compute[351685]: 2025-10-03 09:53:24.645 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 09:53:24 compute-0 nova_compute[351685]: 2025-10-03 09:53:24.645 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 09:53:24 compute-0 systemd[1]: Started libpod-conmon-a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741.scope.
Oct  3 09:53:24 compute-0 nova_compute[351685]: 2025-10-03 09:53:24.662 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 09:53:24 compute-0 podman[379791]: 2025-10-03 09:53:24.579559433 +0000 UTC m=+0.030740629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:53:24 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:53:24 compute-0 podman[379791]: 2025-10-03 09:53:24.699628045 +0000 UTC m=+0.150809251 container init a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 09:53:24 compute-0 podman[379791]: 2025-10-03 09:53:24.710462293 +0000 UTC m=+0.161643469 container start a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_germain, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:53:24 compute-0 amazing_germain[379806]: 167 167
Oct  3 09:53:24 compute-0 systemd[1]: libpod-a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741.scope: Deactivated successfully.
Oct  3 09:53:24 compute-0 conmon[379806]: conmon a8c576489d25390a0deb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741.scope/container/memory.events
Oct  3 09:53:24 compute-0 podman[379791]: 2025-10-03 09:53:24.71690319 +0000 UTC m=+0.168084386 container attach a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:53:24 compute-0 podman[379791]: 2025-10-03 09:53:24.720501466 +0000 UTC m=+0.171682642 container died a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_germain, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2b47999ac8ac01462943ae1780c8e2cd1be95c33adf129a43dca05ad460594b-merged.mount: Deactivated successfully.
Oct  3 09:53:24 compute-0 podman[379791]: 2025-10-03 09:53:24.780359151 +0000 UTC m=+0.231540327 container remove a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_germain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:53:24 compute-0 systemd[1]: libpod-conmon-a8c576489d25390a0deb3ea7cfc4396ae513a80fcdb2bb1e070b803e018e4741.scope: Deactivated successfully.
Oct  3 09:53:24 compute-0 podman[379851]: 2025-10-03 09:53:24.968357277 +0000 UTC m=+0.058526573 container create 8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:53:25 compute-0 systemd[1]: Started libpod-conmon-8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0.scope.
Oct  3 09:53:25 compute-0 podman[379851]: 2025-10-03 09:53:24.944915683 +0000 UTC m=+0.035085009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:53:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46baa5b8fbbf64aac4943a5cdbea4679f4eaa0df2aaec0d1076a6ea0902e6bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46baa5b8fbbf64aac4943a5cdbea4679f4eaa0df2aaec0d1076a6ea0902e6bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46baa5b8fbbf64aac4943a5cdbea4679f4eaa0df2aaec0d1076a6ea0902e6bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46baa5b8fbbf64aac4943a5cdbea4679f4eaa0df2aaec0d1076a6ea0902e6bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46baa5b8fbbf64aac4943a5cdbea4679f4eaa0df2aaec0d1076a6ea0902e6bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:25 compute-0 podman[379851]: 2025-10-03 09:53:25.076535346 +0000 UTC m=+0.166704672 container init 8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:53:25 compute-0 podman[379851]: 2025-10-03 09:53:25.097023835 +0000 UTC m=+0.187193131 container start 8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 09:53:25 compute-0 podman[379851]: 2025-10-03 09:53:25.101689445 +0000 UTC m=+0.191858741 container attach 8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 09:53:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:53:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/863026956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:53:25 compute-0 nova_compute[351685]: 2025-10-03 09:53:25.135 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 09:53:25 compute-0 nova_compute[351685]: 2025-10-03 09:53:25.144 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 09:53:25 compute-0 nova_compute[351685]: 2025-10-03 09:53:25.164 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 09:53:25 compute-0 nova_compute[351685]: 2025-10-03 09:53:25.166 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 09:53:25 compute-0 nova_compute[351685]: 2025-10-03 09:53:25.166 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:53:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:26 compute-0 ecstatic_kapitsa[379868]: --> passed data devices: 0 physical, 3 LVM
Oct  3 09:53:26 compute-0 ecstatic_kapitsa[379868]: --> relative data size: 1.0
Oct  3 09:53:26 compute-0 ecstatic_kapitsa[379868]: --> All data devices are unavailable
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.166 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.167 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.167 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.167 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.184 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.184 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.184 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.184 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.185 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:26 compute-0 nova_compute[351685]: 2025-10-03 09:53:26.185 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 09:53:26 compute-0 systemd[1]: libpod-8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0.scope: Deactivated successfully.
Oct  3 09:53:26 compute-0 systemd[1]: libpod-8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0.scope: Consumed 1.035s CPU time.
Oct  3 09:53:26 compute-0 podman[379899]: 2025-10-03 09:53:26.233929458 +0000 UTC m=+0.028421656 container died 8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 09:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f46baa5b8fbbf64aac4943a5cdbea4679f4eaa0df2aaec0d1076a6ea0902e6bb-merged.mount: Deactivated successfully.
Oct  3 09:53:26 compute-0 podman[379899]: 2025-10-03 09:53:26.302662569 +0000 UTC m=+0.097154767 container remove 8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kapitsa, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:53:26 compute-0 systemd[1]: libpod-conmon-8e63775e9154714b835d5a40cc05644657855102e10186699ad01692e9822ff0.scope: Deactivated successfully.
Oct  3 09:53:26 compute-0 systemd-logind[798]: New session 61 of user zuul.
Oct  3 09:53:26 compute-0 systemd[1]: Started Session 61 of User zuul.
Oct  3 09:53:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:27 compute-0 podman[380113]: 2025-10-03 09:53:27.086740925 +0000 UTC m=+0.051720105 container create 52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:53:27 compute-0 systemd[1]: Started libpod-conmon-52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73.scope.
Oct  3 09:53:27 compute-0 podman[380113]: 2025-10-03 09:53:27.065953476 +0000 UTC m=+0.030932676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:53:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:53:27 compute-0 podman[380113]: 2025-10-03 09:53:27.17989163 +0000 UTC m=+0.144870820 container init 52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 09:53:27 compute-0 podman[380113]: 2025-10-03 09:53:27.187814904 +0000 UTC m=+0.152794074 container start 52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 09:53:27 compute-0 inspiring_bassi[380167]: 167 167
Oct  3 09:53:27 compute-0 systemd[1]: libpod-52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73.scope: Deactivated successfully.
Oct  3 09:53:27 compute-0 podman[380113]: 2025-10-03 09:53:27.193946862 +0000 UTC m=+0.158926062 container attach 52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 09:53:27 compute-0 podman[380113]: 2025-10-03 09:53:27.194398376 +0000 UTC m=+0.159377556 container died 52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 09:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b61f997c029586caad914d249971e225e773dad4b2c52b868020888b996b149-merged.mount: Deactivated successfully.
Oct  3 09:53:27 compute-0 podman[380113]: 2025-10-03 09:53:27.240223331 +0000 UTC m=+0.205202501 container remove 52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 09:53:27 compute-0 systemd[1]: libpod-conmon-52218bb83697a6789b3c62cc160df72a7f5f3164cf40ed1cb8c410f1c77e5b73.scope: Deactivated successfully.
Oct  3 09:53:27 compute-0 podman[380247]: 2025-10-03 09:53:27.426930145 +0000 UTC m=+0.050305469 container create a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:53:27 compute-0 systemd[1]: Started libpod-conmon-a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31.scope.
Oct  3 09:53:27 compute-0 podman[380247]: 2025-10-03 09:53:27.406849789 +0000 UTC m=+0.030225133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:53:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169afb4e9cd4774894c525a869b0da9889c91f0c89101f4707d2ed33dc36464a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169afb4e9cd4774894c525a869b0da9889c91f0c89101f4707d2ed33dc36464a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169afb4e9cd4774894c525a869b0da9889c91f0c89101f4707d2ed33dc36464a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169afb4e9cd4774894c525a869b0da9889c91f0c89101f4707d2ed33dc36464a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:27 compute-0 podman[380247]: 2025-10-03 09:53:27.565915784 +0000 UTC m=+0.189291128 container init a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct  3 09:53:27 compute-0 podman[380247]: 2025-10-03 09:53:27.588300494 +0000 UTC m=+0.211675818 container start a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 09:53:27 compute-0 podman[380247]: 2025-10-03 09:53:27.592940734 +0000 UTC m=+0.216316088 container attach a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 09:53:27 compute-0 python3.9[380254]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:53:27 compute-0 systemd[1]: Reloading.
Oct  3 09:53:27 compute-0 nova_compute[351685]: 2025-10-03 09:53:27.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:27 compute-0 nova_compute[351685]: 2025-10-03 09:53:27.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:53:27 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:53:27 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  3 09:53:28 compute-0 distracted_curran[380264]: {
Oct  3 09:53:28 compute-0 distracted_curran[380264]:    "0": [
Oct  3 09:53:28 compute-0 distracted_curran[380264]:        {
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "devices": [
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "/dev/loop3"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            ],
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_name": "ceph_lv0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_size": "21470642176",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "name": "ceph_lv0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "tags": {
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cluster_name": "ceph",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.crush_device_class": "",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.encrypted": "0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osd_id": "0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.type": "block",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.vdo": "0"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            },
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "type": "block",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "vg_name": "ceph_vg0"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:        }
Oct  3 09:53:28 compute-0 distracted_curran[380264]:    ],
Oct  3 09:53:28 compute-0 distracted_curran[380264]:    "1": [
Oct  3 09:53:28 compute-0 distracted_curran[380264]:        {
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "devices": [
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "/dev/loop4"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            ],
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_name": "ceph_lv1",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_size": "21470642176",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "name": "ceph_lv1",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "tags": {
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cluster_name": "ceph",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.crush_device_class": "",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.encrypted": "0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osd_id": "1",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.type": "block",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.vdo": "0"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            },
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "type": "block",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "vg_name": "ceph_vg1"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:        }
Oct  3 09:53:28 compute-0 distracted_curran[380264]:    ],
Oct  3 09:53:28 compute-0 distracted_curran[380264]:    "2": [
Oct  3 09:53:28 compute-0 distracted_curran[380264]:        {
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "devices": [
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "/dev/loop5"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            ],
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_name": "ceph_lv2",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_size": "21470642176",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "name": "ceph_lv2",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "tags": {
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cephx_lockbox_secret": "",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.cluster_name": "ceph",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.crush_device_class": "",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.encrypted": "0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osd_id": "2",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.type": "block",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:                "ceph.vdo": "0"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            },
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "type": "block",
Oct  3 09:53:28 compute-0 distracted_curran[380264]:            "vg_name": "ceph_vg2"
Oct  3 09:53:28 compute-0 distracted_curran[380264]:        }
Oct  3 09:53:28 compute-0 distracted_curran[380264]:    ]
Oct  3 09:53:28 compute-0 distracted_curran[380264]: }
Oct  3 09:53:28 compute-0 systemd[1]: libpod-a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31.scope: Deactivated successfully.
Oct  3 09:53:28 compute-0 podman[380247]: 2025-10-03 09:53:28.433143405 +0000 UTC m=+1.056518739 container died a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 09:53:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-169afb4e9cd4774894c525a869b0da9889c91f0c89101f4707d2ed33dc36464a-merged.mount: Deactivated successfully.
Oct  3 09:53:28 compute-0 podman[380247]: 2025-10-03 09:53:28.577805297 +0000 UTC m=+1.201180621 container remove a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 09:53:28 compute-0 systemd[1]: libpod-conmon-a97d2ee4e42a587bb1e191aa13a9e80203c864bde219c8c7e79d2d347790eb31.scope: Deactivated successfully.
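The create/init/start/attach/died/remove sequence for distracted_curran above is the lifecycle of a one-shot helper container that cephadm launches from the ceph image to run ceph-volume and capture its JSON output. A hedged sketch of invoking such a one-shot container from Python; the exact flags cephadm passes are not visible in the log and are assumptions here:

    import subprocess

    # Hedged sketch: run a one-shot ceph-volume command inside the ceph
    # image, mirroring the create/start/died/remove sequence above. The
    # flags are illustrative assumptions, not the ones cephadm used.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    result = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "--net=host", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True)
    print(result.stdout)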
Oct  3 09:53:29 compute-0 python3.9[380565]: ansible-ansible.builtin.service_facts Invoked
Oct  3 09:53:29 compute-0 network[380611]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Oct  3 09:53:29 compute-0 network[380613]: 'network-scripts' will be removed from the distribution in the near future.
Oct  3 09:53:29 compute-0 network[380615]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  3 09:53:29 compute-0 podman[380631]: 2025-10-03 09:53:29.402292172 +0000 UTC m=+0.052909552 container create d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_williamson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 09:53:29 compute-0 podman[380631]: 2025-10-03 09:53:29.382658711 +0000 UTC m=+0.033276111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:53:29 compute-0 podman[157165]: time="2025-10-03T09:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:53:30 compute-0 systemd[1]: Started libpod-conmon-d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2.scope.
Oct  3 09:53:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:53:30 compute-0 podman[380631]: 2025-10-03 09:53:30.291612273 +0000 UTC m=+0.942229703 container init d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_williamson, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 09:53:30 compute-0 podman[380631]: 2025-10-03 09:53:30.31078989 +0000 UTC m=+0.961407270 container start d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 09:53:30 compute-0 podman[380631]: 2025-10-03 09:53:30.317642111 +0000 UTC m=+0.968259591 container attach d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:53:30 compute-0 podman[157165]: @ - - [03/Oct/2025:09:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46408 "" "Go-http-client/1.1"
Oct  3 09:53:30 compute-0 vibrant_williamson[380647]: 167 167
Oct  3 09:53:30 compute-0 systemd[1]: libpod-d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2.scope: Deactivated successfully.
Oct  3 09:53:30 compute-0 conmon[380647]: conmon d4763ce758e053fabebe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2.scope/container/memory.events
Oct  3 09:53:30 compute-0 podman[380631]: 2025-10-03 09:53:30.326883407 +0000 UTC m=+0.977500797 container died d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-96bcbc36cada0fce961778ddcd313a62b68480a0f705c3e7b20ba33ce944a576-merged.mount: Deactivated successfully.
Oct  3 09:53:30 compute-0 podman[380631]: 2025-10-03 09:53:30.384722168 +0000 UTC m=+1.035339548 container remove d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_williamson, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 09:53:30 compute-0 podman[157165]: @ - - [03/Oct/2025:09:53:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8521 "" "Go-http-client/1.1"
Oct  3 09:53:30 compute-0 systemd[1]: libpod-conmon-d4763ce758e053fabebe505e57c83374bf3caae64ee6fa13f0ffe103366389a2.scope: Deactivated successfully.
Oct  3 09:53:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:30 compute-0 podman[380682]: 2025-10-03 09:53:30.573932693 +0000 UTC m=+0.050767844 container create fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 09:53:30 compute-0 systemd[1]: Started libpod-conmon-fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974.scope.
Oct  3 09:53:30 compute-0 podman[380682]: 2025-10-03 09:53:30.552810874 +0000 UTC m=+0.029646055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 09:53:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ef764a86ce3c0cc2b16615974d52827c4a2454d70436244b93ab2c9c6ae19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ef764a86ce3c0cc2b16615974d52827c4a2454d70436244b93ab2c9c6ae19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ef764a86ce3c0cc2b16615974d52827c4a2454d70436244b93ab2c9c6ae19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ef764a86ce3c0cc2b16615974d52827c4a2454d70436244b93ab2c9c6ae19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 09:53:30 compute-0 podman[380682]: 2025-10-03 09:53:30.705314178 +0000 UTC m=+0.182149429 container init fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goodall, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 09:53:30 compute-0 podman[380682]: 2025-10-03 09:53:30.726112897 +0000 UTC m=+0.202948048 container start fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goodall, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 09:53:30 compute-0 podman[380682]: 2025-10-03 09:53:30.731736368 +0000 UTC m=+0.208571539 container attach fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goodall, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 09:53:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:31 compute-0 podman[380722]: 2025-10-03 09:53:31.143266263 +0000 UTC m=+0.085840293 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 09:53:31 compute-0 openstack_network_exporter[367524]: ERROR   09:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:53:31 compute-0 openstack_network_exporter[367524]: ERROR   09:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:53:31 compute-0 openstack_network_exporter[367524]: ERROR   09:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:53:31 compute-0 openstack_network_exporter[367524]: ERROR   09:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:53:31 compute-0 openstack_network_exporter[367524]: ERROR   09:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
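The openstack_network_exporter errors above all reduce to one cause: no OVS/OVN control sockets exist on this compute node, so every appctl call fails before it can query datapath or PMD statistics. A hedged sketch of the kind of socket lookup that is failing; the /var/run/openvswitch path and the ovsdb-server.<pid>.ctl pattern are conventional Open vSwitch defaults and are assumptions here, not the exporter's actual code:

    import glob

    # Hedged sketch of the control-socket lookup that is failing above.
    # The path and filename pattern are conventional OVS defaults and
    # assumptions here, not openstack_network_exporter's actual logic.
    def find_ctl_socket(daemon, run_dir="/var/run/openvswitch"):
        # e.g. /var/run/openvswitch/ovsdb-server.1234.ctl
        matches = glob.glob(f"{run_dir}/{daemon}.*.ctl")
        return matches[0] if matches else None

    for daemon in ("ovsdb-server", "ovn-northd"):
        if find_ctl_socket(daemon) is None:
            print(f"no control socket files found for {daemon}")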
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]: {
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "osd_id": 1,
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "type": "bluestore"
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:    },
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "osd_id": 2,
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "type": "bluestore"
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:    },
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "osd_id": 0,
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:        "type": "bluestore"
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]:    }
Oct  3 09:53:31 compute-0 thirsty_goodall[380702]: }
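The report printed by thirsty_goodall above maps each osd_uuid to its metadata (cluster fsid, device-mapper path, osd_id, objectstore type), which is how cephadm matches discovered LVs back to OSD ids. A minimal sketch that loads such a report and inverts it into an osd_id -> device map; the snippet embeds one entry from the log:

    import json

    # Minimal sketch: invert the osd_uuid-keyed report above into an
    # osd_id -> device map. `report` embeds one entry from the log.
    report = json.loads("""
    {
      "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
        "osd_id": 2,
        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
        "type": "bluestore"
      }
    }
    """)
    by_osd_id = {osd["osd_id"]: osd["device"] for osd in report.values()}
    print(by_osd_id)  # {2: '/dev/mapper/ceph_vg2-ceph_lv2'}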
Oct  3 09:53:31 compute-0 systemd[1]: libpod-fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974.scope: Deactivated successfully.
Oct  3 09:53:31 compute-0 systemd[1]: libpod-fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974.scope: Consumed 1.024s CPU time.
Oct  3 09:53:31 compute-0 podman[380808]: 2025-10-03 09:53:31.81298452 +0000 UTC m=+0.037287729 container died fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct  3 09:53:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-868ef764a86ce3c0cc2b16615974d52827c4a2454d70436244b93ab2c9c6ae19-merged.mount: Deactivated successfully.
Oct  3 09:53:31 compute-0 podman[380808]: 2025-10-03 09:53:31.886486454 +0000 UTC m=+0.110789643 container remove fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  3 09:53:31 compute-0 systemd[1]: libpod-conmon-fdbaa6aa51e7b99645ea4aca69321317c28fcd1912be53218f4c773e1c8e5974.scope: Deactivated successfully.
Oct  3 09:53:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 09:53:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 09:53:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 55a6bc08-0823-49d7-91aa-b4ce5dab5bf9 does not exist
Oct  3 09:53:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5cbd67e9-9c00-4b7e-9fe5-080d06302eb8 does not exist
Oct  3 09:53:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 09:53:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:34 compute-0 python3.9[381077]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:53:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:35 compute-0 python3.9[381230]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.250034) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485216250118, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1424, "num_deletes": 251, "total_data_size": 2299414, "memory_usage": 2342704, "flush_reason": "Manual Compaction"}
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485216268331, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2245989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14977, "largest_seqno": 16400, "table_properties": {"data_size": 2239284, "index_size": 3840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13593, "raw_average_key_size": 19, "raw_value_size": 2225955, "raw_average_value_size": 3212, "num_data_blocks": 176, "num_entries": 693, "num_filter_entries": 693, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759485066, "oldest_key_time": 1759485066, "file_creation_time": 1759485216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 18339 microseconds, and 6455 cpu microseconds.
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.268384) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2245989 bytes OK
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.268400) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.271131) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.271148) EVENT_LOG_v1 {"time_micros": 1759485216271143, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.271167) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2293131, prev total WAL file size 2293131, number of live WAL files 2.
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.272102) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2193KB)], [35(7224KB)]
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485216272165, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9643801, "oldest_snapshot_seqno": -1}
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4021 keys, 7877961 bytes, temperature: kUnknown
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485216320209, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7877961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7848483, "index_size": 18296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98291, "raw_average_key_size": 24, "raw_value_size": 7773144, "raw_average_value_size": 1933, "num_data_blocks": 774, "num_entries": 4021, "num_filter_entries": 4021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759485216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.320439) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7877961 bytes
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.322401) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.4 rd, 163.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.8) write-amplify(3.5) OK, records in: 4535, records dropped: 514 output_compression: NoCompression
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.322418) EVENT_LOG_v1 {"time_micros": 1759485216322410, "job": 16, "event": "compaction_finished", "compaction_time_micros": 48132, "compaction_time_cpu_micros": 19497, "output_level": 6, "num_output_files": 1, "total_output_size": 7877961, "num_input_records": 4535, "num_output_records": 4021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485216322898, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485216324403, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.271998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.324509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.324513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.324517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.324519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 09:53:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-09:53:36.324521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
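The mon's RocksDB instance emits machine-readable EVENT_LOG_v1 records (flush_started, table_file_creation, compaction_finished) alongside the human-readable lines above: job 15 flushed a 2.2 MB L0 table, and job 16 compacted it together with the existing L6 file into one 7.9 MB table. A minimal sketch for pulling those JSON payloads out of journal lines like these:

    import json
    import re

    # Minimal sketch: extract the JSON payload from rocksdb
    # "EVENT_LOG_v1 {...}" journal lines like those above.
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def parse_event(line):
        m = EVENT_RE.search(line)
        return json.loads(m.group(1)) if m else None

    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1759485216322410, '
            '"job": 16, "event": "compaction_finished", '
            '"compaction_time_micros": 48132}')
    ev = parse_event(line)
    print(ev["event"], ev["compaction_time_micros"])  # compaction_finished 48132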
Oct  3 09:53:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:36 compute-0 python3.9[381382]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:37 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct  3 09:53:38 compute-0 python3.9[381534]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
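In the _raw_params above, #012 is the syslog/journald octal escape for an embedded newline, so the apparent one-liner is really a multi-line shell task. Decoding it makes the script readable:

    # "#012" is the syslog escape for "\n"; decoding recovers the
    # multi-line shell task embedded in the ansible log line above.
    raw = ("if systemctl is-active certmonger.service; then#012"
           "  systemctl disable --now certmonger.service#012"
           "  test -f /etc/systemd/system/certmonger.service"
           " || systemctl mask certmonger.service#012"
           "fi#012")
    print(raw.replace("#012", "\n"))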
Oct  3 09:53:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:39 compute-0 podman[381660]: 2025-10-03 09:53:39.228959349 +0000 UTC m=+0.071134669 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 09:53:39 compute-0 python3.9[381702]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  3 09:53:40 compute-0 python3.9[381855]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  3 09:53:40 compute-0 systemd[1]: Reloading.
Oct  3 09:53:40 compute-0 podman[381857]: 2025-10-03 09:53:40.502722673 +0000 UTC m=+0.092109833 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:53:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:40 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  3 09:53:40 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Oct  3 09:53:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.877 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.878 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.878 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.879 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.880 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.880 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.880 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.880 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.880 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.881 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.882 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.885 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.885 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.886 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.887 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.887 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.887 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:53:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
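The ceilometer_agent_compute burst above is one complete polling cycle on a hypervisor with no guests: each pollster is registered against a shared ThreadPoolExecutor, the per-cycle discovery cache is consulted for local_instances (empty, []), every meter is skipped, and each task is then marked finished. A minimal sketch of that control flow, with hypothetical names (the real logic lives around lines 276-321 of ceilometer/polling/manager.py as cited in the log):

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical condensation of the cycle logged above; names illustrative.
    def run_cycle(pollsters, discover_local_instances):
        discovery_cache = {}                    # e.g. {'local_instances': []}
        with ThreadPoolExecutor() as executor:
            for pollster in pollsters:
                executor.submit(poll_one, pollster, discovery_cache,
                                discover_local_instances)

    def poll_one(pollster, discovery_cache, discover):
        if 'local_instances' not in discovery_cache:
            discovery_cache['local_instances'] = discover()  # one listing/cycle
        resources = discovery_cache['local_instances']
        if not resources:
            # -> "Skip pollster <name>, no resources found this cycle"
            print(f"Skip pollster {pollster}, no resources found this cycle")
            return
        # pollster.get_samples(resources) would run here
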
Oct  3 09:53:40 compute-0 podman[381912]: 2025-10-03 09:53:40.912421639 +0000 UTC m=+0.081521473 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, name=ubi9, version=9.4, vcs-type=git, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Oct  3 09:53:40 compute-0 podman[381914]: 2025-10-03 09:53:40.942441224 +0000 UTC m=+0.111701343 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
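The podman health_status events (kepler, ovn_controller, and the others below) come from podman's built-in healthcheck timer; each record carries the configured test command plus the current status and failing streak. A hedged sketch of inspecting the same state by hand ("kepler" is the container_name from the log; the inspect field is .State.Health on recent podman, .State.Healthcheck on older releases):

    import json
    import subprocess

    # Read the health fields podman logs above (Status, FailingStreak).
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "kepler"],
        capture_output=True, text=True, check=True,
    )
    health = json.loads(out.stdout)
    print(health["Status"], health["FailingStreak"])

    # One-shot probe of the configured test command; exit status 0 == healthy.
    subprocess.run(["podman", "healthcheck", "run", "kepler"], check=False)
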
Oct  3 09:53:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:53:41.578 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:53:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:53:41.578 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:53:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:53:41.578 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
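That Acquiring/acquired/released triple is the standard trace oslo.concurrency emits whenever a named in-process lock is taken; here ProcessMonitor uses it to serialize its child-process liveness checks. A minimal sketch of the same API (real oslo_concurrency calls; the lock body is a placeholder):

    from oslo_concurrency import lockutils

    # Context-manager form: produces the DEBUG triple seen above
    # when debug logging is enabled.
    with lockutils.lock("_check_child_processes"):
        pass  # serialize the child-process liveness check here

    # Equivalent decorator form, common in neutron agents:
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass
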
Oct  3 09:53:41 compute-0 python3.9[382109]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 09:53:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:42 compute-0 python3.9[382262]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:53:42 compute-0 podman[382286]: 2025-10-03 09:53:42.870220582 +0000 UTC m=+0.125373793 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 09:53:43 compute-0 python3.9[382430]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:53:44 compute-0 python3.9[382582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:44 compute-0 python3.9[382658]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:53:45 compute-0 podman[382735]: 2025-10-03 09:53:45.795028724 +0000 UTC m=+0.059855717 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:53:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:53:45
Oct  3 09:53:45 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:53:45 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:53:45 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', 'volumes', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log']
Oct  3 09:53:45 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
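The balancer block records one optimization pass: mode upmap, a 5% ceiling on misplaced PGs, the candidate pool list, and "prepared 0/10 changes" on an already-balanced 321-PG cluster. A rough, hypothetical rendering of that gating logic (the 0.05 ceiling and the /10 budget appear to correspond to the mgr options target_max_misplaced_ratio and upmap_max_optimizations; this is not the module's actual code):

    # Hypothetical condensation of one mgr balancer pass as logged above.
    def balance_pass(misplaced_ratio, pools, propose_upmaps,
                     max_misplaced=0.05, max_optimizations=10):
        if misplaced_ratio >= max_misplaced:
            return []                 # don't add churn while PGs are moving
        changes = []
        for pool in pools:
            changes.extend(propose_upmaps(pool))
            if len(changes) >= max_optimizations:
                break
        # log: "prepared %d/%d changes" % (len(changes), max_optimizations)
        return changes[:max_optimizations]
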
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:53:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:46 compute-0 python3.9[382829]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct  3 09:53:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:48 compute-0 python3.9[382980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:49 compute-0 python3.9[383056]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:49 compute-0 python3.9[383207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:50 compute-0 python3.9[383283]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
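The recurring stat-then-file pairs (ceilometer.conf, polling.yaml, and more below) are how Ansible's template/copy path stays idempotent: it sha1-checksums the destination first (get_checksum=True, checksum_algorithm=sha1) and, when content already matches, only re-asserts attributes such as mode=0640 instead of rewriting the file. A rough equivalent of that check, with a hypothetical helper and example content:

    import hashlib
    import os

    def ensure_file(dest, new_content: bytes, mode=0o640):
        """Write dest only when its sha1 differs; always re-assert mode."""
        new_sha1 = hashlib.sha1(new_content).hexdigest()
        try:
            with open(dest, "rb") as f:
                current_sha1 = hashlib.sha1(f.read()).hexdigest()
        except FileNotFoundError:
            current_sha1 = None
        if current_sha1 != new_sha1:
            with open(dest, "wb") as f:      # the "copy" branch
                f.write(new_content)
        os.chmod(dest, mode)                 # the "ansible.legacy.file" step

    ensure_file("/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml",
                b"---\nsources: []\n")
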
Oct  3 09:53:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:50 compute-0 podman[383407]: 2025-10-03 09:53:50.967375087 +0000 UTC m=+0.061771127 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 09:53:51 compute-0 python3.9[383450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:51 compute-0 python3.9[383533]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:52 compute-0 python3.9[383683]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:53:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:53 compute-0 python3.9[383835]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:53:53 compute-0 podman[383961]: 2025-10-03 09:53:53.812350951 +0000 UTC m=+0.075294302 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 09:53:53 compute-0 podman[383962]: 2025-10-03 09:53:53.821220386 +0000 UTC m=+0.076100618 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  3 09:53:54 compute-0 python3.9[384019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:54 compute-0 python3.9[384101]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json _original_basename=ceilometer-agent-ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
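One detail worth flagging in this file task (and the matching ceilometer-host-specific.conf task further down): mode=420 is not a mistake but the decimal rendering of octal 0644, the classic Ansible quirk when a playbook supplies mode as an unquoted number (6*64 + 4*8 + 4 = 420):

    # 420 decimal and 0o644 are the same permission bits.
    assert 420 == 0o644
    print(oct(420))   # -> '0o644', i.e. rw-r--r--
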
Oct  3 09:53:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
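The pg_autoscaler pass above is one computation repeated per pool: each line reports the pool's share of raw capacity, a per-pool bias, a fractional pg target, and the power-of-two count that target quantizes to. The logged targets are consistent with usage_ratio * bias * 300, where 300 would be the cluster PG budget (mon_target_pg_per_osd, default 100, times what appears to be 3 OSDs behind the 60 GiB of capacity). A rough reconstruction, with the budget and the per-pool floors as assumptions rather than the autoscaler's actual code:

    # Rough reconstruction of the pg_autoscaler arithmetic logged above.
    # ASSUMPTIONS: budget = mon_target_pg_per_osd (100) * 3 OSDs = 300;
    # 'floor' stands in for each pool's effective minimum pg count.
    def pg_target(usage_ratio, bias, budget=300):
        return usage_ratio * bias * budget

    def quantize(target, floor):
        p = 1
        while p < target:          # round up to the next power of two...
            p *= 2
        return max(p, floor)       # ...but never below the pool's floor

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557, as logged for '.mgr'
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.0006105, 'cephfs.cephfs.meta'
    print(quantize(0.0021557, floor=1))           # 1  (matches '.mgr')
    print(quantize(0.0006105, floor=16))          # 16 (matches 'cephfs.cephfs.meta')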
Oct  3 09:53:55 compute-0 python3.9[384251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:53:55 compute-0 python3.9[384327]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:56 compute-0 python3.9[384477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:57 compute-0 python3.9[384553]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json _original_basename=ceilometer_agent_ipmi.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:58 compute-0 python3.9[384703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:53:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:53:59 compute-0 python3.9[384779]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:53:59 compute-0 podman[157165]: time="2025-10-03T09:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:53:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 09:53:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8509 "" "Go-http-client/1.1"
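The two podman[157165] entries above are the podman system service's API access log: a Go client is polling the libpod REST endpoints over the unix socket (the podman_exporter container configured further down with CONTAINER_HOST=unix:///run/podman/podman.sock). The same request can be issued by hand; a minimal sketch using only the Python standard library, with the socket path taken from that config:

    # GET /v4.9.3/libpod/containers/json over podman's unix socket,
    # the same endpoint the exporter hits in the access log above.
    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host is unused over AF_UNIX
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")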
Oct  3 09:53:59 compute-0 python3.9[384929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:54:00 compute-0 python3.9[385005]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:54:01 compute-0 python3.9[385155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:54:01 compute-0 podman[385156]: 2025-10-03 09:54:01.355599472 +0000 UTC m=+0.077652878 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:54:01 compute-0 openstack_network_exporter[367524]: ERROR   09:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:54:01 compute-0 openstack_network_exporter[367524]: ERROR   09:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:54:01 compute-0 openstack_network_exporter[367524]: ERROR   09:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:54:01 compute-0 openstack_network_exporter[367524]: ERROR   09:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:54:01 compute-0 openstack_network_exporter[367524]: ERROR   09:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 09:54:01 compute-0 python3.9[385256]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json _original_basename=kepler.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:02 compute-0 python3.9[385406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:54:03 compute-0 python3.9[385482]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:03 compute-0 python3.9[385634]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:04 compute-0 python3.9[385786]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:05 compute-0 python3.9[385938]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:54:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:54:06 compute-0 python3.9[386090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:54:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:06 compute-0 python3.9[386168]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:54:07 compute-0 python3.9[386244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:54:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:08 compute-0 python3.9[386322]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:54:09 compute-0 python3.9[386474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  3 09:54:09 compute-0 podman[386524]: 2025-10-03 09:54:09.600163598 +0000 UTC m=+0.076485211 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct  3 09:54:09 compute-0 python3.9[386571]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/kepler/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/kepler/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  3 09:54:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:54:11 compute-0 podman[386695]: 2025-10-03 09:54:11.247871538 +0000 UTC m=+0.083851118 container health_status 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, name=ubi9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, release=1214.1726694543)
Oct  3 09:54:11 compute-0 podman[386697]: 2025-10-03 09:54:11.261899839 +0000 UTC m=+0.092238858 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  3 09:54:11 compute-0 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-6cef6d41888ae8b6.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 09:54:11 compute-0 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-6cef6d41888ae8b6.service: Failed with result 'exit-code'.
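The systemd failure above is podman's healthcheck machinery at work: each check runs as a transient <container-id>-<hash>.service driven by a matching timer, and a nonzero exit from the configured test command (here '/openstack/healthcheck ipmi') is what advances the health_failing_streak counter reported in the health_status events; the ceilometer_agent_ipmi event just above already shows a streak of 1. A check can be replayed by hand to observe the same exit status; a small sketch:

    # Re-run a container's configured healthcheck on demand.
    # 'podman healthcheck run' exits 0 when healthy, nonzero otherwise.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ceilometer_agent_ipmi"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")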
Oct  3 09:54:11 compute-0 podman[386696]: 2025-10-03 09:54:11.281774368 +0000 UTC m=+0.113330205 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 09:54:11 compute-0 python3.9[386776]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Oct  3 09:54:12 compute-0 python3.9[386932]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:54:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:13 compute-0 podman[387056]: 2025-10-03 09:54:13.373122236 +0000 UTC m=+0.081582025 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6)
Oct  3 09:54:13 compute-0 python3[387106]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:54:13 compute-0 python3[387106]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "4e3fcb5b1fba62258ff06f167ae06a1ec1b5619d7c6c0d986039bf8e54f8eb69",#012          "Digest": "sha256:31c0d98fec7ff16416903874af0addeff03a7e72ede256990f2a71589e8be5ce",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:31c0d98fec7ff16416903874af0addeff03a7e72ede256990f2a71589e8be5ce"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-10-02T06:24:36.894186563Z",#012          "Config": {#012               "User": "ceilometer",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251001",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "a0eac564d779a7eaac46c9816bff261a",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 506042235,#012          "VirtualSize": 506042235,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/34365f170072023ec5c8c572d7511714609a26f43c067a32144a7059987f02c5/diff:/var/lib/containers/storage/overlay/42e6de9e7202d00f42d3bd209135e03f782967c2586fadb6628837faf9793f24/diff:/var/lib/containers/storage/overlay/661e15e0dfc445ecdff08d434d5cb11b0b9a54f42dd69506bb77f4c8cd8adb25/diff:/var/lib/containers/storage/overlay/dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/2cf9b56c3b0130b731829b54bdaf9d18e56e469fb556a8f57c1e6996fceabdd0/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/2cf9b56c3b0130b731829b54bdaf9d18e56e469fb556a8f57c1e6996fceabdd0/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:dfe3535c047dfd1b56a035a76f7fcccd61101a4c7c28b14527de35475ed1e01a",#012                    "sha256:c7c80f27a004d53fb75b6d30a961f2416ea855138d9e550000fa093a1e5e384d",#012                    "sha256:b750c0fcea5f2ef8ddf8bac392b882b999626fea5ad4fc74394b8a33125ae898",#012                    "sha256:bff6b53cc8f5f5da3c1e46587d75b635f64cdcfabc11cc88956a45d827a92462",#012                    "sha256:102959d6671d6451dd9b4b86320438fa167f5fbd2002b179c4620bad7a13f452"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251001",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": "a0eac564d779a7eaac46c9816bff261a",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "ceilometer",#012          "History": [#012               {#012                    "created": "2025-10-01T03:48:01.636308726Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:6811d025892d980eece98a69cb13f590c9e0f62dda383ab9076072b45b58a87f in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-01T03:48:01.636415187Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251001\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-01T03:48:09.404099909Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757191184Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757211565Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757229405Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757245856Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757279147Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:09.757304688Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:10.233672718Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-10-02T06:10:47.227633956Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012   
Oct  3 09:54:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:14 compute-0 python3.9[387314]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:54:15 compute-0 python3.9[387468]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:54:16 compute-0 podman[387591]: 2025-10-03 09:54:16.384399719 +0000 UTC m=+0.074440596 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:54:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:16 compute-0 python3.9[387636]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759485255.6891701-391-174058550009767/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:17 compute-0 python3.9[387712]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:54:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:19 compute-0 python3.9[387866]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Oct  3 09:54:20 compute-0 python3.9[388019]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  3 09:54:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:54:21 compute-0 podman[388143]: 2025-10-03 09:54:21.252415594 +0000 UTC m=+0.061349174 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 09:54:21 compute-0 python3[388194]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Oct  3 09:54:21 compute-0 python3[388194]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7",#012          "Digest": "sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086",#012          "RepoTags": [#012               "quay.io/sustainable_computing_io/kepler:release-0.7.12"#012          ],#012          "RepoDigests": [#012               "quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd",#012               "quay.io/sustainable_computing_io/kepler@sha256:c74e63cd5740586d4c62182467bb463ef5e3dd809027aedc92c05ac19e93b086"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2024-10-15T06:30:56.315982344Z",#012          "Config": {#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "container=oci",#012                    "NVIDIA_VISIBLE_DEVICES=all",#012                    "NVIDIA_DRIVER_CAPABILITIES=utility",#012                    "NVIDIA_MIG_MONITOR_DEVICES=all",#012                    "NVIDIA_MIG_CONFIG_DEVICES=all"#012               ],#012               "Entrypoint": [#012                    "/usr/bin/kepler"#012               ],#012               "Labels": {#012                    "architecture": "x86_64",#012                    "build-date": "2024-09-18T21:23:30",#012                    "com.redhat.component": "ubi9-container",#012                    "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",#012                    "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012                    "distribution-scope": "public",#012                    "io.buildah.version": "1.29.0",#012                    "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012                    "io.k8s.display-name": "Red Hat Universal Base Image 9",#012                    "io.openshift.expose-services": "",#012                    "io.openshift.tags": "base rhel9",#012                    "maintainer": "Red Hat, Inc.",#012                    "name": "ubi9",#012                    "release": "1214.1726694543",#012                    "release-0.7.12": "",#012                    "summary": "Provides the latest release of Red Hat Universal Base Image 9.",#012                    "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",#012                    "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",#012                    "vcs-type": "git",#012                    "vendor": "Red Hat, Inc.",#012                    "version": "9.4"#012               }#012          },#012          "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 331545571,#012          "VirtualSize": 331545571,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/de1557109facda5eb038045e25371b06ad2baf5cf32c60a7fe84a603bee1e079/diff:/var/lib/containers/storage/overlay/725f7e4e3b8edde36f0bdcd313bbaf872dbe55b162264f8008ee3c09a0b89b66/diff:/var/lib/containers/storage/overlay/573769ea2305456dffa2f0674424aa020c1494387d36bcccb339788fd220d39b/diff:/var/lib/containers/storage/overlay/56a7d751d1997fb4e9fb31bd07356a0c9a7699a9bb524feeb3c7fe2b433b8223/diff:/var/lib/containers/storage/overlay/0560e6233aa93f1e1ac7bed53255811f32dc680869ef7f31dd630efc1203b853/diff:/var/lib/containers/storage/overlay/8d984035cdde48f32944ddaa464ac42d376faabc98415168800b2b8c9aec0930/diff:/var/lib/containers/storage/overlay/e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/ed698de2bb3f7ef46422d45edf0654a1764e700cec794f481dab0a1f34f51932/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:e7328e803158cca63d8efdbe1caefb1b51654de77e5fa8691079ad06db1abf75",#012                    "sha256:f947b23b2d0723eac9b608b79e6d48e59d90f74958e05f2762295489e0088e86",#012                    "sha256:3bf6ab40cc16a103a087232c2c6a1a093dcb6141e70397de57907f5d00741429",#012                    "sha256:2f5269f1ade14b3b0806305a0b2d3efffe65a187b302789a50ac00bcb815b960",#012                    "sha256:413f5abb84bd1c03bdfd9c1e0dec8f4be92159c9c6116c4e44247efcdcc6b518",#012                    "sha256:60c06a2423851502fc43aec0680b91181b0d62b52812c019d3fc66f1546c4529",#012                    "sha256:323ce4bcad35618db6032dd5bfbd6c8ebb0cde882f730b19296d0ceaf5e39427",#012                    "sha256:270b3386a8e4a2127a32b007abfea7cb394ae1dee577ee7fefdbb79cd2bea856"#012               ]#012          },#012          "Labels": {#012               "architecture": "x86_64",#012               "build-date": "2024-09-18T21:23:30",#012               "com.redhat.component": "ubi9-container",#012               "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",#012               "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012               "distribution-scope": "public",#012               "io.buildah.version": "1.29.0",#012               "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.",#012               "io.k8s.display-name": "Red Hat Universal Base Image 9",#012               "io.openshift.expose-services": "",#012               "io.openshift.tags": "base rhel9",#012               "maintainer": "Red Hat, Inc.",#012               "name": "ubi9",#012               "release": "1214.1726694543",#012               "release-0.7.12": "",#012               "summary": "Provides the latest release of Red Hat Universal Base Image 9.",#012               "url": "https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543",#012               "vcs-ref": "e309397d02fc53f7fa99db1371b8700eb49f268f",#012               "vcs-type": "git",#012               "vendor": "Red Hat, Inc.",#012               "version": "9.4"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.oci.image.manifest.v1+json",#012          "User": "",#012          "History": [#012               {#012                    "created": "2024-09-18T21:36:31.099323493Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:0067eb9f2ee25ab2d666a7639a85fe707b582902a09242761abf30c53664069b in / ",#012                    "empty_layer": true#012               },#012               {#012 
Oct  3 09:54:22 compute-0 kepler[176935]: I1003 09:54:22.043919       1 exporter.go:218] Received shutdown signal
Oct  3 09:54:22 compute-0 kepler[176935]: I1003 09:54:22.044495       1 exporter.go:226] Exiting...
Oct  3 09:54:22 compute-0 systemd[1]: libpod-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope: Deactivated successfully.
Oct  3 09:54:22 compute-0 systemd[1]: libpod-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.scope: Consumed 27.355s CPU time.
Oct  3 09:54:22 compute-0 podman[388243]: 2025-10-03 09:54:22.119815059 +0000 UTC m=+0.226731553 container died 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64)
Oct  3 09:54:22 compute-0 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.timer: Deactivated successfully.
Oct  3 09:54:22 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8.
Oct  3 09:54:22 compute-0 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.service: Failed to open /run/systemd/transient/02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.service: No such file or directory
Oct  3 09:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-userdata-shm.mount: Deactivated successfully.
Oct  3 09:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5893b20fbf3101d69c6051d87d970cd31b130343e0eb2911a2fa23bf76bf9532-merged.mount: Deactivated successfully.
Oct  3 09:54:22 compute-0 podman[388243]: 2025-10-03 09:54:22.168602337 +0000 UTC m=+0.275518831 container cleanup 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 09:54:22 compute-0 python3[388194]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop kepler
Oct  3 09:54:22 compute-0 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.timer: Failed to open /run/systemd/transient/02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.timer: No such file or directory
Oct  3 09:54:22 compute-0 systemd[1]: 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.service: Failed to open /run/systemd/transient/02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8-74c3e41ed1965459.service: No such file or directory
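
The two "Failed to open /run/systemd/transient/..." messages are a benign teardown race: podman drives each container healthcheck from a transient timer/service pair named <container-id>-<hash>.timer/.service, and by the time systemd re-reads those unit files the stop path has already deleted them. A quick check that nothing was left behind (a sketch; the container ID is copied from the lines above):

    from pathlib import Path

    # Healthcheck units for a container live in /run/systemd/transient and
    # are named "<container-id>-<hash>.timer" / ".service" (see log above).
    cid = "02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8"
    leftovers = sorted(p.name for p in Path("/run/systemd/transient").glob(cid + "-*"))
    print(leftovers or "transient healthcheck units already cleaned up")
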
Oct  3 09:54:22 compute-0 podman[388270]: 2025-10-03 09:54:22.240913764 +0000 UTC m=+0.053998098 container remove 02a40dc9785b2eb5971a01f74fcdde1bd867f4eb53add5794b99110a8b9762e8 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, architecture=x86_64, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:54:22 compute-0 podman[388271]: Error: no container with name or ID "kepler" found: no such container
Oct  3 09:54:22 compute-0 python3[388194]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force kepler
Oct  3 09:54:22 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Oct  3 09:54:22 compute-0 podman[388297]: Error: no container with name or ID "kepler" found: no such container
Oct  3 09:54:22 compute-0 systemd[1]: edpm_kepler.service: Control process exited, code=exited, status=125/n/a
Oct  3 09:54:22 compute-0 systemd[1]: edpm_kepler.service: Failed with result 'exit-code'.
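
The exit-code failure above is a consequence of ordering, not of the container itself: the container was already removed (the "container remove" event), so the subsequent podman stop kepler and podman rm --force kepler both report "no such container" and exit 125, which systemd records as a control-process failure. A hedged sketch of an idempotent removal wrapper that treats "already gone" as success (the error-string match mirrors the message in this log; recent podman also offers --ignore on stop/rm for the same purpose):

    import subprocess

    def remove_container_idempotent(name: str) -> None:
        """Remove a container, treating 'already removed' as success.

        Hypothetical helper; matches the 'no container with name or ID
        ... found' error text seen in the log above.
        """
        proc = subprocess.run(
            ["podman", "rm", "--force", name],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            return
        if "no container with name or ID" in proc.stderr:
            return  # container was already removed; not an error
        raise RuntimeError(f"podman rm failed ({proc.returncode}): {proc.stderr.strip()}")
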
Oct  3 09:54:22 compute-0 podman[388296]: 2025-10-03 09:54:22.31422063 +0000 UTC m=+0.052327813 container create be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, version=9.4, io.openshift.tags=base rhel9)
Oct  3 09:54:22 compute-0 podman[388296]: 2025-10-03 09:54:22.286723736 +0000 UTC m=+0.024830939 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Oct  3 09:54:22 compute-0 python3[388194]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
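
The PODMAN-CONTAINER-DEBUG line shows how the config_data label maps onto the generated command: each environment key becomes --env, the healthcheck test becomes --healthcheck-command, net: host becomes --network host, ports entries become --publish, and each volumes entry passes through as --volume. A minimal sketch of that mapping, covering only the keys visible in this log (the real edpm_ansible module handles many more options):

    def podman_create_args(name: str, cfg: dict) -> list[str]:
        """Build a podman create argv from an edpm-style config dict.

        Illustrative only; mirrors the flags seen in the log above.
        """
        argv = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        if "healthcheck" in cfg:
            argv += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net") == "host":
            argv += ["--network", "host"]
        if cfg.get("privileged") == "true":
            argv += ["--privileged=True"]
        for port in cfg.get("ports", []):
            argv += ["--publish", port]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        if "command" in cfg:
            argv.append(cfg["command"])
        return argv
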
Oct  3 09:54:22 compute-0 systemd[1]: Started libpod-conmon-be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a.scope.
Oct  3 09:54:22 compute-0 systemd[1]: edpm_kepler.service: Scheduled restart job, restart counter is at 1.
Oct  3 09:54:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:54:22 compute-0 systemd[1]: Stopped kepler container.
Oct  3 09:54:22 compute-0 systemd[1]: Starting kepler container...
Oct  3 09:54:22 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a.
Oct  3 09:54:22 compute-0 podman[388320]: 2025-10-03 09:54:22.53401519 +0000 UTC m=+0.198114993 container init be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=)
Oct  3 09:54:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:22 compute-0 podman[388320]: 2025-10-03 09:54:22.560685427 +0000 UTC m=+0.224785210 container start be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., version=9.4, release=1214.1726694543, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Oct  3 09:54:22 compute-0 kepler[388336]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct  3 09:54:22 compute-0 kepler[388336]: I1003 09:54:22.587254       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct  3 09:54:22 compute-0 kepler[388336]: I1003 09:54:22.587778       1 config.go:293] using gCgroup ID in the BPF program: true
Oct  3 09:54:22 compute-0 kepler[388336]: I1003 09:54:22.587819       1 config.go:295] kernel version: 5.14
Oct  3 09:54:22 compute-0 kepler[388336]: I1003 09:54:22.588500       1 power.go:78] Unable to obtain power, use estimate method
Oct  3 09:54:22 compute-0 kepler[388336]: I1003 09:54:22.588522       1 redfish.go:169] failed to get redfish credential file path
Oct  3 09:54:22 compute-0 kepler[388336]: I1003 09:54:22.588895       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct  3 09:54:22 compute-0 kepler[388336]: I1003 09:54:22.588923       1 power.go:79] using none to obtain power
Oct  3 09:54:22 compute-0 kepler[388336]: E1003 09:54:22.588937       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct  3 09:54:22 compute-0 kepler[388336]: E1003 09:54:22.588960       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct  3 09:54:22 compute-0 kepler[388336]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct  3 09:54:22 compute-0 kepler[388336]: I1003 09:54:22.590804       1 exporter.go:84] Number of CPUs: 8
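
The two "failed to read int from file" warnings are harmless on most x86 hosts: cpu0 usually cannot be hot-unplugged, so /sys/devices/system/cpu/cpu0/online simply does not exist and readers have to assume the CPU is online. A small sketch of that convention:

    from pathlib import Path

    def cpu_is_online(cpu: int) -> bool:
        """Read /sys/devices/system/cpu/cpuN/online.

        cpu0 typically has no 'online' file because it cannot be
        hot-unplugged; treat a missing file as online.
        """
        path = Path(f"/sys/devices/system/cpu/cpu{cpu}/online")
        try:
            return path.read_text().strip() == "1"
        except FileNotFoundError:
            return True
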
Oct  3 09:54:22 compute-0 python3[388194]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start kepler
Oct  3 09:54:22 compute-0 podman[388339]: kepler
Oct  3 09:54:22 compute-0 systemd[1]: Started kepler container.
Oct  3 09:54:22 compute-0 podman[388352]: 2025-10-03 09:54:22.78806198 +0000 UTC m=+0.217797776 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, version=9.4, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543)
Oct  3 09:54:22 compute-0 systemd[1]: be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a-bfaa34734942ace.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 09:54:22 compute-0 systemd[1]: be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a-bfaa34734942ace.service: Failed with result 'exit-code'.
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.151025       1 watcher.go:83] Using in cluster k8s config
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.152629       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Oct  3 09:54:23 compute-0 kepler[388336]: E1003 09:54:23.152806       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
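
The watcher failure is expected on a standalone EDPM node: the in-cluster Kubernetes config relies on the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables that the kubelet injects into pods, and outside a pod they are unset, so the k8s API-server watcher stays disabled and Kepler carries on without it. A minimal reproduction of the check the error message describes:

    import os

    def in_cluster_config_available() -> bool:
        """Mirror the in-cluster detection that failed in the log:
        both variables are injected by the kubelet inside a pod."""
        return bool(os.environ.get("KUBERNETES_SERVICE_HOST")
                    and os.environ.get("KUBERNETES_SERVICE_PORT"))
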
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.156921       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.156951       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.160701       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.160727       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.170684       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.170737       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.170760       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178028       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178055       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178060       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178064       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178070       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178079       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
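
Taken together, these lines show the fallback chain inside a VM: no RAPL, Redfish, or ACPI power meter is reachable ("using none to obtain power"), so node power is estimated by the regressor models, and the Ratio Power Model then splits that estimate across processes in proportion to features such as bpf_cpu_time_ms. A toy illustration of ratio-style apportionment (not Kepler's actual code):

    def apportion_power(node_power_w: float, cpu_time_ms: dict[str, float]) -> dict[str, float]:
        """Split node power across processes by their share of CPU time."""
        total = sum(cpu_time_ms.values()) or 1.0
        return {pid: node_power_w * t / total for pid, t in cpu_time_ms.items()}

    # e.g. 40 W spread over three processes:
    # apportion_power(40.0, {"a": 300.0, "b": 100.0, "c": 100.0})
    # -> {'a': 24.0, 'b': 8.0, 'c': 8.0}
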
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178142       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178163       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178181       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178196       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178314       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Oct  3 09:54:23 compute-0 kepler[388336]: I1003 09:54:23.178954       1 exporter.go:208] Started Kepler in 591.883116ms
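
With the exporter listening on 0.0.0.0:8888 (the same port published in config_data), the exposition can be spot-checked from the host. A minimal probe, assuming the conventional /metrics path that Prometheus exporters serve:

    from urllib.request import urlopen

    # /metrics is the conventional Prometheus exposition path; adjust if
    # the deployment serves it elsewhere.
    with urlopen("http://localhost:8888/metrics", timeout=5) as resp:
        body = resp.read().decode()
    print("\n".join(body.splitlines()[:10]))  # first few exposition lines
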
Oct  3 09:54:23 compute-0 python3.9[388567]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  3 09:54:24 compute-0 podman[388694]: 2025-10-03 09:54:24.190957887 +0000 UTC m=+0.072288366 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 09:54:24 compute-0 podman[388693]: 2025-10-03 09:54:24.214178424 +0000 UTC m=+0.101834887 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 09:54:24 compute-0 python3.9[388755]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.728 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.742 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.743 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.771 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.772 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.772 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.772 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 09:54:24 compute-0 nova_compute[351685]: 2025-10-03 09:54:24.772 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 09:54:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:54:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364420413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:54:25 compute-0 python3.9[388926]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759485264.5840957-449-230404200205895/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  3 09:54:25 compute-0 nova_compute[351685]: 2025-10-03 09:54:25.277 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 09:54:25 compute-0 nova_compute[351685]: 2025-10-03 09:54:25.583 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 09:54:25 compute-0 nova_compute[351685]: 2025-10-03 09:54:25.584 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4601MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 09:54:25 compute-0 nova_compute[351685]: 2025-10-03 09:54:25.585 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:54:25 compute-0 nova_compute[351685]: 2025-10-03 09:54:25.585 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:54:25 compute-0 nova_compute[351685]: 2025-10-03 09:54:25.644 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 09:54:25 compute-0 nova_compute[351685]: 2025-10-03 09:54:25.645 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 09:54:25 compute-0 nova_compute[351685]: 2025-10-03 09:54:25.659 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 09:54:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:54:25 compute-0 python3.9[389004]: ansible-systemd Invoked with state=started name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  3 09:54:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:54:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3361814501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:54:26 compute-0 nova_compute[351685]: 2025-10-03 09:54:26.107 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 09:54:26 compute-0 nova_compute[351685]: 2025-10-03 09:54:26.114 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 09:54:26 compute-0 nova_compute[351685]: 2025-10-03 09:54:26.309 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 09:54:26 compute-0 nova_compute[351685]: 2025-10-03 09:54:26.312 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 09:54:26 compute-0 nova_compute[351685]: 2025-10-03 09:54:26.312 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
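
The inventory dict reported a few lines above is what placement uses to compute schedulable capacity: usable = (total - reserved) * allocation_ratio per resource class. Worked out for the reported values:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 53.1
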
Oct  3 09:54:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:26 compute-0 python3.9[389183]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:54:26 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Oct  3 09:54:27 compute-0 ceilometer_agent_ipmi[176659]: 2025-10-03 09:54:27.156 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Oct  3 09:54:27 compute-0 ceilometer_agent_ipmi[176659]: 2025-10-03 09:54:27.258 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Oct  3 09:54:27 compute-0 ceilometer_agent_ipmi[176659]: 2025-10-03 09:54:27.258 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Oct  3 09:54:27 compute-0 ceilometer_agent_ipmi[176659]: 2025-10-03 09:54:27.258 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Oct  3 09:54:27 compute-0 ceilometer_agent_ipmi[176659]: 2025-10-03 09:54:27.265 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
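
The cotyledon lines trace a two-stage graceful stop: the master catches SIGTERM, relays SIGTERM to its service child (AgentManager(0)), waits for it to exit, then logs "Shutdown finish". A generic sketch of the same parent/child pattern, using plain multiprocessing rather than cotyledon's API:

    import multiprocessing
    import os
    import signal
    import time

    def service() -> None:
        # graceful child exit on SIGTERM, like AgentManager above
        signal.signal(signal.SIGTERM, lambda *_: os._exit(0))
        while True:
            time.sleep(1)

    def main() -> None:
        child = multiprocessing.Process(target=service)
        child.start()

        def on_sigterm(*_):
            child.terminate()        # forward SIGTERM to the service
            child.join(timeout=10)   # "Waiting services to terminate"
            raise SystemExit(0)      # "Shutdown finish"

        signal.signal(signal.SIGTERM, on_sigterm)
        child.join()

    if __name__ == "__main__":
        main()
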
Oct  3 09:54:27 compute-0 nova_compute[351685]: 2025-10-03 09:54:27.300 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:27 compute-0 nova_compute[351685]: 2025-10-03 09:54:27.337 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:27 compute-0 nova_compute[351685]: 2025-10-03 09:54:27.338 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:27 compute-0 nova_compute[351685]: 2025-10-03 09:54:27.338 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:27 compute-0 nova_compute[351685]: 2025-10-03 09:54:27.339 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 09:54:27 compute-0 systemd[1]: libpod-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope: Deactivated successfully.
Oct  3 09:54:27 compute-0 systemd[1]: libpod-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.scope: Consumed 3.052s CPU time.
Oct  3 09:54:27 compute-0 podman[389187]: 2025-10-03 09:54:27.483082841 +0000 UTC m=+0.541682131 container died e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Oct  3 09:54:27 compute-0 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-6cef6d41888ae8b6.timer: Deactivated successfully.
Oct  3 09:54:27 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.
Oct  3 09:54:27 compute-0 nova_compute[351685]: 2025-10-03 09:54:27.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:27 compute-0 nova_compute[351685]: 2025-10-03 09:54:27.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-userdata-shm.mount: Deactivated successfully.
Oct  3 09:54:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c-merged.mount: Deactivated successfully.
Oct  3 09:54:27 compute-0 podman[389187]: 2025-10-03 09:54:27.852816132 +0000 UTC m=+0.911415442 container cleanup e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  3 09:54:27 compute-0 podman[389187]: ceilometer_agent_ipmi
Oct  3 09:54:27 compute-0 podman[389215]: ceilometer_agent_ipmi
Oct  3 09:54:27 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Oct  3 09:54:27 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Oct  3 09:54:27 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Oct  3 09:54:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct  3 09:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Oct  3 09:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Oct  3 09:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82dab6352b7fe95bce8343d5b29441da55af56ff7858d9613cc673c93b41ca2c/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
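
The xfs lines are informational y2038 notices: with 32-bit signed timestamps the filesystem can represent times only up to 0x7fffffff seconds past the epoch, which matches the cutoff in the message:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
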
Oct  3 09:54:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7.
Oct  3 09:54:28 compute-0 podman[389227]: 2025-10-03 09:54:28.246879704 +0000 UTC m=+0.278123644 container init e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + sudo -E kolla_set_configs
Oct  3 09:54:28 compute-0 podman[389227]: 2025-10-03 09:54:28.290276851 +0000 UTC m=+0.321520761 container start e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Oct  3 09:54:28 compute-0 podman[389227]: ceilometer_agent_ipmi
Oct  3 09:54:28 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Oct  3 09:54:28 compute-0 podman[389248]: 2025-10-03 09:54:28.40249696 +0000 UTC m=+0.102383534 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 09:54:28 compute-0 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-1388d357420f114c.service: Main process exited, code=exited, status=1/FAILURE
Oct  3 09:54:28 compute-0 systemd[1]: e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7-1388d357420f114c.service: Failed with result 'exit-code'.
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Validating config file
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Copying service configuration files
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: INFO:__main__:Writing out command to execute
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: ++ cat /run_command
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + ARGS=
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + sudo kolla_copy_cacerts
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + [[ ! -n '' ]]
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + . kolla_extend_start
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + umask 0022
Oct  3 09:54:28 compute-0 ceilometer_agent_ipmi[389241]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
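The kolla_start trace above is the whole COPY_ALWAYS contract: read /var/lib/kolla/config_files/config.json, validate it, delete-then-copy each listed file into place, fix permissions, write the service command to /run_command, and finally exec it. A minimal Python sketch of that flow, assuming a simplified config schema (the real kolla set_configs.py also handles owners, globs, and directories):

    import json
    import os
    import shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"

    def main():
        with open(CONFIG) as f:
            cfg = json.load(f)  # {"command": ..., "config_files": [{"source", "dest", "perm"}, ...]}
        for item in cfg.get("config_files", []):
            dest = item["dest"]
            if os.path.exists(dest):
                os.remove(dest)                        # "Deleting /etc/ceilometer/..."
            shutil.copy(item["source"], dest)          # "Copying ... to ..."
            os.chmod(dest, int(item.get("perm", "0600"), 8))  # "Setting permission for ..."
        with open("/run_command", "w") as f:           # read back later ("++ cat /run_command")
            f.write(cfg["command"])
        os.execvp("/bin/sh", ["/bin/sh", "-c", cfg["command"]])  # "+ exec ..."

    if __name__ == "__main__":
        main()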
Oct  3 09:54:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:54:29 compute-0 python3.9[389422]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  3 09:54:29 compute-0 systemd[1]: Stopping kepler container...
Oct  3 09:54:29 compute-0 kepler[388336]: I1003 09:54:29.450075       1 exporter.go:218] Received shutdown signal
Oct  3 09:54:29 compute-0 kepler[388336]: I1003 09:54:29.451163       1 exporter.go:226] Exiting...
Oct  3 09:54:29 compute-0 systemd[1]: libpod-be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a.scope: Deactivated successfully.
Oct  3 09:54:29 compute-0 podman[389426]: 2025-10-03 09:54:29.660700074 +0000 UTC m=+0.345178692 container died be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, name=ubi9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 09:54:29 compute-0 systemd[1]: be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a-bfaa34734942ace.timer: Deactivated successfully.
Oct  3 09:54:29 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a.
Oct  3 09:54:29 compute-0 nova_compute[351685]: 2025-10-03 09:54:29.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:29 compute-0 nova_compute[351685]: 2025-10-03 09:54:29.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:54:29 compute-0 podman[157165]: time="2025-10-03T09:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:54:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a-userdata-shm.mount: Deactivated successfully.
Oct  3 09:54:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2e25f5d2884281c6d6a54195ebe83d743d8fe4a6c9b98acb7394e80547db0c2-merged.mount: Deactivated successfully.
Oct  3 09:54:29 compute-0 podman[389426]: 2025-10-03 09:54:29.839209205 +0000 UTC m=+0.523687813 container cleanup be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release=1214.1726694543, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, architecture=x86_64)
Oct  3 09:54:29 compute-0 podman[389426]: kepler
Oct  3 09:54:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45029 "" "Go-http-client/1.1"
Oct  3 09:54:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8096 "" "Go-http-client/1.1"
Oct  3 09:54:29 compute-0 systemd[1]: libpod-conmon-be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a.scope: Deactivated successfully.
Oct  3 09:54:29 compute-0 podman[389453]: kepler
Oct  3 09:54:29 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Oct  3 09:54:29 compute-0 systemd[1]: Stopped kepler container.
Oct  3 09:54:29 compute-0 systemd[1]: Starting kepler container...
Oct  3 09:54:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 09:54:30 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a.
Oct  3 09:54:30 compute-0 podman[389466]: 2025-10-03 09:54:30.326560598 +0000 UTC m=+0.384365912 container init be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Oct  3 09:54:30 compute-0 kepler[389482]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct  3 09:54:30 compute-0 kepler[389482]: I1003 09:54:30.352116       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Oct  3 09:54:30 compute-0 kepler[389482]: I1003 09:54:30.352219       1 config.go:293] using gCgroup ID in the BPF program: true
Oct  3 09:54:30 compute-0 kepler[389482]: I1003 09:54:30.352276       1 config.go:295] kernel version: 5.14
Oct  3 09:54:30 compute-0 kepler[389482]: I1003 09:54:30.352750       1 power.go:78] Unable to obtain power, use estimate method
Oct  3 09:54:30 compute-0 kepler[389482]: I1003 09:54:30.352767       1 redfish.go:169] failed to get redfish credential file path
Oct  3 09:54:30 compute-0 kepler[389482]: I1003 09:54:30.353052       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Oct  3 09:54:30 compute-0 kepler[389482]: I1003 09:54:30.353062       1 power.go:79] using none to obtain power
Oct  3 09:54:30 compute-0 kepler[389482]: E1003 09:54:30.353074       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Oct  3 09:54:30 compute-0 kepler[389482]: E1003 09:54:30.353089       1 exporter.go:154] failed to init GPU accelerators: no devices found
Oct  3 09:54:30 compute-0 kepler[389482]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Oct  3 09:54:30 compute-0 kepler[389482]: I1003 09:54:30.354482       1 exporter.go:84] Number of CPUs: 8
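The kepler startup above walks a power-source fallback chain: no hardware power reading is available in this Nova guest, the Redfish credential file path is unset, and no ACPI power meter exists, so the exporter settles on "none" and estimates power in software (GPU accelerator init likewise fails on a host with no devices). A sketch of that selection order, with hypothetical function and parameter names:

    def pick_power_source(has_rapl, redfish_creds, has_acpi_meter):
        # Mirrors the order implied by the log: hardware meters first, estimation last.
        if has_rapl:
            return "rapl"
        if redfish_creds is not None:
            return "redfish"   # skipped: "failed to get redfish credential file path"
        if has_acpi_meter:
            return "acpi"      # skipped: "Could not find any ACPI power meter path. Is it a VM?"
        return "none"          # "using none to obtain power" -> "use estimate method"

    assert pick_power_source(False, None, False) == "none"  # the situation in this log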
Oct  3 09:54:30 compute-0 podman[389466]: 2025-10-03 09:54:30.358061621 +0000 UTC m=+0.415866925 container start be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, name=ubi9, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.462 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.463 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.464 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.465 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.466 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.467 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.468 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.468 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.468 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.468 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.468 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.468 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.468 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.468 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.469 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.470 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.471 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.472 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.473 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.473 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.473 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.473 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.473 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.474 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.475 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.476 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.477 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.477 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  3 09:54:30 compute-0 ceilometer_agent_ipmi[389241]: 2025-10-03 09:54:30.477 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
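The long block above is oslo.config's standard startup dump: ceilometer-polling logs every resolved option at DEBUG between the ******** and ======== rulers, and any option registered as secret (coordination.backend_url, publisher.telemetry_secret, the rgw keys, vmware.host_password) is masked as ****. A minimal, self-contained reproduction using the real oslo.config API:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt("batch_size", default=50),
        cfg.StrOpt("telemetry_secret", secret=True),  # secret=True is what produces ****
    ])
    CONF([], project="demo")                   # no config files needed for the demo
    CONF.set_override("telemetry_secret", "s3cret")
    CONF.log_opt_values(LOG, logging.DEBUG)    # the helper seen at oslo_config/cfg.py:2589-2609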
Oct  3 09:59:29 compute-0 nova_compute[351685]: 2025-10-03 09:59:29.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:59:29 compute-0 nova_compute[351685]: 2025-10-03 09:59:29.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 09:59:29 compute-0 podman[157165]: time="2025-10-03T09:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:59:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 09:59:29 compute-0 podman[157165]: @ - - [03/Oct/2025:09:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8527 "" "Go-http-client/1.1"
Oct  3 09:59:29 compute-0 rsyslogd[187556]: imjournal: 3532 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  3 09:59:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:30 compute-0 nova_compute[351685]: 2025-10-03 09:59:30.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:59:30 compute-0 nova_compute[351685]: 2025-10-03 09:59:30.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 09:59:30 compute-0 nova_compute[351685]: 2025-10-03 09:59:30.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:59:30 compute-0 nova_compute[351685]: 2025-10-03 09:59:30.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:59:30 compute-0 nova_compute[351685]: 2025-10-03 09:59:30.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 09:59:30 compute-0 nova_compute[351685]: 2025-10-03 09:59:30.764 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 09:59:30 compute-0 nova_compute[351685]: 2025-10-03 09:59:30.764 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
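To audit Ceph-backed storage, the resource tracker shells out to the exact command on the line above via oslo.concurrency. A hedged sketch of the same call and one plausible way to read the result (the pool name and the fields picked out are illustrative, not nova's actual parsing):

    import json
    from oslo_concurrency import processutils

    def ceph_pool_stats(pool="vms"):
        # processutils.execute returns (stdout, stderr); the command matches the log line.
        out, _err = processutils.execute(
            "ceph", "df", "--format=json",
            "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
        report = json.loads(out)
        for p in report.get("pools", []):
            if p["name"] == pool:
                return p["stats"]  # e.g. stats["max_avail"], stats["bytes_used"]
        return None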
Oct  3 09:59:30 compute-0 podman[409240]: 2025-10-03 09:59:30.798738442 +0000 UTC m=+0.065120294 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 09:59:30 compute-0 podman[409241]: 2025-10-03 09:59:30.804752105 +0000 UTC m=+0.069048201 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
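Annotation: both health_status=healthy events above come from podman running each container's configured healthcheck test ('/openstack/healthcheck'). The same probe can be fired by hand; a sketch assuming rootful podman and the container names from the log.

    import subprocess

    for name in ('multipathd', 'iscsid'):
        # 'podman healthcheck run' exits 0 when the configured test passes.
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')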
Oct  3 09:59:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:59:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:59:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810395451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:59:31 compute-0 nova_compute[351685]: 2025-10-03 09:59:31.258 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
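Annotation: the "Running cmd" / "CMD ... returned" pair above is oslo.concurrency's processutils wrapping a plain ceph CLI call, and the mon audit entries in between show the same request arriving as a df mon_command. A short sketch of the identical call; it assumes a reachable cluster, the client.openstack keyring, and ceph's usual df JSON layout.

    import json
    from oslo_concurrency import processutils

    # processutils.execute emits the "Running cmd (subprocess)" and
    # "CMD ... returned: 0 in N.NNNs" DEBUG lines seen above.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_used_bytes'], '/', stats['total_bytes'], 'bytes used')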
Oct  3 09:59:31 compute-0 openstack_network_exporter[367524]: ERROR   09:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 09:59:31 compute-0 openstack_network_exporter[367524]: ERROR   09:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:59:31 compute-0 openstack_network_exporter[367524]: ERROR   09:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 09:59:31 compute-0 openstack_network_exporter[367524]: ERROR   09:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 09:59:31 compute-0 openstack_network_exporter[367524]: ERROR   09:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
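Annotation: the exporter errors above are appctl-style lookups failing: such clients resolve a <daemon>.<pid>.ctl control socket under the daemon's run directory before issuing commands like dpif-netdev/pmd-perf-show. A sketch of that existence check; the directories are assumptions (ovn-northd normally keeps its socket under a separate OVN run dir).

    import glob
    import os

    checks = {
        'ovsdb-server': os.environ.get('OVS_RUNDIR', '/var/run/openvswitch'),
        'ovn-northd':   os.environ.get('OVN_RUNDIR', '/var/run/ovn'),
    }
    for daemon, run_dir in checks.items():
        hits = glob.glob(os.path.join(run_dir, f'{daemon}.*.ctl'))
        print(daemon, '->', hits[0] if hits else 'no control socket files found')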
Oct  3 09:59:31 compute-0 nova_compute[351685]: 2025-10-03 09:59:31.594 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
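Annotation: a tiny sketch of the condition this warning implies; purely illustrative, with a hypothetical cell-to-socket mapping (nova's real detection reads the libvirt host topology).

    # Hypothetical topology: NUMA cell 0 contains CPUs from sockets 0 and 1.
    sockets_per_cell = {0: {0, 1}}
    if any(len(s) > 1 for s in sockets_per_cell.values()):
        print('multiple sockets per NUMA node; '
              '`socket` PCI NUMA affinity will not be supported')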
Oct  3 09:59:31 compute-0 nova_compute[351685]: 2025-10-03 09:59:31.596 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4581MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 09:59:31 compute-0 nova_compute[351685]: 2025-10-03 09:59:31.596 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 09:59:31 compute-0 nova_compute[351685]: 2025-10-03 09:59:31.596 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 09:59:31 compute-0 nova_compute[351685]: 2025-10-03 09:59:31.746 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 09:59:31 compute-0 nova_compute[351685]: 2025-10-03 09:59:31.746 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 09:59:31 compute-0 nova_compute[351685]: 2025-10-03 09:59:31.767 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 09:59:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 09:59:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644687972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 09:59:32 compute-0 nova_compute[351685]: 2025-10-03 09:59:32.245 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 09:59:32 compute-0 nova_compute[351685]: 2025-10-03 09:59:32.253 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 09:59:32 compute-0 nova_compute[351685]: 2025-10-03 09:59:32.318 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
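Annotation: the inventory dict above determines what placement can offer for this node: capacity per resource class is int((total - reserved) * allocation_ratio). Working the reported numbers:

    inventory = {   # values copied from the inventory line above
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, cap)   # VCPU 32, MEMORY_MB 7167, DISK_GB 53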
Oct  3 09:59:32 compute-0 nova_compute[351685]: 2025-10-03 09:59:32.319 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 09:59:32 compute-0 nova_compute[351685]: 2025-10-03 09:59:32.320 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 09:59:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:34 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Oct  3 09:59:34 compute-0 systemd[1]: session-63.scope: Consumed 9.069s CPU time.
Oct  3 09:59:34 compute-0 systemd-logind[798]: Session 63 logged out. Waiting for processes to exit.
Oct  3 09:59:34 compute-0 systemd-logind[798]: Removed session 63.
Oct  3 09:59:34 compute-0 podman[409320]: 2025-10-03 09:59:34.538129362 +0000 UTC m=+0.071447098 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 09:59:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:35 compute-0 nova_compute[351685]: 2025-10-03 09:59:35.321 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:59:35 compute-0 nova_compute[351685]: 2025-10-03 09:59:35.322 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:59:35 compute-0 nova_compute[351685]: 2025-10-03 09:59:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:59:35 compute-0 nova_compute[351685]: 2025-10-03 09:59:35.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:59:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:59:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:37 compute-0 nova_compute[351685]: 2025-10-03 09:59:37.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 09:59:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:38 compute-0 podman[409343]: 2025-10-03 09:59:38.823503334 +0000 UTC m=+0.086140460 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9)
Oct  3 09:59:38 compute-0 podman[409342]: 2025-10-03 09:59:38.836346567 +0000 UTC m=+0.101930018 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 09:59:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.880 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.881 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
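Annotation: the two lines above say this agent has more pollsters than worker threads, so everything funnels through a single-thread executor. A minimal sketch of that dispatch pattern; an illustrative stand-in, not ceilometer's code.

    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        # real pollsters would run discovery and collect samples here
        return f'polled {meter}'

    meters = ['network.outgoing.packets.drop', 'disk.device.capacity', 'cpu']
    with ThreadPoolExecutor(max_workers=1) as executor:   # "[1] threads" above
        for future in [executor.submit(poll, m) for m in meters]:
            print(future.result())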
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.881 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.884 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.885 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.885 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.885 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.886 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.886 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.886 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95c09280>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.887 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 09:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 09:59:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
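The ceilometer_agent_compute lines above all follow one pattern: for each pollster, run the local_instances discovery, skip the pollster when discovery returns nothing (manager.py:321), otherwise finish processing it (manager.py:272). A minimal sketch of that control flow, assuming illustrative stand-in names rather than the real ceilometer classes:

    # Hedged sketch of the polling-loop pattern visible in the
    # ceilometer.polling.manager DEBUG lines above. discover_local_instances
    # and run_polling_task are illustrative, not the actual ceilometer API.
    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("polling.sketch")

    def discover_local_instances():
        # No instances run on this host, so discovery is empty, which is
        # why the log shows "Skip pollster ..." for every network pollster.
        return []

    def run_polling_task(pollsters):
        for name in pollsters:
            LOG.debug("Executing discovery process for pollster [%s]", name)
            resources = discover_local_instances()
            if not resources:
                LOG.debug("Skip pollster %s, no resources found this cycle", name)
                continue
            # get_samples(resources) would run here
            LOG.debug("Finished processing pollster [%s].", name)

    run_polling_task(["memory.usage", "network.incoming.bytes"])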
Oct  3 09:59:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:59:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:59:41.583 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 09:59:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:59:41.584 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 09:59:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 09:59:41.584 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
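The three lockutils lines are the acquire/held/release trace that oslo_concurrency emits around a named in-process lock. A minimal sketch of the usual pattern behind such a trace, assuming oslo.concurrency is installed; the decorated function name is illustrative:

    # Hedged sketch: a named lock via oslo_concurrency, which produces
    # exactly this Acquiring / acquired / released DEBUG triplet.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # the monitor's actual per-child liveness check runs here

    check_child_processes()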
Oct  3 09:59:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_09:59:46
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'backups', 'volumes', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
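"prepared 0/10 changes" means the upmap balancer evaluated the listed pools and found nothing to move this round. A hedged way to inspect the same state from Python, assuming a reachable cluster and an admin keyring on the host:

    # Query the balancer state that produced the lines above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"))    # expected here: "upmap"
    print(status.get("active"))  # expected here: true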
Oct  3 09:59:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 09:59:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:48 compute-0 podman[409385]: 2025-10-03 09:59:48.805987068 +0000 UTC m=+0.070777966 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm)
Oct  3 09:59:49 compute-0 podman[409405]: 2025-10-03 09:59:49.801133525 +0000 UTC m=+0.069333309 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, vcs-type=git, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 09:59:49 compute-0 podman[409406]: 2025-10-03 09:59:49.845864683 +0000 UTC m=+0.110052469 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
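Each podman health_status event above is podman running the container's configured healthcheck (the 'test' key inside config_data) and recording the result. The same check can be triggered on demand; a small sketch, assuming the container names shown in the log:

    # Hedged sketch: rerun the healthcheck behind the
    # health_status=healthy events. 'podman healthcheck run'
    # exits 0 when the configured test command succeeds.
    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"]
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")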
Oct  3 09:59:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:59:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 09:59:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861052358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 09:59:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 09:59:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861052358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 09:59:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 09:59:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
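In each pg_autoscaler line above, the logged "pg target" is the pool's capacity fraction times its bias times a cluster-wide PG budget, which is then quantized toward a power of two (subject to floors and hysteresis, hence cephfs.cephfs.meta's 16 vs current 32). The budget implied by these numbers is 300: for '.mgr', 0.0021557249951162337 / 7.185749983720779e-06 == 300.0, which would match 3 OSDs at the default mon_target_pg_per_osd of 100. Treat that factor as an inference from this log, not a logged fact; a worked check:

    # Hedged arithmetic check of the pg_autoscaler lines above.
    PG_BUDGET = 300.0  # inferred from the logged ratios, e.g. 3 OSDs x 100

    def pg_target(usage_fraction, bias):
        return usage_fraction * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557, as logged for '.mgr'
    print(pg_target(5.087256625643029e-07, 4.0))   # ~0.0006105, as logged for cephfs.cephfs.meta
    print(pg_target(1.2718141564107572e-07, 4.0))  # ~0.0001526, as logged for default.rgw.meta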
Oct  3 09:59:55 compute-0 podman[409449]: 2025-10-03 09:59:55.796467104 +0000 UTC m=+0.065297131 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 09:59:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 09:59:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 09:59:58 compute-0 podman[409468]: 2025-10-03 09:59:58.795698031 +0000 UTC m=+0.059309028 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 09:59:59 compute-0 podman[157165]: time="2025-10-03T09:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 09:59:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 09:59:59 compute-0 podman[157165]: @ - - [03/Oct/2025:09:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8523 "" "Go-http-client/1.1"
Oct  3 10:00:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:01 compute-0 openstack_network_exporter[367524]: ERROR   10:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:00:01 compute-0 openstack_network_exporter[367524]: ERROR   10:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:00:01 compute-0 openstack_network_exporter[367524]: ERROR   10:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:00:01 compute-0 openstack_network_exporter[367524]: ERROR   10:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:00:01 compute-0 openstack_network_exporter[367524]: ERROR   10:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
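The appctl errors above mean the exporter found no OVS/OVN control sockets in its mount namespace: ovs-vswitchd and ovn-northd expose per-PID .ctl sockets that ovs-appctl-style callers must locate before dialing. On a compute node ovn-northd plausibly is not running at all, so that part may be expected; the ovsdb-server failure is worth checking. A quick probe for the sockets, assuming the conventional runtime paths:

    # Hedged sketch: look for the control sockets the exporter needs.
    import glob

    print(glob.glob("/run/openvswitch/*.ctl"))      # ovsdb-server / ovs-vswitchd
    print(glob.glob("/run/ovn/ovn-northd.*.ctl"))   # absent on a compute node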
Oct  3 10:00:01 compute-0 podman[409493]: 2025-10-03 10:00:01.801686477 +0000 UTC m=+0.066738776 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 10:00:01 compute-0 podman[409492]: 2025-10-03 10:00:01.805687807 +0000 UTC m=+0.074180155 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  3 10:00:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:04 compute-0 podman[409532]: 2025-10-03 10:00:04.791583436 +0000 UTC m=+0.060417573 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:00:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:09 compute-0 podman[409551]: 2025-10-03 10:00:09.814870917 +0000 UTC m=+0.068718749 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:00:09 compute-0 podman[409552]: 2025-10-03 10:00:09.880943442 +0000 UTC m=+0.126552059 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., release=1214.1726694543, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Oct  3 10:00:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:00:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:19 compute-0 podman[409594]: 2025-10-03 10:00:19.827472238 +0000 UTC m=+0.089759437 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 10:00:19 compute-0 podman[409615]: 2025-10-03 10:00:19.962516188 +0000 UTC m=+0.095734348 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350)
Oct  3 10:00:19 compute-0 podman[409616]: 2025-10-03 10:00:19.994212347 +0000 UTC m=+0.122027983 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  3 10:00:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:23 compute-0 nova_compute[351685]: 2025-10-03 10:00:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:00:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:00:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:00:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:00:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:00:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:00:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:00:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d3837cd4-3f41-402b-b79d-78430c54e14a does not exist
Oct  3 10:00:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 851dad1b-29bf-4b82-9022-9b053dfe349e does not exist
Oct  3 10:00:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a68737f1-d230-48b7-9cb6-b34b9aa99d9e does not exist
Oct  3 10:00:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:00:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:00:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:00:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:00:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:00:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:00:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:00:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:00:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:00:25 compute-0 podman[409933]: 2025-10-03 10:00:25.336020547 +0000 UTC m=+0.030017215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:00:25 compute-0 podman[409933]: 2025-10-03 10:00:25.429670578 +0000 UTC m=+0.123667236 container create 725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:00:25 compute-0 systemd[1]: Started libpod-conmon-725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b.scope.
Oct  3 10:00:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:00:25 compute-0 podman[409933]: 2025-10-03 10:00:25.555652968 +0000 UTC m=+0.249649676 container init 725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  3 10:00:25 compute-0 podman[409933]: 2025-10-03 10:00:25.567494097 +0000 UTC m=+0.261490775 container start 725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 10:00:25 compute-0 podman[409933]: 2025-10-03 10:00:25.576021922 +0000 UTC m=+0.270018620 container attach 725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:00:25 compute-0 tender_cray[409949]: 167 167
Oct  3 10:00:25 compute-0 systemd[1]: libpod-725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b.scope: Deactivated successfully.
Oct  3 10:00:25 compute-0 podman[409933]: 2025-10-03 10:00:25.580651721 +0000 UTC m=+0.274648369 container died 725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:00:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-018bc77555403a7fd4bc915c872cd21ffad1308cb3f7db610a3fc6a372815e9b-merged.mount: Deactivated successfully.
Oct  3 10:00:25 compute-0 podman[409933]: 2025-10-03 10:00:25.648711998 +0000 UTC m=+0.342708636 container remove 725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_cray, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:00:25 compute-0 systemd[1]: libpod-conmon-725d08a07d235044a99098c2d084cb611617dc20d5cd3b7542e437c672f7f38b.scope: Deactivated successfully.
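[editor's note] The create/init/start/attach/died/remove burst above is characteristic of cephadm's short-lived probe containers (the names like tender_cray are podman's auto-generated ones). The "167 167" the container prints before exiting is consistent with a uid/gid probe for the ceph user, which is uid/gid 167 in the Ceph images, though the exact command run inside the container is not visible in this log. A minimal sketch for watching this churn live with `podman events` (a real subcommand; the JSON field names vary slightly across podman versions, hence the defensive .get() calls):

    #!/usr/bin/env python3
    # Follow podman container lifecycle events for Ceph-image containers,
    # mirroring the init/start/attach/died/remove sequence logged above.
    import json
    import subprocess

    CMD = [
        "podman", "events",
        "--filter", "type=container",   # container events only
        "--format", "json",             # one JSON object per line
    ]

    with subprocess.Popen(CMD, stdout=subprocess.PIPE, text=True) as proc:
        for line in proc.stdout:
            ev = json.loads(line)
            # Field names (Image/Status/Name) may differ by version,
            # so fall back gracefully instead of crashing.
            if "ceph" in (ev.get("Image") or ""):
                print(ev.get("Time"), ev.get("Status"), ev.get("Name"))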
Oct  3 10:00:25 compute-0 podman[409973]: 2025-10-03 10:00:25.839122659 +0000 UTC m=+0.058480791 container create 01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:00:25 compute-0 systemd[1]: Started libpod-conmon-01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01.scope.
Oct  3 10:00:25 compute-0 podman[409973]: 2025-10-03 10:00:25.817428002 +0000 UTC m=+0.036786134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:00:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e6348c37c9ac2307ce4c3401a2426649ec26bb2ff4862a59953ffd073cbcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e6348c37c9ac2307ce4c3401a2426649ec26bb2ff4862a59953ffd073cbcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e6348c37c9ac2307ce4c3401a2426649ec26bb2ff4862a59953ffd073cbcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e6348c37c9ac2307ce4c3401a2426649ec26bb2ff4862a59953ffd073cbcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9e6348c37c9ac2307ce4c3401a2426649ec26bb2ff4862a59953ffd073cbcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:25 compute-0 podman[409973]: 2025-10-03 10:00:25.951498201 +0000 UTC m=+0.170856363 container init 01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 10:00:25 compute-0 podman[409973]: 2025-10-03 10:00:25.972486406 +0000 UTC m=+0.191844658 container start 01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:00:25 compute-0 podman[409973]: 2025-10-03 10:00:25.977851289 +0000 UTC m=+0.197209451 container attach 01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:00:25 compute-0 podman[409987]: 2025-10-03 10:00:25.979796641 +0000 UTC m=+0.091075338 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
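[editor's note] The health_status event for ovn_metadata_agent comes from podman's built-in healthcheck, which (per the config_data above) bind-mounts /var/lib/openstack/healthchecks/ovn_metadata_agent and runs /openstack/healthcheck inside the container. A sketch for polling the same status out-of-band; `podman healthcheck run` and `podman inspect` are real subcommands, while the inspect JSON key path is hedged because it moved between podman releases:

    #!/usr/bin/env python3
    # Trigger a container's healthcheck and read back the recorded status.
    import json
    import subprocess

    NAME = "ovn_metadata_agent"

    # `podman healthcheck run` exits 0 when the configured test passes.
    ok = subprocess.run(["podman", "healthcheck", "run", NAME]).returncode == 0

    raw = subprocess.check_output(["podman", "inspect", NAME])
    state = json.loads(raw)[0]["State"]
    # Newer podman nests results under State.Health, older under
    # State.Healthcheck; accept either.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(f"{NAME}: exit-ok={ok} status={health.get('Status')} "
          f"failing_streak={health.get('FailingStreak')}")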
Oct  3 10:00:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:27 compute-0 peaceful_ganguly[409995]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:00:27 compute-0 peaceful_ganguly[409995]: --> relative data size: 1.0
Oct  3 10:00:27 compute-0 peaceful_ganguly[409995]: --> All data devices are unavailable
Oct  3 10:00:27 compute-0 systemd[1]: libpod-01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01.scope: Deactivated successfully.
Oct  3 10:00:27 compute-0 podman[409973]: 2025-10-03 10:00:27.234838422 +0000 UTC m=+1.454196554 container died 01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:00:27 compute-0 systemd[1]: libpod-01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01.scope: Consumed 1.204s CPU time.
Oct  3 10:00:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a9e6348c37c9ac2307ce4c3401a2426649ec26bb2ff4862a59953ffd073cbcb-merged.mount: Deactivated successfully.
Oct  3 10:00:27 compute-0 podman[409973]: 2025-10-03 10:00:27.450954244 +0000 UTC m=+1.670312376 container remove 01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ganguly, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:00:27 compute-0 systemd[1]: libpod-conmon-01ebbb18842777b23b06625d08c6632804043f6f216fd29a20e3f2d4b1cf0e01.scope: Deactivated successfully.
Oct  3 10:00:28 compute-0 podman[410184]: 2025-10-03 10:00:28.256693008 +0000 UTC m=+0.053233059 container create 35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 10:00:28 compute-0 systemd[1]: Started libpod-conmon-35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9.scope.
Oct  3 10:00:28 compute-0 podman[410184]: 2025-10-03 10:00:28.233809584 +0000 UTC m=+0.030349665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:00:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:00:28 compute-0 podman[410184]: 2025-10-03 10:00:28.379662032 +0000 UTC m=+0.176202173 container init 35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:00:28 compute-0 podman[410184]: 2025-10-03 10:00:28.3961224 +0000 UTC m=+0.192662451 container start 35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 10:00:28 compute-0 podman[410184]: 2025-10-03 10:00:28.401708999 +0000 UTC m=+0.198249080 container attach 35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:00:28 compute-0 pensive_hugle[410200]: 167 167
Oct  3 10:00:28 compute-0 systemd[1]: libpod-35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9.scope: Deactivated successfully.
Oct  3 10:00:28 compute-0 podman[410184]: 2025-10-03 10:00:28.407897088 +0000 UTC m=+0.204437139 container died 35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:00:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e429e2c541f32432011bf37153283b41c606fc48d867517f27b62b7149d9043c-merged.mount: Deactivated successfully.
Oct  3 10:00:28 compute-0 podman[410184]: 2025-10-03 10:00:28.46907373 +0000 UTC m=+0.265613781 container remove 35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hugle, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:00:28 compute-0 systemd[1]: libpod-conmon-35a4e1fedd78849ed027f8aed552bdd69acb49c8f2ab8d3d63846ae03af936c9.scope: Deactivated successfully.
Oct  3 10:00:28 compute-0 podman[410224]: 2025-10-03 10:00:28.655649684 +0000 UTC m=+0.054995295 container create 90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:00:28 compute-0 systemd[1]: Started libpod-conmon-90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9.scope.
Oct  3 10:00:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:28 compute-0 podman[410224]: 2025-10-03 10:00:28.632067148 +0000 UTC m=+0.031412789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:00:28 compute-0 nova_compute[351685]: 2025-10-03 10:00:28.740 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:28 compute-0 nova_compute[351685]: 2025-10-03 10:00:28.741 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  3 10:00:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345edb5930ffde42aa0f3f4d38b24a9453359f5a5f59d2fdd60883da9f6cab26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345edb5930ffde42aa0f3f4d38b24a9453359f5a5f59d2fdd60883da9f6cab26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345edb5930ffde42aa0f3f4d38b24a9453359f5a5f59d2fdd60883da9f6cab26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345edb5930ffde42aa0f3f4d38b24a9453359f5a5f59d2fdd60883da9f6cab26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:28 compute-0 nova_compute[351685]: 2025-10-03 10:00:28.756 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
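[editor's note] The "Running periodic task ComputeManager._run_pending_deletes" lines are nova-compute's periodic task loop, driven by oslo_service.periodic_task, finding nothing to clean. The decorator pattern that produces those DEBUG messages looks roughly like the sketch below (illustrative names, not nova's actual code; the oslo.service API calls shown are real):

    # Sketch of the oslo.service periodic-task pattern behind the
    # "Running periodic task ..." DEBUG lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task


    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # fire roughly every 60s
        def _run_pending_deletes(self, context):
            print("cleaning up deleted instances")

        def tick(self, context):
            # oslo logs "Running periodic task <name>" before each call.
            self.run_periodic_tasks(context, raise_on_error=False)


    Manager().tick(context=None)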
Oct  3 10:00:28 compute-0 podman[410224]: 2025-10-03 10:00:28.775591901 +0000 UTC m=+0.174937532 container init 90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:00:28 compute-0 podman[410224]: 2025-10-03 10:00:28.792316737 +0000 UTC m=+0.191662348 container start 90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:00:28 compute-0 podman[410224]: 2025-10-03 10:00:28.796972987 +0000 UTC m=+0.196318588 container attach 90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]: {
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:    "0": [
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:        {
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "devices": [
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "/dev/loop3"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            ],
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_name": "ceph_lv0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_size": "21470642176",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "name": "ceph_lv0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "tags": {
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cluster_name": "ceph",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.crush_device_class": "",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.encrypted": "0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osd_id": "0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.type": "block",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.vdo": "0"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            },
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "type": "block",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "vg_name": "ceph_vg0"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:        }
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:    ],
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:    "1": [
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:        {
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "devices": [
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "/dev/loop4"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            ],
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_name": "ceph_lv1",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_size": "21470642176",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "name": "ceph_lv1",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "tags": {
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cluster_name": "ceph",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.crush_device_class": "",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.encrypted": "0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osd_id": "1",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.type": "block",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.vdo": "0"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            },
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "type": "block",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "vg_name": "ceph_vg1"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:        }
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:    ],
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:    "2": [
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:        {
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "devices": [
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "/dev/loop5"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            ],
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_name": "ceph_lv2",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_size": "21470642176",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "name": "ceph_lv2",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "tags": {
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.cluster_name": "ceph",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.crush_device_class": "",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.encrypted": "0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osd_id": "2",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.type": "block",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:                "ceph.vdo": "0"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            },
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "type": "block",
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:            "vg_name": "ceph_vg2"
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:        }
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]:    ]
Oct  3 10:00:29 compute-0 eloquent_ishizaka[410240]: }
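[editor's note] The JSON dump from the eloquent_ishizaka container above matches the output of `ceph-volume lvm list --format json` (a real command; the container name is just podman's generated one): a map of OSD id to the logical volumes backing it. A minimal sketch reducing it to a per-OSD summary, assuming the same schema captured in the log (string osd-id keys, lv_size in bytes):

    #!/usr/bin/env python3
    # Reduce `ceph-volume lvm list --format json` output (as captured
    # above) to one line per OSD: id, LV path, backing device, size.
    # Typically needs root to query LVM.
    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"]
    )
    osds = json.loads(raw)
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}  {lv['lv_path']}  "
                  f"devices={','.join(lv['devices'])}  {size_gib:.1f} GiB")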
Oct  3 10:00:29 compute-0 systemd[1]: libpod-90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9.scope: Deactivated successfully.
Oct  3 10:00:29 compute-0 podman[410224]: 2025-10-03 10:00:29.665769724 +0000 UTC m=+1.065115335 container died 90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-345edb5930ffde42aa0f3f4d38b24a9453359f5a5f59d2fdd60883da9f6cab26-merged.mount: Deactivated successfully.
Oct  3 10:00:29 compute-0 podman[157165]: time="2025-10-03T10:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:00:29 compute-0 podman[410224]: 2025-10-03 10:00:29.746563296 +0000 UTC m=+1.145908907 container remove 90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct  3 10:00:29 compute-0 nova_compute[351685]: 2025-10-03 10:00:29.745 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:29 compute-0 nova_compute[351685]: 2025-10-03 10:00:29.746 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:00:29 compute-0 nova_compute[351685]: 2025-10-03 10:00:29.746 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:00:29 compute-0 systemd[1]: libpod-conmon-90c31f237f28d52095e501ed0727681daebb8aeb33b6fc0d0ae037d0f9749be9.scope: Deactivated successfully.
Oct  3 10:00:29 compute-0 nova_compute[351685]: 2025-10-03 10:00:29.760 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  3 10:00:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 10:00:29 compute-0 podman[410250]: 2025-10-03 10:00:29.783466919 +0000 UTC m=+0.092203059 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:00:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8520 "" "Go-http-client/1.1"
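[editor's note] The podman[157165] lines are the podman API service logging HTTP requests against its libpod REST endpoint (here /v4.9.3/libpod/containers/json and .../stats). A sketch querying the same endpoint directly over the service's Unix socket; the socket path below is the usual rootful default and is an assumption about this host:

    #!/usr/bin/env python3
    # Query the libpod REST API over its Unix socket, mirroring the
    # GET /v4.9.3/libpod/containers/json?all=true request logged above.
    # Assumes the rootful API socket at /run/podman/podman.sock.
    import http.client
    import json
    import socket


    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host header only; unused for routing
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)


    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr["Id"][:12], ctr.get("Names"), ctr.get("State"))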
Oct  3 10:00:30 compute-0 podman[410423]: 2025-10-03 10:00:30.492053157 +0000 UTC m=+0.056340019 container create 020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:00:30 compute-0 systemd[1]: Started libpod-conmon-020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc.scope.
Oct  3 10:00:30 compute-0 podman[410423]: 2025-10-03 10:00:30.469023229 +0000 UTC m=+0.033310111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:00:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:00:30 compute-0 podman[410423]: 2025-10-03 10:00:30.595143644 +0000 UTC m=+0.159430616 container init 020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:00:30 compute-0 podman[410423]: 2025-10-03 10:00:30.603941135 +0000 UTC m=+0.168227977 container start 020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 10:00:30 compute-0 podman[410423]: 2025-10-03 10:00:30.609453102 +0000 UTC m=+0.173739944 container attach 020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 10:00:30 compute-0 stupefied_clarke[410438]: 167 167
Oct  3 10:00:30 compute-0 systemd[1]: libpod-020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc.scope: Deactivated successfully.
Oct  3 10:00:30 compute-0 podman[410423]: 2025-10-03 10:00:30.614326859 +0000 UTC m=+0.178613701 container died 020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-276970de5e8fad947325ee921d04a0010001f0ef87d6dad06cf027e60ffa23f9-merged.mount: Deactivated successfully.
Oct  3 10:00:30 compute-0 podman[410423]: 2025-10-03 10:00:30.667479054 +0000 UTC m=+0.231765896 container remove 020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  3 10:00:30 compute-0 systemd[1]: libpod-conmon-020d6e910a453776add16982837378e9e6c3ee8d89e4ac1f4fc3bf4b091206dc.scope: Deactivated successfully.
Oct  3 10:00:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:30 compute-0 nova_compute[351685]: 2025-10-03 10:00:30.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:30 compute-0 nova_compute[351685]: 2025-10-03 10:00:30.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:00:30 compute-0 nova_compute[351685]: 2025-10-03 10:00:30.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:30 compute-0 nova_compute[351685]: 2025-10-03 10:00:30.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:00:30 compute-0 nova_compute[351685]: 2025-10-03 10:00:30.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:00:30 compute-0 nova_compute[351685]: 2025-10-03 10:00:30.752 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
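[editor's note] The Acquiring/acquired/released triple around "compute_resources" is oslo.concurrency's lock decorator at work in nova's resource tracker, with the waited/held timings logged automatically. A minimal sketch of the same pattern (lock name reused from the log; the guarded function is illustrative):

    # Sketch of the oslo.concurrency locking pattern behind the
    # "compute_resources" acquire/release DEBUG lines above.
    from oslo_concurrency import lockutils


    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs under the in-process "compute_resources" lock; lockutils
        # emits the waited/held timing DEBUG lines seen in the log.
        pass


    clean_compute_node_cache()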
Oct  3 10:00:30 compute-0 nova_compute[351685]: 2025-10-03 10:00:30.752 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:00:30 compute-0 nova_compute[351685]: 2025-10-03 10:00:30.752 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:00:30 compute-0 podman[410461]: 2025-10-03 10:00:30.823221869 +0000 UTC m=+0.029503947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:00:30 compute-0 podman[410461]: 2025-10-03 10:00:30.986519457 +0000 UTC m=+0.192801495 container create ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_leavitt, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:00:31 compute-0 systemd[1]: Started libpod-conmon-ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5.scope.
Oct  3 10:00:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74280f00a9c7887302111e0d44ffa42095a675dbca6d408859a3789fefba2170/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74280f00a9c7887302111e0d44ffa42095a675dbca6d408859a3789fefba2170/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74280f00a9c7887302111e0d44ffa42095a675dbca6d408859a3789fefba2170/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74280f00a9c7887302111e0d44ffa42095a675dbca6d408859a3789fefba2170/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:00:31 compute-0 podman[410461]: 2025-10-03 10:00:31.110093311 +0000 UTC m=+0.316375469 container init ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:00:31 compute-0 podman[410461]: 2025-10-03 10:00:31.131711194 +0000 UTC m=+0.337993242 container start ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_leavitt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:00:31 compute-0 podman[410461]: 2025-10-03 10:00:31.137097947 +0000 UTC m=+0.343380005 container attach ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:00:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:00:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701383413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:00:31 compute-0 nova_compute[351685]: 2025-10-03 10:00:31.270 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
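[editor's note] As part of update_available_resource, nova-compute shells out (via oslo_concurrency.processutils, logged above) to `ceph df --format=json --id openstack`, which the co-located mon answers at 10:00:31. A sketch running the same audit command and pulling the cluster totals; the key names under "stats" are the usual reef-era ones but should be treated as assumptions:

    #!/usr/bin/env python3
    # Run the same storage audit command nova-compute issues above and
    # report cluster capacity. Key names under "stats" assume a reef-era
    # `ceph df --format=json` schema.
    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(raw)["stats"]
    gib = 2**30
    print(f"total={stats['total_bytes'] / gib:.1f} GiB  "
          f"avail={stats['total_avail_bytes'] / gib:.1f} GiB  "
          f"used={stats.get('total_used_raw_bytes', 0) / gib:.1f} GiB")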
Oct  3 10:00:31 compute-0 openstack_network_exporter[367524]: ERROR   10:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:00:31 compute-0 openstack_network_exporter[367524]: ERROR   10:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:00:31 compute-0 openstack_network_exporter[367524]: ERROR   10:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:00:31 compute-0 openstack_network_exporter[367524]: ERROR   10:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:00:31 compute-0 openstack_network_exporter[367524]: ERROR   10:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:00:31 compute-0 nova_compute[351685]: 2025-10-03 10:00:31.659 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:00:31 compute-0 nova_compute[351685]: 2025-10-03 10:00:31.661 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4557MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:00:31 compute-0 nova_compute[351685]: 2025-10-03 10:00:31.661 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:00:31 compute-0 nova_compute[351685]: 2025-10-03 10:00:31.662 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
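(The two lockutils lines show the resource tracker serializing on a "compute_resources" semaphore. A hedged sketch of the decorator pattern that produces exactly these Acquiring/acquired/released DEBUG messages; the real nova code wraps this in its own utility:

    # Sketch only: oslo_concurrency emits the lock DEBUG lines seen
    # above whenever a function guarded like this is entered and left.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        ...  # inventory is rebuilt while the semaphore is held
)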
Oct  3 10:00:31 compute-0 nova_compute[351685]: 2025-10-03 10:00:31.723 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:00:31 compute-0 nova_compute[351685]: 2025-10-03 10:00:31.724 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:00:31 compute-0 nova_compute[351685]: 2025-10-03 10:00:31.741 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:00:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:00:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423146100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:00:32 compute-0 nova_compute[351685]: 2025-10-03 10:00:32.210 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:00:32 compute-0 nova_compute[351685]: 2025-10-03 10:00:32.220 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:00:32 compute-0 nova_compute[351685]: 2025-10-03 10:00:32.237 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
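(The inventory dict logged above is enough to recompute what placement will actually offer: per resource class, capacity is (total - reserved) * allocation_ratio. A worked check with the logged numbers:

    # Effective capacity implied by the inventory reported above.
    vcpu = (8 - 0) * 4.0         # 32.0 schedulable VCPUs
    ram_mb = (7679 - 512) * 1.0  # 7167.0 MB of placeable RAM
    disk_gb = (59 - 0) * 0.9     # 53.1 GB of placeable disk
    print(vcpu, ram_mb, disk_gb)
)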
Oct  3 10:00:32 compute-0 nova_compute[351685]: 2025-10-03 10:00:32.239 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:00:32 compute-0 nova_compute[351685]: 2025-10-03 10:00:32.239 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]: {
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "osd_id": 1,
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "type": "bluestore"
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:    },
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "osd_id": 2,
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "type": "bluestore"
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:    },
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "osd_id": 0,
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:        "type": "bluestore"
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]:    }
Oct  3 10:00:32 compute-0 jovial_leavitt[410496]: }
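(The JSON block printed by the jovial_leavitt container, the short-lived ceph image launched at 10:00:31, maps each OSD uuid to its device metadata; cephadm then persists it via the config-key set commands that follow. A minimal parsing sketch, where `payload` is a stand-in for the captured JSON text:

    import json

    # `payload` is assumed to hold the JSON block logged above.
    osds = json.loads(payload)
    for osd_uuid, meta in sorted(osds.items(), key=lambda kv: kv[1]['osd_id']):
        print(meta['osd_id'], meta['device'], meta['type'])
    # -> 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore, and so on.
)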
Oct  3 10:00:32 compute-0 systemd[1]: libpod-ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5.scope: Deactivated successfully.
Oct  3 10:00:32 compute-0 systemd[1]: libpod-ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5.scope: Consumed 1.114s CPU time.
Oct  3 10:00:32 compute-0 podman[410461]: 2025-10-03 10:00:32.279419797 +0000 UTC m=+1.485701845 container died ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 10:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-74280f00a9c7887302111e0d44ffa42095a675dbca6d408859a3789fefba2170-merged.mount: Deactivated successfully.
Oct  3 10:00:32 compute-0 podman[410461]: 2025-10-03 10:00:32.357588135 +0000 UTC m=+1.563870183 container remove ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_leavitt, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:00:32 compute-0 systemd[1]: libpod-conmon-ab034494e3b79e589145ee88d2727ead5f44023af81183b6403d55be27ed06c5.scope: Deactivated successfully.
Oct  3 10:00:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:00:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct  3 10:00:32 compute-0 podman[410561]: 2025-10-03 10:00:32.417199677 +0000 UTC m=+0.101745565 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible)
Oct  3 10:00:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:00:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:00:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct  3 10:00:32 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct  3 10:00:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:00:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b4e26d7e-9956-44b9-b9ac-08dcf36baa2f does not exist
Oct  3 10:00:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b4194c91-623a-47c1-a83d-340910534eef does not exist
Oct  3 10:00:32 compute-0 podman[410554]: 2025-10-03 10:00:32.445784243 +0000 UTC m=+0.133917666 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:00:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:33 compute-0 nova_compute[351685]: 2025-10-03 10:00:33.235 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:00:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:00:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct  3 10:00:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct  3 10:00:33 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct  3 10:00:33 compute-0 nova_compute[351685]: 2025-10-03 10:00:33.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct  3 10:00:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct  3 10:00:34 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct  3 10:00:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Oct  3 10:00:34 compute-0 nova_compute[351685]: 2025-10-03 10:00:34.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:34 compute-0 nova_compute[351685]: 2025-10-03 10:00:34.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:35 compute-0 nova_compute[351685]: 2025-10-03 10:00:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:35 compute-0 nova_compute[351685]: 2025-10-03 10:00:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:35 compute-0 nova_compute[351685]: 2025-10-03 10:00:35.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  3 10:00:35 compute-0 podman[410651]: 2025-10-03 10:00:35.847906266 +0000 UTC m=+0.108137839 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 10:00:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 8.4 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Oct  3 10:00:36 compute-0 nova_compute[351685]: 2025-10-03 10:00:36.746 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:37 compute-0 nova_compute[351685]: 2025-10-03 10:00:37.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.5 MiB/s wr, 16 op/s
Oct  3 10:00:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.9 MiB/s wr, 13 op/s
Oct  3 10:00:40 compute-0 podman[410670]: 2025-10-03 10:00:40.805757151 +0000 UTC m=+0.071793434 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:00:40 compute-0 podman[410671]: 2025-10-03 10:00:40.832682675 +0000 UTC m=+0.096123924 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Oct  3 10:00:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct  3 10:00:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct  3 10:00:41 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct  3 10:00:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:00:41.586 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:00:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:00:41.586 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:00:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:00:41.586 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:00:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 1.9 MiB/s wr, 13 op/s
Oct  3 10:00:44 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:00:44.024 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:00:44 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:00:44.025 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 10:00:44 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:00:44.026 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
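(The Running txn line shows the metadata agent acknowledging the new nb_cfg by writing 'neutron:ovn-metadata-sb-cfg': '2' into Chassis_Private.external_ids. A hedged sketch of issuing the same DbSetCommand through ovsdbapp, where `sb_api`, an already-connected southbound API object, is an assumption:

    # `sb_api` (a connected ovsdbapp southbound API) is assumed; the
    # table, record UUID and column values are copied from the log line.
    sb_api.db_set(
        'Chassis_Private', '41fabae1-2dc7-46e2-b697-d9133d158399',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),
    ).execute(check_error=True)
)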
Oct  3 10:00:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.6 MiB/s wr, 8 op/s
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:00:46
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:00:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:00:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 772 KiB/s wr, 1 op/s
Oct  3 10:00:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 102 B/s wr, 0 op/s
Oct  3 10:00:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:50 compute-0 podman[410714]: 2025-10-03 10:00:50.811787185 +0000 UTC m=+0.074466069 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Oct  3 10:00:50 compute-0 podman[410715]: 2025-10-03 10:00:50.815476264 +0000 UTC m=+0.072728594 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  3 10:00:50 compute-0 podman[410716]: 2025-10-03 10:00:50.848968628 +0000 UTC m=+0.103748379 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  3 10:00:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:00:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837809303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:00:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:00:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837809303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
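(The two audit entries show client.openstack dispatching "df" and "osd pool get-quota" as JSON mon commands, the usual capacity poll of an RBD-backed OpenStack client. A minimal python-rados sketch of the same dispatch, assuming the client id and conf path seen elsewhere in this log:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        # Same payloads the audit channel logged above.
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'osd pool get-quota',
                        'pool': 'volumes', 'format': 'json'}), b'')
    finally:
        cluster.shutdown()
)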
Oct  3 10:00:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:55 compute-0 nova_compute[351685]: 2025-10-03 10:00:55.058 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:00:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
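(The pg_autoscaler lines above are internally consistent: with 3 OSDs up and the default mon_target_pg_per_osd of 100, the raw target is usage_ratio * bias * 300, which is then quantized to a power of two subject to per-pool minimums. A worked check against three of the logged values:

    # raw_pg_target = usage_ratio * bias * (mon_target_pg_per_osd * osds)
    budget = 100 * 3  # assumes the default mon_target_pg_per_osd=100
    print(7.185749983720779e-06 * 1.0 * budget)   # 0.002155... ('.mgr')
    print(0.00025334537995702286 * 1.0 * budget)  # 0.076003... ('images')
    print(5.087256625643029e-07 * 4.0 * budget)   # 0.000610... ('cephfs.cephfs.meta')
)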
Oct  3 10:00:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:00:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4597 writes, 20K keys, 4597 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4597 writes, 4597 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1324 writes, 6003 keys, 1324 commit groups, 1.0 writes per commit group, ingest: 8.62 MB, 0.01 MB/s#012Interval WAL: 1324 writes, 1324 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     57.9      0.37              0.08        11    0.033       0      0       0.0       0.0#012  L6      1/0    7.12 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     94.4     78.5      0.89              0.22        10    0.089     43K   5169       0.0       0.0#012 Sum      1/0    7.12 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     66.8     72.5      1.26              0.30        21    0.060     43K   5169       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.0     66.2     66.0      0.64              0.15        10    0.064     23K   2981       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     94.4     78.5      0.89              0.22        10    0.089     43K   5169       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     59.2      0.36              0.08        10    0.036       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 1.3 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 308.00 MB usage: 6.49 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(410,6.13 MB,1.99122%) FilterBlock(22,128.30 KB,0.0406785%) IndexBlock(22,240.08 KB,0.0761205%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
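(The rocksdb stats dump above is a single journal record: rsyslog escapes embedded control characters, so every newline in the original table appears as #012, the octal code for LF. Restoring the multi-line layout when post-processing such records is a one-liner:

    # `record` stands for the raw message text of the rocksdb dump above.
    readable = record.replace('#012', '\n')
    print(readable)
)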
Oct  3 10:00:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:00:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:56 compute-0 podman[410774]: 2025-10-03 10:00:56.812297523 +0000 UTC m=+0.076656531 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:00:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:00:59 compute-0 podman[157165]: time="2025-10-03T10:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:00:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 10:00:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8534 "" "Go-http-client/1.1"
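(These two access-log lines are the podman system service answering libpod REST calls over its unix socket; the podman_exporter container above mounts /run/podman/podman.sock for exactly this purpose. A stdlib-only sketch of the first request, assuming the default socket path:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection variant that connects to a unix socket.
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true&external=false')
    print(conn.getresponse().status)  # 200, matching the access log above
)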
Oct  3 10:01:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:00 compute-0 podman[410793]: 2025-10-03 10:01:00.804622637 +0000 UTC m=+0.070813933 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:01:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:01 compute-0 openstack_network_exporter[367524]: ERROR   10:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:01:01 compute-0 openstack_network_exporter[367524]: ERROR   10:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:01:01 compute-0 openstack_network_exporter[367524]: ERROR   10:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:01:01 compute-0 openstack_network_exporter[367524]: ERROR   10:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:01:01 compute-0 openstack_network_exporter[367524]: ERROR   10:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:01:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:02 compute-0 podman[410829]: 2025-10-03 10:01:02.811684043 +0000 UTC m=+0.070088009 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:01:02 compute-0 podman[410828]: 2025-10-03 10:01:02.814161023 +0000 UTC m=+0.081186176 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:01:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:06 compute-0 podman[410868]: 2025-10-03 10:01:06.819626309 +0000 UTC m=+0.082196776 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 10:01:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:11 compute-0 podman[410887]: 2025-10-03 10:01:11.791945487 +0000 UTC m=+0.054670504 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:01:11 compute-0 podman[410888]: 2025-10-03 10:01:11.805519282 +0000 UTC m=+0.062890258 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible)
Oct  3 10:01:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.912824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485673912878, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1334, "num_deletes": 251, "total_data_size": 2027709, "memory_usage": 2060336, "flush_reason": "Manual Compaction"}
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485673928893, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1997015, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19680, "largest_seqno": 21013, "table_properties": {"data_size": 1990680, "index_size": 3538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13261, "raw_average_key_size": 19, "raw_value_size": 1977913, "raw_average_value_size": 2965, "num_data_blocks": 161, "num_entries": 667, "num_filter_entries": 667, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759485542, "oldest_key_time": 1759485542, "file_creation_time": 1759485673, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 16145 microseconds, and 6254 cpu microseconds.
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.928967) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1997015 bytes OK
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.928995) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.931075) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.931091) EVENT_LOG_v1 {"time_micros": 1759485673931086, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.931120) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2021744, prev total WAL file size 2021744, number of live WAL files 2.
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.933522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1950KB)], [47(7286KB)]
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485673933639, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9458000, "oldest_snapshot_seqno": -1}
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4340 keys, 7697556 bytes, temperature: kUnknown
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485673982946, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7697556, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7667205, "index_size": 18401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107403, "raw_average_key_size": 24, "raw_value_size": 7587194, "raw_average_value_size": 1748, "num_data_blocks": 770, "num_entries": 4340, "num_filter_entries": 4340, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759485673, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.983168) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7697556 bytes
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.984944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.6 rd, 155.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.1 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.6) write-amplify(3.9) OK, records in: 4858, records dropped: 518 output_compression: NoCompression
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.984966) EVENT_LOG_v1 {"time_micros": 1759485673984956, "job": 24, "event": "compaction_finished", "compaction_time_micros": 49366, "compaction_time_cpu_micros": 29148, "output_level": 6, "num_output_files": 1, "total_output_size": 7697556, "num_input_records": 4858, "num_output_records": 4340, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485673985739, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485673987287, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.932933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.987491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.987498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.987500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.987501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:01:13.987503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:01:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:01:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:01:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:21 compute-0 podman[410929]: 2025-10-03 10:01:21.808166398 +0000 UTC m=+0.069435858 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Oct  3 10:01:21 compute-0 podman[410930]: 2025-10-03 10:01:21.816146415 +0000 UTC m=+0.074077248 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:01:21 compute-0 podman[410931]: 2025-10-03 10:01:21.855419984 +0000 UTC m=+0.108272744 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  3 10:01:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:27.352 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:01:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:27.352 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 10:01:27 compute-0 podman[410995]: 2025-10-03 10:01:27.834579906 +0000 UTC m=+0.083084347 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Oct  3 10:01:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:29 compute-0 podman[157165]: time="2025-10-03T10:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:01:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45034 "" "Go-http-client/1.1"
Oct  3 10:01:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8513 "" "Go-http-client/1.1"
Oct  3 10:01:30 compute-0 nova_compute[351685]: 2025-10-03 10:01:30.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:30 compute-0 nova_compute[351685]: 2025-10-03 10:01:30.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:01:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:31.355 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:01:31 compute-0 openstack_network_exporter[367524]: ERROR   10:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:01:31 compute-0 openstack_network_exporter[367524]: ERROR   10:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:01:31 compute-0 openstack_network_exporter[367524]: ERROR   10:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:01:31 compute-0 openstack_network_exporter[367524]: ERROR   10:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:01:31 compute-0 openstack_network_exporter[367524]: ERROR   10:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:01:31 compute-0 podman[411014]: 2025-10-03 10:01:31.795538503 +0000 UTC m=+0.056433311 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.810 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.810 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.837 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.838 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.838 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.838 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:01:31 compute-0 nova_compute[351685]: 2025-10-03 10:01:31.838 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:01:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:01:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/195237941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.295 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.625 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.627 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4595MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.628 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.628 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.823 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.824 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.898 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 10:01:32 compute-0 podman[411136]: 2025-10-03 10:01:32.964686053 +0000 UTC m=+0.072364672 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 10:01:32 compute-0 podman[411135]: 2025-10-03 10:01:32.96490989 +0000 UTC m=+0.077001750 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.978 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.978 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:01:32 compute-0 nova_compute[351685]: 2025-10-03 10:01:32.996 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 10:01:33 compute-0 nova_compute[351685]: 2025-10-03 10:01:33.027 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 10:01:33 compute-0 nova_compute[351685]: 2025-10-03 10:01:33.044 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:01:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:01:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:01:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:01:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:01:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:01:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:01:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:01:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740508896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:01:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 01fddb2f-c22f-48db-ad6c-60e8593f1d8f does not exist
Oct  3 10:01:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3e61ca6d-dfbf-4c80-863a-980736621c77 does not exist
Oct  3 10:01:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev afe2763b-d01b-4b66-b400-eeebcf8f5708 does not exist
Oct  3 10:01:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:01:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:01:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:01:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:01:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:01:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:01:33 compute-0 nova_compute[351685]: 2025-10-03 10:01:33.543 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:01:33 compute-0 nova_compute[351685]: 2025-10-03 10:01:33.552 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:01:33 compute-0 nova_compute[351685]: 2025-10-03 10:01:33.569 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:01:33 compute-0 nova_compute[351685]: 2025-10-03 10:01:33.571 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:01:33 compute-0 nova_compute[351685]: 2025-10-03 10:01:33.571 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:01:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:01:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:01:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:01:34 compute-0 podman[411389]: 2025-10-03 10:01:34.28117989 +0000 UTC m=+0.065270904 container create cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:01:34 compute-0 systemd[1]: Started libpod-conmon-cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93.scope.
Oct  3 10:01:34 compute-0 podman[411389]: 2025-10-03 10:01:34.254201045 +0000 UTC m=+0.038292139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:01:34 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:01:34 compute-0 podman[411389]: 2025-10-03 10:01:34.397842322 +0000 UTC m=+0.181933806 container init cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:01:34 compute-0 podman[411389]: 2025-10-03 10:01:34.410002813 +0000 UTC m=+0.194093837 container start cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:01:34 compute-0 magical_mirzakhani[411406]: 167 167
Oct  3 10:01:34 compute-0 podman[411389]: 2025-10-03 10:01:34.416969856 +0000 UTC m=+0.201060900 container attach cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mirzakhani, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 10:01:34 compute-0 systemd[1]: libpod-cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93.scope: Deactivated successfully.
Oct  3 10:01:34 compute-0 podman[411389]: 2025-10-03 10:01:34.418693091 +0000 UTC m=+0.202784115 container died cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mirzakhani, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3fd5bc26ea261e0fb6bed94b834e0dca12ec717523283c3c76bab37dc781897-merged.mount: Deactivated successfully.
Oct  3 10:01:34 compute-0 podman[411389]: 2025-10-03 10:01:34.483439718 +0000 UTC m=+0.267530732 container remove cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mirzakhani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 10:01:34 compute-0 systemd[1]: libpod-conmon-cff4f862f4c4057b3dd0840cb2dcea7b291ccb9605c89c9fe2221f81603f3a93.scope: Deactivated successfully.
Oct  3 10:01:34 compute-0 podman[411430]: 2025-10-03 10:01:34.695831971 +0000 UTC m=+0.058839079 container create 5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:01:34 compute-0 systemd[1]: Started libpod-conmon-5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566.scope.
Oct  3 10:01:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:34 compute-0 podman[411430]: 2025-10-03 10:01:34.675614362 +0000 UTC m=+0.038621470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:01:34 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e2ab0e768106126adaa9ca6b3b573da5a86b67aeecefb0a5c60a5e855e4593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e2ab0e768106126adaa9ca6b3b573da5a86b67aeecefb0a5c60a5e855e4593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e2ab0e768106126adaa9ca6b3b573da5a86b67aeecefb0a5c60a5e855e4593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e2ab0e768106126adaa9ca6b3b573da5a86b67aeecefb0a5c60a5e855e4593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2e2ab0e768106126adaa9ca6b3b573da5a86b67aeecefb0a5c60a5e855e4593/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:34 compute-0 podman[411430]: 2025-10-03 10:01:34.824749336 +0000 UTC m=+0.187756464 container init 5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:01:34 compute-0 podman[411430]: 2025-10-03 10:01:34.844637324 +0000 UTC m=+0.207644422 container start 5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:01:34 compute-0 podman[411430]: 2025-10-03 10:01:34.856202015 +0000 UTC m=+0.219209123 container attach 5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:01:35 compute-0 nova_compute[351685]: 2025-10-03 10:01:35.567 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:35 compute-0 nova_compute[351685]: 2025-10-03 10:01:35.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:35 compute-0 nova_compute[351685]: 2025-10-03 10:01:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:36 compute-0 hardcore_hawking[411446]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:01:36 compute-0 hardcore_hawking[411446]: --> relative data size: 1.0
Oct  3 10:01:36 compute-0 hardcore_hawking[411446]: --> All data devices are unavailable
Oct  3 10:01:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:36 compute-0 systemd[1]: libpod-5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566.scope: Deactivated successfully.
Oct  3 10:01:36 compute-0 systemd[1]: libpod-5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566.scope: Consumed 1.178s CPU time.
Oct  3 10:01:36 compute-0 podman[411475]: 2025-10-03 10:01:36.146931695 +0000 UTC m=+0.031555933 container died 5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2e2ab0e768106126adaa9ca6b3b573da5a86b67aeecefb0a5c60a5e855e4593-merged.mount: Deactivated successfully.
Oct  3 10:01:36 compute-0 podman[411475]: 2025-10-03 10:01:36.210911017 +0000 UTC m=+0.095535245 container remove 5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 10:01:36 compute-0 systemd[1]: libpod-conmon-5a342c5689d6e17fc1928e38a234615b067054097d7598c698a238cc3c7c2566.scope: Deactivated successfully.
Oct  3 10:01:36 compute-0 nova_compute[351685]: 2025-10-03 10:01:36.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:37 compute-0 podman[411628]: 2025-10-03 10:01:37.022979154 +0000 UTC m=+0.059937844 container create 38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:01:37 compute-0 systemd[1]: Started libpod-conmon-38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0.scope.
Oct  3 10:01:37 compute-0 podman[411628]: 2025-10-03 10:01:37.003357475 +0000 UTC m=+0.040316195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:01:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:01:37 compute-0 podman[411628]: 2025-10-03 10:01:37.117188166 +0000 UTC m=+0.154146886 container init 38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:01:37 compute-0 podman[411628]: 2025-10-03 10:01:37.12665675 +0000 UTC m=+0.163615460 container start 38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:01:37 compute-0 podman[411628]: 2025-10-03 10:01:37.131701371 +0000 UTC m=+0.168660061 container attach 38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:01:37 compute-0 beautiful_mclaren[411646]: 167 167
Oct  3 10:01:37 compute-0 systemd[1]: libpod-38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0.scope: Deactivated successfully.
Oct  3 10:01:37 compute-0 podman[411628]: 2025-10-03 10:01:37.135352659 +0000 UTC m=+0.172311359 container died 38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:01:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-17834c3be49939f86dffa893c70ae1977f4142681659769056b2c4d6d27c99a1-merged.mount: Deactivated successfully.
Oct  3 10:01:37 compute-0 podman[411628]: 2025-10-03 10:01:37.187917475 +0000 UTC m=+0.224876155 container remove 38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mclaren, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:01:37 compute-0 podman[411641]: 2025-10-03 10:01:37.189544526 +0000 UTC m=+0.121341422 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:01:37 compute-0 systemd[1]: libpod-conmon-38cfc1bbd1ce2b2e7bc449c28363b64e7c0f89a78f166af47adadd3841d602b0.scope: Deactivated successfully.
Oct  3 10:01:37 compute-0 podman[411688]: 2025-10-03 10:01:37.380311565 +0000 UTC m=+0.061177403 container create 2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:01:37 compute-0 systemd[1]: Started libpod-conmon-2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b.scope.
Oct  3 10:01:37 compute-0 podman[411688]: 2025-10-03 10:01:37.352131221 +0000 UTC m=+0.032997079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:01:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b9c16c97ae5fbeae6d46e7f82d499713b89cf7079f912bb02c9cdfd2a17ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b9c16c97ae5fbeae6d46e7f82d499713b89cf7079f912bb02c9cdfd2a17ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b9c16c97ae5fbeae6d46e7f82d499713b89cf7079f912bb02c9cdfd2a17ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3b9c16c97ae5fbeae6d46e7f82d499713b89cf7079f912bb02c9cdfd2a17ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:37 compute-0 podman[411688]: 2025-10-03 10:01:37.501033888 +0000 UTC m=+0.181899726 container init 2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:01:37 compute-0 podman[411688]: 2025-10-03 10:01:37.515288365 +0000 UTC m=+0.196154173 container start 2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:01:37 compute-0 podman[411688]: 2025-10-03 10:01:37.520043537 +0000 UTC m=+0.200909355 container attach 2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:01:37 compute-0 nova_compute[351685]: 2025-10-03 10:01:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.225 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.226 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.246 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 10:01:38 compute-0 romantic_turing[411702]: {
Oct  3 10:01:38 compute-0 romantic_turing[411702]:    "0": [
Oct  3 10:01:38 compute-0 romantic_turing[411702]:        {
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "devices": [
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "/dev/loop3"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            ],
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_name": "ceph_lv0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_size": "21470642176",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "name": "ceph_lv0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "tags": {
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cluster_name": "ceph",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.crush_device_class": "",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.encrypted": "0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osd_id": "0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.type": "block",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.vdo": "0"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            },
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "type": "block",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "vg_name": "ceph_vg0"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:        }
Oct  3 10:01:38 compute-0 romantic_turing[411702]:    ],
Oct  3 10:01:38 compute-0 romantic_turing[411702]:    "1": [
Oct  3 10:01:38 compute-0 romantic_turing[411702]:        {
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "devices": [
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "/dev/loop4"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            ],
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_name": "ceph_lv1",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_size": "21470642176",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "name": "ceph_lv1",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "tags": {
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cluster_name": "ceph",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.crush_device_class": "",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.encrypted": "0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osd_id": "1",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.type": "block",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.vdo": "0"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            },
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "type": "block",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "vg_name": "ceph_vg1"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:        }
Oct  3 10:01:38 compute-0 romantic_turing[411702]:    ],
Oct  3 10:01:38 compute-0 romantic_turing[411702]:    "2": [
Oct  3 10:01:38 compute-0 romantic_turing[411702]:        {
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "devices": [
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "/dev/loop5"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            ],
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_name": "ceph_lv2",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_size": "21470642176",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "name": "ceph_lv2",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "tags": {
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.cluster_name": "ceph",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.crush_device_class": "",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.encrypted": "0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osd_id": "2",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.type": "block",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:                "ceph.vdo": "0"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            },
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "type": "block",
Oct  3 10:01:38 compute-0 romantic_turing[411702]:            "vg_name": "ceph_vg2"
Oct  3 10:01:38 compute-0 romantic_turing[411702]:        }
Oct  3 10:01:38 compute-0 romantic_turing[411702]:    ]
Oct  3 10:01:38 compute-0 romantic_turing[411702]: }
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.399 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.401 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:38 compute-0 systemd[1]: libpod-2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b.scope: Deactivated successfully.
Oct  3 10:01:38 compute-0 podman[411688]: 2025-10-03 10:01:38.412472542 +0000 UTC m=+1.093338350 container died 2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.412 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.413 2 INFO nova.compute.claims [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 10:01:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a3b9c16c97ae5fbeae6d46e7f82d499713b89cf7079f912bb02c9cdfd2a17ef-merged.mount: Deactivated successfully.
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.553 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:01:38 compute-0 podman[411688]: 2025-10-03 10:01:38.625559116 +0000 UTC m=+1.306424924 container remove 2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:01:38 compute-0 systemd[1]: libpod-conmon-2a9860c23b0d98e354a04748e7092107dea31601778cd994a664014a00c7612b.scope: Deactivated successfully.
Oct  3 10:01:38 compute-0 nova_compute[351685]: 2025-10-03 10:01:38.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:01:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:01:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:01:39 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361151003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.064 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.072 2 DEBUG nova.compute.provider_tree [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.089 2 DEBUG nova.scheduler.client.report [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.113 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.114 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.172 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.173 2 DEBUG nova.network.neutron [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.198 2 INFO nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.241 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.330 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.331 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.331 2 INFO nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Creating image(s)#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.366 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.404 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:01:39 compute-0 podman[411898]: 2025-10-03 10:01:39.433852143 +0000 UTC m=+0.062826426 container create 9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.442 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.448 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "8123da205344dbbb79d5d821c9749dc540280b1e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.449 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:39 compute-0 systemd[1]: Started libpod-conmon-9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba.scope.
Oct  3 10:01:39 compute-0 podman[411898]: 2025-10-03 10:01:39.406091732 +0000 UTC m=+0.035066075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:01:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:01:39 compute-0 podman[411898]: 2025-10-03 10:01:39.528602612 +0000 UTC m=+0.157576935 container init 9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:01:39 compute-0 podman[411898]: 2025-10-03 10:01:39.537514668 +0000 UTC m=+0.166488961 container start 9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:01:39 compute-0 podman[411898]: 2025-10-03 10:01:39.542964453 +0000 UTC m=+0.171938766 container attach 9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feynman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:01:39 compute-0 exciting_feynman[411952]: 167 167
Oct  3 10:01:39 compute-0 systemd[1]: libpod-9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba.scope: Deactivated successfully.
Oct  3 10:01:39 compute-0 podman[411898]: 2025-10-03 10:01:39.545888277 +0000 UTC m=+0.174862580 container died 9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:01:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4847549e11f1226890bec5c29e9a3c512f0d17bbb25498247e87eab62ffe631-merged.mount: Deactivated successfully.
Oct  3 10:01:39 compute-0 podman[411898]: 2025-10-03 10:01:39.595291151 +0000 UTC m=+0.224265444 container remove 9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feynman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:01:39 compute-0 systemd[1]: libpod-conmon-9305c0b0b428325752bfdf2f686e48a7718621fca841239c6e431466ad9cc3ba.scope: Deactivated successfully.
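[editor's note] The create -> init -> start -> attach -> died -> remove chain above, all inside roughly 160 ms, is the signature of a one-shot "podman run --rm": cephadm shells out to the ceph container, the command prints "167 167" (the ceph uid/gid), and the container is torn down. A rough reproduction under that assumption; the stat command is a guess at the probe, not taken from the log:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # --rm yields the same create/init/start/attach/died/remove event
    # sequence in the journal once the command exits.
    subprocess.run(
        ['podman', 'run', '--rm', IMAGE,
         'stat', '-c', '%u %g', '/var/lib/ceph'],  # hypothetical uid/gid probe
        check=True)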
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.755 2 DEBUG nova.virt.libvirt.imagebackend [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image locations are: [{'url': 'rbd://9b4e8c9a-5555-5510-a631-4742a1182561/images/37f03e8a-3aed-46a5-8219-fc87e355127e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://9b4e8c9a-5555-5510-a631-4742a1182561/images/37f03e8a-3aed-46a5-8219-fc87e355127e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
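[editor's note] imagebackend.clone() inspects the Glance image locations above to decide whether the disk can be created as a copy-on-write RBD clone entirely inside Ceph. That fast path requires a raw image; this one turns out to be qcow2 (see 10:01:40.611 below), so nova falls back to fetch-and-convert. For reference, the direct-clone path would amount to something like the following, with pools and UUIDs read off the rbd:// URL above:

    import subprocess

    SRC = 'images/37f03e8a-3aed-46a5-8219-fc87e355127e@snap'   # glance image snapshot
    DST = 'vms/b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk'      # instance root disk

    # COW clone: instant, no data copied until the guest writes.
    subprocess.run(['rbd', 'clone', SRC, DST,
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)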
Oct  3 10:01:39 compute-0 podman[411978]: 2025-10-03 10:01:39.781412281 +0000 UTC m=+0.051270765 container create db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:01:39 compute-0 systemd[1]: Started libpod-conmon-db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96.scope.
Oct  3 10:01:39 compute-0 podman[411978]: 2025-10-03 10:01:39.761490841 +0000 UTC m=+0.031349355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:01:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9919dafe5482cae9b1ac107041c846f0c2de1250cf0dbae5213d1c74a30bb7e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9919dafe5482cae9b1ac107041c846f0c2de1250cf0dbae5213d1c74a30bb7e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9919dafe5482cae9b1ac107041c846f0c2de1250cf0dbae5213d1c74a30bb7e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9919dafe5482cae9b1ac107041c846f0c2de1250cf0dbae5213d1c74a30bb7e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
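[editor's note] The four xfs warnings above are informational: these overlay paths sit on an XFS filesystem using the older inode format, whose timestamps are 32-bit and saturate at 0x7fffffff seconds past the epoch. Decoding the constant:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the classic Y2038 limit.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00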
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.878 2 WARNING oslo_policy.policy [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct  3 10:01:39 compute-0 nova_compute[351685]: 2025-10-03 10:01:39.879 2 WARNING oslo_policy.policy [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
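[editor's note] The deprecation warning above (emitted once per policy evaluation, hence the duplicate) names its own remedy. The documented conversion, wrapped in subprocess to keep one language throughout these notes; the file paths are illustrative:

    import subprocess

    # oslopolicy-convert-json-to-yaml ships with oslo.policy; --namespace
    # picks the service whose registered policy defaults are consulted.
    subprocess.run(['oslopolicy-convert-json-to-yaml',
                    '--namespace', 'nova',
                    '--policy-file', '/etc/nova/policy.json',    # example path
                    '--output-file', '/etc/nova/policy.yaml'],   # example path
                   check=True)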
Oct  3 10:01:39 compute-0 podman[411978]: 2025-10-03 10:01:39.887221935 +0000 UTC m=+0.157080449 container init db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:01:39 compute-0 podman[411978]: 2025-10-03 10:01:39.900646326 +0000 UTC m=+0.170504820 container start db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:01:39 compute-0 podman[411978]: 2025-10-03 10:01:39.905524032 +0000 UTC m=+0.175382526 container attach db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.547 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.568 2 DEBUG nova.network.neutron [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Successfully created port: a8897fbc-9fd1-4981-b049-6e702bcb7e2d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.611 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
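[editor's note] The wrapper visible in the command line above is oslo.concurrency's sandbox for qemu-img: the child is re-executed under "python -m oslo_concurrency.prlimit", which applies an address-space cap (--as=1073741824, i.e. 1 GiB) and a CPU-time cap (--cpu=30 seconds) before exec'ing qemu-img, so a malformed image cannot exhaust the compute host. A minimal sketch of issuing the same call through the library:

    from oslo_concurrency import processutils

    # Matches the --as/--cpu values in the logged command line.
    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,
                                        cpu_time=30)
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        '8123da205344dbbb79d5d821c9749dc540280b1e.part',
        '--force-share', '--output=json',
        prlimit=limits)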
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.611 2 DEBUG nova.virt.images [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] 37f03e8a-3aed-46a5-8219-fc87e355127e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.613 2 DEBUG nova.privsep.utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.613 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e.part /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 687 KiB/s rd, 85 B/s wr, 6 op/s
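[editor's note] The mgr's pgmap digest above reports all 321 placement groups active+clean with 60 GiB of raw capacity. The same summary is available on demand, e.g.:

    import subprocess

    # One-line PG summary and cluster capacity, matching the pgmap fields.
    subprocess.run(['ceph', 'pg', 'stat'], check=True)
    subprocess.run(['ceph', 'df'], check=True)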
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.806 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e.part /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e.converted" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
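[editor's note] All three steps of nova's fetch_to_raw are visible in this stretch: qemu-img info on the downloaded .part file, qemu-img convert to a raw .converted file (the 0.193s command above), then a second qemu-img info to verify the result before it is promoted to the base image. Raw is required because the disk will live in RBD; -t none skips the host page cache during conversion. Condensed into one sketch:

    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            '8123da205344dbbb79d5d821c9749dc540280b1e')

    # 1. convert the fetched qcow2 to raw (RBD stores raw bytes)
    subprocess.run(['qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                    '-f', 'qcow2', base + '.part', base + '.converted'],
                   check=True)

    # 2. re-inspect to confirm format == raw before promoting the file
    info = subprocess.run(['qemu-img', 'info', '--force-share',
                           '--output=json', base + '.converted'],
                          check=True, capture_output=True, text=True)
    print(info.stdout)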
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.810 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.880 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.881 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.882 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.882 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e.converted --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.883 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.884 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.886 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.887 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.887 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.890 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.891 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.892 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.892 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.892 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.892 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.893 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.894 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:01:40.895 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
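The run of ceilometer_agent_compute lines above is one complete polling cycle: the agent emits one "Finished processing pollster [meter]" DEBUG line per configured meter, all from execute_polling_task_processing at manager.py:272. As a minimal sketch (Python; not part of the capture, helper name hypothetical), the meters completed in a cycle can be recovered from such lines with a regex keyed on that exact message:

    import re

    # Matches the ceilometer.polling.manager DEBUG lines above, e.g.
    #   "... Finished processing pollster [disk.device.read.bytes]. ..."
    POLLSTER_RE = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    def polled_meters(log_lines):
        """Meter names completed in the captured lines, in log order."""
        return [m.group(1) for line in log_lines
                if (m := POLLSTER_RE.search(line)) is not None]

    # Usage sketch: polled_meters(open("messages")) ->
    #   ['disk.device.read.bytes', 'disk.device.read.latency', ...]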
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.916 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:01:40 compute-0 nova_compute[351685]: 2025-10-03 10:01:40.926 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]: {
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:     "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "osd_id": 1,
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "type": "bluestore"
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:     },
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:     "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "osd_id": 2,
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "type": "bluestore"
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:     },
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:     "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "osd_id": 0,
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:         "type": "bluestore"
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]:     }
Oct  3 10:01:40 compute-0 beautiful_kepler[411995]: }
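The JSON printed by the short-lived "beautiful_kepler" ceph container above is an OSD inventory keyed by osd_uuid: three bluestore OSDs, all reporting the same ceph_fsid, each backed by an LVM device. A minimal sketch (Python; assumes the payload has been captured into the string osd_json) of turning it into an osd_id-to-device map:

    import json

    inventory = json.loads(osd_json)  # osd_json: the payload printed above

    # osd_id -> device, e.g. {1: '/dev/mapper/ceph_vg1-ceph_lv1', ...}
    devices = {osd["osd_id"]: osd["device"] for osd in inventory.values()}

    # Every OSD in the listing should report the same cluster fsid.
    assert len({osd["ceph_fsid"] for osd in inventory.values()}) == 1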
Oct  3 10:01:40 compute-0 systemd[1]: libpod-db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96.scope: Deactivated successfully.
Oct  3 10:01:40 compute-0 podman[411978]: 2025-10-03 10:01:40.976164823 +0000 UTC m=+1.246023327 container died db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:01:40 compute-0 systemd[1]: libpod-db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96.scope: Consumed 1.065s CPU time.
Oct  3 10:01:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9919dafe5482cae9b1ac107041c846f0c2de1250cf0dbae5213d1c74a30bb7e6-merged.mount: Deactivated successfully.
Oct  3 10:01:41 compute-0 podman[411978]: 2025-10-03 10:01:41.05402249 +0000 UTC m=+1.323880984 container remove db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kepler, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:01:41 compute-0 systemd[1]: libpod-conmon-db049df851cf484788ec783366ed87955438ad70a2702d4b90c96eb48c579c96.scope: Deactivated successfully.
Oct  3 10:01:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:01:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:01:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:01:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:01:41 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev df42e4aa-3ef0-489d-8550-dd78e8e69bb0 does not exist
Oct  3 10:01:41 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 503865bb-5341-437e-81b7-6d095cccf12e does not exist
Oct  3 10:01:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct  3 10:01:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct  3 10:01:41 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct  3 10:01:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:41.587 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:01:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:41.588 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:01:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:41.588 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:01:41 compute-0 nova_compute[351685]: 2025-10-03 10:01:41.897 2 DEBUG nova.network.neutron [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Successfully updated port: a8897fbc-9fd1-4981-b049-6e702bcb7e2d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  3 10:01:41 compute-0 nova_compute[351685]: 2025-10-03 10:01:41.915 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:01:41 compute-0 nova_compute[351685]: 2025-10-03 10:01:41.915 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:01:41 compute-0 nova_compute[351685]: 2025-10-03 10:01:41.915 2 DEBUG nova.network.neutron [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  3 10:01:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:01:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.138 2 DEBUG nova.network.neutron [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  3 10:01:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct  3 10:01:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct  3 10:01:42 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.462 2 DEBUG nova.compute.manager [req-4966e91e-f729-4715-8313-f414105ab81c req-94edeae5-f249-47d2-8a8f-3e535fec38f8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Received event network-changed-a8897fbc-9fd1-4981-b049-6e702bcb7e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.463 2 DEBUG nova.compute.manager [req-4966e91e-f729-4715-8313-f414105ab81c req-94edeae5-f249-47d2-8a8f-3e535fec38f8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Refreshing instance network info cache due to event network-changed-a8897fbc-9fd1-4981-b049-6e702bcb7e2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.463 2 DEBUG oslo_concurrency.lockutils [req-4966e91e-f729-4715-8313-f414105ab81c req-94edeae5-f249-47d2-8a8f-3e535fec38f8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.578 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
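The paired "Running cmd (subprocess)" / "CMD ... returned: 0 in 1.653s" lines are the entry and exit logging of oslo.concurrency's processutils.execute, which nova uses here to import the cached base image into the vms RBD pool. A minimal sketch of the same call through that public API (arguments copied from the log; error handling elided):

    from oslo_concurrency import processutils

    # Import the cached base image into the "vms" pool, as logged above.
    out, err = processutils.execute(
        'rbd', 'import',
        '--pool', 'vms',
        '/var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e',
        'b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    # execute() raises ProcessExecutionError on a non-zero exit code, so
    # reaching this point corresponds to the "returned: 0" line above.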
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.672 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] resizing rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  3 10:01:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 127 B/s wr, 9 op/s
Oct  3 10:01:42 compute-0 podman[412192]: 2025-10-03 10:01:42.832767783 +0000 UTC m=+0.085552735 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:01:42 compute-0 podman[412193]: 2025-10-03 10:01:42.870170023 +0000 UTC m=+0.120148305 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=edpm, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4)
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.890 2 DEBUG nova.objects.instance [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'migration_context' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.950 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:01:42 compute-0 nova_compute[351685]: 2025-10-03 10:01:42.995 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.004 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.006 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.006 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.027 2 DEBUG nova.network.neutron [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.037 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.039 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.084 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.085 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
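The Acquiring/acquired/released triplet around "ephemeral_1_0706d66" shows nova serializing creation of the shared ephemeral base image: one spawn builds the file under an external lock (qemu-img create, then mkfs -t vfat, per the commands above) while any concurrent spawn waits on the same lock name and then reuses the cached file. A minimal sketch of that pattern with the public lockutils API (lock_path and the helper are illustrative, not nova's actual code; the env wrapper from the log is elided):

    from oslo_concurrency import lockutils, processutils

    def build_ephemeral_base(path, size_gb=1):
        # The same two commands that appear in the log above.
        processutils.execute('qemu-img', 'create', '-f', 'raw',
                             path, '%dG' % size_gb)
        processutils.execute('mkfs', '-t', 'vfat', '-n', 'ephemeral0', path)

    # Only one builder runs at a time per lock name; later callers see the
    # finished file in the _base cache instead of rebuilding it.
    with lockutils.lock('ephemeral_1_0706d66', external=True,
                        lock_path='/var/lib/nova/tmp'):
        build_ephemeral_base('/var/lib/nova/instances/_base/ephemeral_1_0706d66')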
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.124 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.132 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.153 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.153 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Instance network_info: |[{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.154 2 DEBUG oslo_concurrency.lockutils [req-4966e91e-f729-4715-8313-f414105ab81c req-94edeae5-f249-47d2-8a8f-3e535fec38f8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:01:43 compute-0 nova_compute[351685]: 2025-10-03 10:01:43.155 2 DEBUG nova.network.neutron [req-4966e91e-f729-4715-8313-f414105ab81c req-94edeae5-f249-47d2-8a8f-3e535fec38f8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Refreshing network info cache for port a8897fbc-9fd1-4981-b049-6e702bcb7e2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.097 2 DEBUG nova.network.neutron [req-4966e91e-f729-4715-8313-f414105ab81c req-94edeae5-f249-47d2-8a8f-3e535fec38f8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated VIF entry in instance network info cache for port a8897fbc-9fd1-4981-b049-6e702bcb7e2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.098 2 DEBUG nova.network.neutron [req-4966e91e-f729-4715-8313-f414105ab81c req-94edeae5-f249-47d2-8a8f-3e535fec38f8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.115 2 DEBUG oslo_concurrency.lockutils [req-4966e91e-f729-4715-8313-f414105ab81c req-94edeae5-f249-47d2-8a8f-3e535fec38f8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.191 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.348 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.349 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Ensure instance console log exists: /var/lib/nova/instances/b43db93c-a4fe-46e9-8418-eedf4f5c135a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.351 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.352 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.352 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.355 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Start _get_guest_xml network_info=[{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 1, 'encrypted': False, 'encryption_options': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.361 2 WARNING nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.370 2 DEBUG nova.virt.libvirt.host [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.371 2 DEBUG nova.virt.libvirt.host [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.375 2 DEBUG nova.virt.libvirt.host [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.376 2 DEBUG nova.virt.libvirt.host [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.377 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.377 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T10:00:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='ada739ee-222b-4269-8d29-62bea534173e',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.378 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.378 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.378 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.379 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.379 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.379 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.380 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.380 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.380 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.381 2 DEBUG nova.virt.hardware [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.384 2 DEBUG nova.privsep.utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.385 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 40 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 15 op/s
Oct  3 10:01:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:01:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753924470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.855 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:01:44 compute-0 nova_compute[351685]: 2025-10-03 10:01:44.857 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:01:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372984565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.313 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.340 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.346 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:01:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:01:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346396135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.771 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
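The repeated "ceph mon dump --format=json" runs are nova discovering the cluster's monitor addresses; they end up in the guest disk XML below as the <host name="192.168.122.100" port="6789"/> element of each rbd <source>. A minimal sketch of that discovery step (Python; the exact JSON field layout is an assumption based on the standard mon dump output):

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(raw)

    # Assumed shape: monmap['mons'] is a list of monitors whose 'addr'
    # looks like '192.168.122.100:6789/0'.
    mon_hosts = [m['addr'].split('/')[0] for m in monmap.get('mons', [])]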
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.773 2 DEBUG nova.virt.libvirt.vif [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-28mdpdg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:01:39Z,user_data=None,user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=b43db93c-a4fe-46e9-8418-eedf4f5c135a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.774 2 DEBUG nova.network.os_vif_util [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.775 2 DEBUG nova.network.os_vif_util [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:40:5c,bridge_name='br-int',has_traffic_filtering=True,id=a8897fbc-9fd1-4981-b049-6e702bcb7e2d,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8897fbc-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.778 2 DEBUG nova.objects.instance [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'pci_devices' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.794 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] End _get_guest_xml xml=<domain type="kvm">
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <uuid>b43db93c-a4fe-46e9-8418-eedf4f5c135a</uuid>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <name>instance-00000001</name>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <memory>524288</memory>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <metadata>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <nova:name>test_0</nova:name>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 10:01:44</nova:creationTime>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <nova:flavor name="m1.small">
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <nova:memory>512</nova:memory>
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <nova:ephemeral>1</nova:ephemeral>
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <nova:user uuid="2f408449ba0f42fcb69f92dbf541f2e3">admin</nova:user>
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <nova:project uuid="ee75a4dc6ade43baab6ee923c9cf4cdf">admin</nova:project>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="37f03e8a-3aed-46a5-8219-fc87e355127e"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <nova:port uuid="a8897fbc-9fd1-4981-b049-6e702bcb7e2d">
Oct  3 10:01:45 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="192.168.0.158" ipVersion="4"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  </metadata>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <system>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <entry name="serial">b43db93c-a4fe-46e9-8418-eedf4f5c135a</entry>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <entry name="uuid">b43db93c-a4fe-46e9-8418-eedf4f5c135a</entry>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </system>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <os>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  </os>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <features>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <apic/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  </features>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  </clock>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  </cpu>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  <devices>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk">
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </source>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.eph0">
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </source>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <target dev="vdb" bus="virtio"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.config">
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </source>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:01:45 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:a9:40:5c"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <target dev="tapa8897fbc-9f"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </interface>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/b43db93c-a4fe-46e9-8418-eedf4f5c135a/console.log" append="off"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </serial>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <video>
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </video>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </rng>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 10:01:45 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 10:01:45 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 10:01:45 compute-0 nova_compute[351685]:  </devices>
Oct  3 10:01:45 compute-0 nova_compute[351685]: </domain>
Oct  3 10:01:45 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
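
[editor's note] The block above is the complete guest definition Nova hands to libvirt: three RBD-backed disks (root, ephemeral, config drive), one virtio interface targeting tapa8897fbc-9f, and a q35 machine with pcie-root-port controllers pre-created for hotplug. When picking such a dump apart during debugging, the stdlib is enough; a small sketch, assuming xml_text holds the <domain>...</domain> document above:

    # Pull the disk sources and the VIF tap device out of a logged
    # Nova/libvirt domain XML dump.
    import xml.etree.ElementTree as ET

    root = ET.fromstring(xml_text)
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(tgt.get("dev"), src.get("protocol"), src.get("name"))
        # -> vda rbd vms/..._disk, vdb ..._disk.eph0, sda ..._disk.config
    for iface in root.findall("./devices/interface"):
        print(iface.find("target").get("dev"))   # -> tapa8897fbc-9f
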
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.796 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Preparing to wait for external event network-vif-plugged-a8897fbc-9fd1-4981-b049-6e702bcb7e2d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.796 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.797 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.797 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
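
[editor's note] These lock lines are prepare_for_instance_event registering a waiter for network-vif-plugged before the VIF is plugged; registering first is what guarantees the event Neutron sends at 10:01:50 cannot slip through unobserved. A rough analogue of the pattern (a threading-based sketch; Nova's InstanceEvents is eventlet-based and this is not its code):

    # Register-then-wait pattern for external events, per instance,
    # guarded by a lock exactly like the "-events" lock logged above.
    import threading

    _lock = threading.Lock()
    _events = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(instance_uuid, event_name):
        with _lock:                        # Acquiring lock "<uuid>-events"
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def deliver_event(instance_uuid, event_name):
        with _lock:
            waiter = _events.pop((instance_uuid, event_name), None)
        if waiter:
            waiter.set()                   # unblocks the spawning thread

    w = prepare_for_instance_event("b43db93c-a4fe-46e9-8418-eedf4f5c135a",
                                   "network-vif-plugged")
    # ... plug the VIF and define the guest, then:
    # w.wait(timeout=300)   # returns once deliver_event() fires
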
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.798 2 DEBUG nova.virt.libvirt.vif [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-28mdpdg2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:01:39Z,user_data=None,user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=b43db93c-a4fe-46e9-8418-eedf4f5c135a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.798 2 DEBUG nova.network.os_vif_util [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.799 2 DEBUG nova.network.os_vif_util [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:40:5c,bridge_name='br-int',has_traffic_filtering=True,id=a8897fbc-9fd1-4981-b049-6e702bcb7e2d,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8897fbc-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.800 2 DEBUG os_vif [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:40:5c,bridge_name='br-int',has_traffic_filtering=True,id=a8897fbc-9fd1-4981-b049-6e702bcb7e2d,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8897fbc-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.868 2 DEBUG ovsdbapp.backend.ovs_idl [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.869 2 DEBUG ovsdbapp.backend.ovs_idl [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.869 2 DEBUG ovsdbapp.backend.ovs_idl [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.888 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.889 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:01:45 compute-0 nova_compute[351685]: 2025-10-03 10:01:45.890 2 INFO oslo.privsep.daemon [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpasxgwg_1/privsep.sock']#033[00m
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:01:46
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'vms', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:01:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:01:46 compute-0 nova_compute[351685]: 2025-10-03 10:01:46.691 2 INFO oslo.privsep.daemon [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  3 10:01:46 compute-0 nova_compute[351685]: 2025-10-03 10:01:46.565 980 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  3 10:01:46 compute-0 nova_compute[351685]: 2025-10-03 10:01:46.572 980 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  3 10:01:46 compute-0 nova_compute[351685]: 2025-10-03 10:01:46.576 980 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  3 10:01:46 compute-0 nova_compute[351685]: 2025-10-03 10:01:46.576 980 INFO oslo.privsep.daemon [-] privsep daemon running as pid 980#033[00m
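
[editor's note] The burst above is oslo.privsep forking its root helper through sudo and nova-rootwrap: the daemon reports uid/gid 0/0 but holds only CAP_DAC_OVERRIDE and CAP_NET_ADMIN, which is all VIF plugging needs. Declaring such a context looks roughly like the following sketch, modelled on the vif_plug_ovs.privsep.vif_plug context named in the helper command line but not quoted from its source:

    # A privsep context comparable to vif_plug_ovs.privsep.vif_plug:
    # decorated functions execute in the root daemon spawned above,
    # restricted to the listed capabilities.
    from oslo_privsep import capabilities, priv_context

    vif_plug = priv_context.PrivContext(
        "vif_plug_ovs",
        cfg_section="vif_plug_ovs_privileged",
        pypath=__name__ + ".vif_plug",
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE],
    )

    @vif_plug.entrypoint
    def plug_tap(device_name):
        ...   # hypothetical body; runs with uid 0 inside the daemon
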
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 47 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 63 op/s
Oct  3 10:01:46 compute-0 nova_compute[351685]: 2025-10-03 10:01:46.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:46 compute-0 ceph-mgr[192071]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3262515590
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.096 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8897fbc-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.097 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8897fbc-9f, col_values=(('external_ids', {'iface-id': 'a8897fbc-9fd1-4981-b049-6e702bcb7e2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:40:5c', 'vm-uuid': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:47 compute-0 NetworkManager[45015]: <info>  [1759485707.1000] manager: (tapa8897fbc-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.110 2 INFO os_vif [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:40:5c,bridge_name='br-int',has_traffic_filtering=True,id=a8897fbc-9fd1-4981-b049-6e702bcb7e2d,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8897fbc-9f')#033[00m
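
[editor's note] The plug itself is two OVSDB operations, visible in the transaction lines above: AddPortCommand puts tapa8897fbc-9f on br-int, and DbSetCommand writes the external_ids (iface-id, attached-mac, vm-uuid) that ovn-controller matches against the Southbound database. Driving the same pair directly with ovsdbapp would look roughly like this, assuming the tcp:127.0.0.1:6640 endpoint seen earlier in the log:

    # Replay of the logged AddPortCommand + DbSetCommand transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapa8897fbc-9f", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapa8897fbc-9f",
            ("external_ids", {
                "iface-id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:a9:40:5c",
                "vm-uuid": "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
            })))

Note that it is the iface-id, not the tap name, that ties the OVS interface to the Neutron port; that is the key ovn-controller uses when it claims the lport a few entries below.
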
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.180 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.182 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.182 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.183 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No VIF found with MAC fa:16:3e:a9:40:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.184 2 INFO nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Using config drive#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.217 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.636 2 INFO nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Creating config drive at /var/lib/nova/instances/b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.config#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.641 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp26xs_yrb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.781 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp26xs_yrb" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.815 2 DEBUG nova.storage.rbd_utils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:01:47 compute-0 nova_compute[351685]: 2025-10-03 10:01:47.822 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.config b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:01:48 compute-0 nova_compute[351685]: 2025-10-03 10:01:48.025 2 DEBUG oslo_concurrency.processutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.config b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:01:48 compute-0 nova_compute[351685]: 2025-10-03 10:01:48.026 2 INFO nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Deleting local config drive /var/lib/nova/instances/b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.config because it was imported into RBD.#033[00m
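
[editor's note] Config drive assembly happens in three steps logged above: mkisofs builds an ISO9660 image labelled config-2 from a temporary metadata directory, rbd import pushes it into the vms pool as <uuid>_disk.config (the cdrom source already present in the domain XML), and the local file is deleted. The equivalent steps by hand, with the staging directory swapped for a placeholder since the original /tmp/tmp26xs_yrb path was transient:

    # Manual re-run of the logged config-drive commands.
    import os
    import subprocess

    iso = ("/var/lib/nova/instances/"
           "b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.config")
    subprocess.run(["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-quiet", "-J", "-r",
                    "-V", "config-2", "/tmp/metadata_dir"],  # placeholder
                   check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.remove(iso)   # matches "Deleting local config drive ..." above
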
Oct  3 10:01:48 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  3 10:01:48 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  3 10:01:48 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  3 10:01:48 compute-0 NetworkManager[45015]: <info>  [1759485708.1814] manager: (tapa8897fbc-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct  3 10:01:48 compute-0 kernel: tapa8897fbc-9f: entered promiscuous mode
Oct  3 10:01:48 compute-0 ovn_controller[88471]: 2025-10-03T10:01:48Z|00027|binding|INFO|Claiming lport a8897fbc-9fd1-4981-b049-6e702bcb7e2d for this chassis.
Oct  3 10:01:48 compute-0 ovn_controller[88471]: 2025-10-03T10:01:48Z|00028|binding|INFO|a8897fbc-9fd1-4981-b049-6e702bcb7e2d: Claiming fa:16:3e:a9:40:5c 192.168.0.158
Oct  3 10:01:48 compute-0 nova_compute[351685]: 2025-10-03 10:01:48.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:48 compute-0 nova_compute[351685]: 2025-10-03 10:01:48.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.201 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:40:5c 192.168.0.158'], port_security=['fa:16:3e:a9:40:5c 192.168.0.158'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.158/24', 'neutron:device_id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=a8897fbc-9fd1-4981-b049-6e702bcb7e2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.203 284328 INFO neutron.agent.ovn.metadata.agent [-] Port a8897fbc-9fd1-4981-b049-6e702bcb7e2d in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 bound to our chassis#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.206 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.208 284328 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpdg3oqy1v/privsep.sock']#033[00m
Oct  3 10:01:48 compute-0 systemd-udevd[412566]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 10:01:48 compute-0 NetworkManager[45015]: <info>  [1759485708.2502] device (tapa8897fbc-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 10:01:48 compute-0 NetworkManager[45015]: <info>  [1759485708.2513] device (tapa8897fbc-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 10:01:48 compute-0 systemd-machined[137653]: New machine qemu-1-instance-00000001.
Oct  3 10:01:48 compute-0 nova_compute[351685]: 2025-10-03 10:01:48.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:48 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct  3 10:01:48 compute-0 ovn_controller[88471]: 2025-10-03T10:01:48Z|00029|binding|INFO|Setting lport a8897fbc-9fd1-4981-b049-6e702bcb7e2d ovn-installed in OVS
Oct  3 10:01:48 compute-0 ovn_controller[88471]: 2025-10-03T10:01:48Z|00030|binding|INFO|Setting lport a8897fbc-9fd1-4981-b049-6e702bcb7e2d up in Southbound
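
[editor's note] ovn-controller's binding messages close the loop: once the Interface row with the matching iface-id appears, it claims the lport for this chassis, sets ovn-installed on the OVS interface, and flips the Port_Binding to up in the Southbound database, which is what ultimately prompts Neutron to emit network-vif-plugged. One way to inspect that binding from Python, sketched with ovsdbapp's generic db_find; the Southbound socket path and impl class are assumptions for this host:

    # Look up the Port_Binding row ovn-controller just claimed.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/ovn/ovnsb_db.sock",
                                          "OVN_Southbound")  # path assumed
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    rows = sb.db_find(
        "Port_Binding",
        ("logical_port", "=", "a8897fbc-9fd1-4981-b049-6e702bcb7e2d"),
    ).execute(check_error=True)
    print(rows[0]["chassis"], rows[0]["up"])   # chassis ref, up=[True]
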
Oct  3 10:01:48 compute-0 nova_compute[351685]: 2025-10-03 10:01:48.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1007 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.904 284328 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.905 284328 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpdg3oqy1v/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.808 412583 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.813 412583 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.816 412583 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.817 412583 INFO oslo.privsep.daemon [-] privsep daemon running as pid 412583#033[00m
Oct  3 10:01:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:48.909 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[4e575427-8c34-4ec8-ae19-aeb65a80154b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:49 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct  3 10:01:49 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct  3 10:01:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:49.789 412583 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:49.789 412583 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:49.789 412583 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:01:49 compute-0 nova_compute[351685]: 2025-10-03 10:01:49.887 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759485709.8869011, b43db93c-a4fe-46e9-8418-eedf4f5c135a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:01:49 compute-0 nova_compute[351685]: 2025-10-03 10:01:49.888 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] VM Started (Lifecycle Event)#033[00m
Oct  3 10:01:49 compute-0 nova_compute[351685]: 2025-10-03 10:01:49.924 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:01:49 compute-0 nova_compute[351685]: 2025-10-03 10:01:49.932 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759485709.8870072, b43db93c-a4fe-46e9-8418-eedf4f5c135a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:01:49 compute-0 nova_compute[351685]: 2025-10-03 10:01:49.933 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] VM Paused (Lifecycle Event)#033[00m
Oct  3 10:01:49 compute-0 nova_compute[351685]: 2025-10-03 10:01:49.955 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:01:49 compute-0 nova_compute[351685]: 2025-10-03 10:01:49.963 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:01:49 compute-0 nova_compute[351685]: 2025-10-03 10:01:49.983 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
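
[editor's note] The numbers in the sync line two entries up decode via nova.compute.power_state: the database still holds 0 because no sync has completed yet, while libvirt reports 3 because the guest is created paused and only resumed once plugging finishes (the Resumed event follows at 10:01:50). For reference:

    # Values from nova.compute.power_state, as used in the
    # "current DB power_state: 0, VM power_state: 3" message above.
    NOSTATE   = 0   # never synced / unknown
    RUNNING   = 1   # seen after the Resumed lifecycle event
    PAUSED    = 3   # guest created but not yet started
    SHUTDOWN  = 4
    CRASHED   = 6
    SUSPENDED = 7

Since task_state is still spawning, the manager deliberately skips reconciling the transient Paused state rather than marking the instance stopped.
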
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.343 2 DEBUG nova.compute.manager [req-0708683a-431f-416f-a4c6-6ab142b6f6bf req-f60f65b7-b859-445c-9c3e-886e2b9631bc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Received event network-vif-plugged-a8897fbc-9fd1-4981-b049-6e702bcb7e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.344 2 DEBUG oslo_concurrency.lockutils [req-0708683a-431f-416f-a4c6-6ab142b6f6bf req-f60f65b7-b859-445c-9c3e-886e2b9631bc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.345 2 DEBUG oslo_concurrency.lockutils [req-0708683a-431f-416f-a4c6-6ab142b6f6bf req-f60f65b7-b859-445c-9c3e-886e2b9631bc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.345 2 DEBUG oslo_concurrency.lockutils [req-0708683a-431f-416f-a4c6-6ab142b6f6bf req-f60f65b7-b859-445c-9c3e-886e2b9631bc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.346 2 DEBUG nova.compute.manager [req-0708683a-431f-416f-a4c6-6ab142b6f6bf req-f60f65b7-b859-445c-9c3e-886e2b9631bc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Processing event network-vif-plugged-a8897fbc-9fd1-4981-b049-6e702bcb7e2d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.346 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.351 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759485710.3513305, b43db93c-a4fe-46e9-8418-eedf4f5c135a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.352 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] VM Resumed (Lifecycle Event)#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.360 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.366 2 INFO nova.virt.libvirt.driver [-] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Instance spawned successfully.#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.366 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.374 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.380 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.395 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.396 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.396 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.397 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.397 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.398 2 DEBUG nova.virt.libvirt.driver [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.401 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.447 2 INFO nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Took 11.12 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.448 2 DEBUG nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.510 2 INFO nova.compute.manager [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Took 12.14 seconds to build instance.#033[00m
Oct  3 10:01:50 compute-0 nova_compute[351685]: 2025-10-03 10:01:50.528 2 DEBUG oslo_concurrency.lockutils [None req-4de462ac-15ea-4b2b-96dd-526453329ecf 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:01:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:50.598 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[18041ad0-8d81-4b79-b211-d09171469335]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:50.599 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67eed0ac-d1 in ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  3 10:01:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:50.602 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67eed0ac-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 10:01:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:50.602 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[a97c6548-c440-4d46-bef7-66149ace4eec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:50.607 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[537c1d60-2b32-4c86-961c-7cb1ba574c42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:50.637 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[34c3bff8-2457-45d9-8546-c30d894793e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:50.660 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[36cb4489-7b5d-4886-931e-17242adb825a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:50.662 284328 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpmfo67ltj/privsep.sock']#033[00m
Oct  3 10:01:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 848 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Oct  3 10:01:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct  3 10:01:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct  3 10:01:51 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.372 284328 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.373 284328 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpmfo67ltj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.225 412675 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.232 412675 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.236 412675 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.237 412675 INFO oslo.privsep.daemon [-] privsep daemon running as pid 412675#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.376 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[0e852558-7638-4ed0-9ad2-5d79e3550629]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
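[Annotation] The sequence from "Running privsep helper: ['sudo', 'neutron-rootwrap', ...]" through "privsep daemon running as pid 412675" is oslo.privsep lazily forking a root helper the first time a privileged entrypoint is called; the later reply[...] lines are calls proxied over privsep.sock. A hedged sketch of how such a context is declared (modeled on, but not copied from, neutron.privileged.link_cmd):

    from oslo_privsep import capabilities, priv_context

    # The first call to an @entrypoint function spawns the helper via
    # rootwrap, which then drops to exactly these capabilities (the
    # eff/prm sets logged above: CAP_NET_ADMIN|CAP_SYS_ADMIN).
    link_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_link',
        pypath=__name__ + '.link_cmd',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN],
    )

    @link_cmd.entrypoint
    def set_link_attribute(ifname, **kwargs):
        ...  # body runs with uid/gid 0 inside the privsep daemon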
Oct  3 10:01:51 compute-0 nova_compute[351685]: 2025-10-03 10:01:51.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.954 412675 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.954 412675 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:51.954 412675 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.435 2 DEBUG nova.compute.manager [req-74689c68-0ffe-45bf-8f71-0abe321f4dfe req-2eec2858-2c52-464a-b811-00473760eb99 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Received event network-vif-plugged-a8897fbc-9fd1-4981-b049-6e702bcb7e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.436 2 DEBUG oslo_concurrency.lockutils [req-74689c68-0ffe-45bf-8f71-0abe321f4dfe req-2eec2858-2c52-464a-b811-00473760eb99 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.436 2 DEBUG oslo_concurrency.lockutils [req-74689c68-0ffe-45bf-8f71-0abe321f4dfe req-2eec2858-2c52-464a-b811-00473760eb99 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.436 2 DEBUG oslo_concurrency.lockutils [req-74689c68-0ffe-45bf-8f71-0abe321f4dfe req-2eec2858-2c52-464a-b811-00473760eb99 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.436 2 DEBUG nova.compute.manager [req-74689c68-0ffe-45bf-8f71-0abe321f4dfe req-2eec2858-2c52-464a-b811-00473760eb99 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] No waiting events found dispatching network-vif-plugged-a8897fbc-9fd1-4981-b049-6e702bcb7e2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.436 2 WARNING nova.compute.manager [req-74689c68-0ffe-45bf-8f71-0abe321f4dfe req-2eec2858-2c52-464a-b811-00473760eb99 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Received unexpected event network-vif-plugged-a8897fbc-9fd1-4981-b049-6e702bcb7e2d for instance with vm_state active and task_state None.#033[00m
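[Annotation] The req-74689c68 burst is nova handling neutron's network-vif-plugged notification: it takes the per-instance "-events" lock, pops its event registry, finds no registered waiter (the build already finished at 10:01:50), and logs the benign "Received unexpected event" warning for an instance that is already active. A toy version of that waiter-registry idea (illustrative only, not nova's code):

    import threading

    class InstanceEvents:
        """Waiters register for named events; late events find nobody."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # event name -> threading.Event

        def prepare(self, name):
            with self._lock:
                return self._waiters.setdefault(name, threading.Event())

        def pop(self, name):
            with self._lock:
                return self._waiters.pop(name, None)

    events = InstanceEvents()
    if events.pop('network-vif-plugged-a8897fbc-9fd1-4981-b049-6e702bcb7e2d') is None:
        print('unexpected event: no waiter registered')  # nova's WARNING case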
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.712 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[f987b0f0-132c-42de-8a26-b6835ed7762a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.721 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f43ea7fe-fa25-4267-8619-98e6eec6f573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 NetworkManager[45015]: <info>  [1759485712.7226] manager: (tap67eed0ac-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.760 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d4145d-197f-4f41-9832-aa4160273d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.764 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee04a7e-e461-49da-97c3-4e48af0f9e01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 systemd-udevd[412711]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 10:01:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 811 KiB/s rd, 1.6 MiB/s wr, 57 op/s
Oct  3 10:01:52 compute-0 NetworkManager[45015]: <info>  [1759485712.7961] device (tap67eed0ac-d0): carrier: link connected
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.802 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[25ce01d4-94e7-4f0d-bd6f-690afaf1c3b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.822 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[295e0646-94e6-417a-b4c3-341958df82e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 37052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412748, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.838 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[be2b1bf6-f59c-43e3-bf74-1272bbeae839]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0b:cc0d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489539, 'tstamp': 489539}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412759, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.854 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1d88d5-764e-45da-ba81-fa41488bbfc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 37052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412763, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
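[Annotation] The two giant reply[...] payloads above are pyroute2 netlink dumps, an RTM_NEWLINK for tap67eed0ac-d1 and an RTM_NEWADDR for its link-local address, gathered inside the ovnmeta namespace and proxied back through privsep. Roughly the same query with pyroute2 directly (root required, namespace must exist):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0') as ns:
        idx = ns.link_lookup(ifname='tap67eed0ac-d1')[0]
        link = ns.get_links(idx)[0]      # one RTM_NEWLINK message, as dumped above
        addrs = ns.get_addr(index=idx)   # RTM_NEWADDR messages
        print(link.get_attr('IFLA_ADDRESS'), link['state'])
        # expected here: fa:16:3e:0b:cc:0d up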
Oct  3 10:01:52 compute-0 podman[412683]: 2025-10-03 10:01:52.859419859 +0000 UTC m=+0.124324929 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Oct  3 10:01:52 compute-0 podman[412684]: 2025-10-03 10:01:52.861087012 +0000 UTC m=+0.122888943 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:01:52 compute-0 podman[412685]: 2025-10-03 10:01:52.871537278 +0000 UTC m=+0.128001557 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.891 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[58b926a0-a36e-487e-8396-fc8ee79c0e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.952 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b9452d74-445c-4329-92e8-7a23edea67c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.954 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.955 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.955 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:01:52 compute-0 kernel: tap67eed0ac-d0: entered promiscuous mode
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:52 compute-0 NetworkManager[45015]: <info>  [1759485712.9586] manager: (tap67eed0ac-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.966 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
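[Annotation] The three ovsdbapp transactions are the root-namespace end of the pair being plugged: an if_exists delete from br-ex (a no-op here, hence "Transaction caused no change"), an add to br-int, and setting external_ids:iface-id so ovn-controller can match the interface to its logical port. Approximately, with ovsdbapp (the socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap67eed0ac-d0', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap67eed0ac-d0', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tap67eed0ac-d0',
                           ('external_ids',
                            {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'})))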
Oct  3 10:01:52 compute-0 ovn_controller[88471]: 2025-10-03T10:01:52Z|00031|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.992 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67eed0ac-d131-40ed-a5fe-0484d04236a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67eed0ac-d131-40ed-a5fe-0484d04236a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.994 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7343ef-9e74-4c2e-8785-08d4f6833425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.995 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: global
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-67eed0ac-d131-40ed-a5fe-0484d04236a0
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/67eed0ac-d131-40ed-a5fe-0484d04236a0.pid.haproxy
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 67eed0ac-d131-40ed-a5fe-0484d04236a0
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 10:01:52 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:01:52.996 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'env', 'PROCESS_TAG=haproxy-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67eed0ac-d131-40ed-a5fe-0484d04236a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
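[Annotation] With the config rendered, the agent spawns haproxy inside the namespace through rootwrap; the PROCESS_TAG environment variable is what the kill scripts use later to find this proxy. A plain-subprocess equivalent of the command line above (root required; the rootwrap confinement is dropped in this sketch):

    import subprocess

    net_id = '67eed0ac-d131-40ed-a5fe-0484d04236a0'
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{net_id}',
         'env', f'PROCESS_TAG=haproxy-{net_id}',
         'haproxy', '-f',
         f'/var/lib/neutron/ovn-metadata-proxy/{net_id}.conf'],
        check=True)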
Oct  3 10:01:52 compute-0 nova_compute[351685]: 2025-10-03 10:01:52.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:53 compute-0 podman[412800]: 2025-10-03 10:01:53.383812769 +0000 UTC m=+0.032305438 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 10:01:53 compute-0 podman[412800]: 2025-10-03 10:01:53.50294584 +0000 UTC m=+0.151438489 container create 454010046fb023c59982c3d8526edb39d33fdee9b2f69eeaf6470b373aa0feb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  3 10:01:53 compute-0 systemd[1]: Started libpod-conmon-454010046fb023c59982c3d8526edb39d33fdee9b2f69eeaf6470b373aa0feb2.scope.
Oct  3 10:01:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5634c256841753a94782ff060b596ef82e2722d8474590ecf6e51759630ad6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 10:01:53 compute-0 podman[412800]: 2025-10-03 10:01:53.627591619 +0000 UTC m=+0.276084298 container init 454010046fb023c59982c3d8526edb39d33fdee9b2f69eeaf6470b373aa0feb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  3 10:01:53 compute-0 podman[412800]: 2025-10-03 10:01:53.635715049 +0000 UTC m=+0.284207698 container start 454010046fb023c59982c3d8526edb39d33fdee9b2f69eeaf6470b373aa0feb2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  3 10:01:53 compute-0 neutron-haproxy-ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0[412816]: [NOTICE]   (412820) : New worker (412822) forked
Oct  3 10:01:53 compute-0 neutron-haproxy-ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0[412816]: [NOTICE]   (412820) : Loading success.
Oct  3 10:01:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:01:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270960735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:01:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:01:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3270960735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
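[Annotation] The audit entries show the client.openstack cephx user at 192.168.122.10 (most likely Cinder's periodic pool-stats poll, given the "volumes" pool) issuing a cluster-wide "df" plus a per-pool quota lookup. The same two monitor commands via the librados Python binding (conffile and keyring availability are assumptions):

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        _, out, _ = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        _, quota, _ = cluster.mon_command(
            json.dumps({'prefix': 'osd pool get-quota',
                        'pool': 'volumes', 'format': 'json'}), b'')
        print(json.loads(out)['stats'], json.loads(quota))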
Oct  3 10:01:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 960 KiB/s rd, 481 KiB/s wr, 84 op/s
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002673989263853617 of space, bias 1.0, pg target 0.08021967791560852 quantized to 32 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:01:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
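[Annotation] Each pool above gets one effective_target_ratio line and one pg-target line. The printed targets are consistent with pg_target = usage_ratio x bias x PG budget, where the budget works out to 300 here (assuming the default mon_target_pg_per_osd = 100 across the 3 OSDs); the module then quantizes to a power of two and applies minimums and anti-flap thresholds, which is why a target of 0.08 still leaves 32 PGs. Checking two of the lines under those assumptions:

    # pg_autoscaler arithmetic, assuming mon_target_pg_per_osd=100 and 3 OSDs
    pg_budget = 100 * 3

    vms = 0.0002673989263853617 * 1.0 * pg_budget
    meta = 5.087256625643029e-07 * 4.0 * pg_budget
    print(vms)   # ~0.08021967791560852, the 'vms' pg target logged above
    print(meta)  # ~0.0006104707950771635, the 'cephfs.cephfs.meta' target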
Oct  3 10:01:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:01:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 17 KiB/s wr, 62 op/s
Oct  3 10:01:56 compute-0 nova_compute[351685]: 2025-10-03 10:01:56.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:57 compute-0 nova_compute[351685]: 2025-10-03 10:01:57.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:01:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 67 op/s
Oct  3 10:01:58 compute-0 podman[412832]: 2025-10-03 10:01:58.820532662 +0000 UTC m=+0.081530976 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 10:01:59 compute-0 podman[157165]: time="2025-10-03T10:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:01:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:01:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9019 "" "Go-http-client/1.1"
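[Annotation] The podman[157165] entries are its REST service answering a collector over the libpod API socket: a full container listing, then one-shot stats. The same calls can be reproduced with a small HTTP-over-unix-socket client; the socket path below is the rootful default and is an assumption:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')  # assumed path
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(conn.getresponse().status)  # 200, as in the access log above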
Oct  3 10:02:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 0 B/s wr, 61 op/s
Oct  3 10:02:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:01 compute-0 openstack_network_exporter[367524]: ERROR   10:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:02:01 compute-0 openstack_network_exporter[367524]: ERROR   10:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:02:01 compute-0 openstack_network_exporter[367524]: ERROR   10:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:02:01 compute-0 openstack_network_exporter[367524]: ERROR   10:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:02:01 compute-0 openstack_network_exporter[367524]: ERROR   10:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:02:01 compute-0 nova_compute[351685]: 2025-10-03 10:02:01.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:02 compute-0 ovn_controller[88471]: 2025-10-03T10:02:02Z|00032|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:02 compute-0 NetworkManager[45015]: <info>  [1759485722.1997] manager: (patch-br-int-to-provnet-2f451962-c7fa-4b6e-9611-cc6f10cbab9f): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Oct  3 10:02:02 compute-0 NetworkManager[45015]: <info>  [1759485722.2002] device (patch-br-int-to-provnet-2f451962-c7fa-4b6e-9611-cc6f10cbab9f)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 10:02:02 compute-0 NetworkManager[45015]: <info>  [1759485722.2020] manager: (patch-provnet-2f451962-c7fa-4b6e-9611-cc6f10cbab9f-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Oct  3 10:02:02 compute-0 NetworkManager[45015]: <info>  [1759485722.2023] device (patch-provnet-2f451962-c7fa-4b6e-9611-cc6f10cbab9f-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  3 10:02:02 compute-0 NetworkManager[45015]: <info>  [1759485722.2029] manager: (patch-provnet-2f451962-c7fa-4b6e-9611-cc6f10cbab9f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct  3 10:02:02 compute-0 NetworkManager[45015]: <info>  [1759485722.2034] manager: (patch-br-int-to-provnet-2f451962-c7fa-4b6e-9611-cc6f10cbab9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct  3 10:02:02 compute-0 NetworkManager[45015]: <info>  [1759485722.2038] device (patch-br-int-to-provnet-2f451962-c7fa-4b6e-9611-cc6f10cbab9f)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  3 10:02:02 compute-0 NetworkManager[45015]: <info>  [1759485722.2040] device (patch-provnet-2f451962-c7fa-4b6e-9611-cc6f10cbab9f-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  3 10:02:02 compute-0 ovn_controller[88471]: 2025-10-03T10:02:02Z|00033|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.516 2 DEBUG nova.compute.manager [req-6e722098-67ed-4026-92bf-c3fe6fdab0ac req-5fa8c662-012d-4c7b-97de-46d96e0e902b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Received event network-changed-a8897fbc-9fd1-4981-b049-6e702bcb7e2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.516 2 DEBUG nova.compute.manager [req-6e722098-67ed-4026-92bf-c3fe6fdab0ac req-5fa8c662-012d-4c7b-97de-46d96e0e902b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Refreshing instance network info cache due to event network-changed-a8897fbc-9fd1-4981-b049-6e702bcb7e2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.516 2 DEBUG oslo_concurrency.lockutils [req-6e722098-67ed-4026-92bf-c3fe6fdab0ac req-5fa8c662-012d-4c7b-97de-46d96e0e902b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.516 2 DEBUG oslo_concurrency.lockutils [req-6e722098-67ed-4026-92bf-c3fe6fdab0ac req-5fa8c662-012d-4c7b-97de-46d96e0e902b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:02:02 compute-0 nova_compute[351685]: 2025-10-03 10:02:02.516 2 DEBUG nova.network.neutron [req-6e722098-67ed-4026-92bf-c3fe6fdab0ac req-5fa8c662-012d-4c7b-97de-46d96e0e902b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Refreshing network info cache for port a8897fbc-9fd1-4981-b049-6e702bcb7e2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 10:02:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 52 op/s
Oct  3 10:02:02 compute-0 podman[412851]: 2025-10-03 10:02:02.80520868 +0000 UTC m=+0.072059672 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:02:03 compute-0 podman[412872]: 2025-10-03 10:02:03.833905307 +0000 UTC m=+0.082747766 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:02:03 compute-0 podman[412873]: 2025-10-03 10:02:03.865595063 +0000 UTC m=+0.117265973 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, config_id=iscsid)
Oct  3 10:02:04 compute-0 nova_compute[351685]: 2025-10-03 10:02:04.397 2 DEBUG nova.network.neutron [req-6e722098-67ed-4026-92bf-c3fe6fdab0ac req-5fa8c662-012d-4c7b-97de-46d96e0e902b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated VIF entry in instance network info cache for port a8897fbc-9fd1-4981-b049-6e702bcb7e2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 10:02:04 compute-0 nova_compute[351685]: 2025-10-03 10:02:04.398 2 DEBUG nova.network.neutron [req-6e722098-67ed-4026-92bf-c3fe6fdab0ac req-5fa8c662-012d-4c7b-97de-46d96e0e902b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:02:04 compute-0 nova_compute[351685]: 2025-10-03 10:02:04.414 2 DEBUG oslo_concurrency.lockutils [req-6e722098-67ed-4026-92bf-c3fe6fdab0ac req-5fa8c662-012d-4c7b-97de-46d96e0e902b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:02:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 0 B/s wr, 50 op/s
Oct  3 10:02:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 744 KiB/s rd, 23 op/s
Oct  3 10:02:06 compute-0 nova_compute[351685]: 2025-10-03 10:02:06.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:07 compute-0 nova_compute[351685]: 2025-10-03 10:02:07.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:07 compute-0 podman[412907]: 2025-10-03 10:02:07.814021418 +0000 UTC m=+0.081283048 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 10:02:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 10 op/s
Oct  3 10:02:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:11 compute-0 nova_compute[351685]: 2025-10-03 10:02:11.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:12 compute-0 nova_compute[351685]: 2025-10-03 10:02:12.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:13 compute-0 podman[412927]: 2025-10-03 10:02:13.827261303 +0000 UTC m=+0.081464674 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:02:13 compute-0 podman[412928]: 2025-10-03 10:02:13.83742901 +0000 UTC m=+0.098847232 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler)
Oct  3 10:02:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:02:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:16 compute-0 nova_compute[351685]: 2025-10-03 10:02:16.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:17 compute-0 nova_compute[351685]: 2025-10-03 10:02:17.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:21 compute-0 nova_compute[351685]: 2025-10-03 10:02:21.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:22 compute-0 nova_compute[351685]: 2025-10-03 10:02:22.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 49 MiB data, 181 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:23 compute-0 podman[412973]: 2025-10-03 10:02:23.836048386 +0000 UTC m=+0.089136930 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 10:02:23 compute-0 podman[412972]: 2025-10-03 10:02:23.836101467 +0000 UTC m=+0.084951305 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Oct  3 10:02:23 compute-0 podman[412974]: 2025-10-03 10:02:23.890601816 +0000 UTC m=+0.138068640 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:02:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 51 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 46 KiB/s wr, 1 op/s
Oct  3 10:02:25 compute-0 ovn_controller[88471]: 2025-10-03T10:02:25Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:40:5c 192.168.0.158
Oct  3 10:02:25 compute-0 ovn_controller[88471]: 2025-10-03T10:02:25Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:40:5c 192.168.0.158
Oct  3 10:02:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 56 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 526 KiB/s wr, 9 op/s
Oct  3 10:02:26 compute-0 nova_compute[351685]: 2025-10-03 10:02:26.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:27 compute-0 nova_compute[351685]: 2025-10-03 10:02:27.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 66 MiB data, 193 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 1.1 MiB/s wr, 36 op/s
Oct  3 10:02:29 compute-0 podman[157165]: time="2025-10-03T10:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:02:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:02:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9040 "" "Go-http-client/1.1"
Oct  3 10:02:29 compute-0 podman[413039]: 2025-10-03 10:02:29.846913895 +0000 UTC m=+0.105108591 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:02:30 compute-0 nova_compute[351685]: 2025-10-03 10:02:30.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:02:30 compute-0 nova_compute[351685]: 2025-10-03 10:02:30.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:02:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct  3 10:02:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:31 compute-0 openstack_network_exporter[367524]: ERROR   10:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:02:31 compute-0 openstack_network_exporter[367524]: ERROR   10:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:02:31 compute-0 openstack_network_exporter[367524]: ERROR   10:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:02:31 compute-0 openstack_network_exporter[367524]: ERROR   10:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:02:31 compute-0 openstack_network_exporter[367524]: ERROR   10:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:02:31 compute-0 nova_compute[351685]: 2025-10-03 10:02:31.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:32 compute-0 nova_compute[351685]: 2025-10-03 10:02:32.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:32 compute-0 ovn_controller[88471]: 2025-10-03T10:02:32Z|00034|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  3 10:02:32 compute-0 nova_compute[351685]: 2025-10-03 10:02:32.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:02:32 compute-0 nova_compute[351685]: 2025-10-03 10:02:32.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:02:32 compute-0 nova_compute[351685]: 2025-10-03 10:02:32.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:02:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct  3 10:02:33 compute-0 nova_compute[351685]: 2025-10-03 10:02:33.392 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:02:33 compute-0 nova_compute[351685]: 2025-10-03 10:02:33.393 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:02:33 compute-0 nova_compute[351685]: 2025-10-03 10:02:33.393 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:02:33 compute-0 nova_compute[351685]: 2025-10-03 10:02:33.394 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:02:33 compute-0 podman[413056]: 2025-10-03 10:02:33.795023491 +0000 UTC m=+0.064208040 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:02:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct  3 10:02:34 compute-0 podman[413080]: 2025-10-03 10:02:34.811615109 +0000 UTC m=+0.074765750 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:02:34 compute-0 podman[413081]: 2025-10-03 10:02:34.847783308 +0000 UTC m=+0.101812477 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.044 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.166 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.166 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.167 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.207 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.208 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.208 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.208 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.209 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:02:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:02:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242390642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.711 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.797 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.799 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:02:35 compute-0 nova_compute[351685]: 2025-10-03 10:02:35.799 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:02:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.150 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.151 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4064MB free_disk=59.955196380615234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.152 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.152 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.222 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.223 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.223 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.259 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:02:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:02:36 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670695123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.734 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.744 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.783 2 ERROR nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [req-c1ba8611-1c59-4c3d-862f-a01453b58811] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-c1ba8611-1c59-4c3d-862f-a01453b58811"}]}#033[00m
Oct  3 10:02:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.808 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.833 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.834 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.856 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.883 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:36 compute-0 nova_compute[351685]: 2025-10-03 10:02:36.917 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:02:37 compute-0 nova_compute[351685]: 2025-10-03 10:02:37.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:02:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:02:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2799547699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:02:37 compute-0 nova_compute[351685]: 2025-10-03 10:02:37.410 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:02:37 compute-0 nova_compute[351685]: 2025-10-03 10:02:37.419 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:02:37 compute-0 nova_compute[351685]: 2025-10-03 10:02:37.463 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updated inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  3 10:02:37 compute-0 nova_compute[351685]: 2025-10-03 10:02:37.464 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  3 10:02:37 compute-0 nova_compute[351685]: 2025-10-03 10:02:37.464 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:02:37 compute-0 nova_compute[351685]: 2025-10-03 10:02:37.490 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:02:37 compute-0 nova_compute[351685]: 2025-10-03 10:02:37.490 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:02:38 compute-0 nova_compute[351685]: 2025-10-03 10:02:38.052 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:02:38 compute-0 nova_compute[351685]: 2025-10-03 10:02:38.053 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:02:38 compute-0 nova_compute[351685]: 2025-10-03 10:02:38.126 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:02:38 compute-0 nova_compute[351685]: 2025-10-03 10:02:38.126 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:02:38 compute-0 nova_compute[351685]: 2025-10-03 10:02:38.126 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:02:38 compute-0 nova_compute[351685]: 2025-10-03 10:02:38.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
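The "Running periodic task ComputeManager._*" entries are emitted by oslo.service's periodic-task runner, which nova's ComputeManager builds on. A hedged sketch of that API (assuming the standard oslo_service decorator; the task body here is hypothetical):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        # Hypothetical manager; ComputeManager registers its _poll_*
        # methods with the same decorator.
        @periodic_task.periodic_task(spacing=60)
        def _poll_demo(self, context):
            pass  # runs roughly every 60s once the runner is driven

    mgr = DemoManager(cfg.CONF)
    # nova drives this from a timer loop; each pass logs the
    # "Running periodic task ..." lines seen above.
    mgr.run_periodic_tasks(context=None)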
Oct  3 10:02:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 996 KiB/s wr, 46 op/s
Oct  3 10:02:38 compute-0 podman[413185]: 2025-10-03 10:02:38.804681987 +0000 UTC m=+0.068766317 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:02:40 compute-0 nova_compute[351685]: 2025-10-03 10:02:40.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:02:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 403 KiB/s wr, 20 op/s
Oct  3 10:02:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:02:41.589 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:02:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:02:41.590 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:02:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:02:41.591 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
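The three ovn_metadata_agent lines show one acquire/release cycle of an oslo.concurrency lock around ProcessMonitor._check_child_processes; the "inner" frames in the log are the wrapper that lockutils generates. A minimal sketch of the same API (the guarded body here is a stand-in):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Only one thread of this process runs this at a time;
        # lockutils logs the "acquired"/"released" lines above.
        pass

    check_child_processes()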
Oct  3 10:02:41 compute-0 nova_compute[351685]: 2025-10-03 10:02:41.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:42 compute-0 nova_compute[351685]: 2025-10-03 10:02:42.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:02:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 0 op/s
Oct  3 10:02:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:02:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:02:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:02:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:02:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:02:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:02:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9525538e-64f0-48f9-9f25-40e12abd86a6 does not exist
Oct  3 10:02:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev de30aff5-21a5-4148-8922-89cdd00ee309 does not exist
Oct  3 10:02:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8d8e314a-ffbf-407a-92dd-fb2931931fac does not exist
Oct  3 10:02:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:02:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:02:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:02:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:02:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:02:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
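The mon_command entries at 10:02:43 are cephadm (running inside the mgr, entity mgr.compute-0.vtkhde) issuing monitor commands such as "config generate-minimal-conf" and "auth get". The same commands can be sent from Python through the rados binding; a sketch assuming python3-rados and a reachable cluster:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Same JSON command the mon logs as "config generate-minimal-conf".
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b'')
        print(ret, outbuf.decode())
    finally:
        cluster.shutdown()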
Oct  3 10:02:43 compute-0 podman[413592]: 2025-10-03 10:02:43.828698823 +0000 UTC m=+0.036777431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:02:44 compute-0 podman[413592]: 2025-10-03 10:02:44.487668999 +0000 UTC m=+0.695747577 container create 40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:02:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:02:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:02:44 compute-0 systemd[1]: Started libpod-conmon-40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743.scope.
Oct  3 10:02:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:02:44 compute-0 podman[413606]: 2025-10-03 10:02:44.625296003 +0000 UTC m=+0.081698400 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:02:44 compute-0 podman[413592]: 2025-10-03 10:02:44.626327747 +0000 UTC m=+0.834406355 container init 40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:02:44 compute-0 podman[413592]: 2025-10-03 10:02:44.6361188 +0000 UTC m=+0.844197378 container start 40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:02:44 compute-0 podman[413592]: 2025-10-03 10:02:44.642778234 +0000 UTC m=+0.850856842 container attach 40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:02:44 compute-0 angry_jemison[413620]: 167 167
Oct  3 10:02:44 compute-0 systemd[1]: libpod-40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743.scope: Deactivated successfully.
Oct  3 10:02:44 compute-0 podman[413592]: 2025-10-03 10:02:44.64419972 +0000 UTC m=+0.852278288 container died 40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:02:44 compute-0 podman[413609]: 2025-10-03 10:02:44.657442565 +0000 UTC m=+0.114446362 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, version=9.4, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, container_name=kepler, vcs-type=git)
Oct  3 10:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad9bd2a8c886017e7e79ae2c4e861878fd600865e7679606b0d2f2bc5d4d39d8-merged.mount: Deactivated successfully.
Oct  3 10:02:44 compute-0 podman[413592]: 2025-10-03 10:02:44.722226443 +0000 UTC m=+0.930305021 container remove 40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:02:44 compute-0 systemd[1]: libpod-conmon-40852e4995748c0b4c1fe939e720b6eb8bf7d4efb0b202dfa254bfff2fd3d743.scope: Deactivated successfully.
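The angry_jemison events (create, init, start, attach, died, remove, all within ~200 ms) are the trail of a single short-lived container: cephadm launches one-shot probe containers from the ceph image and removes them on exit. That sequence is what a plain `podman run --rm` produces; a sketch via subprocess (the command is illustrative, not cephadm's exact invocation):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # One-shot run; podman emits create/init/start/attach/died/remove events.
    result = subprocess.run(
        ["podman", "run", "--rm", image, "ceph", "--version"],
        capture_output=True, text=True, check=False)
    print(result.returncode, result.stdout.strip())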
Oct  3 10:02:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 0 op/s
Oct  3 10:02:44 compute-0 podman[413673]: 2025-10-03 10:02:44.910073398 +0000 UTC m=+0.048696343 container create 35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:02:44 compute-0 systemd[1]: Started libpod-conmon-35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c.scope.
Oct  3 10:02:44 compute-0 podman[413673]: 2025-10-03 10:02:44.890467519 +0000 UTC m=+0.029090484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:02:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd3ba0a352dba3931c0af91b002e579e47582586d2a3dfb74d36148a051f7a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd3ba0a352dba3931c0af91b002e579e47582586d2a3dfb74d36148a051f7a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd3ba0a352dba3931c0af91b002e579e47582586d2a3dfb74d36148a051f7a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd3ba0a352dba3931c0af91b002e579e47582586d2a3dfb74d36148a051f7a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd3ba0a352dba3931c0af91b002e579e47582586d2a3dfb74d36148a051f7a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:45 compute-0 podman[413673]: 2025-10-03 10:02:45.032107892 +0000 UTC m=+0.170730857 container init 35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:02:45 compute-0 podman[413673]: 2025-10-03 10:02:45.046864106 +0000 UTC m=+0.185487051 container start 35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:02:45 compute-0 podman[413673]: 2025-10-03 10:02:45.052545338 +0000 UTC m=+0.191168303 container attach 35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:02:46
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr']
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:02:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:46 compute-0 elastic_hofstadter[413688]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:02:46 compute-0 elastic_hofstadter[413688]: --> relative data size: 1.0
Oct  3 10:02:46 compute-0 elastic_hofstadter[413688]: --> All data devices are unavailable
Oct  3 10:02:46 compute-0 systemd[1]: libpod-35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c.scope: Deactivated successfully.
Oct  3 10:02:46 compute-0 systemd[1]: libpod-35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c.scope: Consumed 1.075s CPU time.
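The elastic_hofstadter output ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") reads like the dry-run report of ceph-volume's batch subcommand: the three loop-backed LVs are already consumed by existing OSDs, so none are available for new deployments. A hypothetical invocation for illustration (flags from the stock ceph-volume CLI; not necessarily cephadm's exact call):

    import subprocess

    lvs = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    # --report only prints what would be done; nothing is created.
    out = subprocess.run(["ceph-volume", "lvm", "batch", "--report", *lvs],
                         capture_output=True, text=True, check=False)
    print(out.stdout)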
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:02:46 compute-0 podman[413717]: 2025-10-03 10:02:46.324728183 +0000 UTC m=+0.039700785 container died 35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  3 10:02:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfd3ba0a352dba3931c0af91b002e579e47582586d2a3dfb74d36148a051f7a4-merged.mount: Deactivated successfully.
Oct  3 10:02:46 compute-0 podman[413717]: 2025-10-03 10:02:46.397221148 +0000 UTC m=+0.112193720 container remove 35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 10:02:46 compute-0 systemd[1]: libpod-conmon-35ce00df24cfcd3b077677f3e411ae67621433abcabdb3f4d489a2766eb4ce2c.scope: Deactivated successfully.
Oct  3 10:02:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct  3 10:02:46 compute-0 nova_compute[351685]: 2025-10-03 10:02:46.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:47 compute-0 nova_compute[351685]: 2025-10-03 10:02:47.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:47 compute-0 podman[413868]: 2025-10-03 10:02:47.13802653 +0000 UTC m=+0.053179347 container create c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  3 10:02:47 compute-0 systemd[1]: Started libpod-conmon-c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389.scope.
Oct  3 10:02:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:02:47 compute-0 podman[413868]: 2025-10-03 10:02:47.119534766 +0000 UTC m=+0.034687593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:02:47 compute-0 podman[413868]: 2025-10-03 10:02:47.229791683 +0000 UTC m=+0.144944520 container init c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:02:47 compute-0 podman[413868]: 2025-10-03 10:02:47.240206247 +0000 UTC m=+0.155359064 container start c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:02:47 compute-0 podman[413868]: 2025-10-03 10:02:47.244287758 +0000 UTC m=+0.159440575 container attach c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:02:47 compute-0 great_merkle[413884]: 167 167
Oct  3 10:02:47 compute-0 systemd[1]: libpod-c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389.scope: Deactivated successfully.
Oct  3 10:02:47 compute-0 podman[413868]: 2025-10-03 10:02:47.248213233 +0000 UTC m=+0.163366050 container died c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:02:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-19abc746d24a9ba183bc35f83fc67c3a37ec739b70691f42df5db93efacaba63-merged.mount: Deactivated successfully.
Oct  3 10:02:47 compute-0 podman[413868]: 2025-10-03 10:02:47.306573456 +0000 UTC m=+0.221726263 container remove c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:02:47 compute-0 systemd[1]: libpod-conmon-c637724212f21dca1eac92e0af8ed7164aeb66c02b8bfbd9fdae11b768bba389.scope: Deactivated successfully.
Oct  3 10:02:47 compute-0 podman[413907]: 2025-10-03 10:02:47.514445853 +0000 UTC m=+0.063044833 container create a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:02:47 compute-0 systemd[1]: Started libpod-conmon-a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f.scope.
Oct  3 10:02:47 compute-0 podman[413907]: 2025-10-03 10:02:47.485189945 +0000 UTC m=+0.033788955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:02:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b43611bd63b91ecb4862f9ffcad3b7ef9a86efa6e605c6651b864bfea80b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b43611bd63b91ecb4862f9ffcad3b7ef9a86efa6e605c6651b864bfea80b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b43611bd63b91ecb4862f9ffcad3b7ef9a86efa6e605c6651b864bfea80b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8b43611bd63b91ecb4862f9ffcad3b7ef9a86efa6e605c6651b864bfea80b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:47 compute-0 podman[413907]: 2025-10-03 10:02:47.641341513 +0000 UTC m=+0.189940513 container init a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 10:02:47 compute-0 podman[413907]: 2025-10-03 10:02:47.65929228 +0000 UTC m=+0.207891260 container start a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:02:47 compute-0 podman[413907]: 2025-10-03 10:02:47.664711833 +0000 UTC m=+0.213310913 container attach a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:02:48 compute-0 hopeful_gould[413921]: {
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:    "0": [
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:        {
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "devices": [
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "/dev/loop3"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            ],
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_name": "ceph_lv0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_size": "21470642176",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "name": "ceph_lv0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "tags": {
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cluster_name": "ceph",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.crush_device_class": "",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.encrypted": "0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osd_id": "0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.type": "block",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.vdo": "0"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            },
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "type": "block",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "vg_name": "ceph_vg0"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:        }
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:    ],
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:    "1": [
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:        {
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "devices": [
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "/dev/loop4"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            ],
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_name": "ceph_lv1",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_size": "21470642176",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "name": "ceph_lv1",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "tags": {
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cluster_name": "ceph",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.crush_device_class": "",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.encrypted": "0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osd_id": "1",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.type": "block",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.vdo": "0"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            },
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "type": "block",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "vg_name": "ceph_vg1"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:        }
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:    ],
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:    "2": [
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:        {
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "devices": [
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "/dev/loop5"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            ],
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_name": "ceph_lv2",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_size": "21470642176",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "name": "ceph_lv2",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "tags": {
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.cluster_name": "ceph",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.crush_device_class": "",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.encrypted": "0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osd_id": "2",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.type": "block",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:                "ceph.vdo": "0"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            },
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "type": "block",
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:            "vg_name": "ceph_vg2"
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:        }
Oct  3 10:02:48 compute-0 hopeful_gould[413921]:    ]
Oct  3 10:02:48 compute-0 hopeful_gould[413921]: }
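The hopeful_gould JSON above is ceph-volume's LVM listing keyed by OSD id (matching `ceph-volume lvm list --format json` style output), which cephadm uses to map osd.0 through osd.2 to their logical volumes and backing loop devices. A small sketch that summarizes such a blob (abridged here to osd.0; the full structure parses the same way):

    import json

    blob = '''{
      "0": [{"devices": ["/dev/loop3"],
             "lv_path": "/dev/ceph_vg0/ceph_lv0",
             "tags": {"ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"}}]
    }'''
    for osd_id, lvs in json.loads(blob).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")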
Oct  3 10:02:48 compute-0 systemd[1]: libpod-a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f.scope: Deactivated successfully.
Oct  3 10:02:48 compute-0 podman[413933]: 2025-10-03 10:02:48.536560267 +0000 UTC m=+0.030427516 container died a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8b43611bd63b91ecb4862f9ffcad3b7ef9a86efa6e605c6651b864bfea80b35-merged.mount: Deactivated successfully.
Oct  3 10:02:48 compute-0 podman[413933]: 2025-10-03 10:02:48.617151492 +0000 UTC m=+0.111018721 container remove a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 10:02:48 compute-0 systemd[1]: libpod-conmon-a59368e9f4ce9aef6179a41528b9b8e3f2034caf3f779b3cc7baf2708b24ba1f.scope: Deactivated successfully.
Oct  3 10:02:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct  3 10:02:49 compute-0 podman[414085]: 2025-10-03 10:02:49.395092735 +0000 UTC m=+0.050473249 container create 29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:02:49 compute-0 systemd[1]: Started libpod-conmon-29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446.scope.
Oct  3 10:02:49 compute-0 podman[414085]: 2025-10-03 10:02:49.375042442 +0000 UTC m=+0.030422976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:02:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:02:49 compute-0 podman[414085]: 2025-10-03 10:02:49.500702233 +0000 UTC m=+0.156082767 container init 29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:02:49 compute-0 podman[414085]: 2025-10-03 10:02:49.50872125 +0000 UTC m=+0.164101764 container start 29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_nash, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 10:02:49 compute-0 pedantic_nash[414101]: 167 167
Oct  3 10:02:49 compute-0 podman[414085]: 2025-10-03 10:02:49.514801755 +0000 UTC m=+0.170182289 container attach 29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_nash, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:02:49 compute-0 systemd[1]: libpod-29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446.scope: Deactivated successfully.
Oct  3 10:02:49 compute-0 podman[414085]: 2025-10-03 10:02:49.516405866 +0000 UTC m=+0.171786430 container died 29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_nash, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:02:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-50898fdd32a873d7780bc05cb0b60a35aeba478e1435d1bfffdce3253303494a-merged.mount: Deactivated successfully.
Oct  3 10:02:49 compute-0 podman[414085]: 2025-10-03 10:02:49.569350164 +0000 UTC m=+0.224730678 container remove 29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_nash, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 10:02:49 compute-0 systemd[1]: libpod-conmon-29fa3018f6c379c270f754028bcb5b9bec225d15a835717b996e1cbeb5fc3446.scope: Deactivated successfully.
Oct  3 10:02:49 compute-0 podman[414126]: 2025-10-03 10:02:49.75532896 +0000 UTC m=+0.056979309 container create 193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:02:49 compute-0 systemd[1]: Started libpod-conmon-193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589.scope.
Oct  3 10:02:49 compute-0 podman[414126]: 2025-10-03 10:02:49.7281913 +0000 UTC m=+0.029841659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:02:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b15e5d58c5f0e2a3dc83f2649227ea80fb3cacfc919a61fd3aae4387b8c505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b15e5d58c5f0e2a3dc83f2649227ea80fb3cacfc919a61fd3aae4387b8c505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b15e5d58c5f0e2a3dc83f2649227ea80fb3cacfc919a61fd3aae4387b8c505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b15e5d58c5f0e2a3dc83f2649227ea80fb3cacfc919a61fd3aae4387b8c505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:02:49 compute-0 podman[414126]: 2025-10-03 10:02:49.869553273 +0000 UTC m=+0.171203642 container init 193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:02:49 compute-0 podman[414126]: 2025-10-03 10:02:49.885426982 +0000 UTC m=+0.187077331 container start 193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:02:49 compute-0 podman[414126]: 2025-10-03 10:02:49.892930774 +0000 UTC m=+0.194581143 container attach 193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:02:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:02:50.191 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 10:02:50 compute-0 nova_compute[351685]: 2025-10-03 10:02:50.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:02:50.192 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  3 10:02:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]: {
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "osd_id": 1,
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "type": "bluestore"
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:    },
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "osd_id": 2,
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "type": "bluestore"
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:    },
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "osd_id": 0,
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:        "type": "bluestore"
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]:    }
Oct  3 10:02:50 compute-0 beautiful_keldysh[414142]: }
Oct  3 10:02:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:02:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.2 total, 600.0 interval
    Cumulative writes: 6137 writes, 25K keys, 6137 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 6137 writes, 1080 syncs, 5.68 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 461 writes, 1321 keys, 461 commit groups, 1.0 writes per commit group, ingest: 1.18 MB, 0.00 MB/s
    Interval WAL: 461 writes, 193 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:02:50 compute-0 systemd[1]: libpod-193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589.scope: Deactivated successfully.
Oct  3 10:02:50 compute-0 podman[414126]: 2025-10-03 10:02:50.962917024 +0000 UTC m=+1.264567383 container died 193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:02:50 compute-0 systemd[1]: libpod-193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589.scope: Consumed 1.058s CPU time.
Oct  3 10:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-38b15e5d58c5f0e2a3dc83f2649227ea80fb3cacfc919a61fd3aae4387b8c505-merged.mount: Deactivated successfully.
Oct  3 10:02:51 compute-0 podman[414126]: 2025-10-03 10:02:51.038493848 +0000 UTC m=+1.340144207 container remove 193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:02:51 compute-0 systemd[1]: libpod-conmon-193cd426b92d8bec62c148502f8ce70d0f94b50ebc5eadabf08fd634bf6a8589.scope: Deactivated successfully.
Oct  3 10:02:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:02:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:02:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:51 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 607bc1f0-8738-4102-9b79-719b9521dfe0 does not exist
Oct  3 10:02:51 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 73671425-1526-493f-a35e-97d7880049dd does not exist
Oct  3 10:02:51 compute-0 nova_compute[351685]: 2025-10-03 10:02:51.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:02:52 compute-0 nova_compute[351685]: 2025-10-03 10:02:52.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:53 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:02:53.194 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 10:02:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:02:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918826600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:02:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:02:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918826600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:02:54 compute-0 podman[414238]: 2025-10-03 10:02:54.806167076 +0000 UTC m=+0.067306640 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Oct  3 10:02:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:54 compute-0 podman[414239]: 2025-10-03 10:02:54.843501513 +0000 UTC m=+0.104114120 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 10:02:54 compute-0 podman[414240]: 2025-10-03 10:02:54.884049214 +0000 UTC m=+0.138079431 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:02:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:02:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:02:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:02:56 compute-0 nova_compute[351685]: 2025-10-03 10:02:56.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:57 compute-0 nova_compute[351685]: 2025-10-03 10:02:57.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:02:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:02:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 7239 writes, 29K keys, 7239 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7239 writes, 1438 syncs, 5.03 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 419 writes, 1152 keys, 419 commit groups, 1.0 writes per commit group, ingest: 0.95 MB, 0.00 MB/s
    Interval WAL: 419 writes, 177 syncs, 2.37 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.073 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "5b008829-2c76-4e40-b9e6-0e3d73095522" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.074 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.094 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.184 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.185 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.195 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.196 2 INFO nova.compute.claims [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Claim successful on node compute-0.ctlplane.example.com
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.323 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:02:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:02:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3450859492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.775 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.783 2 DEBUG nova.compute.provider_tree [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.805 2 DEBUG nova.scheduler.client.report [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:02:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.840 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.841 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.897 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.899 2 DEBUG nova.network.neutron [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.921 2 INFO nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  3 10:02:58 compute-0 nova_compute[351685]: 2025-10-03 10:02:58.962 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.055 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.056 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.057 2 INFO nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Creating image(s)
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.086 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.119 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.154 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.161 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.223 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.224 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "8123da205344dbbb79d5d821c9749dc540280b1e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.225 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.226 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.255 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.262 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e 5b008829-2c76-4e40-b9e6-0e3d73095522_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.621 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e 5b008829-2c76-4e40-b9e6-0e3d73095522_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.713 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] resizing rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  3 10:02:59 compute-0 podman[157165]: time="2025-10-03T10:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:02:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:02:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9029 "" "Go-http-client/1.1"
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.883 2 DEBUG nova.objects.instance [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'migration_context' on Instance uuid 5b008829-2c76-4e40-b9e6-0e3d73095522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.927 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.969 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:02:59 compute-0 nova_compute[351685]: 2025-10-03 10:02:59.975 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.033 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.034 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.035 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.035 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.062 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.069 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.410 2 DEBUG nova.network.neutron [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Successfully updated port: d601bb86-7265-40b5-ac1c-42d005514cfd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.431 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.431 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquired lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.431 2 DEBUG nova.network.neutron [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.496 2 DEBUG nova.compute.manager [req-5104c717-de44-46a2-acbe-bd1b55f07e1d req-ce846f4b-fc53-418a-81a9-e2f87d92b1ba 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received event network-changed-d601bb86-7265-40b5-ac1c-42d005514cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.496 2 DEBUG nova.compute.manager [req-5104c717-de44-46a2-acbe-bd1b55f07e1d req-ce846f4b-fc53-418a-81a9-e2f87d92b1ba 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Refreshing instance network info cache due to event network-changed-d601bb86-7265-40b5-ac1c-42d005514cfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.497 2 DEBUG oslo_concurrency.lockutils [req-5104c717-de44-46a2-acbe-bd1b55f07e1d req-ce846f4b-fc53-418a-81a9-e2f87d92b1ba 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.500 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.580 2 DEBUG nova.network.neutron [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.635 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.635 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Ensure instance console log exists: /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.636 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.637 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:03:00 compute-0 nova_compute[351685]: 2025-10-03 10:03:00.637 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:03:00 compute-0 podman[414617]: 2025-10-03 10:03:00.79943989 +0000 UTC m=+0.061980049 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct  3 10:03:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 124 KiB/s wr, 14 op/s
Oct  3 10:03:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:01 compute-0 openstack_network_exporter[367524]: ERROR   10:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:03:01 compute-0 openstack_network_exporter[367524]: ERROR   10:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:03:01 compute-0 openstack_network_exporter[367524]: ERROR   10:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:03:01 compute-0 openstack_network_exporter[367524]: ERROR   10:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:03:01 compute-0 openstack_network_exporter[367524]: ERROR   10:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:03:01 compute-0 nova_compute[351685]: 2025-10-03 10:03:01.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.146 2 DEBUG nova.network.neutron [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updating instance_info_cache with network_info: [{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.170 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Releasing lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.170 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Instance network_info: |[{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.171 2 DEBUG oslo_concurrency.lockutils [req-5104c717-de44-46a2-acbe-bd1b55f07e1d req-ce846f4b-fc53-418a-81a9-e2f87d92b1ba 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.171 2 DEBUG nova.network.neutron [req-5104c717-de44-46a2-acbe-bd1b55f07e1d req-ce846f4b-fc53-418a-81a9-e2f87d92b1ba 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Refreshing network info cache for port d601bb86-7265-40b5-ac1c-42d005514cfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.175 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Start _get_guest_xml network_info=[{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 1, 'encrypted': False, 'encryption_options': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.182 2 WARNING nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.191 2 DEBUG nova.virt.libvirt.host [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.191 2 DEBUG nova.virt.libvirt.host [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.197 2 DEBUG nova.virt.libvirt.host [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.198 2 DEBUG nova.virt.libvirt.host [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.200 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.200 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T10:00:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='ada739ee-222b-4269-8d29-62bea534173e',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.201 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.201 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.201 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.201 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.202 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.202 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.202 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.203 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.203 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.203 2 DEBUG nova.virt.hardware [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.206 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:03:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:03:02 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/194218572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.689 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:03:02 compute-0 nova_compute[351685]: 2025-10-03 10:03:02.690 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:03:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 124 KiB/s wr, 14 op/s
Oct  3 10:03:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:03:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577994602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.189 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.218 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.225 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:03:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:03:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 6257 writes, 25K keys, 6257 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6257 writes, 1129 syncs, 5.54 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 519 writes, 1502 keys, 519 commit groups, 1.0 writes per commit group, ingest: 1.47 MB, 0.00 MB/s#012Interval WAL: 519 writes, 213 syncs, 2.44 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:03:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:03:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/902797867' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.723 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.725 2 DEBUG nova.virt.libvirt.vif [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:02:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho',id=2,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-u8cxzf0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:02:58Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTIyMzI1NTkzMjMwOTc4NTQ3MTU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjIzMjU1OTMyMzA5Nzg1NDcxNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgY
mVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTIyMzI1NTkzMjMwOTc4NTQ3MTU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIg
dGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcW
Oct  3 10:03:03 compute-0 nova_compute[351685]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjIzMjU1OTMyMzA5Nzg1NDcxNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTIyMzI1NTkzMjMwOTc4NTQ3MTU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=5b008829-2c76-4e40-b9e6-0e3d73095522,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.725 2 DEBUG nova.network.os_vif_util [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.726 2 DEBUG nova.network.os_vif_util [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:23:11,bridge_name='br-int',has_traffic_filtering=True,id=d601bb86-7265-40b5-ac1c-42d005514cfd,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd601bb86-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.727 2 DEBUG nova.objects.instance [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'pci_devices' on Instance uuid 5b008829-2c76-4e40-b9e6-0e3d73095522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.729 2 DEBUG nova.network.neutron [req-5104c717-de44-46a2-acbe-bd1b55f07e1d req-ce846f4b-fc53-418a-81a9-e2f87d92b1ba 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updated VIF entry in instance network info cache for port d601bb86-7265-40b5-ac1c-42d005514cfd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.730 2 DEBUG nova.network.neutron [req-5104c717-de44-46a2-acbe-bd1b55f07e1d req-ce846f4b-fc53-418a-81a9-e2f87d92b1ba 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updating instance_info_cache with network_info: [{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.742 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] End _get_guest_xml xml=<domain type="kvm">
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <uuid>5b008829-2c76-4e40-b9e6-0e3d73095522</uuid>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <name>instance-00000002</name>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <memory>524288</memory>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <metadata>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <nova:name>vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho</nova:name>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 10:03:02</nova:creationTime>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <nova:flavor name="m1.small">
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <nova:memory>512</nova:memory>
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <nova:ephemeral>1</nova:ephemeral>
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <nova:user uuid="2f408449ba0f42fcb69f92dbf541f2e3">admin</nova:user>
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <nova:project uuid="ee75a4dc6ade43baab6ee923c9cf4cdf">admin</nova:project>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="37f03e8a-3aed-46a5-8219-fc87e355127e"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <nova:port uuid="d601bb86-7265-40b5-ac1c-42d005514cfd">
Oct  3 10:03:03 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="192.168.0.19" ipVersion="4"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  </metadata>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <system>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <entry name="serial">5b008829-2c76-4e40-b9e6-0e3d73095522</entry>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <entry name="uuid">5b008829-2c76-4e40-b9e6-0e3d73095522</entry>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </system>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <os>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  </os>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <features>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <apic/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  </features>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  </clock>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  </cpu>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  <devices>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/5b008829-2c76-4e40-b9e6-0e3d73095522_disk">
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </source>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/5b008829-2c76-4e40-b9e6-0e3d73095522_disk.eph0">
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </source>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <target dev="vdb" bus="virtio"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/5b008829-2c76-4e40-b9e6-0e3d73095522_disk.config">
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </source>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:03:03 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:4c:23:11"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <target dev="tapd601bb86-72"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </interface>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522/console.log" append="off"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </serial>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <video>
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </video>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </rng>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 10:03:03 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 10:03:03 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 10:03:03 compute-0 nova_compute[351685]:  </devices>
Oct  3 10:03:03 compute-0 nova_compute[351685]: </domain>
Oct  3 10:03:03 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.743 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Preparing to wait for external event network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.744 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.744 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.744 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.745 2 DEBUG nova.virt.libvirt.vif [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:02:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho',id=2,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-u8cxzf0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:02:58Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTIyMzI1NTkzMjMwOTc4NTQ3MTU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjIzMjU1OTMyMzA5Nzg1NDcxNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29
scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTIyMzI1NTkzMjMwOTc4NTQ3MTU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZW
QgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykp
Oct  3 10:03:03 compute-0 nova_compute[351685]: YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MjIzMjU1OTMyMzA5Nzg1NDcxNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTIyMzI1NTkzMjMwOTc4NTQ3MTU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=5b008829-2c76-4e40-b9e6-0e3d73095522,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.745 2 DEBUG nova.network.os_vif_util [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.746 2 DEBUG nova.network.os_vif_util [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:23:11,bridge_name='br-int',has_traffic_filtering=True,id=d601bb86-7265-40b5-ac1c-42d005514cfd,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd601bb86-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.746 2 DEBUG os_vif [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:23:11,bridge_name='br-int',has_traffic_filtering=True,id=d601bb86-7265-40b5-ac1c-42d005514cfd,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd601bb86-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.747 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.748 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.750 2 DEBUG oslo_concurrency.lockutils [req-5104c717-de44-46a2-acbe-bd1b55f07e1d req-ce846f4b-fc53-418a-81a9-e2f87d92b1ba 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd601bb86-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd601bb86-72, col_values=(('external_ids', {'iface-id': 'd601bb86-7265-40b5-ac1c-42d005514cfd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:23:11', 'vm-uuid': '5b008829-2c76-4e40-b9e6-0e3d73095522'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
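The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are roughly what the following hand-run ovs-vsctl calls would do; this is a sketch of the equivalent, not the code path nova_compute actually takes:

    $ ovs-vsctl --may-exist add-br br-int -- set Bridge br-int datapath_type=system
    $ ovs-vsctl --may-exist add-port br-int tapd601bb86-72 -- \
        set Interface tapd601bb86-72 \
        external_ids:iface-id=d601bb86-7265-40b5-ac1c-42d005514cfd \
        external_ids:iface-status=active \
        external_ids:attached-mac='"fa:16:3e:4c:23:11"' \
        external_ids:vm-uuid=5b008829-2c76-4e40-b9e6-0e3d73095522
    # the inner double quotes keep the colons in the MAC literal for ovsdb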
Oct  3 10:03:03 compute-0 NetworkManager[45015]: <info>  [1759485783.7560] manager: (tapd601bb86-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.763 2 INFO os_vif [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:23:11,bridge_name='br-int',has_traffic_filtering=True,id=d601bb86-7265-40b5-ac1c-42d005514cfd,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd601bb86-72')#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.809 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.809 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.810 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.810 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No VIF found with MAC fa:16:3e:4c:23:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.810 2 INFO nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Using config drive#033[00m
Oct  3 10:03:03 compute-0 nova_compute[351685]: 2025-10-03 10:03:03.841 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:03:03 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:03:03.725 2 DEBUG nova.virt.libvirt.vif [None req-98e30987-2a58-49 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  3 10:03:03 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:03:03.745 2 DEBUG nova.virt.libvirt.vif [None req-98e30987-2a58-49 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
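These two rsyslogd complaints are why the long nova DEBUG records above arrive split and truncated: the messages exceed rsyslog's default 8096-byte cap. A hedged fix, assuming the v8 configuration syntax matching the version tag in the message, is to raise the global limit and restart the service:

    # /etc/rsyslog.conf — must be set before any input() modules load
    global(maxMessageSize="32768")

    $ systemctl restart rsyslog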
Oct  3 10:03:04 compute-0 nova_compute[351685]: 2025-10-03 10:03:04.549 2 INFO nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Creating config drive at /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522/disk.config#033[00m
Oct  3 10:03:04 compute-0 nova_compute[351685]: 2025-10-03 10:03:04.556 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_g6apwz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:03:04 compute-0 nova_compute[351685]: 2025-10-03 10:03:04.683 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_g6apwz" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
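The mkisofs run above packs the instance metadata into an ISO9660 image labelled config-2. To inspect such an image by hand, a loop mount works; a sketch, assuming a local copy of disk.config and an empty /mnt:

    $ mount -o loop,ro disk.config /mnt
    $ find /mnt -maxdepth 3
    $ cat /mnt/openstack/latest/meta_data.json
    $ umount /mnt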
Oct  3 10:03:04 compute-0 nova_compute[351685]: 2025-10-03 10:03:04.722 2 DEBUG nova.storage.rbd_utils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:03:04 compute-0 nova_compute[351685]: 2025-10-03 10:03:04.731 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522/disk.config 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:03:04 compute-0 podman[414759]: 2025-10-03 10:03:04.803109829 +0000 UTC m=+0.064815710 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:03:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 110 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Oct  3 10:03:04 compute-0 nova_compute[351685]: 2025-10-03 10:03:04.953 2 DEBUG oslo_concurrency.processutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522/disk.config 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:03:04 compute-0 nova_compute[351685]: 2025-10-03 10:03:04.954 2 INFO nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Deleting local config drive /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522/disk.config because it was imported into RBD.#033[00m
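After the import, the only copy of the config drive lives in the vms pool. It can be checked with the same credentials the logged command used; a sketch:

    $ rbd --id openstack --conf /etc/ceph/ceph.conf -p vms \
        info 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.config
    # optional round-trip: export the image, then loop-mount it as above
    $ rbd --id openstack --conf /etc/ceph/ceph.conf -p vms \
        export 5b008829-2c76-4e40-b9e6-0e3d73095522_disk.config /tmp/disk.config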
Oct  3 10:03:05 compute-0 kernel: tapd601bb86-72: entered promiscuous mode
Oct  3 10:03:05 compute-0 NetworkManager[45015]: <info>  [1759485785.0169] manager: (tapd601bb86-72): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Oct  3 10:03:05 compute-0 ovn_controller[88471]: 2025-10-03T10:03:05Z|00035|binding|INFO|Claiming lport d601bb86-7265-40b5-ac1c-42d005514cfd for this chassis.
Oct  3 10:03:05 compute-0 ovn_controller[88471]: 2025-10-03T10:03:05Z|00036|binding|INFO|d601bb86-7265-40b5-ac1c-42d005514cfd: Claiming fa:16:3e:4c:23:11 192.168.0.19
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.022 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:23:11 192.168.0.19'], port_security=['fa:16:3e:4c:23:11 192.168.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e77v6yplftfd-pkavijefxpwp-6g6pxdaavpud-port-rqxwqbtnumad', 'neutron:cidrs': '192.168.0.19/24', 'neutron:device_id': '5b008829-2c76-4e40-b9e6-0e3d73095522', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e77v6yplftfd-pkavijefxpwp-6g6pxdaavpud-port-rqxwqbtnumad', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d601bb86-7265-40b5-ac1c-42d005514cfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.023 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d601bb86-7265-40b5-ac1c-42d005514cfd in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 bound to our chassis#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.024 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0#033[00m
Oct  3 10:03:05 compute-0 ovn_controller[88471]: 2025-10-03T10:03:05Z|00037|binding|INFO|Setting lport d601bb86-7265-40b5-ac1c-42d005514cfd ovn-installed in OVS
Oct  3 10:03:05 compute-0 ovn_controller[88471]: 2025-10-03T10:03:05Z|00038|binding|INFO|Setting lport d601bb86-7265-40b5-ac1c-42d005514cfd up in Southbound
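The claim/up sequence above can be cross-checked against the OVN southbound database; a sketch, assuming ovn-sbctl is run somewhere with access to the SB DB:

    $ ovn-sbctl --columns=up,chassis,mac \
        find Port_Binding logical_port=d601bb86-7265-40b5-ac1c-42d005514cfd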
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.040 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[62e757c1-56fb-4a79-bc54-580e4687dfa6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:03:05 compute-0 systemd-machined[137653]: New machine qemu-2-instance-00000002.
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.070 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[af49f188-e81d-40a0-9d2e-8ba723d63f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:03:05 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
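With the machine registered, the libvirt domain itself can be examined; a sketch, assuming virsh is run wherever the libvirt daemon lives on this deployment (possibly inside a container):

    $ virsh dominfo instance-00000002
    $ virsh domiflist instance-00000002   # should list tapd601bb86-72 on br-int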
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.074 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[c3bc1ba4-9143-44db-ad0d-e6de4df37ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.105 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[016ede05-84fd-43d2-a74d-d839c8816c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:03:05 compute-0 systemd-udevd[414841]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 10:03:05 compute-0 NetworkManager[45015]: <info>  [1759485785.1213] device (tapd601bb86-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 10:03:05 compute-0 NetworkManager[45015]: <info>  [1759485785.1220] device (tapd601bb86-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.122 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[d75ef0e4-e368-4eec-8f55-c7a09e256791]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 37052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414847, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:03:05 compute-0 podman[414814]: 2025-10-03 10:03:05.141063569 +0000 UTC m=+0.093123979 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.139 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[df3896c7-5b63-4de2-bd19-207fa754fa0b]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489551, 'tstamp': 489551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414865, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489554, 'tstamp': 489554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414865, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.141 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.146 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.146 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.147 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:03:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:05.147 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
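The metadata provisioning above leaves 169.254.169.254/32 on tap67eed0ac-d1 inside the network's ovnmeta- namespace. A sketch of verifying that from the host, plus the request a guest would then make:

    $ ip netns list | grep ovnmeta-67eed0ac
    $ ip netns exec ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0 \
        ip -4 addr show dev tap67eed0ac-d1
    # from inside the guest:
    $ curl -s http://169.254.169.254/openstack/latest/meta_data.json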
Oct  3 10:03:05 compute-0 podman[414816]: 2025-10-03 10:03:05.15762552 +0000 UTC m=+0.097832629 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.567 2 DEBUG nova.compute.manager [req-4b157e47-6bf4-4e98-bbe5-a07bd292156c req-632ebfbe-7ddc-4b58-862d-89f455c6d229 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received event network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.567 2 DEBUG oslo_concurrency.lockutils [req-4b157e47-6bf4-4e98-bbe5-a07bd292156c req-632ebfbe-7ddc-4b58-862d-89f455c6d229 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.568 2 DEBUG oslo_concurrency.lockutils [req-4b157e47-6bf4-4e98-bbe5-a07bd292156c req-632ebfbe-7ddc-4b58-862d-89f455c6d229 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.568 2 DEBUG oslo_concurrency.lockutils [req-4b157e47-6bf4-4e98-bbe5-a07bd292156c req-632ebfbe-7ddc-4b58-862d-89f455c6d229 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:03:05 compute-0 nova_compute[351685]: 2025-10-03 10:03:05.568 2 DEBUG nova.compute.manager [req-4b157e47-6bf4-4e98-bbe5-a07bd292156c req-632ebfbe-7ddc-4b58-862d-89f455c6d229 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Processing event network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 10:03:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 10:03:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.308 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759485786.3078766, 5b008829-2c76-4e40-b9e6-0e3d73095522 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.309 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] VM Started (Lifecycle Event)#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.311 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.316 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.322 2 INFO nova.virt.libvirt.driver [-] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Instance spawned successfully.#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.323 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.329 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.335 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.353 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.354 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.355 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.355 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.356 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.357 2 DEBUG nova.virt.libvirt.driver [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.361 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.362 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759485786.307988, 5b008829-2c76-4e40-b9e6-0e3d73095522 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.362 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] VM Paused (Lifecycle Event)#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.403 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.410 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759485786.3157215, 5b008829-2c76-4e40-b9e6-0e3d73095522 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.410 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] VM Resumed (Lifecycle Event)#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.441 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.447 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.452 2 INFO nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Took 7.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.453 2 DEBUG nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.466 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.514 2 INFO nova.compute.manager [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Took 8.36 seconds to build instance.#033[00m
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.535 2 DEBUG oslo_concurrency.lockutils [None req-98e30987-2a58-490c-8441-86ee983b1cbb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
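With the build lock released after 8.461 s, the instance should now report ACTIVE. A sketch, assuming an openstack client with credentials for the project:

    $ openstack server show 5b008829-2c76-4e40-b9e6-0e3d73095522 \
        -c status -c OS-EXT-STS:power_state -c OS-EXT-SRV-ATTR:instance_name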
Oct  3 10:03:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 110 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Oct  3 10:03:06 compute-0 nova_compute[351685]: 2025-10-03 10:03:06.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:07 compute-0 nova_compute[351685]: 2025-10-03 10:03:07.687 2 DEBUG nova.compute.manager [req-ec78881a-dbdf-45a5-bdef-77131273d5f7 req-6fecd9f6-e22f-4bdb-989b-26dd22330031 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received event network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:03:07 compute-0 nova_compute[351685]: 2025-10-03 10:03:07.687 2 DEBUG oslo_concurrency.lockutils [req-ec78881a-dbdf-45a5-bdef-77131273d5f7 req-6fecd9f6-e22f-4bdb-989b-26dd22330031 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:03:07 compute-0 nova_compute[351685]: 2025-10-03 10:03:07.688 2 DEBUG oslo_concurrency.lockutils [req-ec78881a-dbdf-45a5-bdef-77131273d5f7 req-6fecd9f6-e22f-4bdb-989b-26dd22330031 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:03:07 compute-0 nova_compute[351685]: 2025-10-03 10:03:07.688 2 DEBUG oslo_concurrency.lockutils [req-ec78881a-dbdf-45a5-bdef-77131273d5f7 req-6fecd9f6-e22f-4bdb-989b-26dd22330031 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:03:07 compute-0 nova_compute[351685]: 2025-10-03 10:03:07.688 2 DEBUG nova.compute.manager [req-ec78881a-dbdf-45a5-bdef-77131273d5f7 req-6fecd9f6-e22f-4bdb-989b-26dd22330031 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] No waiting events found dispatching network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:03:07 compute-0 nova_compute[351685]: 2025-10-03 10:03:07.689 2 WARNING nova.compute.manager [req-ec78881a-dbdf-45a5-bdef-77131273d5f7 req-6fecd9f6-e22f-4bdb-989b-26dd22330031 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received unexpected event network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd for instance with vm_state active and task_state None.#033[00m
Oct  3 10:03:08 compute-0 nova_compute[351685]: 2025-10-03 10:03:08.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 1.4 MiB/s wr, 48 op/s
Oct  3 10:03:09 compute-0 podman[414931]: 2025-10-03 10:03:09.84163121 +0000 UTC m=+0.105206315 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:03:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 1.4 MiB/s wr, 62 op/s
Oct  3 10:03:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:11 compute-0 nova_compute[351685]: 2025-10-03 10:03:11.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 408 KiB/s rd, 1.3 MiB/s wr, 48 op/s
Oct  3 10:03:13 compute-0 nova_compute[351685]: 2025-10-03 10:03:13.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:14 compute-0 podman[414951]: 2025-10-03 10:03:14.785133983 +0000 UTC m=+0.087759625 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 10:03:14 compute-0 podman[414950]: 2025-10-03 10:03:14.792884292 +0000 UTC m=+0.101250599 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:03:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.3 MiB/s wr, 83 op/s
Oct  3 10:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:03:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Oct  3 10:03:16 compute-0 nova_compute[351685]: 2025-10-03 10:03:16.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:18 compute-0 nova_compute[351685]: 2025-10-03 10:03:18.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 57 op/s
Oct  3 10:03:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 255 B/s wr, 50 op/s
Oct  3 10:03:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:21 compute-0 nova_compute[351685]: 2025-10-03 10:03:21.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 35 op/s
Oct  3 10:03:23 compute-0 nova_compute[351685]: 2025-10-03 10:03:23.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 35 op/s
Oct  3 10:03:25 compute-0 podman[414994]: 2025-10-03 10:03:25.83721812 +0000 UTC m=+0.103392489 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 10:03:25 compute-0 podman[414995]: 2025-10-03 10:03:25.838401597 +0000 UTC m=+0.101292600 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 10:03:25 compute-0 podman[414996]: 2025-10-03 10:03:25.850217336 +0000 UTC m=+0.108486180 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:03:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:03:26 compute-0 nova_compute[351685]: 2025-10-03 10:03:26.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:28 compute-0 nova_compute[351685]: 2025-10-03 10:03:28.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:03:29 compute-0 podman[157165]: time="2025-10-03T10:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:03:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:03:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9042 "" "Go-http-client/1.1"
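
The two GET lines above are podman_exporter polling the libpod REST API over the service socket (unix:///run/podman/podman.sock per its config_data earlier in the log). A stdlib-only sketch of the same query, illustrative rather than how the Go exporter actually does it:

    import json
    import socket

    def libpod_get(path, sock="/run/podman/podman.sock"):
        # Plain HTTP/1.0 over the unix socket; the server closes the
        # connection after responding, so reading to EOF is enough.
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock)
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: d\r\n\r\n".encode())
        data = b""
        while chunk := s.recv(65536):
            data += chunk
        s.close()
        return json.loads(data.partition(b"\r\n\r\n")[2])

    containers = libpod_get(
        "/v4.9.3/libpod/containers/json?all=true&external=false")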
Oct  3 10:03:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:03:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:31 compute-0 openstack_network_exporter[367524]: ERROR   10:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:03:31 compute-0 openstack_network_exporter[367524]: ERROR   10:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:03:31 compute-0 openstack_network_exporter[367524]: ERROR   10:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:03:31 compute-0 openstack_network_exporter[367524]: ERROR   10:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:03:31 compute-0 openstack_network_exporter[367524]: ERROR   10:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
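
These exporter errors mean no appctl control sockets were found: ovn-northd does not run on a compute node, and the pmd-perf/pmd-rxq queries only apply to a userspace (dpif-netdev) datapath, whereas this host's ports use datapath_type "system" (see the instance network_info below). A quick check for the same condition; the .ctl glob patterns follow the usual ovs/ovn run-dir naming and are an assumption here:

    import glob

    # Empty lists reproduce the "no control socket files found" errors.
    print(glob.glob("/run/openvswitch/ovs-vswitchd.*.ctl"))
    print(glob.glob("/run/ovn/ovn-northd.*.ctl"))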
Oct  3 10:03:31 compute-0 podman[415057]: 2025-10-03 10:03:31.794042485 +0000 UTC m=+0.063524359 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 10:03:31 compute-0 nova_compute[351685]: 2025-10-03 10:03:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:32 compute-0 nova_compute[351685]: 2025-10-03 10:03:32.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:03:32 compute-0 nova_compute[351685]: 2025-10-03 10:03:32.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
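
The periodic-task lines here and below come from oslo.service's task runner. A sketch of how such a task is declared (spacing and class name are illustrative; nova's real ComputeManager wires this through its own base classes):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=300)
        def _reclaim_queued_deletes(self, context):
            # Nova returns early when CONF.reclaim_instance_interval <= 0,
            # which is exactly what the "skipping..." DEBUG line reports.
            pass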
Oct  3 10:03:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:03:33 compute-0 nova_compute[351685]: 2025-10-03 10:03:33.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:34 compute-0 nova_compute[351685]: 2025-10-03 10:03:34.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:03:34 compute-0 nova_compute[351685]: 2025-10-03 10:03:34.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:03:34 compute-0 nova_compute[351685]: 2025-10-03 10:03:34.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:03:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:03:35 compute-0 ovn_controller[88471]: 2025-10-03T10:03:35Z|00039|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct  3 10:03:35 compute-0 nova_compute[351685]: 2025-10-03 10:03:35.455 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:03:35 compute-0 nova_compute[351685]: 2025-10-03 10:03:35.455 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:03:35 compute-0 nova_compute[351685]: 2025-10-03 10:03:35.456 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:03:35 compute-0 nova_compute[351685]: 2025-10-03 10:03:35.456 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:03:35 compute-0 podman[415076]: 2025-10-03 10:03:35.810149641 +0000 UTC m=+0.076637828 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:03:35 compute-0 podman[415078]: 2025-10-03 10:03:35.863530843 +0000 UTC m=+0.119673099 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  3 10:03:35 compute-0 podman[415077]: 2025-10-03 10:03:35.871466478 +0000 UTC m=+0.123973927 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  3 10:03:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.889 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.908 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.909 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.909 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.931 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.932 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.932 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.933 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:03:36 compute-0 nova_compute[351685]: 2025-10-03 10:03:36.934 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:03:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:03:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664734940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:03:37 compute-0 nova_compute[351685]: 2025-10-03 10:03:37.418 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
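
The resource audit's storage numbers come from the `ceph df` call logged above (note the matching mon audit dispatch lines). A sketch of the same probe; the command line is copied from the log, while the "stats" totals key is an assumption about ceph's current JSON schema:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    avail_gib = json.loads(out)["stats"]["total_avail_bytes"] / 2**30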
Oct  3 10:03:37 compute-0 nova_compute[351685]: 2025-10-03 10:03:37.686 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:03:37 compute-0 nova_compute[351685]: 2025-10-03 10:03:37.687 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:03:37 compute-0 nova_compute[351685]: 2025-10-03 10:03:37.687 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:03:37 compute-0 nova_compute[351685]: 2025-10-03 10:03:37.694 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:03:37 compute-0 nova_compute[351685]: 2025-10-03 10:03:37.695 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:03:37 compute-0 nova_compute[351685]: 2025-10-03 10:03:37.696 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.085 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.087 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3846MB free_disk=59.93906021118164GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.087 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.088 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.401 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.402 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 5b008829-2c76-4e40-b9e6-0e3d73095522 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.402 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.403 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.454 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:03:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 111 MiB data, 215 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:03:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:03:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187566004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.935 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:03:38 compute-0 nova_compute[351685]: 2025-10-03 10:03:38.942 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:03:39 compute-0 nova_compute[351685]: 2025-10-03 10:03:39.170 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
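
Placement derives allocatable capacity per resource class as (total - reserved) * allocation_ratio; applying that to the inventory logged above gives what the scheduler can actually place against:

    # Values copied from the inventory line above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2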
Oct  3 10:03:39 compute-0 nova_compute[351685]: 2025-10-03 10:03:39.198 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:03:39 compute-0 nova_compute[351685]: 2025-10-03 10:03:39.199 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:03:39 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  3 10:03:39 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  3 10:03:40 compute-0 nova_compute[351685]: 2025-10-03 10:03:40.020 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:03:40 compute-0 nova_compute[351685]: 2025-10-03 10:03:40.020 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:03:40 compute-0 nova_compute[351685]: 2025-10-03 10:03:40.021 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:03:40 compute-0 nova_compute[351685]: 2025-10-03 10:03:40.021 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:03:40 compute-0 nova_compute[351685]: 2025-10-03 10:03:40.021 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:03:40 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  3 10:03:40 compute-0 podman[415180]: 2025-10-03 10:03:40.821842462 +0000 UTC m=+0.080872215 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Oct  3 10:03:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 117 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 341 KiB/s wr, 8 op/s
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.881 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.881 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.882 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.887 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 5b008829-2c76-4e40-b9e6-0e3d73095522 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 10:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
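[editor's note] The run of "Registering pollster" messages above is ceilometer's polling manager enumerating its stevedore plugins and binding every one of them to the same shared ThreadPoolExecutor, each starting with empty per-cycle caches. A minimal sketch of that pattern, not ceilometer's actual code ("ceilometer.poll.compute" is the entry-point namespace the compute agent is believed to use, and the placeholder work submitted to the pool is purely illustrative):

    # Sketch: enumerate stevedore extensions and queue each one on a shared
    # pool, mirroring "Registering pollster [...] via executor [...] with
    # cache [{}], pollster history [{}], and discovery cache [{}]".
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    def register_pollsters(namespace="ceilometer.poll.compute", workers=4):
        mgr = extension.ExtensionManager(namespace=namespace)  # empty if unknown
        pool = ThreadPoolExecutor(max_workers=workers)
        cache, pollster_history, discovery_cache = {}, {}, {}  # per-cycle state
        futures = []
        for ext in mgr:
            print(f"Registering pollster [{ext.name}] via executor [{pool}]")
            # Placeholder work item; the real agent submits the pollster run.
            futures.append(pool.submit(lambda e=ext: e.name))
        return futures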
Oct  3 10:03:40 compute-0 ovn_controller[88471]: 2025-10-03T10:03:40Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:23:11 192.168.0.19
Oct  3 10:03:40 compute-0 ovn_controller[88471]: 2025-10-03T10:03:40Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:23:11 192.168.0.19
Oct  3 10:03:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:41.272 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/5b008829-2c76-4e40-b9e6-0e3d73095522 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef276854e7699a1234d40a89e1ecb6415a6c739b2915f53a3c625cf782ff31fe" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct  3 10:03:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:41.591 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:03:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:41.591 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:03:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:03:41.592 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:03:41 compute-0 nova_compute[351685]: 2025-10-03 10:03:41.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.132 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 03 Oct 2025 10:03:41 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e09d2452-0c7a-40b0-9ef5-406a8a1a5c65 x-openstack-request-id: req-e09d2452-0c7a-40b0-9ef5-406a8a1a5c65 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.132 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "5b008829-2c76-4e40-b9e6-0e3d73095522", "name": "vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho", "status": "ACTIVE", "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "user_id": "2f408449ba0f42fcb69f92dbf541f2e3", "metadata": {"metering.server_group": "09b6fef3-eb54-4e45-9716-a57b7d592bd8"}, "hostId": "b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85", "image": {"id": "37f03e8a-3aed-46a5-8219-fc87e355127e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/37f03e8a-3aed-46a5-8219-fc87e355127e"}]}, "flavor": {"id": "ada739ee-222b-4269-8d29-62bea534173e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ada739ee-222b-4269-8d29-62bea534173e"}]}, "created": "2025-10-03T10:02:55Z", "updated": "2025-10-03T10:03:06Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.19", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4c:23:11"}, {"version": 4, "addr": "192.168.122.180", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4c:23:11"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/5b008829-2c76-4e40-b9e6-0e3d73095522"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/5b008829-2c76-4e40-b9e6-0e3d73095522"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-03T10:03:06.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.133 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/5b008829-2c76-4e40-b9e6-0e3d73095522 used request id req-e09d2452-0c7a-40b0-9ef5-406a8a1a5c65 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.135 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5b008829-2c76-4e40-b9e6-0e3d73095522', 'name': 'vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
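[editor's note] The REQ/RESP pair above is python-novaclient issuing GET /v2.1/servers/{id} over a keystoneauth1 session (the X-OpenStack-Nova-API-Version: 2.1 header matches the client version). A hedged sketch of the same call; the Keystone URL and credentials are invented placeholders, only the server UUID and API version come from the log:

    # Sketch: fetch one server from Nova at microversion 2.1, as in the
    # REQ/RESP pair above. auth_url, username, password are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1.session import Session
    from novaclient import client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed
        username="ceilometer", password="secret",
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = client.Client("2.1", session=Session(auth=auth))
    server = nova.servers.get("5b008829-2c76-4e40-b9e6-0e3d73095522")
    print(server.name, server.status, getattr(server, "OS-EXT-STS:vm_state"))

Note that the discovery record above carries flavor details (name m1.small, vcpus, ram, disk) that the RESP body does not (it only has the flavor id and links), which suggests a separate flavor lookup along the lines of nova.flavors.get(server.flavor["id"]).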
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.138 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.139 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b43db93c-a4fe-46e9-8418-eedf4f5c135a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef276854e7699a1234d40a89e1ecb6415a6c739b2915f53a3c625cf782ff31fe" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.482 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Fri, 03 Oct 2025 10:03:42 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-be5c9e34-d2dc-468b-85bd-4c953462f7c1 x-openstack-request-id: req-be5c9e34-d2dc-468b-85bd-4c953462f7c1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.482 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b43db93c-a4fe-46e9-8418-eedf4f5c135a", "name": "test_0", "status": "ACTIVE", "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "user_id": "2f408449ba0f42fcb69f92dbf541f2e3", "metadata": {}, "hostId": "b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85", "image": {"id": "37f03e8a-3aed-46a5-8219-fc87e355127e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/37f03e8a-3aed-46a5-8219-fc87e355127e"}]}, "flavor": {"id": "ada739ee-222b-4269-8d29-62bea534173e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ada739ee-222b-4269-8d29-62bea534173e"}]}, "created": "2025-10-03T10:01:36Z", "updated": "2025-10-03T10:01:50Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.158", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a9:40:5c"}, {"version": 4, "addr": "192.168.122.250", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a9:40:5c"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b43db93c-a4fe-46e9-8418-eedf4f5c135a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b43db93c-a4fe-46e9-8418-eedf4f5c135a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-03T10:01:50.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.482 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b43db93c-a4fe-46e9-8418-eedf4f5c135a used request id req-be5c9e34-d2dc-468b-85bd-4c953462f7c1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.483 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.483 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.484 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:03:42.484089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.488 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 5b008829-2c76-4e40-b9e6-0e3d73095522 / tapd601bb86-72 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.488 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.492 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b43db93c-a4fe-46e9-8418-eedf4f5c135a / tapa8897fbc-9f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.492 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
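[editor's note] The "No delta meter predecessor" lines mean these vNIC meters are delta-type: each sample is the difference from the previous cumulative counter for that (instance, interface) pair, and on the first poll there is nothing to diff against, so the volume comes out as 0, as seen above. A toy sketch of that bookkeeping (illustrative, not ceilometer's implementation):

    # Sketch: delta meters keep the last cumulative reading per
    # (instance, device) pair and emit the difference; the first sighting
    # has no predecessor, matching the log lines above.
    _prev = {}  # (instance_id, device) -> last cumulative counter

    def delta_sample(instance_id, device, cumulative):
        key = (instance_id, device)
        if key not in _prev:
            print(f"No delta meter predecessor for {instance_id} / {device}")
            _prev[key] = cumulative
            return 0  # nothing to diff against yet
        value = max(cumulative - _prev[key], 0)  # guard against counter resets
        _prev[key] = cumulative
        return value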
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.494 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.494 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:03:42.493784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.494 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.495 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.495 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:03:42.495438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.512 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.513 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.513 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.533 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.534 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.534 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
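[editor's note] The three capacity readings per instance (1073741824, 1073741824, 583680) look like one value per disk device, which is what libvirt's virDomainGetBlockInfo reports. A hedged sketch of reading them with the libvirt-python bindings; it assumes a local qemu:///system connection and reuses an instance UUID from the log:

    # Sketch: per-device block info, the plausible source of the
    # disk.device.capacity / allocation / usage numbers above.
    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python bindings

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("5b008829-2c76-4e40-b9e6-0e3d73095522")
    # Disk target devices (vda, vdb, ...) come from the domain XML.
    for target in ET.fromstring(dom.XMLDesc()).findall("./devices/disk/target"):
        dev = target.get("dev")
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)
    conn.close()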
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.535 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.535 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:03:42.535590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.573 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.574 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.574 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.606 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.607 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.607 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.608 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.609 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 1243590887 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.609 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 207399736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.609 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 144385577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.609 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.610 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.610 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.610 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.611 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.611 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.611 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.611 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.611 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.611 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:03:42.608937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.612 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.612 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:03:42.611478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.612 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.612 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.612 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.612 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.613 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:03:42.613826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.614 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.614 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.614 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.615 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.615 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
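[editor's note] Throughout the cycle, worker 14 emits a "heartbeat update" for each pollster and worker 12 logs "Updated heartbeat for X (timestamp)", which points at a shared pollster-to-last-run-timestamp map kept current across workers. A toy, single-process sketch of that bookkeeping (the real agent shares this state between processes):

    # Sketch: record the last successful-run timestamp per pollster,
    # mirroring the paired heartbeat messages in this cycle.
    import datetime
    import threading

    _heartbeats = {}
    _lock = threading.Lock()

    def heartbeat(pollster_name):
        ts = datetime.datetime.now(datetime.timezone.utc)
        with _lock:  # workers 12 and 14 imply concurrent access
            _heartbeats[pollster_name] = ts
        print(f"Updated heartbeat for {pollster_name} ({ts.isoformat()})")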
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.616 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.616 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.616 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.616 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.617 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.617 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.617 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:03:42.616360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.619 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 41590784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.619 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.619 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.620 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.620 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.620 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.621 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:03:42.619099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:03:42.621991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.647 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.668 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
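[editor's note] power.state volume: 1 matches both libvirt's VIR_DOMAIN_RUNNING and the "OS-EXT-STS:power_state": 1 seen in the Nova responses earlier. A minimal sketch of sampling it straight from libvirt (assumes qemu:///system; the UUID is taken from the log):

    # Sketch: a running domain reports state 1 (VIR_DOMAIN_RUNNING),
    # matching the power.state samples above.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    state, _reason = dom.state()
    print("power.state sample volume:", state)  # 1 == libvirt.VIR_DOMAIN_RUNNING
    conn.close()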
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.669 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.669 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.669 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 13222491893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.669 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 25497777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.670 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.670 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.670 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.671 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.671 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.672 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.672 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 215 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.672 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:03:42.669564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:03:42.672329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.673 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.673 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.673 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.673 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:03:42.674834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.675 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.675 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.676 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-03T10:03:42.676277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.676 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho>, <NovaLikeServer: test_0>]
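This ERROR is the permanent-failure path rather than a crash: the libvirt inspector cannot supply the *.rate meters at all, so the pollster raises PollsterPermanentError carrying the affected instances, and the manager blacklists them so later cycles skip them instead of failing again. A self-contained sketch of the pattern (simplified, not the ceilometer source):

    class PollsterPermanentError(Exception):
        """Raised when the listed resources can never yield data."""
        def __init__(self, fail_res_list):
            super().__init__(fail_res_list)
            self.fail_res_list = fail_res_list

    def run_pollster(pollster, resources, blacklist):
        # Skip resources the pollster has already failed permanently on.
        candidates = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(candidates))
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling ... anymore!"
            blacklist.extend(err.fail_res_list)
            return []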
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.678 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:03:42.678894) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.679 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.679 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.680 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.680 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:03:42.680217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.680 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.681 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.681 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.682 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:03:42.682116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.683 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.683 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.683 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.684 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:03:42.683605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.684 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.685 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:03:42.685055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.685 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/cpu volume: 33200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.685 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 34890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.685 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
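The cpu volumes above are cumulative counters, not percentages: ceilometer's cpu meter reports total guest CPU time in nanoseconds, so these two samples correspond to roughly 33.2 s and 34.9 s of CPU time consumed so far. The conversion is direct:

    # Cumulative guest CPU time in nanoseconds, as sampled above.
    for uuid, ns in [
        ("5b008829-2c76-4e40-b9e6-0e3d73095522", 33200000000),
        ("b43db93c-a4fe-46e9-8418-eedf4f5c135a", 34890000000),
    ]:
        print(uuid, ns / 1e9, "seconds of guest CPU time")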
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.686 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.686 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.686 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.686 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.686 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.686 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:03:42.686529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.687 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.688 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.688 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.bytes volume: 1261 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.688 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:03:42.688268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.689 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.689 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.690 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.690 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.690 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:03:42.689970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.690 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.691 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.691 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.691 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/memory.usage volume: 33.296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:03:42.691581) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.692 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.692 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.692 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.693 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.693 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.693 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho>, <NovaLikeServer: test_0>]
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.693 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.693 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.693 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-03T10:03:42.692969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.694 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.694 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2268 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:03:42.693959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.695 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.695 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.695 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:03:42.695646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.696 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:03:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:03:42.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
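The burst of "Finished processing pollster" lines closes the polling task: every meter in the task runs through the same pipeline seen above (discovery at manager.py:294, the coordination gate, a heartbeat, sampling) and is then marked finished at manager.py:272. Condensed into a sketch, with discover and poll_one standing in for the steps logged earlier (illustrative only, not the real method):

    def execute_polling_task(pollsters, discover, poll_one):
        for p in pollsters:
            resources = discover("local_instances")  # discovery step
            poll_one(p, resources)                   # coordination gate,
                                                     # heartbeat, sampling
            print(f"Finished processing pollster [{p.name}].")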
Oct  3 10:03:42 compute-0 nova_compute[351685]: 2025-10-03 10:03:42.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:03:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 117 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 341 KiB/s wr, 8 op/s
Oct  3 10:03:43 compute-0 nova_compute[351685]: 2025-10-03 10:03:43.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:03:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 138 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Oct  3 10:03:45 compute-0 podman[415201]: 2025-10-03 10:03:45.812081715 +0000 UTC m=+0.076665370 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:03:45 compute-0 podman[415202]: 2025-10-03 10:03:45.815962049 +0000 UTC m=+0.077333511 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:03:46
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'images', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
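The balancer run above is a no-op: in upmap mode the module prepares at most 10 optimization changes per plan and throttles itself against the max-misplaced ratio (0.050000 here), and "prepared 0/10 changes" means every pool was already balanced. A rough sketch of that decision, assuming a propose() helper that returns None for an already balanced pool (an assumption for illustration, not the actual mgr module):

    def prepare_upmap_plan(pools, propose, misplaced_ratio,
                           max_changes=10, max_misplaced=0.05):
        if misplaced_ratio > max_misplaced:
            return []                    # too much data already moving
        changes = []
        for pool in pools:
            if len(changes) >= max_changes:
                break
            change = propose(pool)       # None when the pool is balanced
            if change is not None:
                changes.append(change)
        return changes                   # here: "prepared 0/10 changes"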
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:03:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:03:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct  3 10:03:46 compute-0 nova_compute[351685]: 2025-10-03 10:03:46.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:03:48 compute-0 nova_compute[351685]: 2025-10-03 10:03:48.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:03:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct  3 10:03:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Oct  3 10:03:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:51 compute-0 nova_compute[351685]: 2025-10-03 10:03:51.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:03:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:03:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:03:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:52 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 10:03:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 1.2 MiB/s wr, 49 op/s
Oct  3 10:03:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:53 compute-0 podman[415633]: 2025-10-03 10:03:53.649803191 +0000 UTC m=+0.078152147 container create 1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:03:53 compute-0 podman[415633]: 2025-10-03 10:03:53.602862106 +0000 UTC m=+0.031211082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:03:53 compute-0 systemd[1]: Started libpod-conmon-1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f.scope.
Oct  3 10:03:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:03:53 compute-0 nova_compute[351685]: 2025-10-03 10:03:53.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:03:53 compute-0 podman[415633]: 2025-10-03 10:03:53.818200273 +0000 UTC m=+0.246549249 container init 1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:03:53 compute-0 podman[415633]: 2025-10-03 10:03:53.830707573 +0000 UTC m=+0.259056520 container start 1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 10:03:53 compute-0 gifted_golick[415649]: 167 167
Oct  3 10:03:53 compute-0 systemd[1]: libpod-1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f.scope: Deactivated successfully.
Oct  3 10:03:53 compute-0 podman[415633]: 2025-10-03 10:03:53.844120105 +0000 UTC m=+0.272469081 container attach 1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:03:53 compute-0 podman[415633]: 2025-10-03 10:03:53.845204899 +0000 UTC m=+0.273553865 container died 1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5f9d3f8ca9439e74a3ce2bfbc5a98abc075e30e300c0aa144b01bdf5a8e2eee-merged.mount: Deactivated successfully.
Oct  3 10:03:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:03:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3670465477' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:03:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:03:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3670465477' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:03:53 compute-0 podman[415633]: 2025-10-03 10:03:53.970836928 +0000 UTC m=+0.399185874 container remove 1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:03:53 compute-0 systemd[1]: libpod-conmon-1e02d731721339266691ecb902dd7adcc1c3f0e391d12ec52ee680f5bd78b18f.scope: Deactivated successfully.
Oct  3 10:03:54 compute-0 podman[415671]: 2025-10-03 10:03:54.202398296 +0000 UTC m=+0.080667579 container create e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_heyrovsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:03:54 compute-0 podman[415671]: 2025-10-03 10:03:54.158082874 +0000 UTC m=+0.036352167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:03:54 compute-0 systemd[1]: Started libpod-conmon-e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323.scope.
Oct  3 10:03:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67edb29d58818e690a6e10f96b592b87cf482ac43663a011ecf3d6dacfe25238/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67edb29d58818e690a6e10f96b592b87cf482ac43663a011ecf3d6dacfe25238/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67edb29d58818e690a6e10f96b592b87cf482ac43663a011ecf3d6dacfe25238/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67edb29d58818e690a6e10f96b592b87cf482ac43663a011ecf3d6dacfe25238/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:54 compute-0 podman[415671]: 2025-10-03 10:03:54.48844647 +0000 UTC m=+0.366715753 container init e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  3 10:03:54 compute-0 podman[415671]: 2025-10-03 10:03:54.497039066 +0000 UTC m=+0.375308339 container start e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_heyrovsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:03:54 compute-0 podman[415671]: 2025-10-03 10:03:54.541905135 +0000 UTC m=+0.420174438 container attach e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_heyrovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:03:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 1.2 MiB/s wr, 49 op/s
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011045705948427428 of space, bias 1.0, pg target 0.3313711784528228 quantized to 32 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:03:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
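[editor note] The pg_autoscaler lines above reproduce a simple calculation: each pool's share of raw space (against the 64411926528-byte root) times its bias times an overall PG budget gives the "pg target", which is then quantized to a power of two. A minimal sketch of that arithmetic, assuming the budget here is 300 PGs (an inference, presumably mon_target_pg_per_osd × OSD count, but every logged target matches usage_ratio × bias × 300 exactly):

```python
# Minimal sketch of the pg_autoscaler arithmetic visible in the log.
# The PG budget of 300 is an assumption inferred from the numbers.

def pg_target(usage_ratio: float, bias: float, budget: int = 300) -> float:
    return usage_ratio * bias * budget

def round_up_pow2(target: float, floor: int = 1) -> int:
    """Round up to the nearest power of two, never below `floor`.
    The real autoscaler additionally clamps against the pool's current
    and minimum pg_num, which is why near-zero targets in the log stay
    "quantized to 32 (current 32)" rather than dropping to 1."""
    n = floor
    while n < target:
        n *= 2
    return n

# Pool 'vms': reproduces the logged target 0.3313711784528228.
print(pg_target(0.0011045705948427428, 1.0))
# Pool 'default.rgw.meta' with bias 4.0: reproduces 0.00015261769876929088.
print(pg_target(1.2718141564107572e-07, 4.0))
```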
Oct  3 10:03:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]: [
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:    {
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        "available": false,
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        "ceph_device": false,
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        "lsm_data": {},
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        "lvs": [],
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        "path": "/dev/sr0",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        "rejected_reasons": [
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "Insufficient space (<5GB)",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "Has a FileSystem"
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        ],
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        "sys_api": {
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "actuators": null,
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "device_nodes": "sr0",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "devname": "sr0",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "human_readable_size": "482.00 KB",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "id_bus": "ata",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "model": "QEMU DVD-ROM",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "nr_requests": "2",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "parent": "/dev/sr0",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "partitions": {},
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "path": "/dev/sr0",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "removable": "1",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "rev": "2.5+",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "ro": "0",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "rotational": "0",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "sas_address": "",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "sas_device_handle": "",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "scheduler_mode": "mq-deadline",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "sectors": 0,
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "sectorsize": "2048",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "size": 493568.0,
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "support_discard": "2048",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "type": "disk",
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:            "vendor": "QEMU"
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:        }
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]:    }
Oct  3 10:03:56 compute-0 jovial_heyrovsky[415687]: ]
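[editor note] The jovial_heyrovsky block above is a device inventory report in the JSON shape ceph-volume emits, gathered here so cephadm can refresh mgr/cephadm/host.compute-0.devices.0; the only device, /dev/sr0, is rejected for "Insufficient space (<5GB)" and "Has a FileSystem". A small sketch for filtering such a report, assuming it has been saved to a file (the filename is hypothetical):

```python
import json

# Split a ceph-volume style inventory report (as printed in the log
# above) into usable devices and rejected ones with their reasons.
with open("inventory.json") as f:
    devices = json.load(f)

usable = [d["path"] for d in devices if d["available"]]
rejected = {d["path"]: d["rejected_reasons"]
            for d in devices if not d["available"]}

print(usable)    # [] -- nothing usable on this host
print(rejected)  # {'/dev/sr0': ['Insufficient space (<5GB)', 'Has a FileSystem']}
```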
Oct  3 10:03:56 compute-0 systemd[1]: libpod-e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323.scope: Deactivated successfully.
Oct  3 10:03:56 compute-0 systemd[1]: libpod-e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323.scope: Consumed 2.277s CPU time.
Oct  3 10:03:56 compute-0 podman[418220]: 2025-10-03 10:03:56.801800792 +0000 UTC m=+0.045340596 container died e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_heyrovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:03:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 39 KiB/s wr, 16 op/s
Oct  3 10:03:56 compute-0 podman[418216]: 2025-10-03 10:03:56.856554268 +0000 UTC m=+0.114072240 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute)
Oct  3 10:03:56 compute-0 podman[418215]: 2025-10-03 10:03:56.8563105 +0000 UTC m=+0.114020668 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Oct  3 10:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-67edb29d58818e690a6e10f96b592b87cf482ac43663a011ecf3d6dacfe25238-merged.mount: Deactivated successfully.
Oct  3 10:03:56 compute-0 podman[418217]: 2025-10-03 10:03:56.890657161 +0000 UTC m=+0.148712561 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:03:56 compute-0 nova_compute[351685]: 2025-10-03 10:03:56.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:03:57 compute-0 podman[418220]: 2025-10-03 10:03:57.002173248 +0000 UTC m=+0.245713022 container remove e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:03:57 compute-0 systemd[1]: libpod-conmon-e48937ca034335cf1f747c138058b6da61b8cd102c9a8a20f543394b3c545323.scope: Deactivated successfully.
Oct  3 10:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:03:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:03:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:03:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:03:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:03:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8c5aa62e-f004-4624-a6ab-d9e215505d46 does not exist
Oct  3 10:03:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a1ab84a8-0365-4449-b7b5-4533f0780300 does not exist
Oct  3 10:03:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f3c0b020-a884-4c8d-8972-8e3a39173e93 does not exist
Oct  3 10:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:03:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:03:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:03:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:03:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:03:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:03:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:03:58 compute-0 podman[418432]: 2025-10-03 10:03:58.093758492 +0000 UTC m=+0.073199339 container create 315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:03:58 compute-0 podman[418432]: 2025-10-03 10:03:58.055554156 +0000 UTC m=+0.034995033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:03:58 compute-0 systemd[1]: Started libpod-conmon-315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d.scope.
Oct  3 10:03:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:03:58 compute-0 podman[418432]: 2025-10-03 10:03:58.232322566 +0000 UTC m=+0.211763463 container init 315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:03:58 compute-0 podman[418432]: 2025-10-03 10:03:58.247188663 +0000 UTC m=+0.226629510 container start 315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:03:58 compute-0 boring_mclaren[418448]: 167 167
Oct  3 10:03:58 compute-0 systemd[1]: libpod-315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d.scope: Deactivated successfully.
Oct  3 10:03:58 compute-0 podman[418432]: 2025-10-03 10:03:58.263457915 +0000 UTC m=+0.242898812 container attach 315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:03:58 compute-0 podman[418432]: 2025-10-03 10:03:58.264375384 +0000 UTC m=+0.243816251 container died 315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:03:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-16f94846e79b5b0b337f10f25fe185d96aacfd6522ed9c3dccb6a0f07f85f31b-merged.mount: Deactivated successfully.
Oct  3 10:03:58 compute-0 podman[418432]: 2025-10-03 10:03:58.443686696 +0000 UTC m=+0.423127543 container remove 315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:03:58 compute-0 systemd[1]: libpod-conmon-315f9a35989aa4fc78d14f56a79948f1622c49754ce51cb280375868d240534d.scope: Deactivated successfully.
Oct  3 10:03:58 compute-0 podman[418475]: 2025-10-03 10:03:58.703688705 +0000 UTC m=+0.080061469 container create b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 10:03:58 compute-0 podman[418475]: 2025-10-03 10:03:58.654975952 +0000 UTC m=+0.031348726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:03:58 compute-0 systemd[1]: Started libpod-conmon-b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92.scope.
Oct  3 10:03:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:03:58 compute-0 nova_compute[351685]: 2025-10-03 10:03:58.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a82b8cdd1650d432dc877e0e6a8caea542874187595f134e1e4d02fb73ca4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a82b8cdd1650d432dc877e0e6a8caea542874187595f134e1e4d02fb73ca4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a82b8cdd1650d432dc877e0e6a8caea542874187595f134e1e4d02fb73ca4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a82b8cdd1650d432dc877e0e6a8caea542874187595f134e1e4d02fb73ca4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05a82b8cdd1650d432dc877e0e6a8caea542874187595f134e1e4d02fb73ca4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:03:58 compute-0 podman[418475]: 2025-10-03 10:03:58.828423826 +0000 UTC m=+0.204796590 container init b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:03:58 compute-0 podman[418475]: 2025-10-03 10:03:58.840128301 +0000 UTC m=+0.216501045 container start b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hofstadter, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:03:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Oct  3 10:03:58 compute-0 podman[418475]: 2025-10-03 10:03:58.84507882 +0000 UTC m=+0.221451624 container attach b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hofstadter, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:03:59 compute-0 podman[157165]: time="2025-10-03T10:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:03:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47978 "" "Go-http-client/1.1"
Oct  3 10:03:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9467 "" "Go-http-client/1.1"
Oct  3 10:04:00 compute-0 objective_hofstadter[418491]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:04:00 compute-0 objective_hofstadter[418491]: --> relative data size: 1.0
Oct  3 10:04:00 compute-0 objective_hofstadter[418491]: --> All data devices are unavailable
Oct  3 10:04:00 compute-0 systemd[1]: libpod-b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92.scope: Deactivated successfully.
Oct  3 10:04:00 compute-0 systemd[1]: libpod-b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92.scope: Consumed 1.120s CPU time.
Oct  3 10:04:00 compute-0 podman[418475]: 2025-10-03 10:04:00.044664097 +0000 UTC m=+1.421036911 container died b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 10:04:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a82b8cdd1650d432dc877e0e6a8caea542874187595f134e1e4d02fb73ca4d-merged.mount: Deactivated successfully.
Oct  3 10:04:00 compute-0 podman[418475]: 2025-10-03 10:04:00.284684495 +0000 UTC m=+1.661057239 container remove b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_hofstadter, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:04:00 compute-0 systemd[1]: libpod-conmon-b4c638462154c941e0065f7629a78bded7c806b63774b6e9ac6d49f026d5af92.scope: Deactivated successfully.
Oct  3 10:04:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Oct  3 10:04:01 compute-0 podman[418671]: 2025-10-03 10:04:01.035524849 +0000 UTC m=+0.053960472 container create 310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_brown, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:04:01 compute-0 systemd[1]: Started libpod-conmon-310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02.scope.
Oct  3 10:04:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:01 compute-0 podman[418671]: 2025-10-03 10:04:01.014309628 +0000 UTC m=+0.032745281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:04:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:04:01 compute-0 podman[418671]: 2025-10-03 10:04:01.151671384 +0000 UTC m=+0.170107037 container init 310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_brown, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:04:01 compute-0 podman[418671]: 2025-10-03 10:04:01.160973752 +0000 UTC m=+0.179409375 container start 310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 10:04:01 compute-0 stoic_brown[418688]: 167 167
Oct  3 10:04:01 compute-0 systemd[1]: libpod-310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02.scope: Deactivated successfully.
Oct  3 10:04:01 compute-0 podman[418671]: 2025-10-03 10:04:01.178483924 +0000 UTC m=+0.196919567 container attach 310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_brown, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:04:01 compute-0 podman[418671]: 2025-10-03 10:04:01.17932033 +0000 UTC m=+0.197755973 container died 310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 10:04:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4137d7a263938baf02fa22f68e524ea0a95a7dd12472af8a723cf21c4a9cea6e-merged.mount: Deactivated successfully.
Oct  3 10:04:01 compute-0 podman[418671]: 2025-10-03 10:04:01.372135076 +0000 UTC m=+0.390570699 container remove 310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:04:01 compute-0 systemd[1]: libpod-conmon-310b2997c5fa014a16bbccf73f31a9751c1323688ff0fdefe8d73512bfdd2e02.scope: Deactivated successfully.
Oct  3 10:04:01 compute-0 openstack_network_exporter[367524]: ERROR   10:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:04:01 compute-0 openstack_network_exporter[367524]: ERROR   10:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:04:01 compute-0 openstack_network_exporter[367524]: ERROR   10:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:04:01 compute-0 openstack_network_exporter[367524]: ERROR   10:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:04:01 compute-0 openstack_network_exporter[367524]: ERROR   10:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
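The openstack_network_exporter errors above all trace back to one condition: the exporter cannot find the ovs-appctl-style control sockets it uses to talk to ovsdb-server and ovn-northd, and the dpif-netdev calls then fail for lack of a datapath. A minimal sketch of the same check, assuming the stock OVS/OVN run directories (sockets are conventionally named <daemon>.<pid>.ctl); the patterns are assumptions and would need adjusting if the deployment relocates its rundir:

import glob

# Look for the control sockets the exporter is complaining about.
# The paths below are stock OVS/OVN defaults, assumed here.
CANDIDATES = {
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
}

for daemon, pattern in CANDIDATES.items():
    matches = glob.glob(pattern)
    if matches:
        print(f"{daemon}: control socket present -> {matches[0]}")
    else:
        # Mirrors the exporter's message: no control socket files found.
        print(f"{daemon}: no control socket files found ({pattern})")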
Oct  3 10:04:01 compute-0 podman[418714]: 2025-10-03 10:04:01.59365079 +0000 UTC m=+0.065333406 container create c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_knuth, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:04:01 compute-0 podman[418714]: 2025-10-03 10:04:01.566121917 +0000 UTC m=+0.037804583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:04:01 compute-0 systemd[1]: Started libpod-conmon-c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06.scope.
Oct  3 10:04:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb51c3569a47d5e38cea54026278cdfd18b2c11f713c54a478f84d9618d4496f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb51c3569a47d5e38cea54026278cdfd18b2c11f713c54a478f84d9618d4496f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb51c3569a47d5e38cea54026278cdfd18b2c11f713c54a478f84d9618d4496f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:04:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb51c3569a47d5e38cea54026278cdfd18b2c11f713c54a478f84d9618d4496f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
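The four xfs messages above are informational: these overlay mounts were formatted without the XFS bigtime feature, so their inode timestamps top out at the 32-bit signed time_t limit. A quick check of what 0x7fffffff means in calendar terms:

from datetime import datetime, timezone

limit = 0x7FFFFFFF  # 2147483647 seconds since the Unix epoch
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00, the "Year 2038" ceiling the kernel is warning about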
Oct  3 10:04:01 compute-0 podman[418714]: 2025-10-03 10:04:01.798193561 +0000 UTC m=+0.269876187 container init c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 10:04:01 compute-0 podman[418714]: 2025-10-03 10:04:01.809819044 +0000 UTC m=+0.281501660 container start c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:04:01 compute-0 podman[418714]: 2025-10-03 10:04:01.862156594 +0000 UTC m=+0.333839230 container attach c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:04:01 compute-0 nova_compute[351685]: 2025-10-03 10:04:01.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:02 compute-0 keen_knuth[418731]: {
Oct  3 10:04:02 compute-0 keen_knuth[418731]:    "0": [
Oct  3 10:04:02 compute-0 keen_knuth[418731]:        {
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "devices": [
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "/dev/loop3"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            ],
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_name": "ceph_lv0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_size": "21470642176",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "name": "ceph_lv0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "tags": {
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cluster_name": "ceph",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.crush_device_class": "",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.encrypted": "0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osd_id": "0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.type": "block",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.vdo": "0"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            },
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "type": "block",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "vg_name": "ceph_vg0"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:        }
Oct  3 10:04:02 compute-0 keen_knuth[418731]:    ],
Oct  3 10:04:02 compute-0 keen_knuth[418731]:    "1": [
Oct  3 10:04:02 compute-0 keen_knuth[418731]:        {
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "devices": [
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "/dev/loop4"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            ],
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_name": "ceph_lv1",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_size": "21470642176",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "name": "ceph_lv1",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "tags": {
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cluster_name": "ceph",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.crush_device_class": "",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.encrypted": "0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osd_id": "1",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.type": "block",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.vdo": "0"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            },
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "type": "block",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "vg_name": "ceph_vg1"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:        }
Oct  3 10:04:02 compute-0 keen_knuth[418731]:    ],
Oct  3 10:04:02 compute-0 keen_knuth[418731]:    "2": [
Oct  3 10:04:02 compute-0 keen_knuth[418731]:        {
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "devices": [
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "/dev/loop5"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            ],
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_name": "ceph_lv2",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_size": "21470642176",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "name": "ceph_lv2",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "tags": {
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.cluster_name": "ceph",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.crush_device_class": "",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.encrypted": "0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osd_id": "2",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.type": "block",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:                "ceph.vdo": "0"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            },
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "type": "block",
Oct  3 10:04:02 compute-0 keen_knuth[418731]:            "vg_name": "ceph_vg2"
Oct  3 10:04:02 compute-0 keen_knuth[418731]:        }
Oct  3 10:04:02 compute-0 keen_knuth[418731]:    ]
Oct  3 10:04:02 compute-0 keen_knuth[418731]: }
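The JSON block emitted by the keen_knuth container enumerates one logical volume per OSD, keyed by OSD id; its shape matches what "ceph-volume lvm list --format json" prints (names like keen_knuth are just podman's random names for short-lived cephadm helper containers). A sketch of digesting it, with the payload pasted inline in trimmed form rather than read from the command's stdout:

import json

# Trimmed copy of the per-OSD JSON above; only the fields used below are kept.
payload = """
{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
         "tags": {"ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"}}],
  "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"],
         "tags": {"ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190"}}],
  "2": [{"lv_path": "/dev/ceph_vg2/ceph_lv2", "devices": ["/dev/loop5"],
         "tags": {"ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0"}}]
}
"""

for osd_id, lvs in sorted(json.loads(payload).items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}, "
              f"osd_fsid={lv['tags']['ceph.osd_fsid']}")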
Oct  3 10:04:02 compute-0 systemd[1]: libpod-c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06.scope: Deactivated successfully.
Oct  3 10:04:02 compute-0 podman[418740]: 2025-10-03 10:04:02.73219824 +0000 UTC m=+0.033238377 container died c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:04:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb51c3569a47d5e38cea54026278cdfd18b2c11f713c54a478f84d9618d4496f-merged.mount: Deactivated successfully.
Oct  3 10:04:03 compute-0 podman[418740]: 2025-10-03 10:04:03.139150662 +0000 UTC m=+0.440190769 container remove c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_knuth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  3 10:04:03 compute-0 systemd[1]: libpod-conmon-c28c39535fe6067e17761916e7c55972e23f2334699f1f6da8f0fe67b5b6db06.scope: Deactivated successfully.
Oct  3 10:04:03 compute-0 podman[418741]: 2025-10-03 10:04:03.188785825 +0000 UTC m=+0.467121024 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 10:04:03 compute-0 nova_compute[351685]: 2025-10-03 10:04:03.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:04 compute-0 podman[418913]: 2025-10-03 10:04:03.90633542 +0000 UTC m=+0.031810111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:04:04 compute-0 podman[418913]: 2025-10-03 10:04:04.039900004 +0000 UTC m=+0.165374675 container create 4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:04:04 compute-0 systemd[1]: Started libpod-conmon-4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f.scope.
Oct  3 10:04:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:04:04 compute-0 podman[418913]: 2025-10-03 10:04:04.264834359 +0000 UTC m=+0.390309060 container init 4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ardinghelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:04:04 compute-0 podman[418913]: 2025-10-03 10:04:04.282553407 +0000 UTC m=+0.408028078 container start 4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 10:04:04 compute-0 adoring_ardinghelli[418929]: 167 167
Oct  3 10:04:04 compute-0 systemd[1]: libpod-4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f.scope: Deactivated successfully.
Oct  3 10:04:04 compute-0 podman[418913]: 2025-10-03 10:04:04.326806676 +0000 UTC m=+0.452281347 container attach 4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:04:04 compute-0 podman[418913]: 2025-10-03 10:04:04.327194519 +0000 UTC m=+0.452669200 container died 4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d78e2634c988f9d71136d10acbcd45f46615d5982e0f8ebe61b829586f700f7-merged.mount: Deactivated successfully.
Oct  3 10:04:04 compute-0 podman[418913]: 2025-10-03 10:04:04.512643447 +0000 UTC m=+0.638118118 container remove 4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 10:04:04 compute-0 systemd[1]: libpod-conmon-4b5295c40f34cdc3f70e961321ff32e7835f6a1b8ec8b5e50487fd79506c952f.scope: Deactivated successfully.
Oct  3 10:04:04 compute-0 podman[418954]: 2025-10-03 10:04:04.744333869 +0000 UTC m=+0.069681266 container create b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:04:04 compute-0 podman[418954]: 2025-10-03 10:04:04.704355126 +0000 UTC m=+0.029702553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:04:04 compute-0 systemd[1]: Started libpod-conmon-b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059.scope.
Oct  3 10:04:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3243454844c0e62a25492ed56c37ae80a042546b3c6b569f06bd46914723fa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3243454844c0e62a25492ed56c37ae80a042546b3c6b569f06bd46914723fa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3243454844c0e62a25492ed56c37ae80a042546b3c6b569f06bd46914723fa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3243454844c0e62a25492ed56c37ae80a042546b3c6b569f06bd46914723fa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:04:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:04 compute-0 podman[418954]: 2025-10-03 10:04:04.887013325 +0000 UTC m=+0.212360742 container init b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:04:04 compute-0 podman[418954]: 2025-10-03 10:04:04.898941758 +0000 UTC m=+0.224289155 container start b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:04:04 compute-0 podman[418954]: 2025-10-03 10:04:04.905400624 +0000 UTC m=+0.230748041 container attach b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]: {
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "osd_id": 1,
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "type": "bluestore"
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:    },
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "osd_id": 2,
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "type": "bluestore"
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:    },
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "osd_id": 0,
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:        "type": "bluestore"
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]:    }
Oct  3 10:04:05 compute-0 sweet_mclaren[418971]: }
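The sweet_mclaren output is a second inventory of the same three OSDs, this time keyed by OSD fsid; the shape matches "ceph-volume raw list --format json" (one bluestore record per device, all sharing the cluster fsid seen in the lv_tags earlier). A sketch that cross-checks and flattens it, again with a trimmed inline copy of the payload:

import json

payload = """
{
  "16cef594-0067-4499-9298-5d83edf70190": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
      "device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1, "type": "bluestore"},
  "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
      "device": "/dev/mapper/ceph_vg2-ceph_lv2", "osd_id": 2, "type": "bluestore"},
  "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
      "device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0, "type": "bluestore"}
}
"""

osds = json.loads(payload)
# Every record should point at the one cluster fsid.
assert len({rec["ceph_fsid"] for rec in osds.values()}) == 1
for rec in sorted(osds.values(), key=lambda r: r["osd_id"]):
    print(f"osd.{rec['osd_id']} ({rec['type']}): {rec['device']}")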
Oct  3 10:04:06 compute-0 systemd[1]: libpod-b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059.scope: Deactivated successfully.
Oct  3 10:04:06 compute-0 systemd[1]: libpod-b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059.scope: Consumed 1.121s CPU time.
Oct  3 10:04:06 compute-0 podman[418954]: 2025-10-03 10:04:06.027023661 +0000 UTC m=+1.352371058 container died b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:04:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3243454844c0e62a25492ed56c37ae80a042546b3c6b569f06bd46914723fa5-merged.mount: Deactivated successfully.
Oct  3 10:04:06 compute-0 podman[418954]: 2025-10-03 10:04:06.717693774 +0000 UTC m=+2.043041171 container remove b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 10:04:06 compute-0 podman[419004]: 2025-10-03 10:04:06.768947958 +0000 UTC m=+0.708559348 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:04:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:04:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:04:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:04:06 compute-0 podman[419012]: 2025-10-03 10:04:06.805731268 +0000 UTC m=+0.743048694 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:04:06 compute-0 systemd[1]: libpod-conmon-b2df5c481e5d489ac8a2a94830bb598eb0de9a574e7d48e992a5489116383059.scope: Deactivated successfully.
Oct  3 10:04:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:04:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0a2df6bf-9681-4a96-b7b1-22e6198ecc72 does not exist
Oct  3 10:04:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3ac829b3-f7bb-403e-b55f-d2a63e1adc7b does not exist
Oct  3 10:04:06 compute-0 podman[419011]: 2025-10-03 10:04:06.867696936 +0000 UTC m=+0.809167475 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:04:06 compute-0 nova_compute[351685]: 2025-10-03 10:04:06.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:04:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:04:08 compute-0 nova_compute[351685]: 2025-10-03 10:04:08.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:11 compute-0 podman[419123]: 2025-10-03 10:04:11.860834562 +0000 UTC m=+0.126208629 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:04:11 compute-0 nova_compute[351685]: 2025-10-03 10:04:11.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:13 compute-0 nova_compute[351685]: 2025-10-03 10:04:13.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
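The pgmap lines ceph-mgr emits every couple of seconds carry the cluster's headline numbers in a fixed shape. A sketch of pulling the fields out of one of them; the regex is written against exactly the variants seen in this log (the I/O tail after the second semicolon is simply left unconsumed) and would need widening for other PG state mixes:

import re

LINE = ("pgmap v1102: 321 pgs: 321 active+clean; 139 MiB data, "
        "285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s")

m = re.search(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<detail>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail",
    LINE,
)
if m:
    print(m.groupdict())
# -> {'ver': '1102', 'pgs': '321', 'detail': '321 active+clean',
#     'data': '139 MiB', 'used': '285 MiB', 'avail': '60 GiB', 'total': '60 GiB'}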
Oct  3 10:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:04:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:16 compute-0 podman[419143]: 2025-10-03 10:04:16.815687261 +0000 UTC m=+0.066554535 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:04:16 compute-0 podman[419144]: 2025-10-03 10:04:16.848885486 +0000 UTC m=+0.095826324 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release-0.7.12=, version=9.4, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64)
Oct  3 10:04:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:04:16 compute-0 nova_compute[351685]: 2025-10-03 10:04:16.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:18 compute-0 nova_compute[351685]: 2025-10-03 10:04:18.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:04:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:04:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:21 compute-0 nova_compute[351685]: 2025-10-03 10:04:21.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:04:23 compute-0 nova_compute[351685]: 2025-10-03 10:04:23.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:04:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:26 compute-0 nova_compute[351685]: 2025-10-03 10:04:26.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:27 compute-0 podman[419188]: 2025-10-03 10:04:27.817397204 +0000 UTC m=+0.076978560 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 10:04:27 compute-0 podman[419187]: 2025-10-03 10:04:27.81664607 +0000 UTC m=+0.083264492 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 10:04:27 compute-0 podman[419189]: 2025-10-03 10:04:27.868606477 +0000 UTC m=+0.126057775 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:04:28 compute-0 nova_compute[351685]: 2025-10-03 10:04:28.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:29 compute-0 podman[157165]: time="2025-10-03T10:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:04:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:04:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9058 "" "Go-http-client/1.1"
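
Note: these GET lines are the libpod REST API being polled over the podman service socket (the same unix:///run/podman/podman.sock that the podman_exporter container mounts later in this log). A minimal sketch of issuing the containers/json call from Python over that socket; the socket path comes from the exporter's config_data, the rest is an assumption, not the exporter's code:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
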
Oct  3 10:04:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:31 compute-0 openstack_network_exporter[367524]: ERROR   10:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:04:31 compute-0 openstack_network_exporter[367524]: ERROR   10:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:04:31 compute-0 openstack_network_exporter[367524]: ERROR   10:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:04:31 compute-0 openstack_network_exporter[367524]: ERROR   10:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:04:31 compute-0 openstack_network_exporter[367524]: ERROR   10:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
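
Note: these ERRORs are expected on a compute node. ovn-northd runs on the control plane, so no ovn-northd control socket exists here, and the dpif-netdev/pmd-* appctl calls only apply to a userspace (DPDK) datapath, which this kernel-datapath host does not run. A quick check for the missing sockets the exporter is complaining about; the paths assume the usual OVN/OVS runtime directories:

    import glob
    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found or "no control socket (matches the ERROR above)")
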
Oct  3 10:04:31 compute-0 nova_compute[351685]: 2025-10-03 10:04:31.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:04:33 compute-0 nova_compute[351685]: 2025-10-03 10:04:33.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:33 compute-0 podman[419249]: 2025-10-03 10:04:33.819177082 +0000 UTC m=+0.080077110 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:04:33 compute-0 nova_compute[351685]: 2025-10-03 10:04:33.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:34 compute-0 nova_compute[351685]: 2025-10-03 10:04:34.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:34 compute-0 nova_compute[351685]: 2025-10-03 10:04:34.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:04:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct  3 10:04:34 compute-0 nova_compute[351685]: 2025-10-03 10:04:34.943 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:04:34 compute-0 nova_compute[351685]: 2025-10-03 10:04:34.944 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:04:34 compute-0 nova_compute[351685]: 2025-10-03 10:04:34.945 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:04:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Oct  3 10:04:36 compute-0 nova_compute[351685]: 2025-10-03 10:04:36.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.795 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updating instance_info_cache with network_info: [{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
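
Note: the network_info payload in the line above is plain JSON once separated from the surrounding log text. A sketch that walks it for fixed and floating addresses; the literal below is a trimmed copy of the logged entry, not nova code:

    import json
    entry = json.loads("""[{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd",
        "network": {"subnets": [{"cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.19",
                     "floating_ips": [{"address": "192.168.122.180"}]}]}]}}]""")[0]
    for subnet in entry["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed", ip["address"],
                  [f["address"] for f in ip.get("floating_ips", [])])
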
Oct  3 10:04:37 compute-0 podman[419266]: 2025-10-03 10:04:37.8094696 +0000 UTC m=+0.073994183 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.814 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.815 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
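
Note: the Acquiring/Acquired/Releasing triplet around the cache refresh is oslo.concurrency's named-lock pattern serializing work per instance. A sketch of the same shape (requires the oslo.concurrency package; the body is a placeholder, not nova's refresh logic):

    from oslo_concurrency import lockutils

    uuid = "5b008829-2c76-4e40-b9e6-0e3d73095522"
    with lockutils.lock("refresh_cache-%s" % uuid):
        # the network info cache refresh would run here, one holder at a time
        pass
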
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.815 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.815 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.816 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.816 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:37 compute-0 podman[419268]: 2025-10-03 10:04:37.819147251 +0000 UTC m=+0.075996188 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3)
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.842 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.842 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.843 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.843 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:04:37 compute-0 nova_compute[351685]: 2025-10-03 10:04:37.843 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:04:37 compute-0 podman[419267]: 2025-10-03 10:04:37.848194432 +0000 UTC m=+0.107857070 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct  3 10:04:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:04:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245470341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.294 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
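
Note: the resource audit shells out to ceph df exactly as logged (oslo_concurrency.processutils wraps subprocess). A minimal standalone equivalent, assuming the same client keyring and conf path are readable; the JSON keys are standard ceph df output fields:

    import json, subprocess
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
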
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.375 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.376 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.376 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.381 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.381 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.382 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.737 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.738 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3772MB free_disk=59.922019958496094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.738 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.739 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.809 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.810 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 5b008829-2c76-4e40-b9e6-0e3d73095522 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.810 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.810 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:38 compute-0 nova_compute[351685]: 2025-10-03 10:04:38.854 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:04:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  3 10:04:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:04:39 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367411646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:04:39 compute-0 nova_compute[351685]: 2025-10-03 10:04:39.345 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:04:39 compute-0 nova_compute[351685]: 2025-10-03 10:04:39.353 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:04:39 compute-0 nova_compute[351685]: 2025-10-03 10:04:39.371 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:04:39 compute-0 nova_compute[351685]: 2025-10-03 10:04:39.373 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:04:39 compute-0 nova_compute[351685]: 2025-10-03 10:04:39.373 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
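
Note: the numbers in this audit cycle are internally consistent: used_ram 1536MB is the 512MB reserved in the inventory plus the two 512MB instance allocations, used_disk 4GB is two 2GB root disks, and free_vcpus 6 is 8 total minus 2 allocated; placement then applies the inventory's allocation ratios on top. A sketch of the same arithmetic:

    reserved_mb, inst_mem = 512, [512, 512]        # from the allocations logged above
    used_ram = reserved_mb + sum(inst_mem)         # 1536 MB, as logged
    total_vcpus, used_vcpus = 8, 2
    free_vcpus = total_vcpus - used_vcpus          # 6, as logged
    schedulable_vcpus = total_vcpus * 4.0          # VCPU allocation_ratio 4.0 -> 32
    print(used_ram, free_vcpus, schedulable_vcpus)
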
Oct  3 10:04:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 10:04:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:41 compute-0 nova_compute[351685]: 2025-10-03 10:04:41.288 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:41 compute-0 nova_compute[351685]: 2025-10-03 10:04:41.289 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:41 compute-0 nova_compute[351685]: 2025-10-03 10:04:41.289 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:41 compute-0 nova_compute[351685]: 2025-10-03 10:04:41.289 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:04:41.592 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:04:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:04:41.592 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:04:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:04:41.593 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:04:41 compute-0 nova_compute[351685]: 2025-10-03 10:04:41.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:42 compute-0 nova_compute[351685]: 2025-10-03 10:04:42.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:04:42 compute-0 podman[419371]: 2025-10-03 10:04:42.831531284 +0000 UTC m=+0.095404001 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 10:04:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 10:04:43 compute-0 nova_compute[351685]: 2025-10-03 10:04:43.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 85 B/s wr, 60 op/s
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:04:46
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:04:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:04:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 85 B/s wr, 39 op/s
Oct  3 10:04:46 compute-0 nova_compute[351685]: 2025-10-03 10:04:46.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:47 compute-0 podman[419390]: 2025-10-03 10:04:47.798163998 +0000 UTC m=+0.064725933 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:04:47 compute-0 podman[419391]: 2025-10-03 10:04:47.81879734 +0000 UTC m=+0.082069580 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, vcs-type=git)
Oct  3 10:04:48 compute-0 nova_compute[351685]: 2025-10-03 10:04:48.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 85 B/s wr, 20 op/s
Oct  3 10:04:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 2.7 KiB/s wr, 4 op/s
Oct  3 10:04:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:04:51 compute-0 nova_compute[351685]: 2025-10-03 10:04:51.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Oct  3 10:04:53 compute-0 nova_compute[351685]: 2025-10-03 10:04:53.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:04:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355100547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:04:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:04:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355100547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:04:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105079320505307 of space, bias 1.0, pg target 0.3315237961515921 quantized to 32 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:04:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
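
Note: each pg_autoscaler line computes pool_pg_target = capacity_ratio x bias x PG budget, where the budget here works out to 300 (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; the OSD count is an inference from the 60 GiB cluster, not logged). The result is then quantized to a power of two subject to per-pool minimums and hysteresis, which is why most pools stay at their current 32 and only 'cephfs.cephfs.meta' is asked to shrink (16 vs current 32). Reproducing two of the logged targets:

    budget = 3 * 100   # assumed: 3 OSDs x mon_target_pg_per_osd=100
    for pool, ratio, bias in [
            ("vms", 0.001105079320505307, 1.0),
            ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0)]:
        print(pool, ratio * bias * budget)  # 0.3315..., 0.000610..., as logged
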
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:55.967008) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485895967049, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2070, "num_deletes": 251, "total_data_size": 3482263, "memory_usage": 3549616, "flush_reason": "Manual Compaction"}
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485895984484, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3393836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21014, "largest_seqno": 23083, "table_properties": {"data_size": 3384482, "index_size": 5912, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18954, "raw_average_key_size": 20, "raw_value_size": 3365673, "raw_average_value_size": 3565, "num_data_blocks": 267, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759485674, "oldest_key_time": 1759485674, "file_creation_time": 1759485895, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 17520 microseconds, and 8193 cpu microseconds.
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:55.984532) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3393836 bytes OK
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:55.984548) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:55.986359) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:55.986374) EVENT_LOG_v1 {"time_micros": 1759485895986369, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:55.986394) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3473582, prev total WAL file size 3473582, number of live WAL files 2.
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:55.987606) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3314KB)], [50(7517KB)]
Oct  3 10:04:55 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485895987699, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11091392, "oldest_snapshot_seqno": -1}
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4766 keys, 9350525 bytes, temperature: kUnknown
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485896054883, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9350525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9315809, "index_size": 21678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 116822, "raw_average_key_size": 24, "raw_value_size": 9226747, "raw_average_value_size": 1935, "num_data_blocks": 911, "num_entries": 4766, "num_filter_entries": 4766, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759485895, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:56.055187) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9350525 bytes
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:56.058077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.8 rd, 139.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5284, records dropped: 518 output_compression: NoCompression
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:56.058109) EVENT_LOG_v1 {"time_micros": 1759485896058094, "job": 26, "event": "compaction_finished", "compaction_time_micros": 67288, "compaction_time_cpu_micros": 24693, "output_level": 6, "num_output_files": 1, "total_output_size": 9350525, "num_input_records": 5284, "num_output_records": 4766, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485896059615, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759485896062184, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:55.987373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:56.062393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:56.062401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:56.062404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:56.062406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:04:56 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:04:56.062408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
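JOB 26's summary reports read-write-amplify(6.0) and write-amplify(2.8); both follow directly from the byte counts in the surrounding EVENT_LOG entries (file #52 = 3393836 bytes, file #50 = input_data_size minus that, output #53 = 9350525 bytes), assuming the usual definitions: write amplification is bytes written over L0 input bytes, and read-write amplification counts both inputs plus the output:

    # Byte counts taken from the JOB 26 log entries above
    l0_in = 3_393_836          # flushed L0 file #52
    base_in = 11_091_392 - l0_in   # L6 input file #50 (7517 KB)
    out = 9_350_525            # compacted L6 file #53

    write_amp = out / l0_in                      # ~2.8
    rw_amp = (l0_in + base_in + out) / l0_in     # ~6.0
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")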
Oct  3 10:04:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
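The recurring `_set_new_cache_sizes` line is the monitor's cache tuner dividing its memory target between the incremental/full osdmap caches and the rocksdb KV cache. A throwaway parser for watching those allocations, assuming the exact field layout seen in this log:

    import re

    LINE = ("mon.compute-0@0(leader).osd e137 _set_new_cache_sizes "
            "cache_size:1020054731 inc_alloc: 348127232 "
            "full_alloc: 348127232 kv_alloc: 322961408")

    fields = dict(re.findall(r"(\w+):\s*(\d+)", LINE))
    total = int(fields["cache_size"])
    for name in ("inc_alloc", "full_alloc", "kv_alloc"):
        share = int(fields[name]) / total
        print(f"{name}: {fields[name]} bytes ({share:.0%} of cache_size)")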
Oct  3 10:04:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct  3 10:04:56 compute-0 nova_compute[351685]: 2025-10-03 10:04:56.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:04:58 compute-0 podman[419437]: 2025-10-03 10:04:58.821422178 +0000 UTC m=+0.074512288 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct  3 10:04:58 compute-0 nova_compute[351685]: 2025-10-03 10:04:58.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:04:58 compute-0 podman[419436]: 2025-10-03 10:04:58.86179012 +0000 UTC m=+0.117940637 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Oct  3 10:04:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct  3 10:04:58 compute-0 podman[419438]: 2025-10-03 10:04:58.875434988 +0000 UTC m=+0.124008073 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
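The podman health_status events above (ceilometer_agent_compute, openstack_network_exporter, ovn_controller all healthy, failing streak 0) are emitted by the periodic healthcheck runs; the same state can be read back on demand. A minimal sketch shelling out to the podman CLI, with container names taken from these log lines:

    import json
    import subprocess

    def health_status(name: str) -> str:
        # Field path as in recent podman releases (older ones exposed
        # .State.Healthcheck instead of .State.Health).
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out).get("Status", "unknown")

    for name in ("ceilometer_agent_compute", "openstack_network_exporter",
                 "ovn_controller"):
        print(name, health_status(name))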
Oct  3 10:04:59 compute-0 podman[157165]: time="2025-10-03T10:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:04:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:04:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9051 "" "Go-http-client/1.1"
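The two GET requests above are served by the libpod REST API over the podman service socket. A self-contained sketch issuing the same containers/json query from Python, assuming a rootful service socket at /run/podman/podman.sock (rootless setups use a per-user path):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over the podman API unix socket."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")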
Oct  3 10:05:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Oct  3 10:05:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:01 compute-0 openstack_network_exporter[367524]: ERROR   10:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:05:01 compute-0 openstack_network_exporter[367524]: ERROR   10:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:05:01 compute-0 openstack_network_exporter[367524]: ERROR   10:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:05:01 compute-0 openstack_network_exporter[367524]: ERROR   10:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:05:01 compute-0 openstack_network_exporter[367524]: ERROR   10:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
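The four openstack_network_exporter errors above are expected on a compute node: ovn-northd and its control socket only exist on controller nodes, the ovsdb-server socket is simply not where the exporter's configuration looks, and the dpif-netdev appctl calls apply only to userspace (DPDK) datapaths, which this host does not run. A quick, hedged check for which control sockets actually exist locally, assuming the default OVN/OVS runtime directories:

    import glob

    for pattern in ("/var/run/ovn/*.ctl", "/run/openvswitch/*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "none")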
Oct  3 10:05:01 compute-0 nova_compute[351685]: 2025-10-03 10:05:01.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:03 compute-0 nova_compute[351685]: 2025-10-03 10:05:03.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:04 compute-0 podman[419498]: 2025-10-03 10:05:04.840896196 +0000 UTC m=+0.103843497 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 10:05:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:05 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  3 10:05:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:06 compute-0 nova_compute[351685]: 2025-10-03 10:05:06.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:07 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  3 10:05:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 10:05:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:05:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:05:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:05:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:05:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:05:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:05:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:05:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cd077d23-7dd6-44c7-b379-3f9cd4d531b5 does not exist
Oct  3 10:05:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cf5a79dd-e2c2-48eb-9d9f-80b6aa83d016 does not exist
Oct  3 10:05:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 16661ff8-1039-45ae-8cb3-dab437f2bf28 does not exist
Oct  3 10:05:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:05:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:05:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:05:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:05:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:05:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:05:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:05:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:05:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:05:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
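The audited mgr-to-mon commands above (config rm, config generate-minimal-conf, auth get, config-key set) all travel over the same mon_command interface that any authorized client can use. A sketch issuing one of them with the rados Python binding, assuming the usual admin keyring path on a cephadm host:

    import json
    import rados

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"},
    )
    cluster.connect()
    try:
        # Same command the mgr dispatched in the audit log above
        cmd = {"prefix": "config generate-minimal-conf"}
        ret, out, err = cluster.mon_command(json.dumps(cmd), b"")
        print(ret, out.decode() or err)
    finally:
        cluster.shutdown()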
Oct  3 10:05:08 compute-0 podman[419674]: 2025-10-03 10:05:08.206839727 +0000 UTC m=+0.066017355 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:05:08 compute-0 podman[419672]: 2025-10-03 10:05:08.228923724 +0000 UTC m=+0.100882611 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:05:08 compute-0 podman[419673]: 2025-10-03 10:05:08.237403666 +0000 UTC m=+0.101990777 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  3 10:05:08 compute-0 nova_compute[351685]: 2025-10-03 10:05:08.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:08 compute-0 podman[419846]: 2025-10-03 10:05:08.869764486 +0000 UTC m=+0.075212230 container create 4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:05:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:08 compute-0 podman[419846]: 2025-10-03 10:05:08.842861374 +0000 UTC m=+0.048309108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:05:08 compute-0 systemd[1]: Started libpod-conmon-4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645.scope.
Oct  3 10:05:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:05:09 compute-0 podman[419846]: 2025-10-03 10:05:09.005566384 +0000 UTC m=+0.211014138 container init 4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:05:09 compute-0 podman[419846]: 2025-10-03 10:05:09.015729959 +0000 UTC m=+0.221177703 container start 4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:05:09 compute-0 podman[419846]: 2025-10-03 10:05:09.020826933 +0000 UTC m=+0.226274697 container attach 4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 10:05:09 compute-0 angry_mcclintock[419860]: 167 167
Oct  3 10:05:09 compute-0 systemd[1]: libpod-4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645.scope: Deactivated successfully.
Oct  3 10:05:09 compute-0 podman[419846]: 2025-10-03 10:05:09.026490765 +0000 UTC m=+0.231938519 container died 4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 10:05:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1222aa795e3e6a807350fd5d1086784411c45f63234b9a38d89e190264f8b98-merged.mount: Deactivated successfully.
Oct  3 10:05:09 compute-0 podman[419846]: 2025-10-03 10:05:09.090744312 +0000 UTC m=+0.296192076 container remove 4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:05:09 compute-0 systemd[1]: libpod-conmon-4e4fa6fbb4ed1f8ddb3a9b868b8464471750503f997da046cf803089be5d6645.scope: Deactivated successfully.
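The short-lived angry_mcclintock container that prints "167 167" and exits immediately matches cephadm's uid/gid probe: it starts the ceph image just long enough to read the ceph user and group ids (167:167 in these images) so host directories can be chowned to match. A hedged re-creation, mirroring rather than reproducing what cephadm runs:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run stat inside the image to read the owner of /var/lib/ceph.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # expected: "167 167"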
Oct  3 10:05:09 compute-0 podman[419885]: 2025-10-03 10:05:09.341496232 +0000 UTC m=+0.074916120 container create c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:05:09 compute-0 systemd[1]: Started libpod-conmon-c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b.scope.
Oct  3 10:05:09 compute-0 podman[419885]: 2025-10-03 10:05:09.316905194 +0000 UTC m=+0.050325102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:05:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b626358ad4135ecaba4292e348d377dae7003fb73863f731b758b51632417208/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b626358ad4135ecaba4292e348d377dae7003fb73863f731b758b51632417208/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b626358ad4135ecaba4292e348d377dae7003fb73863f731b758b51632417208/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b626358ad4135ecaba4292e348d377dae7003fb73863f731b758b51632417208/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b626358ad4135ecaba4292e348d377dae7003fb73863f731b758b51632417208/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:09 compute-0 podman[419885]: 2025-10-03 10:05:09.460907535 +0000 UTC m=+0.194327453 container init c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:05:09 compute-0 podman[419885]: 2025-10-03 10:05:09.472277739 +0000 UTC m=+0.205697617 container start c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:05:09 compute-0 podman[419885]: 2025-10-03 10:05:09.476554607 +0000 UTC m=+0.209974495 container attach c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:05:10 compute-0 flamboyant_banzai[419901]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:05:10 compute-0 flamboyant_banzai[419901]: --> relative data size: 1.0
Oct  3 10:05:10 compute-0 flamboyant_banzai[419901]: --> All data devices are unavailable
Oct  3 10:05:10 compute-0 systemd[1]: libpod-c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b.scope: Deactivated successfully.
Oct  3 10:05:10 compute-0 systemd[1]: libpod-c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b.scope: Consumed 1.016s CPU time.
Oct  3 10:05:10 compute-0 podman[419930]: 2025-10-03 10:05:10.616589116 +0000 UTC m=+0.035914552 container died c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b626358ad4135ecaba4292e348d377dae7003fb73863f731b758b51632417208-merged.mount: Deactivated successfully.
Oct  3 10:05:10 compute-0 podman[419930]: 2025-10-03 10:05:10.741504715 +0000 UTC m=+0.160830131 container remove c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:05:10 compute-0 systemd[1]: libpod-conmon-c0c2c227b507af324b8bae864c0c744a79454f6211a5386a33b3dccfe5a1f52b.scope: Deactivated successfully.
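The flamboyant_banzai run above is cephadm re-applying the default_drive_group OSD spec through ceph-volume: all three LVM data devices are already consumed by existing OSDs, so the batch report ends with "All data devices are unavailable" and nothing is created. The same dry-run report can be requested by hand; a sketch, assuming ceph-volume is on PATH (i.e. inside the ceph container on this host):

    import subprocess

    # --report only prints what would be done; no devices are touched.
    result = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)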
Oct  3 10:05:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:11 compute-0 podman[420083]: 2025-10-03 10:05:11.592299902 +0000 UTC m=+0.060138357 container create 55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:05:11 compute-0 podman[420083]: 2025-10-03 10:05:11.565575176 +0000 UTC m=+0.033413661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:05:11 compute-0 systemd[1]: Started libpod-conmon-55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0.scope.
Oct  3 10:05:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:05:11 compute-0 podman[420083]: 2025-10-03 10:05:11.749584089 +0000 UTC m=+0.217422564 container init 55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:05:11 compute-0 podman[420083]: 2025-10-03 10:05:11.760808248 +0000 UTC m=+0.228646703 container start 55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:05:11 compute-0 pensive_swirles[420099]: 167 167
Oct  3 10:05:11 compute-0 systemd[1]: libpod-55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0.scope: Deactivated successfully.
Oct  3 10:05:11 compute-0 podman[420083]: 2025-10-03 10:05:11.778420922 +0000 UTC m=+0.246259397 container attach 55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:05:11 compute-0 podman[420083]: 2025-10-03 10:05:11.779874548 +0000 UTC m=+0.247713033 container died 55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-49f44a280da8d61d6fc3e8e39b697e2b80dabc8856fb1d0394e5b486676c8478-merged.mount: Deactivated successfully.
Oct  3 10:05:11 compute-0 podman[420083]: 2025-10-03 10:05:11.874064904 +0000 UTC m=+0.341903359 container remove 55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:05:11 compute-0 systemd[1]: libpod-conmon-55dc93bf2ab52af3ab077fd6db75329358d580ae35d83464b9fef71c9c439dd0.scope: Deactivated successfully.
Oct  3 10:05:11 compute-0 nova_compute[351685]: 2025-10-03 10:05:11.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:12 compute-0 podman[420122]: 2025-10-03 10:05:12.085742524 +0000 UTC m=+0.025076624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:05:12 compute-0 podman[420122]: 2025-10-03 10:05:12.181459119 +0000 UTC m=+0.120793199 container create 9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:05:12 compute-0 systemd[1]: Started libpod-conmon-9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770.scope.
Oct  3 10:05:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bf3aceb0d69584dcd505d8487fa6ae68cf2b1818cc7732e354aa657ea4ba03e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bf3aceb0d69584dcd505d8487fa6ae68cf2b1818cc7732e354aa657ea4ba03e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bf3aceb0d69584dcd505d8487fa6ae68cf2b1818cc7732e354aa657ea4ba03e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bf3aceb0d69584dcd505d8487fa6ae68cf2b1818cc7732e354aa657ea4ba03e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:12 compute-0 podman[420122]: 2025-10-03 10:05:12.345397809 +0000 UTC m=+0.284731909 container init 9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:05:12 compute-0 podman[420122]: 2025-10-03 10:05:12.358637142 +0000 UTC m=+0.297971222 container start 9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:05:12 compute-0 podman[420122]: 2025-10-03 10:05:12.362772555 +0000 UTC m=+0.302106635 container attach 9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:05:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:13 compute-0 serene_davinci[420137]: {
Oct  3 10:05:13 compute-0 serene_davinci[420137]:    "0": [
Oct  3 10:05:13 compute-0 serene_davinci[420137]:        {
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "devices": [
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "/dev/loop3"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            ],
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_name": "ceph_lv0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_size": "21470642176",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "name": "ceph_lv0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "tags": {
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cluster_name": "ceph",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.crush_device_class": "",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.encrypted": "0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osd_id": "0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.type": "block",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.vdo": "0"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            },
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "type": "block",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "vg_name": "ceph_vg0"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:        }
Oct  3 10:05:13 compute-0 serene_davinci[420137]:    ],
Oct  3 10:05:13 compute-0 serene_davinci[420137]:    "1": [
Oct  3 10:05:13 compute-0 serene_davinci[420137]:        {
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "devices": [
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "/dev/loop4"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            ],
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_name": "ceph_lv1",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_size": "21470642176",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "name": "ceph_lv1",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "tags": {
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cluster_name": "ceph",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.crush_device_class": "",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.encrypted": "0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osd_id": "1",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.type": "block",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.vdo": "0"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            },
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "type": "block",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "vg_name": "ceph_vg1"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:        }
Oct  3 10:05:13 compute-0 serene_davinci[420137]:    ],
Oct  3 10:05:13 compute-0 serene_davinci[420137]:    "2": [
Oct  3 10:05:13 compute-0 serene_davinci[420137]:        {
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "devices": [
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "/dev/loop5"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            ],
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_name": "ceph_lv2",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_size": "21470642176",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "name": "ceph_lv2",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "tags": {
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.cluster_name": "ceph",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.crush_device_class": "",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.encrypted": "0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osd_id": "2",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.type": "block",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:                "ceph.vdo": "0"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            },
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "type": "block",
Oct  3 10:05:13 compute-0 serene_davinci[420137]:            "vg_name": "ceph_vg2"
Oct  3 10:05:13 compute-0 serene_davinci[420137]:        }
Oct  3 10:05:13 compute-0 serene_davinci[420137]:    ]
Oct  3 10:05:13 compute-0 serene_davinci[420137]: }
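The JSON block above matches the shape of `ceph-volume lvm list --format json` output: a map from OSD id to its logical volumes, with the ceph.* metadata duplicated between lv_tags and the tags object. A minimal parsing sketch, assuming the block was saved to a file named lvm_list.json (a hypothetical name, not from this log):

    #!/usr/bin/env python3
    # Summarize the ceph-volume lvm JSON shown above (hypothetical helper,
    # not part of this log). Assumes the JSON was saved to lvm_list.json.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" devices={','.join(lv['devices'])}"
                  f" osd_fsid={tags['ceph.osd_fsid']}"
                  f" encrypted={tags['ceph.encrypted']}")

Against the data above this prints three lines, e.g. osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0 encrypted=0.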
Oct  3 10:05:13 compute-0 systemd[1]: libpod-9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770.scope: Deactivated successfully.
Oct  3 10:05:13 compute-0 podman[420122]: 2025-10-03 10:05:13.244593495 +0000 UTC m=+1.183927575 container died 9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bf3aceb0d69584dcd505d8487fa6ae68cf2b1818cc7732e354aa657ea4ba03e-merged.mount: Deactivated successfully.
Oct  3 10:05:13 compute-0 podman[420122]: 2025-10-03 10:05:13.558767226 +0000 UTC m=+1.498101306 container remove 9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:05:13 compute-0 systemd[1]: libpod-conmon-9ea37427544245694fe90c88984faf492236a5e314ef983a32d8baeb23253770.scope: Deactivated successfully.
Oct  3 10:05:13 compute-0 podman[420147]: 2025-10-03 10:05:13.726945212 +0000 UTC m=+0.432371918 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 10:05:13 compute-0 nova_compute[351685]: 2025-10-03 10:05:13.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:14 compute-0 podman[420311]: 2025-10-03 10:05:14.436445582 +0000 UTC m=+0.073963309 container create 0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:05:14 compute-0 systemd[1]: Started libpod-conmon-0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf.scope.
Oct  3 10:05:14 compute-0 podman[420311]: 2025-10-03 10:05:14.405359147 +0000 UTC m=+0.042876894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:05:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:05:14 compute-0 podman[420311]: 2025-10-03 10:05:14.532381175 +0000 UTC m=+0.169898932 container init 0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kirch, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:05:14 compute-0 podman[420311]: 2025-10-03 10:05:14.542047384 +0000 UTC m=+0.179565101 container start 0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:05:14 compute-0 podman[420311]: 2025-10-03 10:05:14.545840316 +0000 UTC m=+0.183358053 container attach 0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 10:05:14 compute-0 focused_kirch[420326]: 167 167
Oct  3 10:05:14 compute-0 systemd[1]: libpod-0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf.scope: Deactivated successfully.
Oct  3 10:05:14 compute-0 podman[420311]: 2025-10-03 10:05:14.552577242 +0000 UTC m=+0.190094989 container died 0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kirch, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:05:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c4e59522054dad205cdcb880e18e5ce712c7637e898e8460f3eb0f9452639cc-merged.mount: Deactivated successfully.
Oct  3 10:05:14 compute-0 podman[420311]: 2025-10-03 10:05:14.596442766 +0000 UTC m=+0.233960483 container remove 0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kirch, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:05:14 compute-0 systemd[1]: libpod-conmon-0bcadbf3859b13f5b3709ff60e2b338a856741e0be6ed84e4be20aec59f1baaf.scope: Deactivated successfully.
Oct  3 10:05:14 compute-0 podman[420348]: 2025-10-03 10:05:14.837105583 +0000 UTC m=+0.054875428 container create 147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:05:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:14 compute-0 systemd[1]: Started libpod-conmon-147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938.scope.
Oct  3 10:05:14 compute-0 podman[420348]: 2025-10-03 10:05:14.817670191 +0000 UTC m=+0.035440056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:05:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28a2b35c6ffa9039d38088a3725153c1fb3b22b0df542f9ce62176b29e6ead/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28a2b35c6ffa9039d38088a3725153c1fb3b22b0df542f9ce62176b29e6ead/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28a2b35c6ffa9039d38088a3725153c1fb3b22b0df542f9ce62176b29e6ead/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed28a2b35c6ffa9039d38088a3725153c1fb3b22b0df542f9ce62176b29e6ead/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:05:14 compute-0 podman[420348]: 2025-10-03 10:05:14.969206924 +0000 UTC m=+0.186976829 container init 147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 10:05:14 compute-0 podman[420348]: 2025-10-03 10:05:14.993438599 +0000 UTC m=+0.211208454 container start 147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:05:14 compute-0 podman[420348]: 2025-10-03 10:05:14.998188222 +0000 UTC m=+0.215958127 container attach 147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]: {
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "osd_id": 1,
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "type": "bluestore"
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:    },
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "osd_id": 2,
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "type": "bluestore"
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:    },
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "osd_id": 0,
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:        "type": "bluestore"
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]:    }
Oct  3 10:05:15 compute-0 awesome_jepsen[420363]: }
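This second JSON, keyed by OSD fsid with device/osd_id/type fields, looks like a bluestore device scan (cf. `ceph-volume raw list`). The two listings can be joined on the fsid; a sketch under the same file-name assumptions as above, plus raw_list.json for this block:

    # Join the two JSON documents shown in this log on the OSD fsid
    # (hypothetical cross-check; the file names are assumptions).
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    fsid_to_dev = {entry["osd_uuid"]: entry["device"] for entry in raw.values()}
    for osd_id, lvs in lvm.items():
        for lv in lvs:
            fsid = lv["tags"]["ceph.osd_fsid"]
            print(f"osd.{osd_id} {fsid} -> {fsid_to_dev.get(fsid, 'not found')}")

For the data above, osd.1 resolves to /dev/mapper/ceph_vg1-ceph_lv1, which is the device-mapper name for the lv_path /dev/ceph_vg1/ceph_lv1 from the first listing.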
Oct  3 10:05:15 compute-0 systemd[1]: libpod-147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938.scope: Deactivated successfully.
Oct  3 10:05:15 compute-0 podman[420348]: 2025-10-03 10:05:15.967375459 +0000 UTC m=+1.185145394 container died 147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed28a2b35c6ffa9039d38088a3725153c1fb3b22b0df542f9ce62176b29e6ead-merged.mount: Deactivated successfully.
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:05:16 compute-0 podman[420348]: 2025-10-03 10:05:16.071455812 +0000 UTC m=+1.289225667 container remove 147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:05:16 compute-0 systemd[1]: libpod-conmon-147e379332367d51cdbaa8298b3a31983fd1a07dde2f894a3356b8620676c938.scope: Deactivated successfully.
Oct  3 10:05:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:05:16 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:05:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:05:16 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 366b02f7-d6ce-4fa5-a7f1-7932b0b8820b does not exist
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6537555e-00c3-447a-8340-7ad79f14e402 does not exist
Oct  3 10:05:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:05:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:05:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:16 compute-0 nova_compute[351685]: 2025-10-03 10:05:16.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:18 compute-0 podman[420459]: 2025-10-03 10:05:18.83183756 +0000 UTC m=+0.089612070 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9)
Oct  3 10:05:18 compute-0 podman[420458]: 2025-10-03 10:05:18.840289351 +0000 UTC m=+0.097314367 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:05:18 compute-0 nova_compute[351685]: 2025-10-03 10:05:18.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Oct  3 10:05:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:21 compute-0 nova_compute[351685]: 2025-10-03 10:05:21.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Oct  3 10:05:23 compute-0 nova_compute[351685]: 2025-10-03 10:05:23.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Oct  3 10:05:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Oct  3 10:05:26 compute-0 nova_compute[351685]: 2025-10-03 10:05:26.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:28 compute-0 nova_compute[351685]: 2025-10-03 10:05:28.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Oct  3 10:05:29 compute-0 podman[157165]: time="2025-10-03T10:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:05:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:05:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9048 "" "Go-http-client/1.1"
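The two GET lines above are libpod REST calls arriving over the Podman API socket (the exporter config shown earlier in this log mounts /run/podman/podman.sock). A stdlib-only sketch of the same query; the socket path is taken from that config and the endpoint from the access log, everything else is hypothetical:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a Unix-domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")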
Oct  3 10:05:29 compute-0 podman[420501]: 2025-10-03 10:05:29.848278411 +0000 UTC m=+0.107158282 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public)
Oct  3 10:05:29 compute-0 podman[420502]: 2025-10-03 10:05:29.876181295 +0000 UTC m=+0.122476343 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true)
Oct  3 10:05:29 compute-0 podman[420504]: 2025-10-03 10:05:29.87789453 +0000 UTC m=+0.123367351 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:05:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Oct  3 10:05:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:31 compute-0 openstack_network_exporter[367524]: ERROR   10:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:05:31 compute-0 openstack_network_exporter[367524]: ERROR   10:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:05:31 compute-0 openstack_network_exporter[367524]: ERROR   10:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:05:31 compute-0 openstack_network_exporter[367524]: ERROR   10:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:05:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:05:31 compute-0 openstack_network_exporter[367524]: ERROR   10:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:05:31 compute-0 openstack_network_exporter[367524]: 
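The appctl errors above mean the exporter found no control socket files for ovsdb-server, ovn-northd, or a userspace (dpif-netdev) datapath. ovn-northd normally runs on the control plane rather than on a compute node, so those two lines are expected here; the others can be checked by looking for *.ctl files in the host paths the exporter container mounts. A hypothetical diagnostic, with the paths taken from the config_data shown earlier:

    # Look for OVS/OVN control sockets in the host paths the exporter
    # mounts (paths from the config_data above; this check is not part
    # of the log).
    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", ", ".join(hits) if hits else "none found")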
Oct  3 10:05:31 compute-0 nova_compute[351685]: 2025-10-03 10:05:31.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:32 compute-0 nova_compute[351685]: 2025-10-03 10:05:32.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:33 compute-0 nova_compute[351685]: 2025-10-03 10:05:33.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:35 compute-0 nova_compute[351685]: 2025-10-03 10:05:35.747 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:35 compute-0 nova_compute[351685]: 2025-10-03 10:05:35.779 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:05:35 compute-0 nova_compute[351685]: 2025-10-03 10:05:35.779 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:05:35 compute-0 nova_compute[351685]: 2025-10-03 10:05:35.780 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:05:35 compute-0 nova_compute[351685]: 2025-10-03 10:05:35.780 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:05:35 compute-0 nova_compute[351685]: 2025-10-03 10:05:35.780 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:05:35 compute-0 podman[420565]: 2025-10-03 10:05:35.80094191 +0000 UTC m=+0.061746449 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 10:05:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:05:36 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1932194544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.276 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.370 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.377 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.377 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.378 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.713 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.715 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3758MB free_disk=59.92198944091797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.715 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.716 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
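The `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` call logged above is how nova's libvirt driver sizes the RBD-backed storage. A minimal sketch of the same call, assuming this host has /etc/ceph/ceph.conf and the client.openstack keyring (the JSON field names match the usual `ceph df` output in recent Ceph releases):

    import json
    import subprocess

    # Run the same command nova_compute logs above and parse its JSON output.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)

    # The top-level "stats" dict carries cluster-wide byte counts.
    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
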
Oct  3 10:05:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.957 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.958 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 5b008829-2c76-4e40-b9e6-0e3d73095522 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.958 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.958 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:05:36 compute-0 nova_compute[351685]: 2025-10-03 10:05:36.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.061 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:05:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:05:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991813475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.486 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.493 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.508 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.509 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.510 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.510 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.511 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 10:05:37 compute-0 nova_compute[351685]: 2025-10-03 10:05:37.524 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 10:05:38 compute-0 nova_compute[351685]: 2025-10-03 10:05:38.506 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:38 compute-0 nova_compute[351685]: 2025-10-03 10:05:38.507 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:05:38 compute-0 nova_compute[351685]: 2025-10-03 10:05:38.507 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
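The inventory dict logged at 10:05:37.508 pairs each resource class with a reserved amount and an allocation ratio; Placement's documented schedulable capacity is (total - reserved) * allocation_ratio, which is how 8 physical vCPUs back 32 schedulable VCPU units here. A worked check using the values from that line:

    # Schedulable capacity per resource class, per the Placement formula:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(cap, 2))
    # -> MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
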
Oct  3 10:05:38 compute-0 podman[420625]: 2025-10-03 10:05:38.794459805 +0000 UTC m=+0.062992609 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:05:38 compute-0 podman[420626]: 2025-10-03 10:05:38.810711025 +0000 UTC m=+0.076072878 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:05:38 compute-0 podman[420627]: 2025-10-03 10:05:38.823311588 +0000 UTC m=+0.086222742 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=iscsid)
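The podman events above are periodic healthcheck runs; each container's 'healthcheck' config mounts /openstack/healthcheck into the container, and podman records the result as health_status. A minimal sketch querying the same state with `podman inspect` (container name taken from the first event in this section):

    import json
    import subprocess

    # Read the health state podman stores for a container.
    def health_status(name: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["Status"]

    print(health_status("ovn_metadata_agent"))  # e.g. "healthy"
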
Oct  3 10:05:38 compute-0 nova_compute[351685]: 2025-10-03 10:05:38.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:39 compute-0 nova_compute[351685]: 2025-10-03 10:05:39.489 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:05:39 compute-0 nova_compute[351685]: 2025-10-03 10:05:39.490 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:05:39 compute-0 nova_compute[351685]: 2025-10-03 10:05:39.490 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:05:39 compute-0 nova_compute[351685]: 2025-10-03 10:05:39.490 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.641 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.666 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.666 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.667 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.667 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.668 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.668 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:40 compute-0 nova_compute[351685]: 2025-10-03 10:05:40.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
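The instance_info_cache entry logged at 10:05:40.641 is a list of VIF dicts. A small sketch walking that structure to pull out fixed and floating addresses; the literal below is trimmed to the fields used, mirroring the logged shape:

    # One dict per VIF, as in the network_info logged above (trimmed).
    network_info = [{
        "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.158", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.250",
                                       "type": "floating"}]}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # -> a8897fbc-... 192.168.0.158 -> ['192.168.122.250']
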
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.882 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.883 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
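Each "Registering pollster" line above wraps a stevedore Extension loaded from ceilometer's entry points. A sketch listing what is installed; the ceilometer.poll.compute namespace name is an assumption based on ceilometer's packaging:

    from stevedore import extension

    # Enumerate the compute pollster plugins stevedore can see on this host.
    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")
    print(sorted(mgr.names()))
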
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.890 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5b008829-2c76-4e40-b9e6-0e3d73095522', 'name': 'vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:05:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.893 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.893 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:05:40.894228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.906 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.911 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.914 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:05:40.912321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.915 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.915 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.916 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.916 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:05:40.916591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.939 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.940 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.940 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.964 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.965 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.965 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
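disk.device.capacity is reported in bytes, so the repeated 1073741824 samples are exactly 1 GiB, matching the flavor's disk=1 and ephemeral=1 (GiB) shown in the discovery output above; the smaller third sample belongs to an additional device the log does not identify. A quick check with the three samples for instance 5b008829:

    # 1 GiB = 2**30 bytes; the two large devices match the flavor's sizes.
    GiB = 2**30
    for vol in (1073741824, 1073741824, 583680):
        print(vol, "bytes =", vol / GiB, "GiB")
    # -> 1.0 GiB, 1.0 GiB, ~0.0005 GiB
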
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.968 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.968 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:40.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:05:40.968588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.012 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.012 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.012 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.054 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.054 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 1250055753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.055 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 207399736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.055 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 144385577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:05:41.054721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
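disk.device.read.latency is a cumulative counter of time spent on reads (nanoseconds, per libvirt's domain block stats); dividing by the matching disk.device.read.requests sample gives an average per-request latency. A rough check, assuming the first-listed samples for instance b43db93c refer to the same device:

    # Cumulative ns spent on reads / number of reads = average read latency.
    total_ns = 1351272306   # first read.latency sample for b43db93c above
    requests = 840          # first read.requests sample for b43db93c below
    print(total_ns / requests / 1e6, "ms per read on average")  # ~1.61 ms
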
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.057 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:05:41.057516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.057 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.058 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.058 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
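[editor's note] Each instance produces three "volume:" lines per disk meter because the _stats_to_sample path emits one sample per block device attached to the guest. A sketch of that fan-out; the device names (vda/vdb/vdc) are hypothetical, the three values are the ones logged for instance 5b008829 above:

```python
def per_device_samples(instance_id, meter, stats_by_device):
    # one sample per block device; the device name travels in the sample's
    # resource metadata, which is why the same instance/meter pair repeats
    for device, value in stats_by_device.items():
        yield {"resource_id": instance_id, "meter": meter,
               "device": device, "volume": value}

stats = {"vda": 844, "vdb": 173, "vdc": 124}   # hypothetical device names
for s in per_device_samples("5b008829-2c76-4e40-b9e6-0e3d73095522",
                            "disk.device.read.requests", stats):
    print(f"{s['resource_id']}/{s['meter']} volume: {s['volume']}")
```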
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.060 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:05:41.060382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.061 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.061 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.064 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:05:41.063707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.064 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.064 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.064 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.064 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.066 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.066 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:05:41.066574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.067 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.067 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.069 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:05:41.070115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.092 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.111 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
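[editor's note] The power.state volume of 1 for both instances corresponds to libvirt's VIR_DOMAIN_RUNNING. A minimal mapping sketch; the dict mirrors libvirt's virDomainState enum, and treating the meter as the raw enum value is an assumption about the pollster:

```python
LIBVIRT_DOMAIN_STATE = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}

def power_state_sample(instance_id, domain_state):
    # the meter reports the enum value itself; 1 == running
    return {"resource_id": instance_id, "meter": "power.state",
            "volume": domain_state,
            "state": LIBVIRT_DOMAIN_STATE[domain_state]}

print(power_state_sample("5b008829-2c76-4e40-b9e6-0e3d73095522", 1))
```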
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.112 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.113 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 13657206600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.113 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 25497777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:05:41.113038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.114 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.114 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.114 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.114 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.115 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.116 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.117 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:05:41.116163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.118 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.118 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.118 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.119 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.119 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.120 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.121 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:05:41.121172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.121 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.bytes.delta volume: 3321 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
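[editor's note] The *.delta meters are derived from the cumulative counters: the value is the difference between this cycle's reading and the previous one, kept in a per-resource cache. A sketch under that assumption (the cache layout is invented; the 1786 starting value is inferred from this cycle's readings, not logged):

```python
_previous = {}  # (resource_id, meter) -> last cumulative reading

def delta(resource_id, meter, current):
    key = (resource_id, meter)
    prev = _previous.get(key)
    _previous[key] = current
    if prev is None:
        return None                  # first cycle: nothing to diff against
    return max(current - prev, 0)    # guard against counter resets

# cumulative network.incoming.bytes going 1786 -> 5107 yields the 3321
# logged above for instance 5b008829
print(delta("5b008829-2c76-4e40-b9e6-0e3d73095522", "network.incoming.bytes", 1786))  # None
print(delta("5b008829-2c76-4e40-b9e6-0e3d73095522", "network.incoming.bytes", 5107))  # 3321
```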
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
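[editor's note] The "Skip pollster ..., no new resources found this cycle" lines show a short-circuit: a pollster is not run when discovery yields no resources beyond those already handled. A sketch of that check; the resource-set bookkeeping is assumed, not taken from the real manager:

```python
def maybe_poll(pollster_name, discovered, already_polled, poll):
    # poll() is any callable that performs the actual sampling
    new_resources = [r for r in discovered if r not in already_polled]
    if not new_resources:
        print(f"Skip pollster {pollster_name}, no new resources found this cycle")
        return []
    already_polled.update(new_resources)
    return poll(new_resources)
```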
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.123 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.123 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.124 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:05:41.123278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.124 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.125 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
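[editor's note] The interleaved ceph-mon line is its cache tuner dividing a ~1 GB budget between incremental osdmaps, full osdmaps, and the rocksdb kv cache. The split policy itself is internal to ceph; this snippet only verifies the arithmetic in the logged values:

```python
cache_size = 1020054731
inc_alloc, full_alloc, kv_alloc = 348127232, 348127232, 322961408

total = inc_alloc + full_alloc + kv_alloc
# 1019215872 allocated, 838859 bytes of the budget left unassigned
print(total, cache_size - total)
```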
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.125 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:05:41.124794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.126 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.126 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:05:41.126921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.128 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.128 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.128 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.129 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:05:41.128022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.129 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.129 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/cpu volume: 95040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.130 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 36220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:05:41.129816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
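[editor's note] The cpu meter is cumulative guest CPU time in nanoseconds, so the two volumes above correspond to roughly 95.0 s and 36.2 s of CPU time consumed since the instances started:

```python
for instance, ns in [("5b008829-2c76-4e40-b9e6-0e3d73095522", 95_040_000_000),
                     ("b43db93c-a4fe-46e9-8418-eedf4f5c135a", 36_220_000_000)]:
    print(f"{instance}: {ns / 1e9:.2f} s CPU time")
```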
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.130 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.130 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.131 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.131 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:05:41.131114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.131 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:05:41.132861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.133 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.bytes volume: 4648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.133 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:05:41.134429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.134 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.bytes.delta volume: 3387 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.135 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.135 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.136 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.136 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/memory.usage volume: 49.12109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.136 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:05:41.136158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
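[editor's note] memory.usage is reported in MB; the 49.12109375 above is exactly 50300 KiB / 1024, consistent with a value derived from libvirt's per-domain memory statistics. The derivation below (available minus unused) is one common approach and is shown as an assumption, not as the pollster's actual formula:

```python
def memory_usage_mb(stats_kib):
    # libvirt dommemstat reports KiB; "available - unused" approximates
    # memory actually in use inside the guest
    used_kib = stats_kib["available"] - stats_kib["unused"]
    return used_kib / 1024.0

# hypothetical stats that reproduce the logged 49.12109375 MB
print(memory_usage_mb({"available": 100000, "unused": 49700}))
```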
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.137 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.137 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.137 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.137 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.137 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:05:41.137891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.138 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.bytes volume: 5107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.138 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2268 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.139 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.139 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:05:41.139488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.139 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.139 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.145 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.146 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:05:41.147 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:05:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:05:41.594 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:05:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:05:41.595 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:05:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:05:41.595 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:05:41 compute-0 nova_compute[351685]: 2025-10-03 10:05:41.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:41 compute-0 nova_compute[351685]: 2025-10-03 10:05:41.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:42 compute-0 nova_compute[351685]: 2025-10-03 10:05:42.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:43 compute-0 nova_compute[351685]: 2025-10-03 10:05:43.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:44 compute-0 nova_compute[351685]: 2025-10-03 10:05:44.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:05:44 compute-0 nova_compute[351685]: 2025-10-03 10:05:44.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 10:05:44 compute-0 podman[420688]: 2025-10-03 10:05:44.765682916 +0000 UTC m=+0.076210632 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:05:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:05:46
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.control']
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:05:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:05:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:46 compute-0 nova_compute[351685]: 2025-10-03 10:05:46.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:48 compute-0 nova_compute[351685]: 2025-10-03 10:05:48.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:49 compute-0 podman[420709]: 2025-10-03 10:05:49.808102084 +0000 UTC m=+0.075008074 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:05:49 compute-0 podman[420710]: 2025-10-03 10:05:49.84516287 +0000 UTC m=+0.098028150 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, release-0.7.12=, release=1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Oct  3 10:05:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:51 compute-0 nova_compute[351685]: 2025-10-03 10:05:51.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:53 compute-0 nova_compute[351685]: 2025-10-03 10:05:53.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:05:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/57086979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:05:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:05:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/57086979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:05:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001105079320505307 of space, bias 1.0, pg target 0.3315237961515921 quantized to 32 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:05:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:05:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:05:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:56 compute-0 nova_compute[351685]: 2025-10-03 10:05:56.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:58 compute-0 nova_compute[351685]: 2025-10-03 10:05:58.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:05:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:05:59 compute-0 podman[157165]: time="2025-10-03T10:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:05:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:05:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9048 "" "Go-http-client/1.1"
Oct  3 10:06:00 compute-0 podman[420752]: 2025-10-03 10:06:00.866384915 +0000 UTC m=+0.124029224 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct  3 10:06:00 compute-0 podman[420751]: 2025-10-03 10:06:00.868158821 +0000 UTC m=+0.118808346 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Oct  3 10:06:00 compute-0 podman[420753]: 2025-10-03 10:06:00.878888764 +0000 UTC m=+0.130525510 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 10:06:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:01 compute-0 openstack_network_exporter[367524]: ERROR   10:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:06:01 compute-0 openstack_network_exporter[367524]: ERROR   10:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:06:01 compute-0 openstack_network_exporter[367524]: ERROR   10:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:06:01 compute-0 openstack_network_exporter[367524]: ERROR   10:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:06:01 compute-0 openstack_network_exporter[367524]: ERROR   10:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:06:02 compute-0 nova_compute[351685]: 2025-10-03 10:06:02.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:03 compute-0 nova_compute[351685]: 2025-10-03 10:06:03.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:06 compute-0 podman[420815]: 2025-10-03 10:06:06.866651766 +0000 UTC m=+0.115966104 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 10:06:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:07 compute-0 nova_compute[351685]: 2025-10-03 10:06:07.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:08 compute-0 nova_compute[351685]: 2025-10-03 10:06:08.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:09 compute-0 podman[420834]: 2025-10-03 10:06:09.858416825 +0000 UTC m=+0.103877267 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct  3 10:06:09 compute-0 podman[420832]: 2025-10-03 10:06:09.864804829 +0000 UTC m=+0.108890658 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:06:09 compute-0 podman[420833]: 2025-10-03 10:06:09.884712287 +0000 UTC m=+0.133900489 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  3 10:06:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:12 compute-0 nova_compute[351685]: 2025-10-03 10:06:12.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:13 compute-0 nova_compute[351685]: 2025-10-03 10:06:13.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:15 compute-0 podman[420889]: 2025-10-03 10:06:15.831074563 +0000 UTC m=+0.085870551 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 10:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:06:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:17 compute-0 nova_compute[351685]: 2025-10-03 10:06:17.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:06:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:06:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:06:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:06:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:06:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:06:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a7fb6627-44a1-4322-acec-f0f8cac942c4 does not exist
Oct  3 10:06:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ad15d5f0-8b60-40d1-8030-b7e0de54fcb8 does not exist
Oct  3 10:06:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bd23a068-ca90-4e7f-b993-c8d55bd2b408 does not exist
Oct  3 10:06:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:06:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:06:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:06:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:06:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:06:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:06:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:06:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:06:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:06:18 compute-0 podman[421176]: 2025-10-03 10:06:18.378833492 +0000 UTC m=+0.030498837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:06:18 compute-0 podman[421176]: 2025-10-03 10:06:18.680407549 +0000 UTC m=+0.332072894 container create 9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:06:18 compute-0 systemd[1]: Started libpod-conmon-9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5.scope.
Oct  3 10:06:18 compute-0 nova_compute[351685]: 2025-10-03 10:06:18.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:06:19 compute-0 podman[421176]: 2025-10-03 10:06:19.207316473 +0000 UTC m=+0.858981868 container init 9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:06:19 compute-0 podman[421176]: 2025-10-03 10:06:19.228329447 +0000 UTC m=+0.879994742 container start 9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:06:19 compute-0 eloquent_faraday[421190]: 167 167
Oct  3 10:06:19 compute-0 systemd[1]: libpod-9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5.scope: Deactivated successfully.
Oct  3 10:06:19 compute-0 podman[421176]: 2025-10-03 10:06:19.4232878 +0000 UTC m=+1.074953135 container attach 9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_faraday, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:06:19 compute-0 podman[421176]: 2025-10-03 10:06:19.424358635 +0000 UTC m=+1.076023960 container died 9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7216ad9e76624fb37eb122aa316b9273eff432dddf546e0815b2b517177daf07-merged.mount: Deactivated successfully.
Oct  3 10:06:20 compute-0 podman[421176]: 2025-10-03 10:06:20.897711556 +0000 UTC m=+2.549376911 container remove 9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_faraday, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 10:06:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:20 compute-0 systemd[1]: libpod-conmon-9b32bbe1dd7f530061c0384c8758ac0a499b14394b9066d36eed0ddc857698f5.scope: Deactivated successfully.
Oct  3 10:06:21 compute-0 podman[421208]: 2025-10-03 10:06:21.07113514 +0000 UTC m=+1.064277593 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:06:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:21 compute-0 podman[421209]: 2025-10-03 10:06:21.16917182 +0000 UTC m=+1.165394521 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-container, container_name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vcs-type=git, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 10:06:21 compute-0 podman[421246]: 2025-10-03 10:06:21.155017786 +0000 UTC m=+0.048623297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:06:21 compute-0 podman[421246]: 2025-10-03 10:06:21.634830662 +0000 UTC m=+0.528436143 container create 98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:06:21 compute-0 systemd[1]: Started libpod-conmon-98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484.scope.
Oct  3 10:06:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150dc46f3f6f251f7c6b86f7400f4a52d18a841208c606f5e544a2dd2094d9aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150dc46f3f6f251f7c6b86f7400f4a52d18a841208c606f5e544a2dd2094d9aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150dc46f3f6f251f7c6b86f7400f4a52d18a841208c606f5e544a2dd2094d9aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150dc46f3f6f251f7c6b86f7400f4a52d18a841208c606f5e544a2dd2094d9aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150dc46f3f6f251f7c6b86f7400f4a52d18a841208c606f5e544a2dd2094d9aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:22 compute-0 nova_compute[351685]: 2025-10-03 10:06:22.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:22 compute-0 podman[421246]: 2025-10-03 10:06:22.236460609 +0000 UTC m=+1.130066100 container init 98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_thompson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:06:22 compute-0 podman[421246]: 2025-10-03 10:06:22.245929332 +0000 UTC m=+1.139534803 container start 98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_thompson, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:06:22 compute-0 podman[421246]: 2025-10-03 10:06:22.501837557 +0000 UTC m=+1.395443048 container attach 98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_thompson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:06:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:23 compute-0 awesome_thompson[421269]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:06:23 compute-0 awesome_thompson[421269]: --> relative data size: 1.0
Oct  3 10:06:23 compute-0 awesome_thompson[421269]: --> All data devices are unavailable
Oct  3 10:06:23 compute-0 systemd[1]: libpod-98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484.scope: Deactivated successfully.
Oct  3 10:06:23 compute-0 systemd[1]: libpod-98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484.scope: Consumed 1.136s CPU time.
Oct  3 10:06:23 compute-0 podman[421246]: 2025-10-03 10:06:23.459609469 +0000 UTC m=+2.353214940 container died 98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_thompson, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:06:23 compute-0 nova_compute[351685]: 2025-10-03 10:06:23.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-150dc46f3f6f251f7c6b86f7400f4a52d18a841208c606f5e544a2dd2094d9aa-merged.mount: Deactivated successfully.
Oct  3 10:06:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:25 compute-0 podman[421246]: 2025-10-03 10:06:25.044759872 +0000 UTC m=+3.938365363 container remove 98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:06:25 compute-0 systemd[1]: libpod-conmon-98184a1b020e8bdd63022c0d98fd1da8214a0a2b99a6074fedce1abec6668484.scope: Deactivated successfully.
Oct  3 10:06:26 compute-0 podman[421450]: 2025-10-03 10:06:25.907098568 +0000 UTC m=+0.025633583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:06:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:26 compute-0 podman[421450]: 2025-10-03 10:06:26.19166173 +0000 UTC m=+0.310196725 container create 81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:06:26 compute-0 systemd[1]: Started libpod-conmon-81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff.scope.
Oct  3 10:06:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:06:26 compute-0 podman[421450]: 2025-10-03 10:06:26.703096159 +0000 UTC m=+0.821631184 container init 81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wiles, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 10:06:26 compute-0 podman[421450]: 2025-10-03 10:06:26.711452336 +0000 UTC m=+0.829987331 container start 81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:06:26 compute-0 mystifying_wiles[421466]: 167 167
Oct  3 10:06:26 compute-0 systemd[1]: libpod-81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff.scope: Deactivated successfully.
Oct  3 10:06:26 compute-0 podman[421450]: 2025-10-03 10:06:26.806954245 +0000 UTC m=+0.925489250 container attach 81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wiles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:06:26 compute-0 podman[421450]: 2025-10-03 10:06:26.807429299 +0000 UTC m=+0.925964334 container died 81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wiles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:06:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:27 compute-0 nova_compute[351685]: 2025-10-03 10:06:27.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae80d4194ec71fb7701084641ac038a302962efc67d200824cf9177f488f061c-merged.mount: Deactivated successfully.
Oct  3 10:06:28 compute-0 podman[421450]: 2025-10-03 10:06:28.149927862 +0000 UTC m=+2.268462867 container remove 81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 10:06:28 compute-0 systemd[1]: libpod-conmon-81375dd9212cf7a0c0bca3d28522fc80c95c37a40d577695d9df86be6a147aff.scope: Deactivated successfully.
Oct  3 10:06:28 compute-0 podman[421489]: 2025-10-03 10:06:28.392778879 +0000 UTC m=+0.048457333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:06:28 compute-0 podman[421489]: 2025-10-03 10:06:28.504082003 +0000 UTC m=+0.159760417 container create 925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:06:28 compute-0 systemd[1]: Started libpod-conmon-925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4.scope.
Oct  3 10:06:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6904d0ad16b3dc572bb30945905f9d0d027c861c8f4ffef76e26a8239becb4ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6904d0ad16b3dc572bb30945905f9d0d027c861c8f4ffef76e26a8239becb4ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6904d0ad16b3dc572bb30945905f9d0d027c861c8f4ffef76e26a8239becb4ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6904d0ad16b3dc572bb30945905f9d0d027c861c8f4ffef76e26a8239becb4ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:28 compute-0 podman[421489]: 2025-10-03 10:06:28.805353371 +0000 UTC m=+0.461031795 container init 925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:06:28 compute-0 podman[421489]: 2025-10-03 10:06:28.819997421 +0000 UTC m=+0.475675825 container start 925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gagarin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:06:28 compute-0 podman[421489]: 2025-10-03 10:06:28.894669152 +0000 UTC m=+0.550347556 container attach 925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:06:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:28 compute-0 nova_compute[351685]: 2025-10-03 10:06:28.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]: {
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:    "0": [
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:        {
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "devices": [
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "/dev/loop3"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            ],
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_name": "ceph_lv0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_size": "21470642176",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "name": "ceph_lv0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "tags": {
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cluster_name": "ceph",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.crush_device_class": "",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.encrypted": "0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osd_id": "0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.type": "block",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.vdo": "0"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            },
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "type": "block",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "vg_name": "ceph_vg0"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:        }
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:    ],
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:    "1": [
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:        {
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "devices": [
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "/dev/loop4"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            ],
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_name": "ceph_lv1",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_size": "21470642176",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "name": "ceph_lv1",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "tags": {
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cluster_name": "ceph",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.crush_device_class": "",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.encrypted": "0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osd_id": "1",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.type": "block",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.vdo": "0"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            },
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "type": "block",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "vg_name": "ceph_vg1"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:        }
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:    ],
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:    "2": [
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:        {
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "devices": [
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "/dev/loop5"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            ],
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_name": "ceph_lv2",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_size": "21470642176",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "name": "ceph_lv2",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "tags": {
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.cluster_name": "ceph",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.crush_device_class": "",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.encrypted": "0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osd_id": "2",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.type": "block",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:                "ceph.vdo": "0"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            },
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "type": "block",
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:            "vg_name": "ceph_vg2"
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:        }
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]:    ]
Oct  3 10:06:29 compute-0 trusting_gagarin[421504]: }
Oct  3 10:06:29 compute-0 systemd[1]: libpod-925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4.scope: Deactivated successfully.
Oct  3 10:06:29 compute-0 podman[421489]: 2025-10-03 10:06:29.633945086 +0000 UTC m=+1.289623500 container died 925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gagarin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 10:06:29 compute-0 podman[157165]: time="2025-10-03T10:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6904d0ad16b3dc572bb30945905f9d0d027c861c8f4ffef76e26a8239becb4ca-merged.mount: Deactivated successfully.
Oct  3 10:06:30 compute-0 podman[421489]: 2025-10-03 10:06:30.70950754 +0000 UTC m=+2.365185974 container remove 925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:06:30 compute-0 podman[157165]: @ - - [03/Oct/2025:10:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:06:30 compute-0 systemd[1]: libpod-conmon-925d4e265314997397903c5e428548b9982f3ad89c91576cb5d4cb29ce438ce4.scope: Deactivated successfully.
Oct  3 10:06:30 compute-0 podman[157165]: @ - - [03/Oct/2025:10:06:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9045 "" "Go-http-client/1.1"
Oct  3 10:06:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:31 compute-0 podman[421548]: 2025-10-03 10:06:31.041065718 +0000 UTC m=+0.117263027 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true)
Oct  3 10:06:31 compute-0 podman[421547]: 2025-10-03 10:06:31.044929982 +0000 UTC m=+0.116522773 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Oct  3 10:06:31 compute-0 podman[421549]: 2025-10-03 10:06:31.086087999 +0000 UTC m=+0.154110836 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:06:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:31 compute-0 openstack_network_exporter[367524]: ERROR   10:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:06:31 compute-0 openstack_network_exporter[367524]: ERROR   10:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:06:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:06:31 compute-0 openstack_network_exporter[367524]: ERROR   10:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:06:31 compute-0 openstack_network_exporter[367524]: ERROR   10:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:06:31 compute-0 openstack_network_exporter[367524]: ERROR   10:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:06:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:06:31 compute-0 podman[421724]: 2025-10-03 10:06:31.670814295 +0000 UTC m=+0.045730835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:06:31 compute-0 podman[421724]: 2025-10-03 10:06:31.814485826 +0000 UTC m=+0.189402346 container create 87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kilby, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:06:31 compute-0 systemd[1]: Started libpod-conmon-87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2.scope.
Oct  3 10:06:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:06:32 compute-0 podman[421724]: 2025-10-03 10:06:32.002556099 +0000 UTC m=+0.377472639 container init 87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:06:32 compute-0 podman[421724]: 2025-10-03 10:06:32.015774832 +0000 UTC m=+0.390691342 container start 87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:06:32 compute-0 nova_compute[351685]: 2025-10-03 10:06:32.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:06:32 compute-0 nice_kilby[421739]: 167 167
Oct  3 10:06:32 compute-0 systemd[1]: libpod-87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2.scope: Deactivated successfully.
Oct  3 10:06:32 compute-0 podman[421724]: 2025-10-03 10:06:32.049664597 +0000 UTC m=+0.424581117 container attach 87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 10:06:32 compute-0 podman[421724]: 2025-10-03 10:06:32.050503244 +0000 UTC m=+0.425419754 container died 87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kilby, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-16a0c70fc5ec5d2e75da676de4c4889a982460deec8aa4c102d73bbc0b294fa7-merged.mount: Deactivated successfully.
Oct  3 10:06:32 compute-0 podman[421724]: 2025-10-03 10:06:32.300794989 +0000 UTC m=+0.675711499 container remove 87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kilby, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:06:32 compute-0 systemd[1]: libpod-conmon-87fea8ee8a91433a2b91a4a5f602934b607ec34bf66358ea03c3fe18b65a8ad2.scope: Deactivated successfully.
Oct  3 10:06:32 compute-0 podman[421763]: 2025-10-03 10:06:32.51500953 +0000 UTC m=+0.071365227 container create eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:06:32 compute-0 systemd[1]: Started libpod-conmon-eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056.scope.
Oct  3 10:06:32 compute-0 podman[421763]: 2025-10-03 10:06:32.479858843 +0000 UTC m=+0.036214540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:06:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf322e7de40c0424624976e0844728f1c72a9875310323bd94bcfa495fa31901/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf322e7de40c0424624976e0844728f1c72a9875310323bd94bcfa495fa31901/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf322e7de40c0424624976e0844728f1c72a9875310323bd94bcfa495fa31901/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf322e7de40c0424624976e0844728f1c72a9875310323bd94bcfa495fa31901/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:06:32 compute-0 podman[421763]: 2025-10-03 10:06:32.634596869 +0000 UTC m=+0.190952576 container init eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 10:06:32 compute-0 podman[421763]: 2025-10-03 10:06:32.654477236 +0000 UTC m=+0.210832913 container start eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:06:32 compute-0 podman[421763]: 2025-10-03 10:06:32.664106564 +0000 UTC m=+0.220462271 container attach eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:06:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:33 compute-0 objective_cohen[421778]: {
Oct  3 10:06:33 compute-0 objective_cohen[421778]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "osd_id": 1,
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "type": "bluestore"
Oct  3 10:06:33 compute-0 objective_cohen[421778]:    },
Oct  3 10:06:33 compute-0 objective_cohen[421778]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "osd_id": 2,
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "type": "bluestore"
Oct  3 10:06:33 compute-0 objective_cohen[421778]:    },
Oct  3 10:06:33 compute-0 objective_cohen[421778]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "osd_id": 0,
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:06:33 compute-0 objective_cohen[421778]:        "type": "bluestore"
Oct  3 10:06:33 compute-0 objective_cohen[421778]:    }
Oct  3 10:06:33 compute-0 objective_cohen[421778]: }
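[Editor's note] The JSON printed by objective_cohen has the shape of `ceph-volume lvm list --format json` output: a map keyed by OSD UUID, one entry per bluestore OSD on this host. That this run is cephadm's periodic device/OSD refresh is an inference from context; the field names below are verbatim from the log. A minimal parsing sketch:

    import json

    # One entry copied from the listing above (osd.0); the full blob has three.
    raw_json = """{
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
        "type": "bluestore"
      }
    }"""
    for osd_uuid, info in json.loads(raw_json).items():
        print(f"osd.{info['osd_id']}: {info['device']} ({info['type']})")
    # -> osd.0: /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)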
Oct  3 10:06:33 compute-0 systemd[1]: libpod-eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056.scope: Deactivated successfully.
Oct  3 10:06:33 compute-0 podman[421763]: 2025-10-03 10:06:33.728377626 +0000 UTC m=+1.284733323 container died eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:06:33 compute-0 systemd[1]: libpod-eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056.scope: Consumed 1.065s CPU time.
Oct  3 10:06:33 compute-0 nova_compute[351685]: 2025-10-03 10:06:33.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf322e7de40c0424624976e0844728f1c72a9875310323bd94bcfa495fa31901-merged.mount: Deactivated successfully.
Oct  3 10:06:34 compute-0 podman[421763]: 2025-10-03 10:06:34.317845674 +0000 UTC m=+1.874201381 container remove eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:06:34 compute-0 systemd[1]: libpod-conmon-eec9e6932e43b3f01b1e55871a52f0a25ef2e608f7f616ae4078b542517f8056.scope: Deactivated successfully.
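[Editor's note] The surrounding systemd/podman lines trace a one-shot container: create, conmon scope start, init, start, attach, then died/removed about 1.2 s later with the overlay mount cleaned up. That is the footprint of a `podman run --rm`-style invocation that exists only to emit the JSON above. A hedged reconstruction of the pattern (the ceph-volume arguments are an assumption, and the host bind mounts a real run needs are omitted; the image digest is the one in the log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: run, capture stdout, auto-remove on exit (--rm).
    # Real cephadm invocations also bind-mount /dev, /var/log/ceph,
    # /etc/ceph/ceph.conf etc. (see the xfs remount messages above);
    # omitted here for brevity.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],  # assumed command
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)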
Oct  3 10:06:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:06:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:06:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:06:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:06:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev fae20780-ed4d-46ac-8d62-9a79dd40d460 does not exist
Oct  3 10:06:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3efb8443-7694-44f6-976a-49179e196f22 does not exist
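[Editor's note] The mon_command lines just above show the cephadm mgr module persisting the refreshed host inventory under keys like mgr/cephadm/host.compute-0.devices.0 in the monitor's config-key store. Such a key can be inspected later with the standard ceph CLI; a minimal sketch (key name taken from the log, default admin credentials assumed):

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(
        ["ceph", "config-key", "get", key],   # standard ceph CLI subcommand
        capture_output=True, text=True, check=True,
    ).stdout
    # cephadm typically stores JSON blobs under these keys.
    print(json.loads(out))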
Oct  3 10:06:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:34.625 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:06:34 compute-0 nova_compute[351685]: 2025-10-03 10:06:34.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:34.628 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 10:06:34 compute-0 nova_compute[351685]: 2025-10-03 10:06:34.746 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:06:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:06:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:36 compute-0 nova_compute[351685]: 2025-10-03 10:06:36.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:36 compute-0 nova_compute[351685]: 2025-10-03 10:06:36.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:06:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:37 compute-0 nova_compute[351685]: 2025-10-03 10:06:37.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:37 compute-0 nova_compute[351685]: 2025-10-03 10:06:37.526 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:06:37 compute-0 nova_compute[351685]: 2025-10-03 10:06:37.527 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:06:37 compute-0 nova_compute[351685]: 2025-10-03 10:06:37.527 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:06:37 compute-0 podman[421873]: 2025-10-03 10:06:37.815486192 +0000 UTC m=+0.075258681 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.813 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updating instance_info_cache with network_info: [{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
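[Editor's note] The info-cache entry Nova writes here is plain JSON, so the instance's fixed and floating addresses can be pulled straight out of it. A sketch (the literal below is trimmed from the log line above):

    import json

    # Trimmed from the cache entry: one VIF with a fixed IP and a floating IP.
    network_info = json.loads("""
    [{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd",
      "address": "fa:16:3e:4c:23:11",
      "network": {"label": "private",
        "subnets": [{"cidr": "192.168.0.0/24",
          "ips": [{"address": "192.168.0.19", "type": "fixed",
                   "floating_ips": [{"address": "192.168.122.180",
                                     "type": "floating"}]}]}]}}]
    """)
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floats)
    # fa:16:3e:4c:23:11 192.168.0.19 -> ['192.168.122.180']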
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.832 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.832 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
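[Editor's note] The Acquiring lock / Acquired lock / Releasing lock triple around the refresh is oslo.concurrency's standard lock logging: Nova serializes each instance's info-cache refresh behind a named in-process lock, refresh_cache-<uuid>. Reduced to essentials (the lock name is from the log; the helper inside the block is hypothetical):

    from oslo_concurrency import lockutils

    uuid = "5b008829-2c76-4e40-b9e6-0e3d73095522"
    # lockutils.lock() is a context manager; entering and leaving it emits
    # the same "Acquiring"/"Acquired"/"Releasing" DEBUG lines seen above.
    with lockutils.lock(f"refresh_cache-{uuid}"):
        refresh_instance_network_cache(uuid)  # hypothetical helper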
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.833 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.833 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.833 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.834 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.859 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.860 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.861 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.861 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.861 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:38 compute-0 nova_compute[351685]: 2025-10-03 10:06:38.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:06:39 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/845607327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.309 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
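[Editor's note] Nova's RBD storage backend gets pool capacity by shelling out to `ceph df --format=json` through oslo.concurrency's processutils, which is what produces the paired "Running cmd" / "returned: 0 in 0.448s" lines (and the matching client.openstack audit entries on the mon). A stand-alone sketch of that call, with client id and conf path copied from the log:

    import json
    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises on non-zero exit.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])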
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.415 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.416 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.416 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.420 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.420 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.421 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.756 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.756 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3740MB free_disk=59.92198944091797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:39 compute-0 nova_compute[351685]: 2025-10-03 10:06:39.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.114 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.115 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 5b008829-2c76-4e40-b9e6-0e3d73095522 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.115 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.115 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
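[Editor's note] The "Final resource view" numbers are straight sums: Nova reserves 512 MB for the host, and each of the two instances listed above claims 512 MB of RAM and 2 GB of disk in placement, giving used_ram=1536MB and used_disk=4GB. As a check (values taken from the log; the 512 MB host reservation matches the 'reserved' figure in the inventory a few lines below):

    # Reproduce the tracker's used_ram / used_disk sums.
    reserved_host_memory_mb = 512            # host reservation
    instances = [
        {"MEMORY_MB": 512, "DISK_GB": 2},    # b43db93c-...
        {"MEMORY_MB": 512, "DISK_GB": 2},    # 5b008829-...
    ]
    used_ram = reserved_host_memory_mb + sum(i["MEMORY_MB"] for i in instances)
    used_disk = sum(i["DISK_GB"] for i in instances)
    print(used_ram, used_disk)  # 1536 MB, 4 GB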
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.225 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:06:40 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3054454559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.710 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.716 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.774 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
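[Editor's note] Placement derives schedulable capacity from this inventory as (total - reserved) × allocation_ratio per resource class, so despite 8 physical cores and 59 GB of storage this host can accept up to 32 VCPUs and about 52 GB of disk allocations. Worked out from the inventory data above:

    # Placement capacity: (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2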
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.776 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:06:40 compute-0 nova_compute[351685]: 2025-10-03 10:06:40.776 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:40 compute-0 podman[421937]: 2025-10-03 10:06:40.82036534 +0000 UTC m=+0.074943950 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  3 10:06:40 compute-0 podman[421935]: 2025-10-03 10:06:40.822709685 +0000 UTC m=+0.085257390 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:06:40 compute-0 podman[421936]: 2025-10-03 10:06:40.854425931 +0000 UTC m=+0.109335812 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
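[Editor's note] These health_status=healthy events land together because edpm_ansible gives every managed container the same healthcheck shape: a script bind-mounted at /openstack/healthcheck and run on a systemd timer (visible in each config_data 'healthcheck' stanza). The same probe can be triggered by hand; a sketch using container names from the log:

    import subprocess

    for name in ("ovn_metadata_agent", "iscsid", "node_exporter", "multipathd"):
        # 'podman healthcheck run' executes the container's configured test;
        # exit status 0 means healthy.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")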
Oct  3 10:06:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:41.595 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:41.596 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:41.597 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:42 compute-0 nova_compute[351685]: 2025-10-03 10:06:42.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:42 compute-0 nova_compute[351685]: 2025-10-03 10:06:42.673 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:42 compute-0 nova_compute[351685]: 2025-10-03 10:06:42.673 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:42 compute-0 nova_compute[351685]: 2025-10-03 10:06:42.673 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:42 compute-0 nova_compute[351685]: 2025-10-03 10:06:42.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:42 compute-0 nova_compute[351685]: 2025-10-03 10:06:42.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:06:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:43.632 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
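[Editor's note] This DbSetCommand is the metadata agent acknowledging nb_cfg=5 (received in the SbGlobalUpdateEvent at 10:06:34, then delayed 9 seconds) by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row. With ovsdbapp, such a write is a command object executed against the southbound IDL; roughly (the connected sb_idl handle is assumed, the table, record UUID, and column/value pair are from the log):

    # Sketch against an already-connected ovsdbapp southbound API object.
    sb_idl.db_set(
        "Chassis_Private",
        "41fabae1-2dc7-46e2-b697-d9133d158399",   # chassis record from the log
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "5"}),
    ).execute(check_error=True)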
Oct  3 10:06:43 compute-0 nova_compute[351685]: 2025-10-03 10:06:43.801 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "4515b342-533d-419f-8737-773b7845ab0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:43 compute-0 nova_compute[351685]: 2025-10-03 10:06:43.801 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:43 compute-0 nova_compute[351685]: 2025-10-03 10:06:43.826 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 10:06:43 compute-0 nova_compute[351685]: 2025-10-03 10:06:43.903 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:43 compute-0 nova_compute[351685]: 2025-10-03 10:06:43.904 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:43 compute-0 nova_compute[351685]: 2025-10-03 10:06:43.912 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 10:06:43 compute-0 nova_compute[351685]: 2025-10-03 10:06:43.912 2 INFO nova.compute.claims [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 10:06:43 compute-0 nova_compute[351685]: 2025-10-03 10:06:43.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.028 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.239 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.241 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.257 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.319 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:06:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836764099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.486 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.493 2 DEBUG nova.compute.provider_tree [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.508 2 DEBUG nova.scheduler.client.report [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.533 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.534 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.536 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.543 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.543 2 INFO nova.compute.claims [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.624 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.625 2 DEBUG nova.network.neutron [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.665 2 INFO nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.709 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.776 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.845 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.847 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.847 2 INFO nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Creating image(s)#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.879 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.924 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.965 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:44 compute-0 nova_compute[351685]: 2025-10-03 10:06:44.972 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.038 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.038 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "8123da205344dbbb79d5d821c9749dc540280b1e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.039 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.039 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.074 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.080 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e 4515b342-533d-419f-8737-773b7845ab0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
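[Editor's note] Lines 10:06:44.879 through 10:06:45.080 show the imagebackend flow for the new instance: confirm the RBD image does not exist yet, probe the cached base file with qemu-img info wrapped in oslo's prlimit guard (1 GiB address space, 30 s CPU, exactly the --as/--cpu values in the logged command), then rbd import it into the vms pool as the root disk. The prlimit wrapper is exposed as ProcessLimits; a condensed sketch with paths, pool, and image name copied from the log:

    import json
    from oslo_concurrency import processutils

    base = "/var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e"

    # qemu-img info, constrained the same way the logged command is.
    out, _ = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", base, "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1 << 30, cpu_time=30),
    )
    print(json.loads(out)["format"])

    # Import the base file as the instance's root disk in the 'vms' pool.
    processutils.execute(
        "rbd", "import", "--pool", "vms", base,
        "4515b342-533d-419f-8737-773b7845ab0f_disk", "--image-format=2",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )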
Oct  3 10:06:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:06:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128202727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.234 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
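[note] That ceph df call (dispatched on the mon as client.openstack, per the audit line above) is the resource tracker refreshing storage capacity for the RBD image backend. A sketch of reading the same JSON, assuming the same keyring user and conf path:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd))

    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print("cluster: %.1f GiB free of %.1f GiB"
          % (avail / 2**30, total / 2**30))
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])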
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.241 2 DEBUG nova.compute.provider_tree [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.263 2 DEBUG nova.scheduler.client.report [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
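[note] The inventory above is what placement exposes for this host; the schedulable capacity of each resource class is (total - reserved) * allocation_ratio, i.e. 32 VCPU (8 * 4.0), 7167 MB of RAM (7679 - 512, ratio 1.0) and 52.2 DISK_GB ((59 - 1) * 0.9) here. A quick check:

    inv = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        # Placement's usable-capacity formula per resource class.
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])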
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.290 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.291 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.335 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.335 2 DEBUG nova.network.neutron [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.383 2 INFO nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.436 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.470 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e 4515b342-533d-419f-8737-773b7845ab0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.587 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.590 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.590 2 INFO nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Creating image(s)#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.620 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.648 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.683 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.690 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.725 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] resizing rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
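[note] Between the rbd import at 10:06:45.080 and this resize, the 4515b342 root disk grows from the ~16 MB cirros base to the flavor's 1 GiB root_gb. A sketch of inspecting and resizing the same image through the python-rbd bindings instead of the CLI (same pool, image name and client id as the logged commands; assumes the rados/rbd bindings and keyring are available on the host):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            name = "4515b342-533d-419f-8737-773b7845ab0f_disk"
            with rbd.Image(ioctx, name) as image:
                print("size before:", image.size())
                image.resize(1 * 1024 ** 3)   # match nova's 1073741824
                print("size after:", image.size())
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()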
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.771 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.771 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "8123da205344dbbb79d5d821c9749dc540280b1e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.773 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.773 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.803 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.810 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e cd0be179-1941-400f-a1e6-8ee6243ee71a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.851 2 DEBUG nova.network.neutron [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Successfully updated port: 17d7d099-6f86-4e60-91b4-3f39e651bd00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.905 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.905 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquired lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.906 2 DEBUG nova.network.neutron [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.915 2 DEBUG nova.objects.instance [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'migration_context' on Instance uuid 4515b342-533d-419f-8737-773b7845ab0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:06:45 compute-0 nova_compute[351685]: 2025-10-03 10:06:45.989 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:06:46
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'vms', 'images', '.mgr']
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
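[note] The mgr balancer tick above: upmap mode, at most 5% of PGs misplaced at a time, and 0 of the 10 allowed changes prepared because the listed pools are already even. The same state can be queried from a script; a sketch, assuming the caller's key carries the necessary mgr caps:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))
    # e.g. active=True, mode="upmap" on this cluster
    print(status["active"], status["mode"])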
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.080 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.088 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.119 2 DEBUG nova.network.neutron [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.125 2 DEBUG nova.compute.manager [req-6b3e061b-cc0f-4a03-b2e4-7c63a56300d3 req-43199e88-ca38-4adb-8774-aa1e2e634167 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received event network-changed-17d7d099-6f86-4e60-91b4-3f39e651bd00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.126 2 DEBUG nova.compute.manager [req-6b3e061b-cc0f-4a03-b2e4-7c63a56300d3 req-43199e88-ca38-4adb-8774-aa1e2e634167 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Refreshing instance network info cache due to event network-changed-17d7d099-6f86-4e60-91b4-3f39e651bd00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.127 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-cc0f-4a03-b2e4-7c63a56300d3 req-43199e88-ca38-4adb-8774-aa1e2e634167 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.185 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.186 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.186 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.187 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.196219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486006196296, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1382, "num_deletes": 509, "total_data_size": 1673523, "memory_usage": 1701824, "flush_reason": "Manual Compaction"}
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486006206511, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1396311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23084, "largest_seqno": 24465, "table_properties": {"data_size": 1390667, "index_size": 2464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 15646, "raw_average_key_size": 18, "raw_value_size": 1377091, "raw_average_value_size": 1667, "num_data_blocks": 111, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759485896, "oldest_key_time": 1759485896, "file_creation_time": 1759486006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 10350 microseconds, and 4212 cpu microseconds.
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.206566) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1396311 bytes OK
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.206584) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.208671) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.208684) EVENT_LOG_v1 {"time_micros": 1759486006208679, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.208701) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1666268, prev total WAL file size 1666268, number of live WAL files 2.
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.209688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1363KB)], [53(9131KB)]
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486006209763, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10746836, "oldest_snapshot_seqno": -1}
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.223 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.232 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4515b342-533d-419f-8737-773b7845ab0f_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.265 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e cd0be179-1941-400f-a1e6-8ee6243ee71a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4580 keys, 7588199 bytes, temperature: kUnknown
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486006272946, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7588199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7557170, "index_size": 18503, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 114553, "raw_average_key_size": 25, "raw_value_size": 7473825, "raw_average_value_size": 1631, "num_data_blocks": 769, "num_entries": 4580, "num_filter_entries": 4580, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759486006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.273189) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7588199 bytes
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.277438) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.9 rd, 120.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.9 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(13.1) write-amplify(5.4) OK, records in: 5592, records dropped: 1012 output_compression: NoCompression
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.277469) EVENT_LOG_v1 {"time_micros": 1759486006277455, "job": 28, "event": "compaction_finished", "compaction_time_micros": 63259, "compaction_time_cpu_micros": 30074, "output_level": 6, "num_output_files": 1, "total_output_size": 7588199, "num_input_records": 5592, "num_output_records": 4580, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
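[note] Job 28 rewrote 1,396,311 bytes of L0 plus 9,350,525 bytes of L6 input (input_data_size 10,746,836) into one 7,588,199-byte L6 table, dropping 1,012 deleted/overwritten records. The two amplification figures follow directly: write-amplify = output / L0 input = 7,588,199 / 1,396,311 ≈ 5.4, and read-write-amplify = (read + written) / L0 input = (10,746,836 + 7,588,199) / 1,396,311 ≈ 13.1. A sketch that pulls these EVENT_LOG_v1 records out of a journal dump:

    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1 ({.*})")

    # Feed journalctl output for the mon on stdin.
    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "compaction_finished":
            print("job", ev["job"],
                  "out_bytes", ev["total_output_size"],
                  "records_dropped",
                  ev["num_input_records"] - ev["num_output_records"])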
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486006278150, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486006282426, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.209412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.282607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.282614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.282617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.282620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:06:46 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:06:46.282623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.399 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] resizing rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.574 2 DEBUG nova.network.neutron [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Successfully updated port: 13472a1d-91d3-44c2-8d02-1ced64234ab1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.584 2 DEBUG nova.objects.instance [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'migration_context' on Instance uuid cd0be179-1941-400f-a1e6-8ee6243ee71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.601 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.602 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquired lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.602 2 DEBUG nova.network.neutron [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.683 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.731 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.746 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.765 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 4515b342-533d-419f-8737-773b7845ab0f_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:46 compute-0 podman[422480]: 2025-10-03 10:06:46.809636111 +0000 UTC m=+0.069802086 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
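[note] The podman line above is the periodic health probe for ceilometer_agent_ipmi (configured test '/openstack/healthcheck ipmi', currently healthy with a failing streak of 0). The same check can be run by hand; a sketch:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # and exits 0 when healthy, non-zero otherwise.
    rc = subprocess.call(["podman", "healthcheck", "run",
                          "ceilometer_agent_ipmi"])
    print("healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)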
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.814 2 DEBUG nova.network.neutron [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.818 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.818 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.819 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.819 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.848 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.854 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 139 MiB data, 285 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.987 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.989 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Ensure instance console log exists: /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.990 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.992 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:46 compute-0 nova_compute[351685]: 2025-10-03 10:06:46.993 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.002 2 DEBUG nova.network.neutron [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Updating instance_info_cache with network_info: [{"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.020 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Releasing lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.020 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Instance network_info: |[{"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
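[note] The cached network_info above is the VIF the libvirt driver will plug: an OVN-bound OVS port on br-int, fixed IP 192.168.0.155 with floating IP 192.168.122.182, MTU 1442 on a tunneled network. A sketch of walking such a record, using a trimmed literal copied from the log:

    import json

    # Trimmed copy of the network_info list logged above.
    NETWORK_INFO = json.loads("""
    [{"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00",
      "devname": "tap17d7d099-6f",
      "network": {"meta": {"mtu": 1442},
                  "subnets": [{"cidr": "192.168.0.0/24",
                               "ips": [{"address": "192.168.0.155",
                                        "floating_ips": [
                                            {"address": "192.168.122.182"}
                                        ]}]}]}}]
    """)

    for vif in NETWORK_INFO:
        net = vif["network"]
        for subnet in net["subnets"]:
            for ip in subnet["ips"]:
                print("fixed", ip["address"],
                      "floating", [f["address"] for f in ip["floating_ips"]])
        print("device", vif["devname"], "mtu", net["meta"]["mtu"])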
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.021 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-cc0f-4a03-b2e4-7c63a56300d3 req-43199e88-ca38-4adb-8774-aa1e2e634167 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.021 2 DEBUG nova.network.neutron [req-6b3e061b-cc0f-4a03-b2e4-7c63a56300d3 req-43199e88-ca38-4adb-8774-aa1e2e634167 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Refreshing network info cache for port 17d7d099-6f86-4e60-91b4-3f39e651bd00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.028 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Start _get_guest_xml network_info=[{"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 1, 'encrypted': False, 'encryption_options': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.038 2 WARNING nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.050 2 DEBUG nova.virt.libvirt.host [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.051 2 DEBUG nova.virt.libvirt.host [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.056 2 DEBUG nova.virt.libvirt.host [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.056 2 DEBUG nova.virt.libvirt.host [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
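[note] The pair of probes above is nova deciding whether it can honour CPU shares: the cpu controller is missing on cgroup v1 (this is a unified-hierarchy host), then found on cgroup v2. On cgroup v2 that second check amounts to looking for "cpu" in the root controllers file; a sketch of the idea, not nova's literal code:

    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        # On a cgroup v2 host this file lists the enabled controllers,
        # e.g. "cpuset cpu io memory hugetlb pids misc".
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    print(has_cgroupsv2_cpu_controller())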
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.057 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.057 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T10:00:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='ada739ee-222b-4269-8d29-62bea534173e',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.057 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.057 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.058 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.058 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.058 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.058 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.058 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.058 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.059 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.059 2 DEBUG nova.virt.hardware [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
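[editor's note] The DEBUG lines above trace nova.virt.hardware choosing a guest CPU topology: with no flavor or image constraints (limits and preferences all 0:0:0), the only layout whose product equals one vCPU is 1 socket x 1 core x 1 thread. A simplified sketch of that enumeration, not Nova's exact implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate every (sockets, cores, threads) triple whose product
        # equals the vCPU count and respects the per-dimension maxima,
        # mirroring the "Build topologies ... Got 1 possible topologies"
        # steps in the log.
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))   # [(1, 1, 1)], the single topology logged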
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.062 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.286 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.456 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.457 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Ensure instance console log exists: /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.457 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.458 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.458 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
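[editor's note] The acquire/release pair above is oslo.concurrency's named-lock pattern: _allocate_mdevs runs under the "vgpu_resources" lock so concurrent spawns cannot hand out the same mediated device twice, and with no vGPU requested the critical section is empty, hence the 0.000s hold time. A minimal sketch of the same pattern (the function body is hypothetical; the decorator is the real API):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs(allocations):
        # Everything in here runs under the named lock seen in the log;
        # returning immediately reproduces the "held 0.000s" case.
        return []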
Oct  3 10:06:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:06:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100749' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.568 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
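[editor's note] Each "Running cmd (subprocess)" / "returned: 0 in N s" pair is oslo_concurrency.processutils shelling out to the ceph CLI; the monmap it returns supplies the monitor address and port later written into the guest's RBD disk XML. A rough standard-library stand-in, under the same --id/--conf values as the logged command:

    import json
    import subprocess
    import time

    def ceph_mon_dump(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        # Same command line as the log; returns the parsed monmap.
        start = time.monotonic()
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf])
        print("returned: 0 in %.3fs" % (time.monotonic() - start))
        return json.loads(out)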
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.569 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.752 2 DEBUG nova.network.neutron [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updating instance_info_cache with network_info: [{"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.786 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Releasing lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.786 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Instance network_info: |[{"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
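[editor's note] The network_info blob repeated above is a list of port dicts, each nesting network -> subnets -> ips -> floating_ips. Reading it back is a matter of walking that structure; for example, extracting the fixed/floating address pairs (192.168.0.177 / 192.168.122.209 here):

    def addresses(network_info):
        # Yield (fixed_ip, [floating_ips]) pairs from a Nova network_info
        # list shaped like the one logged above.
        for port in network_info:
            for subnet in port["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    yield ip["address"], floats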
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.791 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Start _get_guest_xml network_info=[{"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 1, 'encrypted': False, 'encryption_options': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.802 2 WARNING nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.817 2 DEBUG nova.virt.libvirt.host [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.818 2 DEBUG nova.virt.libvirt.host [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.824 2 DEBUG nova.virt.libvirt.host [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.825 2 DEBUG nova.virt.libvirt.host [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
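[editor's note] The two probes above show the v1-then-v2 fallback: the host exposes no cgroup v1 cpu controller, but the unified (v2) hierarchy does, so CPU shares/quota tuning stays available to libvirt. On a v2 host the check can be as simple as reading the root controllers file; a sketch of that idea (the path is the kernel's standard unified mount point, an assumption rather than a value from this log):

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # cgroup v2 lists enabled controllers in a single file at the
        # hierarchy root; "cpu" present means quota/shares can be set.
        try:
            with open(f"{root}/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False   # not a cgroup v2 (unified) mount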
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.825 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.826 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T10:00:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='ada739ee-222b-4269-8d29-62bea534173e',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.826 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.826 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.826 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.826 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.827 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.827 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.827 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.827 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.827 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.828 2 DEBUG nova.virt.hardware [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  3 10:06:47 compute-0 nova_compute[351685]: 2025-10-03 10:06:47.831 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:06:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986197841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.080 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.112 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
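[editor's note] Here rbd_utils probes Ceph for the instance's config-drive image and finds nothing, so the config drive will be built and imported fresh. Nova performs this check through the librbd Python bindings; an equivalent existence check via the rbd CLI, for illustration only:

    import subprocess

    def rbd_image_exists(pool, image, client_id="openstack",
                         conf="/etc/ceph/ceph.conf"):
        # `rbd info` exits non-zero when the image is absent, matching
        # the "does not exist" branch logged above.
        res = subprocess.run(
            ["rbd", "info", f"{pool}/{image}",
             "--id", client_id, "--conf", conf],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return res.returncode == 0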
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.120 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.154 2 DEBUG nova.compute.manager [req-3fb960bc-fd64-4c49-b27e-c71a6028423d req-5461064c-8809-49ae-8d4d-cc6dad20d570 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received event network-changed-13472a1d-91d3-44c2-8d02-1ced64234ab1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.155 2 DEBUG nova.compute.manager [req-3fb960bc-fd64-4c49-b27e-c71a6028423d req-5461064c-8809-49ae-8d4d-cc6dad20d570 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Refreshing instance network info cache due to event network-changed-13472a1d-91d3-44c2-8d02-1ced64234ab1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.155 2 DEBUG oslo_concurrency.lockutils [req-3fb960bc-fd64-4c49-b27e-c71a6028423d req-5461064c-8809-49ae-8d4d-cc6dad20d570 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.155 2 DEBUG oslo_concurrency.lockutils [req-3fb960bc-fd64-4c49-b27e-c71a6028423d req-5461064c-8809-49ae-8d4d-cc6dad20d570 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.155 2 DEBUG nova.network.neutron [req-3fb960bc-fd64-4c49-b27e-c71a6028423d req-5461064c-8809-49ae-8d4d-cc6dad20d570 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Refreshing network info cache for port 13472a1d-91d3-44c2-8d02-1ced64234ab1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.265 2 DEBUG nova.network.neutron [req-6b3e061b-cc0f-4a03-b2e4-7c63a56300d3 req-43199e88-ca38-4adb-8774-aa1e2e634167 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Updated VIF entry in instance network info cache for port 17d7d099-6f86-4e60-91b4-3f39e651bd00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.266 2 DEBUG nova.network.neutron [req-6b3e061b-cc0f-4a03-b2e4-7c63a56300d3 req-43199e88-ca38-4adb-8774-aa1e2e634167 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Updating instance_info_cache with network_info: [{"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.282 2 DEBUG oslo_concurrency.lockutils [req-6b3e061b-cc0f-4a03-b2e4-7c63a56300d3 req-43199e88-ca38-4adb-8774-aa1e2e634167 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:06:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:06:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428102931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.311 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.315 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:06:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306967588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.578 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.580 2 DEBUG nova.virt.libvirt.vif [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:06:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-yplftfd-uusud7ile66j-pamjds7x5q7s-vnf-4niegkol5tze',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-uusud7ile66j-pamjds7x5q7s-vnf-4niegkol5tze',id=3,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-po108x5f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:06:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQzMTc5MTU3NTQ0MDAwMzIwNzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDMxNzkxNTc1NDQwMDAzMjA3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgY
mVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQzMTc5MTU3NTQ0MDAwMzIwNzY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIg
dGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcW
Oct  3 10:06:48 compute-0 nova_compute[351685]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDMxNzkxNTc1NDQwMDAzMjA3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQzMTc5MTU3NTQ0MDAwMzIwNzY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=4515b342-533d-419f-8737-773b7845ab0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.580 2 DEBUG nova.network.os_vif_util [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.582 2 DEBUG nova.network.os_vif_util [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d7:1b,bridge_name='br-int',has_traffic_filtering=True,id=17d7d099-6f86-4e60-91b4-3f39e651bd00,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap17d7d099-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.582 2 DEBUG nova.objects.instance [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'pci_devices' on Instance uuid 4515b342-533d-419f-8737-773b7845ab0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.598 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] End _get_guest_xml xml=<domain type="kvm">
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <uuid>4515b342-533d-419f-8737-773b7845ab0f</uuid>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <name>instance-00000003</name>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <memory>524288</memory>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <metadata>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <nova:name>vn-yplftfd-uusud7ile66j-pamjds7x5q7s-vnf-4niegkol5tze</nova:name>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 10:06:47</nova:creationTime>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <nova:flavor name="m1.small">
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <nova:memory>512</nova:memory>
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <nova:ephemeral>1</nova:ephemeral>
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <nova:user uuid="2f408449ba0f42fcb69f92dbf541f2e3">admin</nova:user>
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <nova:project uuid="ee75a4dc6ade43baab6ee923c9cf4cdf">admin</nova:project>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="37f03e8a-3aed-46a5-8219-fc87e355127e"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <nova:port uuid="17d7d099-6f86-4e60-91b4-3f39e651bd00">
Oct  3 10:06:48 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="192.168.0.155" ipVersion="4"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  </metadata>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <system>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <entry name="serial">4515b342-533d-419f-8737-773b7845ab0f</entry>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <entry name="uuid">4515b342-533d-419f-8737-773b7845ab0f</entry>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </system>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <os>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  </os>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <features>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <apic/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  </features>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  </clock>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  </cpu>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  <devices>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/4515b342-533d-419f-8737-773b7845ab0f_disk">
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </source>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/4515b342-533d-419f-8737-773b7845ab0f_disk.eph0">
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </source>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <target dev="vdb" bus="virtio"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/4515b342-533d-419f-8737-773b7845ab0f_disk.config">
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </source>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:06:48 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:b0:d7:1b"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <target dev="tap17d7d099-6f"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </interface>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f/console.log" append="off"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </serial>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <video>
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </video>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </rng>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 10:06:48 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 10:06:48 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 10:06:48 compute-0 nova_compute[351685]:  </devices>
Oct  3 10:06:48 compute-0 nova_compute[351685]: </domain>
Oct  3 10:06:48 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
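[editor's note] The domain XML above is the complete libvirt definition Nova generated for instance 4515b342: q35 machine type, host-model CPU with a 1/1/1 topology, three RBD-backed disks (root, ephemeral, config drive) all pointing at the monitor discovered earlier, an OVS-backed virtio NIC with MTU 1442, and a bank of pcie-root-port controllers for hotplug headroom. Pulling the disk-to-image mapping back out of such a document is straightforward with the standard library:

    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        # Map each guest device (vda, vdb, sda) to its RBD image name,
        # as listed in the <devices>/<disk> elements above.
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk"):
            src, tgt = disk.find("source"), disk.find("target")
            if src is not None and src.get("protocol") == "rbd":
                yield tgt.get("dev"), src.get("name")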
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.598 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Preparing to wait for external event network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.598 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "4515b342-533d-419f-8737-773b7845ab0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.599 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.599 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:48 compute-0 NetworkManager[45015]: <info>  [1759486008.6080] manager: (tap17d7d099-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.599 2 DEBUG nova.virt.libvirt.vif [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:06:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-yplftfd-uusud7ile66j-pamjds7x5q7s-vnf-4niegkol5tze',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-uusud7ile66j-pamjds7x5q7s-vnf-4niegkol5tze',id=3,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-po108x5f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:06:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQzMTc5MTU3NTQ0MDAwMzIwNzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDMxNzkxNTc1NDQwMDAzMjA3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29
scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTQzMTc5MTU3NTQ0MDAwMzIwNzY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZW
QgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykp
Oct  3 10:06:48 compute-0 nova_compute[351685]: YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NDMxNzkxNTc1NDQwMDAzMjA3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTQzMTc5MTU3NTQ0MDAwMzIwNzY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=4515b342-533d-419f-8737-773b7845ab0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.600 2 DEBUG nova.network.os_vif_util [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.600 2 DEBUG nova.network.os_vif_util [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d7:1b,bridge_name='br-int',has_traffic_filtering=True,id=17d7d099-6f86-4e60-91b4-3f39e651bd00,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap17d7d099-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.600 2 DEBUG os_vif [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d7:1b,bridge_name='br-int',has_traffic_filtering=True,id=17d7d099-6f86-4e60-91b4-3f39e651bd00,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap17d7d099-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.602 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17d7d099-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17d7d099-6f, col_values=(('external_ids', {'iface-id': '17d7d099-6f86-4e60-91b4-3f39e651bd00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:d7:1b', 'vm-uuid': '4515b342-533d-419f-8737-773b7845ab0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.621 2 INFO os_vif [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d7:1b,bridge_name='br-int',has_traffic_filtering=True,id=17d7d099-6f86-4e60-91b4-3f39e651bd00,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap17d7d099-6f')#033[00m
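The ovsdbapp transaction logged above (AddBridgeCommand, AddPortCommand, DbSetCommand) is the programmatic form of attaching the tap device to br-int with the Neutron/OVN external_ids. As a rough hand-run equivalent on the compute host (an assumption of this sketch; values are taken verbatim from the DbSetCommand line):

    # Roughly what os-vif's transaction does, expressed as one ovs-vsctl call.
    import subprocess

    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap17d7d099-6f",
        "--", "set", "Interface", "tap17d7d099-6f",
        "external_ids:iface-id=17d7d099-6f86-4e60-91b4-3f39e651bd00",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:b0:d7:1b",
        "external_ids:vm-uuid=4515b342-533d-419f-8737-773b7845ab0f",
    ], check=True)

The iface-id external_id is what lets OVN match the port to the Neutron port binding and later emit the network-vif-plugged event nova is waiting for.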
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.670 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.670 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.670 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.670 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No VIF found with MAC fa:16:3e:b0:d7:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.671 2 INFO nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Using config drive#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.708 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:06:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066018903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.816 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.849 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:48 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:06:48.580 2 DEBUG nova.virt.libvirt.vif [None req-922dbeba-27c6-40 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  3 10:06:48 compute-0 nova_compute[351685]: 2025-10-03 10:06:48.862 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:48 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:06:48.599 2 DEBUG nova.virt.libvirt.vif [None req-922dbeba-27c6-40 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
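These two rsyslogd warnings explain why the big nova.virt.libvirt.vif messages above arrive with their base64 tails cut off: the 8192-byte messages exceed the configured 8096-byte limit. If keeping such lines intact matters, the limit can be raised in /etc/rsyslog.conf; the directive must appear before any input module is loaded, and 64k below is an illustrative value, not a recommendation from this log:

    global(maxMessageSize="64k")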
Oct  3 10:06:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 160 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 469 KiB/s wr, 34 op/s
Oct  3 10:06:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:06:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374856954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.375 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
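The "ceph mon dump --format=json" commands logged here are how nova's rbd driver discovers the monitor addresses for the <host> elements of the disk XML. The same lookup done by hand, as a sketch (field names follow the standard ceph mon dump JSON output):

    # Replicate nova's mon-map lookup and print monitor addresses.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True).stdout
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon["addr"])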
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.379 2 DEBUG nova.virt.libvirt.vif [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j',id=4,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-3ggt3wig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:06:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTY4OTczNjQyNjQwMzAzNDE2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgY
mVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIg
dGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcW
Oct  3 10:06:49 compute-0 nova_compute[351685]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTY4OTczNjQyNjQwMzAzNDE2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=cd0be179-1941-400f-a1e6-8ee6243ee71a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.380 2 DEBUG nova.network.os_vif_util [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.382 2 DEBUG nova.network.os_vif_util [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:37:aa,bridge_name='br-int',has_traffic_filtering=True,id=13472a1d-91d3-44c2-8d02-1ced64234ab1,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap13472a1d-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.385 2 DEBUG nova.objects.instance [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'pci_devices' on Instance uuid cd0be179-1941-400f-a1e6-8ee6243ee71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.406 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] End _get_guest_xml xml=<domain type="kvm">
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <uuid>cd0be179-1941-400f-a1e6-8ee6243ee71a</uuid>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <name>instance-00000004</name>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <memory>524288</memory>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <metadata>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <nova:name>vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j</nova:name>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 10:06:47</nova:creationTime>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <nova:flavor name="m1.small">
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <nova:memory>512</nova:memory>
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <nova:ephemeral>1</nova:ephemeral>
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <nova:user uuid="2f408449ba0f42fcb69f92dbf541f2e3">admin</nova:user>
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <nova:project uuid="ee75a4dc6ade43baab6ee923c9cf4cdf">admin</nova:project>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="37f03e8a-3aed-46a5-8219-fc87e355127e"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <nova:port uuid="13472a1d-91d3-44c2-8d02-1ced64234ab1">
Oct  3 10:06:49 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="192.168.0.177" ipVersion="4"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  </metadata>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <system>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <entry name="serial">cd0be179-1941-400f-a1e6-8ee6243ee71a</entry>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <entry name="uuid">cd0be179-1941-400f-a1e6-8ee6243ee71a</entry>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </system>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <os>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  </os>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <features>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <apic/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  </features>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  </clock>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  </cpu>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  <devices>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/cd0be179-1941-400f-a1e6-8ee6243ee71a_disk">
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </source>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.eph0">
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </source>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <target dev="vdb" bus="virtio"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.config">
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </source>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:06:49 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:3f:37:aa"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <target dev="tap13472a1d-91"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </interface>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a/console.log" append="off"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </serial>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <video>
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </video>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </rng>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 10:06:49 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 10:06:49 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 10:06:49 compute-0 nova_compute[351685]:  </devices>
Oct  3 10:06:49 compute-0 nova_compute[351685]: </domain>
Oct  3 10:06:49 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
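This second domain XML references three rbd images in the vms pool (root disk, eph0 ephemeral, and the config drive that the earlier rbd_utils line reported as not yet existing). A sketch for cross-checking which of them exist, using the same --id/--conf the driver uses; image names are copied from the <source> elements above:

    # Check the rbd images named in the domain XML.
    import subprocess

    for image in ("cd0be179-1941-400f-a1e6-8ee6243ee71a_disk",
                  "cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.eph0",
                  "cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.config"):
        info = subprocess.run(
            ["rbd", "info", "--format", "json", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf", f"vms/{image}"],
            capture_output=True, text=True)
        print(image, "ok" if info.returncode == 0 else "missing")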
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.407 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Preparing to wait for external event network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.407 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.408 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.409 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.410 2 DEBUG nova.virt.libvirt.vif [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j',id=4,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-3ggt3wig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:06:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTY4OTczNjQyNjQwMzAzNDE2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29
scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZW
QgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykp
Oct  3 10:06:49 compute-0 nova_compute[351685]: YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTY4OTczNjQyNjQwMzAzNDE2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=cd0be179-1941-400f-a1e6-8ee6243ee71a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.410 2 DEBUG nova.network.os_vif_util [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.411 2 DEBUG nova.network.os_vif_util [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:37:aa,bridge_name='br-int',has_traffic_filtering=True,id=13472a1d-91d3-44c2-8d02-1ced64234ab1,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap13472a1d-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.412 2 DEBUG os_vif [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:37:aa,bridge_name='br-int',has_traffic_filtering=True,id=13472a1d-91d3-44c2-8d02-1ced64234ab1,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap13472a1d-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13472a1d-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.420 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13472a1d-91, col_values=(('external_ids', {'iface-id': '13472a1d-91d3-44c2-8d02-1ced64234ab1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:37:aa', 'vm-uuid': 'cd0be179-1941-400f-a1e6-8ee6243ee71a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
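Note: the ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the library's idempotent OVSDB writes; "Transaction caused no change" simply means br-int already existed. A sketch of equivalent calls with ovsdbapp, assuming the default local Open_vSwitch socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # may_exist=True turns each command into a no-op when the row is already
        # present, which is exactly why the bridge txn logged "no change".
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap13472a1d-91", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap13472a1d-91",
            ("external_ids", {"iface-id": "13472a1d-91d3-44c2-8d02-1ced64234ab1",
                              "attached-mac": "fa:16:3e:3f:37:aa"})))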
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:49 compute-0 NetworkManager[45015]: <info>  [1759486009.4247] manager: (tap13472a1d-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.434 2 INFO os_vif [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:37:aa,bridge_name='br-int',has_traffic_filtering=True,id=13472a1d-91d3-44c2-8d02-1ced64234ab1,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap13472a1d-91')#033[00m
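Note: os-vif performed the plug above: nova converted its network info into a VIFOpenVSwitch object (the two os_vif_util lines) and handed it to os_vif.plug(), whose ovs plugin issued the OVSDB commands. A rough standalone sketch of the same call path, with field values taken from the logged VIF and the instance name marked as a placeholder:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the registered plugins (ovs, linux_bridge, ...)

    net = network.Network(id="67eed0ac-d131-40ed-a5fe-0484d04236a0",
                          bridge="br-int")
    port = vif.VIFOpenVSwitch(
        id="13472a1d-91d3-44c2-8d02-1ced64234ab1",
        address="fa:16:3e:3f:37:aa",
        vif_name="tap13472a1d-91",
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="13472a1d-91d3-44c2-8d02-1ced64234ab1"),
    )
    inst = instance_info.InstanceInfo(
        uuid="cd0be179-1941-400f-a1e6-8ee6243ee71a",
        name="instance-xxxxxxxx")  # placeholder: domain name not shown at this point

    os_vif.plug(port, inst)  # ends with the "Successfully plugged vif" INFO line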
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.487 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.487 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.488 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.488 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No VIF found with MAC fa:16:3e:3f:37:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 10:06:49 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:06:49.379 2 DEBUG nova.virt.libvirt.vif [None req-c827cef2-eb71-4e [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.489 2 INFO nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Using config drive#033[00m
Oct  3 10:06:49 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:06:49.410 2 DEBUG nova.virt.libvirt.vif [None req-c827cef2-eb71-4e [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
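Note: both rsyslogd complaints mean the nova DEBUG lines (the VIF dumps, and the user_data blob earlier in this section) exceeded the configured 8096-byte cap and were cut. If complete lines matter more than the memory cost, the cap can be raised near the top of /etc/rsyslog.conf, before any input modules load; 64k below is an assumed target, not a value taken from this host:

    # RainerScript form (rsyslog v8):
    global(maxMessageSize="64k")
    # legacy equivalent: $MaxMessageSize 64k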
Oct  3 10:06:49 compute-0 nova_compute[351685]: 2025-10-03 10:06:49.528 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.018 2 INFO nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Creating config drive at /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f/disk.config#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.030 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5evnex4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.094 2 INFO nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Creating config drive at /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.config#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.101 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp055zehww execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.180 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5evnex4i" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
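Note: the config drive is just an ISO9660 image: nova stages the metadata tree in a tempdir (/tmp/tmp5evnex4i above) and runs mkisofs with the config-2 volume label that cloud-init probes for. A minimal reproduction with subprocess, assuming mkisofs/genisoimage is installed and staging_dir holds an openstack/ metadata tree:

    import subprocess

    staging_dir = "/tmp/cfgdrive-staging"  # hypothetical; nova uses a fresh tempdir
    subprocess.run([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute",  # nova appends its version string here
        "-quiet", "-J", "-r",
        "-V", "config-2",                   # the volume label cloud-init looks for
        staging_dir,
    ], check=True)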
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.229 2 DEBUG nova.storage.rbd_utils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 4515b342-533d-419f-8737-773b7845ab0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.243 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f/disk.config 4515b342-533d-419f-8737-773b7845ab0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.269 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp055zehww" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.315 2 DEBUG nova.storage.rbd_utils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.332 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.config cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.445 2 DEBUG oslo_concurrency.processutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f/disk.config 4515b342-533d-419f-8737-773b7845ab0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.447 2 INFO nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Deleting local config drive /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f/disk.config because it was imported into RBD.#033[00m
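Note: with RBD-backed instances the ISO only exists locally long enough to be imported into the vms pool, after which nova deletes the file (the INFO line above). The logged rbd invocation, wrapped for standalone use with placeholder names:

    import os
    import subprocess

    local_iso = "/var/lib/nova/instances/<uuid>/disk.config"  # placeholder path
    rbd_image = "<uuid>_disk.config"                          # placeholder image name

    subprocess.run([
        "rbd", "import", "--pool", "vms", local_iso, rbd_image,
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.remove(local_iso)  # mirrors "Deleting local config drive ... imported into RBD"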
Oct  3 10:06:50 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.488 2 DEBUG nova.network.neutron [req-3fb960bc-fd64-4c49-b27e-c71a6028423d req-5461064c-8809-49ae-8d4d-cc6dad20d570 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updated VIF entry in instance network info cache for port 13472a1d-91d3-44c2-8d02-1ced64234ab1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.488 2 DEBUG nova.network.neutron [req-3fb960bc-fd64-4c49-b27e-c71a6028423d req-5461064c-8809-49ae-8d4d-cc6dad20d570 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updating instance_info_cache with network_info: [{"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:06:50 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.505 2 DEBUG oslo_concurrency.lockutils [req-3fb960bc-fd64-4c49-b27e-c71a6028423d req-5461064c-8809-49ae-8d4d-cc6dad20d570 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:06:50 compute-0 NetworkManager[45015]: <info>  [1759486010.5541] manager: (tap17d7d099-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Oct  3 10:06:50 compute-0 kernel: tap17d7d099-6f: entered promiscuous mode
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 ovn_controller[88471]: 2025-10-03T10:06:50Z|00040|binding|INFO|Claiming lport 17d7d099-6f86-4e60-91b4-3f39e651bd00 for this chassis.
Oct  3 10:06:50 compute-0 ovn_controller[88471]: 2025-10-03T10:06:50Z|00041|binding|INFO|17d7d099-6f86-4e60-91b4-3f39e651bd00: Claiming fa:16:3e:b0:d7:1b 192.168.0.155
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.579 2 DEBUG oslo_concurrency.processutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.config cd0be179-1941-400f-a1e6-8ee6243ee71a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.579 2 INFO nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Deleting local config drive /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.config because it was imported into RBD.#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.590 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:d7:1b 192.168.0.155'], port_security=['fa:16:3e:b0:d7:1b 192.168.0.155'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e77v6yplftfd-uusud7ile66j-pamjds7x5q7s-port-vjfjwrjaud4q', 'neutron:cidrs': '192.168.0.155/24', 'neutron:device_id': '4515b342-533d-419f-8737-773b7845ab0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e77v6yplftfd-uusud7ile66j-pamjds7x5q7s-port-vjfjwrjaud4q', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=17d7d099-6f86-4e60-91b4-3f39e651bd00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.591 284328 INFO neutron.agent.ovn.metadata.agent [-] Port 17d7d099-6f86-4e60-91b4-3f39e651bd00 in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 bound to our chassis#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.592 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0#033[00m
Oct  3 10:06:50 compute-0 ovn_controller[88471]: 2025-10-03T10:06:50Z|00042|binding|INFO|Setting lport 17d7d099-6f86-4e60-91b4-3f39e651bd00 ovn-installed in OVS
Oct  3 10:06:50 compute-0 ovn_controller[88471]: 2025-10-03T10:06:50Z|00043|binding|INFO|Setting lport 17d7d099-6f86-4e60-91b4-3f39e651bd00 up in Southbound
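Note: the four ovn_controller binding lines are the chassis-side half of port wiring: the lport is claimed for this chassis, the OVS interface is marked ovn-installed, and the Southbound Port_Binding row is set up, which is what later surfaces as Neutron's network-vif-plugged event. One way to inspect such a binding from the chassis, assuming ovn-sbctl can reach the southbound database:

    import json
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "find", "Port_Binding",
         "logical_port=17d7d099-6f86-4e60-91b4-3f39e651bd00"],
        capture_output=True, text=True, check=True).stdout
    rows = json.loads(out)
    print(rows["headings"])  # the chassis and up columns reflect the claim above
    print(rows["data"])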
Oct  3 10:06:50 compute-0 systemd-machined[137653]: New machine qemu-3-instance-00000003.
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct  3 10:06:50 compute-0 systemd-udevd[422974]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.611 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0566f870-ce76-4764-b941-952d219f43a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 NetworkManager[45015]: <info>  [1759486010.6363] device (tap17d7d099-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 10:06:50 compute-0 NetworkManager[45015]: <info>  [1759486010.6404] device (tap17d7d099-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.651 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[471e1f4b-bc60-4f43-93cb-253ede914f5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.654 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[3954fdbc-2208-41a5-a9b4-aac66d90eb4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 kernel: tap13472a1d-91: entered promiscuous mode
Oct  3 10:06:50 compute-0 systemd-udevd[422979]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 10:06:50 compute-0 NetworkManager[45015]: <info>  [1759486010.6695] manager: (tap13472a1d-91): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 ovn_controller[88471]: 2025-10-03T10:06:50Z|00044|binding|INFO|Claiming lport 13472a1d-91d3-44c2-8d02-1ced64234ab1 for this chassis.
Oct  3 10:06:50 compute-0 ovn_controller[88471]: 2025-10-03T10:06:50Z|00045|binding|INFO|13472a1d-91d3-44c2-8d02-1ced64234ab1: Claiming fa:16:3e:3f:37:aa 192.168.0.177
Oct  3 10:06:50 compute-0 NetworkManager[45015]: <info>  [1759486010.6844] device (tap13472a1d-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 10:06:50 compute-0 NetworkManager[45015]: <info>  [1759486010.6854] device (tap13472a1d-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.685 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:37:aa 192.168.0.177'], port_security=['fa:16:3e:3f:37:aa 192.168.0.177'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e77v6yplftfd-tmqfhxgxqbpz-nlbkra67kned-port-rzcgch7wjejz', 'neutron:cidrs': '192.168.0.177/24', 'neutron:device_id': 'cd0be179-1941-400f-a1e6-8ee6243ee71a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e77v6yplftfd-tmqfhxgxqbpz-nlbkra67kned-port-rzcgch7wjejz', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=13472a1d-91d3-44c2-8d02-1ced64234ab1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 ovn_controller[88471]: 2025-10-03T10:06:50Z|00046|binding|INFO|Setting lport 13472a1d-91d3-44c2-8d02-1ced64234ab1 ovn-installed in OVS
Oct  3 10:06:50 compute-0 ovn_controller[88471]: 2025-10-03T10:06:50Z|00047|binding|INFO|Setting lport 13472a1d-91d3-44c2-8d02-1ced64234ab1 up in Southbound
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.694 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[7d523349-f966-434b-94f2-227c93c4f4b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.711 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[564854a2-ba87-4dd7-888d-7dcb6c8db092]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 32866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422999, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 systemd-machined[137653]: New machine qemu-4-instance-00000004.
Oct  3 10:06:50 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.728 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b65c1cbc-033f-4a0b-89bb-7f411518c9e4]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489551, 'tstamp': 489551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423001, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489554, 'tstamp': 489554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423001, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.732 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.736 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.736 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.737 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.738 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.739 284328 INFO neutron.agent.ovn.metadata.agent [-] Port 13472a1d-91d3-44c2-8d02-1ced64234ab1 in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 unbound from our chassis#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.741 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.759 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f9134ddc-ab87-4916-a8f0-e42a80f39d21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.792 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd4dca2-e655-4027-a738-83da0270e227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.795 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e67d03-9cb7-4351-b312-b2d90ec2a4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.834 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[6709c535-2465-4fcb-a918-e4c23f47359a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.863 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[05805e95-d960-4d54-83de-562feb06f2f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 32866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423013, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.884 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[a5100dab-1a9c-44b1-8df4-797aff484789]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489551, 'tstamp': 489551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423014, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489554, 'tstamp': 489554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423014, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.887 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 nova_compute[351685]: 2025-10-03 10:06:50.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.895 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.896 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.898 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:50 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:50.899 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
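Note: the metadata provisioning above boils down to: ensure the tap67eed0ac-d0 veth end is off br-ex and attached to br-int with the right iface-id, while its peer inside the ovnmeta-<network> namespace keeps 192.168.0.2 and 169.254.169.254 configured (both addresses are visible in the RTM_NEWADDR replies). A quick check from the compute host, assuming root and iproute2:

    import subprocess

    ns = "ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0"
    # Expect both the subnet address and the 169.254.169.254 metadata address here.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"],
                   check=True)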
Oct  3 10:06:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 205 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.7 MiB/s wr, 74 op/s
Oct  3 10:06:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.736 2 DEBUG nova.compute.manager [req-8f3d0c20-5edb-454d-87f2-6fe67510223d req-0626c396-c397-48be-beda-437d93aaa59c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received event network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.737 2 DEBUG oslo_concurrency.lockutils [req-8f3d0c20-5edb-454d-87f2-6fe67510223d req-0626c396-c397-48be-beda-437d93aaa59c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "4515b342-533d-419f-8737-773b7845ab0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.738 2 DEBUG oslo_concurrency.lockutils [req-8f3d0c20-5edb-454d-87f2-6fe67510223d req-0626c396-c397-48be-beda-437d93aaa59c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.738 2 DEBUG oslo_concurrency.lockutils [req-8f3d0c20-5edb-454d-87f2-6fe67510223d req-0626c396-c397-48be-beda-437d93aaa59c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.738 2 DEBUG nova.compute.manager [req-8f3d0c20-5edb-454d-87f2-6fe67510223d req-0626c396-c397-48be-beda-437d93aaa59c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Processing event network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
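Note: the Acquiring/acquired/released triple above is nova serializing external events per instance behind an oslo.concurrency lock named "<instance-uuid>-events". The primitive itself is small; a toy sketch, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # In-process lock keyed by name; nova uses "<instance-uuid>-events".
    with lockutils.lock("4515b342-533d-419f-8737-773b7845ab0f-events"):
        pass  # pop or queue the network-vif-plugged event while holding the lock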
Oct  3 10:06:51 compute-0 podman[423135]: 2025-10-03 10:06:51.816609293 +0000 UTC m=+0.077315226 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:06:51 compute-0 podman[423136]: 2025-10-03 10:06:51.82305908 +0000 UTC m=+0.082592226 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, managed_by=edpm_ansible, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=)
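Note: the two podman health_status entries also echo the full config_data edpm_ansible used to launch each exporter. Flattened to a plain podman invocation (trimmed to the essential flags, so an approximation rather than what edpm_ansible actually runs):

    import subprocess

    subprocess.run([
        "podman", "run", "--detach", "--name", "podman_exporter",
        "--net", "host", "--privileged", "--user", "root", "--restart", "always",
        "-p", "9882:9882",
        "-e", "CONTAINER_HOST=unix:///run/podman/podman.sock",
        "-v", "/run/podman/podman.sock:/run/podman/podman.sock:rw,z",
        "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
        "--web.config.file=/etc/podman_exporter/podman_exporter.yaml",
    ], check=True)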
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.937 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.938 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486011.9372187, 4515b342-533d-419f-8737-773b7845ab0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.938 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] VM Started (Lifecycle Event)#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.944 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.950 2 INFO nova.virt.libvirt.driver [-] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Instance spawned successfully.#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.950 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.962 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.968 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.980 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.982 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.983 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.984 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.985 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.987 2 DEBUG nova.virt.libvirt.driver [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.992 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.992 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486011.9374082, 4515b342-533d-419f-8737-773b7845ab0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:06:51 compute-0 nova_compute[351685]: 2025-10-03 10:06:51.998 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] VM Paused (Lifecycle Event)#033[00m
Oct  3 10:06:52 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.029 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.035 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486011.9419506, 4515b342-533d-419f-8737-773b7845ab0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.036 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] VM Resumed (Lifecycle Event)#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.053 2 INFO nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.053 2 DEBUG nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:06:52 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.063 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.067 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.103 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.103 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486012.0397248, cd0be179-1941-400f-a1e6-8ee6243ee71a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.103 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] VM Started (Lifecycle Event)#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.128 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.130 2 INFO nova.compute.manager [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Took 8.24 seconds to build instance.#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.135 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486012.0398352, cd0be179-1941-400f-a1e6-8ee6243ee71a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.135 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] VM Paused (Lifecycle Event)#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.148 2 DEBUG oslo_concurrency.lockutils [None req-922dbeba-27c6-40ca-87a0-06df75b5a35b 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.153 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.158 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:06:52 compute-0 nova_compute[351685]: 2025-10-03 10:06:52.174 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:06:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 205 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.7 MiB/s wr, 74 op/s
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.815 2 DEBUG nova.compute.manager [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received event network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.816 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "4515b342-533d-419f-8737-773b7845ab0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.816 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.816 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
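The Acquiring/acquired/released triplet above is the standard oslo.concurrency trace: lockutils logs lock entry, wait time, and hold time around the decorated callable. A sketch of the pattern that produces it, using the lock name from the log:

from oslo_concurrency import lockutils

@lockutils.synchronized("4515b342-533d-419f-8737-773b7845ab0f-events")
def _pop_event():
    # Body runs with the named lock held; oslo's wrapper emits the three
    # DEBUG lines (Acquiring / acquired ... waited / released ... held).
    pass

_pop_event()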
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.817 2 DEBUG nova.compute.manager [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] No waiting events found dispatching network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.817 2 WARNING nova.compute.manager [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received unexpected event network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 for instance with vm_state active and task_state None.#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.817 2 DEBUG nova.compute.manager [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received event network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.818 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.818 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.819 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.819 2 DEBUG nova.compute.manager [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Processing event network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.819 2 DEBUG nova.compute.manager [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received event network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.820 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.820 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.820 2 DEBUG oslo_concurrency.lockutils [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.821 2 DEBUG nova.compute.manager [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] No waiting events found dispatching network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.821 2 WARNING nova.compute.manager [req-b3898734-ee3b-45e4-b532-e78c9b042112 req-d5ad1145-4063-4c3b-adb6-a81bf8f313a9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received unexpected event network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 for instance with vm_state building and task_state spawning.#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.822 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.825 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486013.8256114, cd0be179-1941-400f-a1e6-8ee6243ee71a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.826 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] VM Resumed (Lifecycle Event)#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.829 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.834 2 INFO nova.virt.libvirt.driver [-] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Instance spawned successfully.#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.835 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.857 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.864 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.869 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.869 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.870 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.870 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.870 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.871 2 DEBUG nova.virt.libvirt.driver [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.892 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:06:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:06:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2522877366' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:06:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:06:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2522877366' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.926 2 INFO nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Took 8.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.926 2 DEBUG nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.983 2 INFO nova.compute.manager [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Took 9.68 seconds to build instance.#033[00m
Oct  3 10:06:53 compute-0 nova_compute[351685]: 2025-10-03 10:06:53.997 2 DEBUG oslo_concurrency.lockutils [None req-c827cef2-eb71-4e64-b6fb-3c903e17f951 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:54 compute-0 nova_compute[351685]: 2025-10-03 10:06:54.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 206 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.8 MiB/s wr, 105 op/s
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016429295272514164 of space, bias 1.0, pg target 0.4928788581754249 quantized to 32 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:06:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
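Each pg_autoscaler pool line above is usage_ratio x bias x N, and N = 300 reproduces every logged 'pg target' exactly (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100, an assumption the log does not state). The quantization step then steers toward a power of two subject to current pg_num and pool minimums, which is why near-zero targets stay at 32 while the bias-4 meta pool heads from 32 toward 16. A worked check with values copied from the log:

# pg_target = usage_ratio * bias * N, N assumed to be 300 (3 OSDs * 100).
pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (0.0016429295272514164, 1.0),
    "images":             (0.00025334537995702286, 1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}
N = 300
for name, (ratio, bias) in pools.items():
    print(f"{name}: pg target {ratio * bias * N}")  # matches the logged values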
Oct  3 10:06:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:06:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 206 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 137 op/s
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.056 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "4515b342-533d-419f-8737-773b7845ab0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.057 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.057 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "4515b342-533d-419f-8737-773b7845ab0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.057 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.057 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.059 2 INFO nova.compute.manager [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Terminating instance#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.060 2 DEBUG nova.compute.manager [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 10:06:57 compute-0 kernel: tap17d7d099-6f (unregistering): left promiscuous mode
Oct  3 10:06:57 compute-0 NetworkManager[45015]: <info>  [1759486017.1500] device (tap17d7d099-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:57 compute-0 ovn_controller[88471]: 2025-10-03T10:06:57Z|00048|binding|INFO|Releasing lport 17d7d099-6f86-4e60-91b4-3f39e651bd00 from this chassis (sb_readonly=0)
Oct  3 10:06:57 compute-0 ovn_controller[88471]: 2025-10-03T10:06:57Z|00049|binding|INFO|Setting lport 17d7d099-6f86-4e60-91b4-3f39e651bd00 down in Southbound
Oct  3 10:06:57 compute-0 ovn_controller[88471]: 2025-10-03T10:06:57Z|00050|binding|INFO|Removing iface tap17d7d099-6f ovn-installed in OVS
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.179 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:d7:1b 192.168.0.155'], port_security=['fa:16:3e:b0:d7:1b 192.168.0.155'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e77v6yplftfd-uusud7ile66j-pamjds7x5q7s-port-vjfjwrjaud4q', 'neutron:cidrs': '192.168.0.155/24', 'neutron:device_id': '4515b342-533d-419f-8737-773b7845ab0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e77v6yplftfd-uusud7ile66j-pamjds7x5q7s-port-vjfjwrjaud4q', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.182', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=17d7d099-6f86-4e60-91b4-3f39e651bd00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.180 284328 INFO neutron.agent.ovn.metadata.agent [-] Port 17d7d099-6f86-4e60-91b4-3f39e651bd00 in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 unbound from our chassis#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.182 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0#033[00m
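The Matched UPDATE line above is ovsdbapp's event matcher firing a Port_Binding row-update event, which the metadata agent handles by unbinding the port from this chassis and re-provisioning the network's metadata namespace. A sketch of the shape of such an event class (the real one is neutron's; the handler body here is illustrative only):

from ovsdbapp import event

class PortBindingUpdatedEvent(event.RowEvent):
    def __init__(self):
        # Subscribe to updates on the Port_Binding table, as in the log.
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def run(self, event, row, old):
        # 'old' holds the prior column values (here: up=[True], chassis=[...]),
        # letting the handler detect a port released from this chassis.
        print("Port_Binding %s updated" % row.logical_port)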
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.198 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[d092b9cd-c97d-4759-9fed-bbbab7af26c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:57 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct  3 10:06:57 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 6.529s CPU time.
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.232 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[df975100-8d1f-4618-b59d-8c2449da65a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.235 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc6efd0-c6b4-4f9b-8111-d019ff8355c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:57 compute-0 systemd-machined[137653]: Machine qemu-3-instance-00000003 terminated.
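The "Consumed 6.529s CPU time" line comes from systemd's CPU accounting for the machine scope, which is queryable while the scope still exists; a sketch, with the unit name copied verbatim from the lines above (the \x2d sequences are literal, systemd's escaping of hyphens):

import subprocess

# CPUUsageNSec is "[not set]" if CPU accounting is disabled for the scope.
out = subprocess.check_output(
    ["systemctl", "show", "machine-qemu\\x2d3\\x2dinstance\\x2d00000003.scope",
     "--property=CPUUsageNSec"], text=True)
print(int(out.strip().split("=", 1)[1]) / 1e9, "s CPU time")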
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.269 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[b012b8ee-6daf-4938-a41c-2b6c9cd17e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.290 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2a05b475-4971-4116-ab6f-800cfb77c25b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 32866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423205, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.303 2 INFO nova.virt.libvirt.driver [-] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Instance destroyed successfully.#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.303 2 DEBUG nova.objects.instance [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'resources' on Instance uuid 4515b342-533d-419f-8737-773b7845ab0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.313 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed351a9-950a-4b0a-8a54-982bd62920cd]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489551, 'tstamp': 489551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423213, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489554, 'tstamp': 489554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423213, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.315 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.321 2 DEBUG nova.virt.libvirt.vif [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T10:06:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-yplftfd-uusud7ile66j-pamjds7x5q7s-vnf-4niegkol5tze',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-uusud7ile66j-pamjds7x5q7s-vnf-4niegkol5tze',id=3,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-03T10:06:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-po108x5f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T10:06:52Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT00MzE3OTE1NzU0NDAwMDMyMDc2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTQzMTc5MTU3NTQ0MDAwMzIwNzY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NDMxNzkxNTc1NDQwMDAzMjA3Nj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlsc
y.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============4317915754400032076==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============4317915754400032076==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============4317915754400032076==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):
Oct  3 10:06:57 compute-0 nova_compute[351685]: ', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============4317915754400032076==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============4317915754400032076==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============4317915754400032076==--
',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=4515b342-533d-419f-8737-773b7845ab0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.321 2 DEBUG nova.network.os_vif_util [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.322 2 DEBUG nova.network.os_vif_util [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d7:1b,bridge_name='br-int',has_traffic_filtering=True,id=17d7d099-6f86-4e60-91b4-3f39e651bd00,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap17d7d099-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.323 2 DEBUG os_vif [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d7:1b,bridge_name='br-int',has_traffic_filtering=True,id=17d7d099-6f86-4e60-91b4-3f39e651bd00,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap17d7d099-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.322 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.323 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.323 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:06:57.323 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17d7d099-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.336 2 INFO os_vif [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d7:1b,bridge_name='br-int',has_traffic_filtering=True,id=17d7d099-6f86-4e60-91b4-3f39e651bd00,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap17d7d099-6f')#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.363 2 DEBUG nova.compute.manager [req-7f862f91-7c86-45ec-8cde-fd31909b4794 req-86be99dc-cf7f-4b8e-bf4d-02c7d5f266cf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received event network-vif-unplugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.364 2 DEBUG oslo_concurrency.lockutils [req-7f862f91-7c86-45ec-8cde-fd31909b4794 req-86be99dc-cf7f-4b8e-bf4d-02c7d5f266cf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "4515b342-533d-419f-8737-773b7845ab0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.364 2 DEBUG oslo_concurrency.lockutils [req-7f862f91-7c86-45ec-8cde-fd31909b4794 req-86be99dc-cf7f-4b8e-bf4d-02c7d5f266cf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.364 2 DEBUG oslo_concurrency.lockutils [req-7f862f91-7c86-45ec-8cde-fd31909b4794 req-86be99dc-cf7f-4b8e-bf4d-02c7d5f266cf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.364 2 DEBUG nova.compute.manager [req-7f862f91-7c86-45ec-8cde-fd31909b4794 req-86be99dc-cf7f-4b8e-bf4d-02c7d5f266cf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] No waiting events found dispatching network-vif-unplugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:06:57 compute-0 nova_compute[351685]: 2025-10-03 10:06:57.365 2 DEBUG nova.compute.manager [req-7f862f91-7c86-45ec-8cde-fd31909b4794 req-86be99dc-cf7f-4b8e-bf4d-02c7d5f266cf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received event network-vif-unplugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  3 10:06:57 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:06:57.321 2 DEBUG nova.virt.libvirt.vif [None req-52f85bfb-1f3b-42 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  3 10:06:58 compute-0 nova_compute[351685]: 2025-10-03 10:06:58.375 2 INFO nova.virt.libvirt.driver [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Deleting instance files /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f_del#033[00m
Oct  3 10:06:58 compute-0 nova_compute[351685]: 2025-10-03 10:06:58.376 2 INFO nova.virt.libvirt.driver [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Deletion of /var/lib/nova/instances/4515b342-533d-419f-8737-773b7845ab0f_del complete#033[00m
Oct  3 10:06:58 compute-0 nova_compute[351685]: 2025-10-03 10:06:58.434 2 DEBUG nova.virt.libvirt.host [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct  3 10:06:58 compute-0 nova_compute[351685]: 2025-10-03 10:06:58.434 2 INFO nova.virt.libvirt.host [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] UEFI support detected#033[00m
Oct  3 10:06:58 compute-0 nova_compute[351685]: 2025-10-03 10:06:58.437 2 INFO nova.compute.manager [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Took 1.38 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 10:06:58 compute-0 nova_compute[351685]: 2025-10-03 10:06:58.437 2 DEBUG oslo.service.loopingcall [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 10:06:58 compute-0 nova_compute[351685]: 2025-10-03 10:06:58.437 2 DEBUG nova.compute.manager [-] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 10:06:58 compute-0 nova_compute[351685]: 2025-10-03 10:06:58.438 2 DEBUG nova.network.neutron [-] [instance: 4515b342-533d-419f-8737-773b7845ab0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 10:06:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 198 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 177 op/s
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.436 2 DEBUG nova.compute.manager [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received event network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.436 2 DEBUG oslo_concurrency.lockutils [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "4515b342-533d-419f-8737-773b7845ab0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.436 2 DEBUG oslo_concurrency.lockutils [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.436 2 DEBUG oslo_concurrency.lockutils [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.436 2 DEBUG nova.compute.manager [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] No waiting events found dispatching network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.437 2 WARNING nova.compute.manager [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received unexpected event network-vif-plugged-17d7d099-6f86-4e60-91b4-3f39e651bd00 for instance with vm_state active and task_state deleting.#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.437 2 DEBUG nova.compute.manager [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Received event network-changed-17d7d099-6f86-4e60-91b4-3f39e651bd00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.437 2 DEBUG nova.compute.manager [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Refreshing instance network info cache due to event network-changed-17d7d099-6f86-4e60-91b4-3f39e651bd00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.437 2 DEBUG oslo_concurrency.lockutils [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.437 2 DEBUG oslo_concurrency.lockutils [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:06:59 compute-0 nova_compute[351685]: 2025-10-03 10:06:59.437 2 DEBUG nova.network.neutron [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Refreshing network info cache for port 17d7d099-6f86-4e60-91b4-3f39e651bd00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 10:06:59 compute-0 podman[157165]: time="2025-10-03T10:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:06:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:06:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9046 "" "Go-http-client/1.1"
Oct  3 10:07:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 199 op/s
Oct  3 10:07:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:01 compute-0 openstack_network_exporter[367524]: ERROR   10:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:07:01 compute-0 openstack_network_exporter[367524]: ERROR   10:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:07:01 compute-0 openstack_network_exporter[367524]: ERROR   10:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:07:01 compute-0 openstack_network_exporter[367524]: ERROR   10:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:07:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:07:01 compute-0 openstack_network_exporter[367524]: ERROR   10:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:07:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:07:01 compute-0 podman[423236]: 2025-10-03 10:07:01.819811596 +0000 UTC m=+0.075175228 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Oct  3 10:07:01 compute-0 podman[423237]: 2025-10-03 10:07:01.855013084 +0000 UTC m=+0.103657570 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Oct  3 10:07:01 compute-0 podman[423238]: 2025-10-03 10:07:01.906912946 +0000 UTC m=+0.142026089 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.055 2 DEBUG nova.network.neutron [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Updated VIF entry in instance network info cache for port 17d7d099-6f86-4e60-91b4-3f39e651bd00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.056 2 DEBUG nova.network.neutron [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Updating instance_info_cache with network_info: [{"id": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "address": "fa:16:3e:b0:d7:1b", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.155", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17d7d099-6f", "ovs_interfaceid": "17d7d099-6f86-4e60-91b4-3f39e651bd00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.079 2 DEBUG oslo_concurrency.lockutils [req-c584d9e1-ad32-4d11-a48a-a2c8fc6abdcf req-c5ad7a14-1726-414a-aada-0221e2c3fa04 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-4515b342-533d-419f-8737-773b7845ab0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.081 2 DEBUG nova.network.neutron [-] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.099 2 INFO nova.compute.manager [-] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Took 3.66 seconds to deallocate network for instance.#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.145 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.146 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.267 2 DEBUG oslo_concurrency.processutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:07:02 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075783070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.801 2 DEBUG oslo_concurrency.processutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.811 2 DEBUG nova.compute.provider_tree [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.828 2 DEBUG nova.scheduler.client.report [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.847 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.876 2 INFO nova.scheduler.client.report [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Deleted allocations for instance 4515b342-533d-419f-8737-773b7845ab0f#033[00m
Oct  3 10:07:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 44 KiB/s wr, 159 op/s
Oct  3 10:07:02 compute-0 nova_compute[351685]: 2025-10-03 10:07:02.939 2 DEBUG oslo_concurrency.lockutils [None req-52f85bfb-1f3b-42b7-b3de-d70664224c37 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "4515b342-533d-419f-8737-773b7845ab0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:07:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 44 KiB/s wr, 160 op/s
Oct  3 10:07:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 KiB/s wr, 129 op/s
Oct  3 10:07:07 compute-0 nova_compute[351685]: 2025-10-03 10:07:07.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:07 compute-0 nova_compute[351685]: 2025-10-03 10:07:07.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:08 compute-0 podman[423320]: 2025-10-03 10:07:08.862198042 +0000 UTC m=+0.112782253 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:07:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 KiB/s wr, 97 op/s
Oct  3 10:07:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.4 KiB/s wr, 57 op/s
Oct  3 10:07:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:11 compute-0 podman[423338]: 2025-10-03 10:07:11.840207949 +0000 UTC m=+0.086164770 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:07:11 compute-0 podman[423339]: 2025-10-03 10:07:11.864109704 +0000 UTC m=+0.111520712 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:07:11 compute-0 podman[423340]: 2025-10-03 10:07:11.864977282 +0000 UTC m=+0.097876515 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 10:07:12 compute-0 nova_compute[351685]: 2025-10-03 10:07:12.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:12 compute-0 nova_compute[351685]: 2025-10-03 10:07:12.292 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759486017.2919075, 4515b342-533d-419f-8737-773b7845ab0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:07:12 compute-0 nova_compute[351685]: 2025-10-03 10:07:12.293 2 INFO nova.compute.manager [-] [instance: 4515b342-533d-419f-8737-773b7845ab0f] VM Stopped (Lifecycle Event)#033[00m
Oct  3 10:07:12 compute-0 nova_compute[351685]: 2025-10-03 10:07:12.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:12 compute-0 nova_compute[351685]: 2025-10-03 10:07:12.358 2 DEBUG nova.compute.manager [None req-838763da-2aa4-4e53-bd3d-a631802ab3c3 - - - - - -] [instance: 4515b342-533d-419f-8737-773b7845ab0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:07:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  3 10:07:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct  3 10:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:07:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:17 compute-0 nova_compute[351685]: 2025-10-03 10:07:17.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:17 compute-0 nova_compute[351685]: 2025-10-03 10:07:17.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:17 compute-0 podman[423399]: 2025-10-03 10:07:17.842477796 +0000 UTC m=+0.101130669 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:07:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:22 compute-0 nova_compute[351685]: 2025-10-03 10:07:22.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:22 compute-0 nova_compute[351685]: 2025-10-03 10:07:22.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:22 compute-0 podman[423421]: 2025-10-03 10:07:22.473265923 +0000 UTC m=+0.099305162 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler)
Oct  3 10:07:22 compute-0 podman[423420]: 2025-10-03 10:07:22.490811945 +0000 UTC m=+0.120489950 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:07:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 10:07:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 10:07:27 compute-0 nova_compute[351685]: 2025-10-03 10:07:27.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:27 compute-0 nova_compute[351685]: 2025-10-03 10:07:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 172 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 341 B/s wr, 0 op/s
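Note: the ceph-mgr pgmap DBG lines repeat on a fixed layout (map version, PG states, data/used/avail, optional rd/wr/op rates). A minimal parsing sketch in Python; the field layout is inferred from these lines only, not from a documented schema:

    import re

    # Groups mirror the visible fields of a pgmap line.
    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v1199: 321 pgs: 321 active+clean; 172 MiB data, "
            "302 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 341 B/s wr, 0 op/s")
    print(PGMAP.search(line).groupdict())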
Oct  3 10:07:29 compute-0 ovn_controller[88471]: 2025-10-03T10:07:29Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:37:aa 192.168.0.177
Oct  3 10:07:29 compute-0 ovn_controller[88471]: 2025-10-03T10:07:29Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:37:aa 192.168.0.177
Oct  3 10:07:29 compute-0 podman[157165]: time="2025-10-03T10:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:07:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:07:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9050 "" "Go-http-client/1.1"
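Note: the two access-log entries above are prometheus-podman-exporter hitting the libpod REST API over the unix socket from its config (`/run/podman/podman.sock`). An illustrative replay using only the standard library; `UnixHTTPConnection` is a local helper, not a podman API, and the sketch assumes read permission on the socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            # Swap the TCP connect for a unix-domain socket.
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")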
Oct  3 10:07:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 183 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 923 KiB/s wr, 13 op/s
Oct  3 10:07:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:31 compute-0 openstack_network_exporter[367524]: ERROR   10:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:07:31 compute-0 openstack_network_exporter[367524]: ERROR   10:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:07:31 compute-0 openstack_network_exporter[367524]: ERROR   10:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:07:31 compute-0 openstack_network_exporter[367524]: ERROR   10:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
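Note: the ERROR lines above are expected on a compute node: ovn-northd (and here the ovs db server's control socket) is not present, so the exporter's ovs-appctl calls have no target. A hypothetical guard, not the exporter's actual code; paths assume the default OVN runtime directory:

    import glob
    import subprocess

    def appctl(ctl_glob, *cmd):
        # Only call ovs-appctl when a control socket actually exists.
        sockets = glob.glob(ctl_glob)
        if not sockets:
            return None  # daemon not running on this node; nothing to query
        return subprocess.run(["ovs-appctl", "-t", sockets[0], *cmd],
                              capture_output=True, text=True).stdout

    # ovn-northd runs on controller nodes, so on compute-0 this prints None.
    print(appctl("/var/run/ovn/ovn-northd.*.ctl", "version"))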
Oct  3 10:07:32 compute-0 nova_compute[351685]: 2025-10-03 10:07:32.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:32 compute-0 nova_compute[351685]: 2025-10-03 10:07:32.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:32 compute-0 podman[423463]: 2025-10-03 10:07:32.821128372 +0000 UTC m=+0.082523435 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git)
Oct  3 10:07:32 compute-0 podman[423464]: 2025-10-03 10:07:32.852620109 +0000 UTC m=+0.109436075 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 10:07:32 compute-0 podman[423465]: 2025-10-03 10:07:32.878195548 +0000 UTC m=+0.127386949 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 10:07:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 183 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 923 KiB/s wr, 13 op/s
Oct  3 10:07:33 compute-0 ovn_controller[88471]: 2025-10-03T10:07:33Z|00051|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  3 10:07:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 199 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 1.5 MiB/s wr, 54 op/s
Oct  3 10:07:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:36 compute-0 podman[423693]: 2025-10-03 10:07:36.501744659 +0000 UTC m=+0.854120572 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:07:36 compute-0 podman[423693]: 2025-10-03 10:07:36.631758472 +0000 UTC m=+0.984134375 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:07:36 compute-0 nova_compute[351685]: 2025-10-03 10:07:36.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:07:36 compute-0 nova_compute[351685]: 2025-10-03 10:07:36.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:07:36 compute-0 nova_compute[351685]: 2025-10-03 10:07:36.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:07:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct  3 10:07:37 compute-0 nova_compute[351685]: 2025-10-03 10:07:37.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:37 compute-0 nova_compute[351685]: 2025-10-03 10:07:37.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:07:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:07:37 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:07:37 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:37 compute-0 nova_compute[351685]: 2025-10-03 10:07:37.523 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:07:37 compute-0 nova_compute[351685]: 2025-10-03 10:07:37.524 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:07:37 compute-0 nova_compute[351685]: 2025-10-03 10:07:37.524 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:07:37 compute-0 nova_compute[351685]: 2025-10-03 10:07:37.524 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:07:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:07:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:07:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:07:38 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:07:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:07:38 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:38 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:38 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:38 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 48d871e2-8581-49e1-9620-a19f32f5a4bb does not exist
Oct  3 10:07:38 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6a3cf3dc-9311-4dee-9c4a-e26630612abb does not exist
Oct  3 10:07:38 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d5648588-2484-4d56-9f08-a13995fceb7a does not exist
Oct  3 10:07:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:07:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:07:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:07:38 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:07:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:07:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
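Note: cephadm drives the monitor through the mon_command calls audited above. The same minimal-conf generation can be reproduced from the CLI; the sketch assumes admin credentials in their default locations on the host:

    import subprocess

    print(subprocess.run(["ceph", "config", "generate-minimal-conf"],
                         capture_output=True, text=True, check=True).stdout)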
Oct  3 10:07:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct  3 10:07:39 compute-0 podman[424049]: 2025-10-03 10:07:39.075595644 +0000 UTC m=+0.111555504 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 10:07:39 compute-0 podman[424131]: 2025-10-03 10:07:39.52011015 +0000 UTC m=+0.072246596 container create e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lehmann, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 10:07:39 compute-0 systemd[1]: Started libpod-conmon-e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27.scope.
Oct  3 10:07:39 compute-0 podman[424131]: 2025-10-03 10:07:39.492393762 +0000 UTC m=+0.044530238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:07:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:07:39 compute-0 podman[424131]: 2025-10-03 10:07:39.639271205 +0000 UTC m=+0.191407701 container init e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lehmann, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 10:07:39 compute-0 podman[424131]: 2025-10-03 10:07:39.652442357 +0000 UTC m=+0.204578823 container start e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 10:07:39 compute-0 podman[424131]: 2025-10-03 10:07:39.657581052 +0000 UTC m=+0.209717518 container attach e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:07:39 compute-0 elated_lehmann[424148]: 167 167
Oct  3 10:07:39 compute-0 systemd[1]: libpod-e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27.scope: Deactivated successfully.
Oct  3 10:07:39 compute-0 podman[424131]: 2025-10-03 10:07:39.663035987 +0000 UTC m=+0.215172483 container died e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:07:39 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:07:39 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:39 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d59f2b20345db5262e9e642c5591ac1f445ccd9d8f05ad081ca11e87d0ce169a-merged.mount: Deactivated successfully.
Oct  3 10:07:39 compute-0 podman[424131]: 2025-10-03 10:07:39.731146767 +0000 UTC m=+0.283283213 container remove e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 10:07:39 compute-0 systemd[1]: libpod-conmon-e998d84ef7e0dfcf4bcb748d876f7b2c4653e68c80402b160f890529913bcb27.scope: Deactivated successfully.
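Note: the create/init/start/attach/died/remove sequence for `elated_lehmann` is a short-lived cephadm helper container. A sketch for watching such lifecycles after the fact; assumes podman is on PATH and the caller may read its events (root here):

    import subprocess

    # --stream=false prints already-recorded events and exits instead of
    # following the event stream.
    subprocess.run(
        ["podman", "events", "--stream=false", "--since", "5m",
         "--filter", "event=died"],
        check=False,
    )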
Oct  3 10:07:39 compute-0 podman[424170]: 2025-10-03 10:07:39.955728229 +0000 UTC m=+0.063274117 container create 168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct  3 10:07:40 compute-0 podman[424170]: 2025-10-03 10:07:39.930707428 +0000 UTC m=+0.038253336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:07:40 compute-0 systemd[1]: Started libpod-conmon-168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77.scope.
Oct  3 10:07:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b367ddbaa59844f5eb818d2f4d7e96572e5bdf8d0f780f71a09d4640ac4d33c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b367ddbaa59844f5eb818d2f4d7e96572e5bdf8d0f780f71a09d4640ac4d33c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b367ddbaa59844f5eb818d2f4d7e96572e5bdf8d0f780f71a09d4640ac4d33c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b367ddbaa59844f5eb818d2f4d7e96572e5bdf8d0f780f71a09d4640ac4d33c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b367ddbaa59844f5eb818d2f4d7e96572e5bdf8d0f780f71a09d4640ac4d33c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
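Note: the 0x7fffffff in the xfs remount notices above is the 32-bit time_t maximum; a quick check of the corresponding UTC date:

    from datetime import datetime, timezone

    # Prints 2038-01-19 03:14:07+00:00, the classic "year 2038" ceiling.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))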
Oct  3 10:07:40 compute-0 podman[424170]: 2025-10-03 10:07:40.108849853 +0000 UTC m=+0.216395761 container init 168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:07:40 compute-0 podman[424170]: 2025-10-03 10:07:40.125984901 +0000 UTC m=+0.233530789 container start 168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:07:40 compute-0 podman[424170]: 2025-10-03 10:07:40.130117794 +0000 UTC m=+0.237663682 container attach 168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.203 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.222 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.223 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
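Note: the heal cycle above writes back the network_info JSON for the instance. A toy traversal over a trimmed copy of that structure (the literal keeps only the fields the loop touches):

    network_info = [
        {
            "network": {
                "subnets": [
                    {
                        "ips": [
                            {
                                "address": "192.168.0.158",
                                "floating_ips": [
                                    {"address": "192.168.122.250"}
                                ],
                            }
                        ]
                    }
                ]
            }
        }
    ]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(ip["address"],
                      [fip["address"] for fip in ip["floating_ips"]])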
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.223 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.224 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.224 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.225 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.255 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.255 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.255 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
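Note: the Acquiring/acquired/released trio above is oslo.concurrency's lockutils at work. A minimal sketch (requires oslo.concurrency; only the lock name matches the log, the function body is a stand-in):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs with the "compute_resources" lock held, producing the same
        # acquire/release DEBUG pattern when lockutils logging is enabled.
        pass

    clean_compute_node_cache()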
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.256 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.256 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:07:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:07:40 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742621170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.759 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
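Note: nova's resource audit shells out to ceph via oslo's processutils, which wraps subprocess. A replay of the exact command logged above; assumes the same ceph.conf and "openstack" client keyring are readable from where this runs:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])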
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.864 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.864 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.864 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.869 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.869 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.870 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.874 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.874 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 nova_compute[351685]: 2025-10-03 10:07:40.875 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.883 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling run to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.884 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
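The registration burst above is stevedore-driven plugin loading: each pollster is an entry-point extension handed to one shared ThreadPoolExecutor (note the same executor id on every line) together with empty caches. A minimal Python sketch of that pattern, assuming the 'ceilometer.poll.compute' entry-point namespace (the namespace string is an assumption, not shown in the log):

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # one shared executor, matching the single ThreadPoolExecutor object
    # repeated on every "Registering pollster" line above
    executor = ThreadPoolExecutor(max_workers=4)

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # assumed namespace
        invoke_on_load=True,
    )

    registrations = []
    for ext in mgr:
        # mirror the log: each pollster starts with empty caches,
        # empty history, and an empty discovery cache
        registrations.append({
            'pollster': ext,
            'executor': executor,
            'cache': {},
            'pollster_history': {},
            'discovery_cache': {},
        })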
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.890 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cd0be179-1941-400f-a1e6-8ee6243ee71a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 10:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:40.891 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cd0be179-1941-400f-a1e6-8ee6243ee71a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef276854e7699a1234d40a89e1ecb6415a6c739b2915f53a3c625cf782ff31fe" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
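The REQ line is keystoneauth1's curl-style dump of a token-authenticated GET against the Nova API. A hedged sketch of the equivalent client call; the auth URL and credentials below are placeholders, not values from this log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # assumed
        username='ceilometer', password='secret',                    # assumed
        project_name='service',
        user_domain_name='Default', project_domain_name='Default',
    )
    sess = session.Session(auth=auth)
    # '2.1' matches the X-OpenStack-Nova-API-Version header in the REQ line
    nova = client.Client('2.1', session=sess)
    server = nova.servers.get('cd0be179-1941-400f-a1e6-8ee6243ee71a')
    print(server.name, server.status)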
Oct  3 10:07:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 1.5 MiB/s wr, 57 op/s
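The pgmap line is a fixed-format cluster health summary. A small sketch of pulling the PG and usage figures out of it; the regex is an assumption fitted to this one line format:

    import re

    line = ("pgmap v1205: 321 pgs: 321 active+clean; 201 MiB data, "
            "323 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, "
            "1.5 MiB/s wr, 57 op/s")
    m = re.search(r'(\d+) pgs: (\d+) active\+clean; (.+?) data, '
                  r'(.+?) used, (.+?) / (.+?) avail', line)
    if m:
        total_pgs, clean_pgs = int(m.group(1)), int(m.group(2))
        print(f'{clean_pgs}/{total_pgs} PGs active+clean; '
              f'{m.group(4)} used of {m.group(6)}')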
Oct  3 10:07:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:41 compute-0 compassionate_visvesvaraya[424186]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:07:41 compute-0 compassionate_visvesvaraya[424186]: --> relative data size: 1.0
Oct  3 10:07:41 compute-0 compassionate_visvesvaraya[424186]: --> All data devices are unavailable
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.283 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.285 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3454MB free_disk=59.88881301879883GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:07:41 compute-0 systemd[1]: libpod-168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77.scope: Deactivated successfully.
Oct  3 10:07:41 compute-0 systemd[1]: libpod-168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77.scope: Consumed 1.040s CPU time.
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.285 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.288 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:07:41 compute-0 podman[424239]: 2025-10-03 10:07:41.33407521 +0000 UTC m=+0.032211763 container died 168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_visvesvaraya, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.363 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.364 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 5b008829-2c76-4e40-b9e6-0e3d73095522 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.364 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance cd0be179-1941-400f-a1e6-8ee6243ee71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.364 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.365 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
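The final view tallies with the per-instance allocations logged just above: three guests at 1 VCPU / 512 MB / 2 GB each, which lines up with used_ram=2048MB if the 512 MB host memory reservation is counted as used. A quick consistency check:

    # Consistency check of the "Final resource view" line, using the
    # per-instance allocations (DISK_GB: 2, MEMORY_MB: 512, VCPU: 1)
    # and an assumed 512 MB host reservation counted as used RAM.
    instances = 3
    used_vcpus = instances * 1            # -> 3, as logged
    used_ram = 512 + instances * 512      # reserved + guests -> 2048 MB
    used_disk = instances * 2             # -> 6 GB, as logged
    assert (used_vcpus, used_ram, used_disk) == (3, 2048, 6)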
Oct  3 10:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b367ddbaa59844f5eb818d2f4d7e96572e5bdf8d0f780f71a09d4640ac4d33c5-merged.mount: Deactivated successfully.
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.382 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 10:07:41 compute-0 podman[424239]: 2025-10-03 10:07:41.396827669 +0000 UTC m=+0.094964202 container remove 168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_visvesvaraya, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.400 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.400 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
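Given this inventory, schedulable capacity per resource class follows the usual placement arithmetic, (total - reserved) * allocation_ratio. A sketch of that formula applied to the data above (the formula is the standard one, not code lifted from nova):

    # Schedulable capacity from the ProviderTree inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g} schedulable')
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2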
Oct  3 10:07:41 compute-0 systemd[1]: libpod-conmon-168be153232a3e78631c24e340bb14b31b6c2476b70786031955756285545c77.scope: Deactivated successfully.
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.414 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.434 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 10:07:41 compute-0 nova_compute[351685]: 2025-10-03 10:07:41.506 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
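The same probe sketched with the standard library instead of oslo_concurrency.processutils; which keys nova actually reads from the JSON is not shown in this log, so the final lookup is illustrative:

    import json
    import subprocess

    # the exact command from the "Running cmd (subprocess)" line above
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)
    # illustrative: cluster-wide totals live under the "stats" key
    print(stats['stats']['total_bytes'])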
Oct  3 10:07:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:07:41.597 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:07:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:07:41.597 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:07:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:07:41.598 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.643 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Fri, 03 Oct 2025 10:07:40 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-dcbe446e-4a16-4444-b2ef-6d6d06906a69 x-openstack-request-id: req-dcbe446e-4a16-4444-b2ef-6d6d06906a69 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.643 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cd0be179-1941-400f-a1e6-8ee6243ee71a", "name": "vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j", "status": "ACTIVE", "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "user_id": "2f408449ba0f42fcb69f92dbf541f2e3", "metadata": {"metering.server_group": "09b6fef3-eb54-4e45-9716-a57b7d592bd8"}, "hostId": "b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85", "image": {"id": "37f03e8a-3aed-46a5-8219-fc87e355127e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/37f03e8a-3aed-46a5-8219-fc87e355127e"}]}, "flavor": {"id": "ada739ee-222b-4269-8d29-62bea534173e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ada739ee-222b-4269-8d29-62bea534173e"}]}, "created": "2025-10-03T10:06:43Z", "updated": "2025-10-03T10:06:53Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.177", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3f:37:aa"}, {"version": 4, "addr": "192.168.122.209", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3f:37:aa"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cd0be179-1941-400f-a1e6-8ee6243ee71a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cd0be179-1941-400f-a1e6-8ee6243ee71a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-03T10:06:53.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.643 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cd0be179-1941-400f-a1e6-8ee6243ee71a used request id req-dcbe446e-4a16-4444-b2ef-6d6d06906a69 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.645 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cd0be179-1941-400f-a1e6-8ee6243ee71a', 'name': 'vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.649 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5b008829-2c76-4e40-b9e6-0e3d73095522', 'name': 'vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.655 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
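The three instance-data dicts above are essentially flattened Nova server bodies plus an expanded flavor. A rough, partial sketch of that mapping, assuming the flavor dict comes from a separate lookup (the server body carries only its id):

    def to_instance_data(server, flavor):
        # 'server' is the parsed RESP BODY "server" dict; 'flavor' is an
        # assumed pre-fetched {'id', 'name', 'vcpus', 'ram', ...} dict
        return {
            'id': server['id'],
            'name': server['name'],
            'flavor': flavor,
            'image': {'id': server['image']['id']},
            'OS-EXT-SRV-ATTR:instance_name':
                server['OS-EXT-SRV-ATTR:instance_name'],
            'OS-EXT-SRV-ATTR:host': server['OS-EXT-SRV-ATTR:host'],
            'tenant_id': server['tenant_id'],
            'user_id': server['user_id'],
            'hostId': server['hostId'],
            'status': server['status'].lower(),   # "ACTIVE" -> "active"
            'metadata': server['metadata'],
        }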
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.657 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:07:41.657592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.663 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for cd0be179-1941-400f-a1e6-8ee6243ee71a / tap13472a1d-91 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.664 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.671 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.679 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
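The zero volumes follow from the "No delta meter predecessor" line: a delta meter subtracts the previous cumulative counter, and on the first poll of a device there is nothing to subtract. A sketch of that behaviour:

    # Why the first poll yields volume 0: no predecessor reading yet.
    previous = {}   # (instance_id, device) -> last cumulative counter

    def delta_sample(instance_id, device, cumulative):
        key = (instance_id, device)
        prior = previous.get(key)
        previous[key] = cumulative
        if prior is None:
            return 0                      # first poll, as in the log
        return max(cumulative - prior, 0) # guard against counter resets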
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.681 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.681 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.681 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.681 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.682 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:07:41.681524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.683 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:07:41.684099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.706 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.707 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.707 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.731 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.732 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.732 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.754 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.755 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.755 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
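Each guest reports three block devices here: the two 1073741824-byte entries are exactly 1 GiB, matching the flavor's disk=1 and ephemeral=1, and the small third device is plausibly the config drive (an assumption; the log does not name devices at this point):

    # Reading the disk.device.capacity samples above.
    for vol in (1073741824, 1073741824, 583680):
        print(vol, '=', vol / 2**30, 'GiB')   # 1.0, 1.0, ~0.00054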
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.756 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.757 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.757 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.757 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:07:41.757087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.819 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.820 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.820 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.873 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.874 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.874 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.911 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.912 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.912 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.913 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.914 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.914 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 1480162541 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.914 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 246885128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.915 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 161615200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.915 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 1250055753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:07:41.914177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.916 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 207399736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.916 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 144385577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.916 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.917 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.917 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.918 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.918 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.918 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.919 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.919 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.919 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.919 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.920 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.920 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.920 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
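The disk.device.read.latency samples earlier are cumulative device read time in nanoseconds, so dividing by the matching disk.device.read.requests counter gives a mean per-read latency. Pairing instance cd0be179's first device by hand (the pairing itself is an assumption; the log does not label devices):

    total_read_ns = 1480162541     # disk.device.read.latency sample
    read_requests = 840            # disk.device.read.requests sample
    mean_ms = total_read_ns / read_requests / 1e6
    print(f'{mean_ms:.2f} ms per read')   # ~1.76 ms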
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:07:41.918462) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.921 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.922 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.922 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.922 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.922 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:07:41.922174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.923 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.923 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.923 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.924 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.924 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.924 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.925 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.925 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.925 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.925 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.925 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.925 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.925 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.926 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.926 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.926 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.927 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:07:41.925812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.927 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.927 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.927 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.928 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.928 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.928 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.929 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.929 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 41693184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.929 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.929 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.929 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.930 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.930 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.930 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:07:41.929043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.930 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.931 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
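Editor's note: disk.device.write.bytes is a cumulative per-device counter (41693184, 512, and 0 bytes for the first instance's three devices). A sketch of fetching the same counters via libvirt; the URI and device name are assumptions:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed local URI
    dom = conn.lookupByUUIDString("cd0be179-1941-400f-a1e6-8ee6243ee71a")
    # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs);
    # "vda" is a hypothetical device name.
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print("write bytes:", wr_bytes, "write requests:", wr_req)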
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.931 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.932 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:07:41.932137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.956 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:41.979 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:07:41 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132022526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
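Editor's note: the two ceph-mon lines show client.openstack issuing the monitor command {"prefix": "df", "format": "json"}, i.e. the JSON form of `ceph df --format json`. A sketch of sending the same command with the rados Python binding; the conffile path is an assumption:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",  # assumed path
                          name="client.openstack")         # entity from the audit line
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    df = json.loads(outbuf)
    print(df["stats"]["total_bytes"])  # cluster-wide totals from `ceph df`
    cluster.shutdown()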
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.000 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
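Editor's note: power.state volume 1 for all three instances corresponds to libvirt's "running" state. A sketch of reading it directly; the URI is an assumption:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed local URI
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(state == libvirt.VIR_DOMAIN_RUNNING)  # VIR_DOMAIN_RUNNING == 1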
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.001 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.002 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.002 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 7027132732 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.002 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 33230824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.002 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.002 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 13700045630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.003 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 25497777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.003 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.003 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:07:42.002013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
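Editor's note: the disk.device.write.latency volumes (e.g. 7027132732) are cumulative nanoseconds spent servicing writes, not per-request latencies; libvirt exposes them as "wr_total_times" via the extended block-stats call. Sketch, with URI and device name assumed:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed local URI
    dom = conn.lookupByUUIDString("cd0be179-1941-400f-a1e6-8ee6243ee71a")
    stats = dom.blockStatsFlags("vda")  # "vda" is hypothetical
    print(stats.get("wr_total_times"))  # cumulative ns spent on writes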
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.005 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.005 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 217 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:07:42.005516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.006 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.006 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.006 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.006 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.007 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.008 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.009 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.009 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.bytes.delta volume: 3599 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:07:42.008911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
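Editor's note: the *.delta meters are derived from cumulative interface counters by caching the previous poll and subtracting, which is why one instance reports 0 while the others report small positive deltas. A hypothetical, self-contained sketch of that derivation (not ceilometer's actual code):

    _prev: dict[tuple[str, str], int] = {}

    def rx_bytes_delta(instance_id: str, nic: str, rx_bytes: int) -> int:
        """Turn a cumulative rx counter into a per-interval delta."""
        key = (instance_id, nic)
        prev = _prev.get(key)
        _prev[key] = rx_bytes
        if prev is None or rx_bytes < prev:  # first poll, or counter reset
            return 0
        return rx_bytes - prev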
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.010 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.010 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.011 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j>]
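Editor's note: the ERROR above is expected behaviour rather than a failure: LibvirtInspector provides no instantaneous rate data, so the rate pollster raises PollsterPermanentError and the manager blacklists those resources for this source instead of retrying every cycle. A heavily simplified, hypothetical sketch of that contract (real class and method signatures may differ):

    from ceilometer.polling import plugin_base

    class RatePollsterSketch(plugin_base.PollsterBase):
        """Hypothetical pollster illustrating permanent-failure signalling."""

        def get_samples(self, manager, cache, resources):
            # No inspector support for instantaneous rates: polling these
            # resources can never succeed, so blacklist them permanently.
            raise plugin_base.PollsterPermanentError(resources)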
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-03T10:07:42.010751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.012 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:07:42.011860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.013 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:07:42.013413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.013 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets volume: 57 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:07:42.015020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.016 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.016 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.016 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.018 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.018 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.018 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/cpu volume: 34850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.018 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/cpu volume: 159770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 37700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
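Editor's note: the cpu meter is cumulative guest CPU time in nanoseconds (34850000000 ns is roughly 34.85 s), the fifth field of libvirt's domain info. Sketch, with the URI assumed:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed local URI
    dom = conn.lookupByUUIDString("cd0be179-1941-400f-a1e6-8ee6243ee71a")
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(cpu_time, "ns of guest CPU time across", vcpus, "vCPUs")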
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.020 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:07:42.016417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.020 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:07:42.018346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:07:42.020141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.021 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.022 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.022 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.bytes volume: 7390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.023 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:07:42.021916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.023 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:07:42.023668) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.024 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.bytes.delta volume: 2742 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.025 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.025 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.025 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/memory.usage volume: 49.61328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:07:42.025551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.025 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/memory.usage volume: 49.08203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
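Editor's note: the memory.usage volumes (about 49 MB) are floats because they are derived from KiB-granularity balloon statistics converted to MB. A sketch of one plausible derivation (the exact formula used by the agent is an assumption); memoryStats() needs the guest balloon driver:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed local URI
    dom = conn.lookupByUUIDString("cd0be179-1941-400f-a1e6-8ee6243ee71a")
    m = dom.memoryStats()  # values in KiB
    if "available" in m and "unused" in m:
        usage_mb = (m["available"] - m["unused"]) / 1024.0
    else:
        usage_mb = m.get("rss", 0) / 1024.0  # fallback when balloon stats are absent
    print(usage_mb)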
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-03T10:07:42.027100) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.027 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.027 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j>]
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.027 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.028 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.bytes volume: 1870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:07:42.028153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.028 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.bytes volume: 8706 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.030 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.030 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:07:42.029979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.030 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets volume: 63 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:07:42.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:07:42 compute-0 nova_compute[351685]: 2025-10-03 10:07:42.034 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:07:42 compute-0 nova_compute[351685]: 2025-10-03 10:07:42.045 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:07:42 compute-0 nova_compute[351685]: 2025-10-03 10:07:42.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:07:42 compute-0 nova_compute[351685]: 2025-10-03 10:07:42.070 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
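For readers of the inventory dict above: Placement derives the schedulable capacity of each resource class as (total - reserved) * allocation_ratio, so the raw totals understate what the scheduler will actually pack onto this node. A minimal worked sketch using the numbers from this line:

    # usable = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2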
Oct  3 10:07:42 compute-0 nova_compute[351685]: 2025-10-03 10:07:42.091 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:07:42 compute-0 nova_compute[351685]: 2025-10-03 10:07:42.091 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
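The processutils and lockutils lines above are the two oslo.concurrency primitives the resource tracker leans on: shelling out to "ceph df --format=json" for pool capacity, and serializing the update under the "compute_resources" lock. A hedged sketch of the same calls (get_ceph_usage is an invented helper; the command arguments and lock name are taken from the log):

    import json
    from oslo_concurrency import lockutils, processutils

    def get_ceph_usage():
        out, _err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        return json.loads(out)

    with lockutils.lock('compute_resources'):
        stats = get_ceph_usage()['stats']   # cluster-wide byte counters
        print(stats['total_bytes'], stats['total_avail_bytes'])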
Oct  3 10:07:42 compute-0 podman[424412]: 2025-10-03 10:07:42.297954296 +0000 UTC m=+0.072368668 container create 97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:07:42 compute-0 podman[424412]: 2025-10-03 10:07:42.261858081 +0000 UTC m=+0.036272503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:07:42 compute-0 systemd[1]: Started libpod-conmon-97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d.scope.
Oct  3 10:07:42 compute-0 nova_compute[351685]: 2025-10-03 10:07:42.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:07:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:07:42 compute-0 podman[424412]: 2025-10-03 10:07:42.413118445 +0000 UTC m=+0.187532827 container init 97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:07:42 compute-0 podman[424412]: 2025-10-03 10:07:42.424697646 +0000 UTC m=+0.199111988 container start 97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:07:42 compute-0 podman[424412]: 2025-10-03 10:07:42.430341266 +0000 UTC m=+0.204755628 container attach 97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:07:42 compute-0 blissful_archimedes[424442]: 167 167
Oct  3 10:07:42 compute-0 podman[424412]: 2025-10-03 10:07:42.434662425 +0000 UTC m=+0.209076767 container died 97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:07:42 compute-0 systemd[1]: libpod-97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d.scope: Deactivated successfully.
Oct  3 10:07:42 compute-0 podman[424425]: 2025-10-03 10:07:42.468307943 +0000 UTC m=+0.112561136 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
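The health_status event above embeds the full edpm config_data used to (re)create node_exporter, which is effectively a declarative form of a "podman run" invocation. An illustrative mapping from that dict to CLI flags (podman_args is invented; edpm_ansible performs this translation with its own role, so treat this as a sketch of the shape, not the actual implementation):

    def podman_args(name, cfg):
        args = ['podman', 'run', '-d', '--name', name]
        if cfg.get('net') == 'host':
            args += ['--network', 'host']
        if cfg.get('privileged'):
            args.append('--privileged')
        if 'user' in cfg:
            args += ['--user', cfg['user']]
        for port in cfg.get('ports', []):
            args += ['--publish', port]
        for vol in cfg.get('volumes', []):
            args += ['--volume', vol]
        if cfg.get('healthcheck'):
            args += ['--health-cmd', cfg['healthcheck']['test']]
        return args + [cfg['image']] + list(cfg.get('command', []))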
Oct  3 10:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-69bec25eaf14463a42c9c86513fa1a417e5ac6d808b729810b92c4432cdd9061-merged.mount: Deactivated successfully.
Oct  3 10:07:42 compute-0 podman[424428]: 2025-10-03 10:07:42.481576717 +0000 UTC m=+0.124695344 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:07:42 compute-0 podman[424429]: 2025-10-03 10:07:42.493582002 +0000 UTC m=+0.133865419 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct  3 10:07:42 compute-0 podman[424412]: 2025-10-03 10:07:42.491133273 +0000 UTC m=+0.265547605 container remove 97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 10:07:42 compute-0 systemd[1]: libpod-conmon-97fb351cb0dc7ea8303057e202e4c9ac5aa378f9a493a2b444b5590da8b9355d.scope: Deactivated successfully.
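The blissful_archimedes sequence above (create, init, start, attach, died, remove, all within about 0.2 s) is the footprint of a short-lived helper container that printed "167 167", the ceph uid/gid probed by cephadm-style tooling. A minimal equivalent, assuming the same image digest is available locally and that the probe stats /var/lib/ceph (the exact command is not visible in the log):

    import subprocess

    out = subprocess.run(
        ['podman', 'run', '--rm',
         'quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0',
         'stat', '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # "167 167" on this image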
Oct  3 10:07:42 compute-0 podman[424512]: 2025-10-03 10:07:42.741571343 +0000 UTC m=+0.081127629 container create 29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:07:42 compute-0 podman[424512]: 2025-10-03 10:07:42.718947439 +0000 UTC m=+0.058503735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:07:42 compute-0 systemd[1]: Started libpod-conmon-29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33.scope.
Oct  3 10:07:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b92074ad58f267c0e6ac61421950905e5bce4ea0c4dfcde3f7898ba6f33ae6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b92074ad58f267c0e6ac61421950905e5bce4ea0c4dfcde3f7898ba6f33ae6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b92074ad58f267c0e6ac61421950905e5bce4ea0c4dfcde3f7898ba6f33ae6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b92074ad58f267c0e6ac61421950905e5bce4ea0c4dfcde3f7898ba6f33ae6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:42 compute-0 podman[424512]: 2025-10-03 10:07:42.886403741 +0000 UTC m=+0.225960057 container init 29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:07:42 compute-0 podman[424512]: 2025-10-03 10:07:42.907985393 +0000 UTC m=+0.247541679 container start 29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:07:42 compute-0 podman[424512]: 2025-10-03 10:07:42.914283504 +0000 UTC m=+0.253839800 container attach 29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:07:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 597 KiB/s wr, 44 op/s
Oct  3 10:07:43 compute-0 nova_compute[351685]: 2025-10-03 10:07:43.597 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:07:43 compute-0 nova_compute[351685]: 2025-10-03 10:07:43.599 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:07:43 compute-0 nova_compute[351685]: 2025-10-03 10:07:43.600 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:07:43 compute-0 nova_compute[351685]: 2025-10-03 10:07:43.600 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:07:43 compute-0 nova_compute[351685]: 2025-10-03 10:07:43.600 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
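The five periodic tasks above come from oslo.service's periodic_task machinery, which scans decorated methods on a PeriodicTasks subclass and fires them at their spacing interval. A hedged declaration sketch (the class name and spacing values are invented for illustration; the method names are taken from the log):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):

        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            # Abort instances stuck in BUILD past the configured timeout.
            pass

        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            # Collect per-volume I/O counters from the hypervisor.
            pass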
Oct  3 10:07:43 compute-0 modest_tesla[424528]: {
Oct  3 10:07:43 compute-0 modest_tesla[424528]:    "0": [
Oct  3 10:07:43 compute-0 modest_tesla[424528]:        {
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "devices": [
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "/dev/loop3"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            ],
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_name": "ceph_lv0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_size": "21470642176",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "name": "ceph_lv0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "tags": {
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cluster_name": "ceph",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.crush_device_class": "",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.encrypted": "0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osd_id": "0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.type": "block",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.vdo": "0"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            },
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "type": "block",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "vg_name": "ceph_vg0"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:        }
Oct  3 10:07:43 compute-0 modest_tesla[424528]:    ],
Oct  3 10:07:43 compute-0 modest_tesla[424528]:    "1": [
Oct  3 10:07:43 compute-0 modest_tesla[424528]:        {
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "devices": [
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "/dev/loop4"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            ],
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_name": "ceph_lv1",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_size": "21470642176",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "name": "ceph_lv1",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "tags": {
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cluster_name": "ceph",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.crush_device_class": "",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.encrypted": "0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osd_id": "1",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.type": "block",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.vdo": "0"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            },
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "type": "block",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "vg_name": "ceph_vg1"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:        }
Oct  3 10:07:43 compute-0 modest_tesla[424528]:    ],
Oct  3 10:07:43 compute-0 modest_tesla[424528]:    "2": [
Oct  3 10:07:43 compute-0 modest_tesla[424528]:        {
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "devices": [
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "/dev/loop5"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            ],
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_name": "ceph_lv2",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_size": "21470642176",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "name": "ceph_lv2",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "tags": {
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.cluster_name": "ceph",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.crush_device_class": "",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.encrypted": "0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osd_id": "2",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.type": "block",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:                "ceph.vdo": "0"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            },
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "type": "block",
Oct  3 10:07:43 compute-0 modest_tesla[424528]:            "vg_name": "ceph_vg2"
Oct  3 10:07:43 compute-0 modest_tesla[424528]:        }
Oct  3 10:07:43 compute-0 modest_tesla[424528]:    ]
Oct  3 10:07:43 compute-0 modest_tesla[424528]: }
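The JSON block printed by modest_tesla matches the shape of "ceph-volume lvm list --format json": a dict keyed by OSD id, each entry carrying the LV path plus a flat lv_tags string. A short consumer sketch (the capture filename is hypothetical):

    import json

    with open('ceph_volume_lvm_list.json') as f:   # hypothetical capture
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            # lv_tags is "k=v,k=v,..."; empty values are legal.
            tags = dict(t.split('=', 1) for t in lv['lv_tags'].split(','))
            print(osd_id, lv['lv_path'], tags['ceph.osd_fsid'])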
Oct  3 10:07:43 compute-0 systemd[1]: libpod-29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33.scope: Deactivated successfully.
Oct  3 10:07:43 compute-0 podman[424512]: 2025-10-03 10:07:43.770786862 +0000 UTC m=+1.110343138 container died 29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:07:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2b92074ad58f267c0e6ac61421950905e5bce4ea0c4dfcde3f7898ba6f33ae6-merged.mount: Deactivated successfully.
Oct  3 10:07:44 compute-0 podman[424512]: 2025-10-03 10:07:44.08455123 +0000 UTC m=+1.424107506 container remove 29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:07:44 compute-0 systemd[1]: libpod-conmon-29a67942e3c1f3ee397a4ad34b19670e5c362fa7f71414471faa76fa4ba35e33.scope: Deactivated successfully.
Oct  3 10:07:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 597 KiB/s wr, 44 op/s
Oct  3 10:07:44 compute-0 podman[424685]: 2025-10-03 10:07:44.903488526 +0000 UTC m=+0.043306418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:07:45 compute-0 podman[424685]: 2025-10-03 10:07:45.194841526 +0000 UTC m=+0.334659408 container create 1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:07:45 compute-0 systemd[1]: Started libpod-conmon-1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3.scope.
Oct  3 10:07:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:07:45 compute-0 podman[424685]: 2025-10-03 10:07:45.429012954 +0000 UTC m=+0.568830886 container init 1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:07:45 compute-0 podman[424685]: 2025-10-03 10:07:45.44885684 +0000 UTC m=+0.588674692 container start 1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:07:45 compute-0 peaceful_fermi[424701]: 167 167
Oct  3 10:07:45 compute-0 systemd[1]: libpod-1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3.scope: Deactivated successfully.
Oct  3 10:07:45 compute-0 podman[424685]: 2025-10-03 10:07:45.506589538 +0000 UTC m=+0.646407470 container attach 1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:07:45 compute-0 podman[424685]: 2025-10-03 10:07:45.507611491 +0000 UTC m=+0.647429363 container died 1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f38fa0772b128db9b96ae8ead0a74039699149b1a4c24509eb9b08afbd72bc4f-merged.mount: Deactivated successfully.
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:07:46
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.log', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'images']
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
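Worked numbers for the balancer bounds above: "max misplaced 0.050000" caps concurrently misplaced PGs at 5% of the total, i.e. floor(321 * 0.05) = 16 PGs for this cluster, and "prepared 0/10 changes" indicates that none of the up-to-10 upmap optimizations allowed per run were needed, since the listed pools are already balanced.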
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:07:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:07:46 compute-0 podman[424685]: 2025-10-03 10:07:46.470607651 +0000 UTC m=+1.610425513 container remove 1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_fermi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:07:46 compute-0 systemd[1]: libpod-conmon-1043bd73e2afbf9498f3072d7824a14591b4cc3bd48e739a3198e953662325c3.scope: Deactivated successfully.
Oct  3 10:07:46 compute-0 podman[424723]: 2025-10-03 10:07:46.704191581 +0000 UTC m=+0.040050823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:07:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 12 KiB/s wr, 3 op/s
Oct  3 10:07:46 compute-0 podman[424723]: 2025-10-03 10:07:46.991950846 +0000 UTC m=+0.327809998 container create b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:07:47 compute-0 nova_compute[351685]: 2025-10-03 10:07:47.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:07:47 compute-0 systemd[1]: Started libpod-conmon-b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c.scope.
Oct  3 10:07:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5173e593b7442b2db7fffd3bb8d62caa573cf3e9dcd637b282c3497af477ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5173e593b7442b2db7fffd3bb8d62caa573cf3e9dcd637b282c3497af477ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5173e593b7442b2db7fffd3bb8d62caa573cf3e9dcd637b282c3497af477ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5173e593b7442b2db7fffd3bb8d62caa573cf3e9dcd637b282c3497af477ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:07:47 compute-0 podman[424723]: 2025-10-03 10:07:47.211893759 +0000 UTC m=+0.547752931 container init b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:07:47 compute-0 podman[424723]: 2025-10-03 10:07:47.227124467 +0000 UTC m=+0.562983639 container start b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:07:47 compute-0 podman[424723]: 2025-10-03 10:07:47.236863049 +0000 UTC m=+0.572722261 container attach b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:07:47 compute-0 nova_compute[351685]: 2025-10-03 10:07:47.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]: {
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "osd_id": 1,
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "type": "bluestore"
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:    },
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "osd_id": 2,
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "type": "bluestore"
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:    },
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "osd_id": 0,
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:        "type": "bluestore"
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]:    }
Oct  3 10:07:48 compute-0 blissful_lichterman[424740]: }
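The JSON block above is the ceph-volume inventory that cephadm collects via the one-shot container b0aa9e1a...: one entry per OSD, keyed by osd_uuid, mapping each bluestore OSD to its LVM device and cluster fsid. A minimal Python sketch of consuming such a payload (the file name osd_inventory.json is hypothetical):

    import json

    # Parse a ceph-volume style inventory like the one logged above:
    # {osd_uuid: {"ceph_fsid": ..., "device": ..., "osd_id": ..., "type": ...}}
    # Assumes the JSON payload was saved to osd_inventory.json (hypothetical name).
    with open("osd_inventory.json") as fh:
        inventory = json.load(fh)

    for osd_uuid, meta in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}  {meta['type']}  {meta['device']}  cluster={meta['ceph_fsid']}")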
Oct  3 10:07:48 compute-0 systemd[1]: libpod-b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c.scope: Deactivated successfully.
Oct  3 10:07:48 compute-0 systemd[1]: libpod-b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c.scope: Consumed 1.112s CPU time.
Oct  3 10:07:48 compute-0 podman[424723]: 2025-10-03 10:07:48.344524701 +0000 UTC m=+1.680383883 container died b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 10:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b5173e593b7442b2db7fffd3bb8d62caa573cf3e9dcd637b282c3497af477ca-merged.mount: Deactivated successfully.
Oct  3 10:07:48 compute-0 podman[424773]: 2025-10-03 10:07:48.483195091 +0000 UTC m=+0.105192249 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
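The health_status=healthy events here (and periodically below) come from podman's healthcheck machinery running the 'test' command declared in each container's config_data. The same check can be invoked by hand; a sketch, using the container name from the entry above:

    import subprocess

    # Run the declared healthcheck for a container by hand; podman exits 0
    # when the check passes. Container name taken from the log entry above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")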
Oct  3 10:07:48 compute-0 podman[424723]: 2025-10-03 10:07:48.582814262 +0000 UTC m=+1.918673414 container remove b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 10:07:48 compute-0 systemd[1]: libpod-conmon-b0aa9e1afa7e828234a75cce1361aa5c311b564dc0095d0189e8dfec9037275c.scope: Deactivated successfully.
Oct  3 10:07:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:07:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:07:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:48 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cf9d5401-d164-465a-8740-1d52fae884a3 does not exist
Oct  3 10:07:48 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2b32e492-daf6-490e-b28d-56d836715d28 does not exist
Oct  3 10:07:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 0 op/s
Oct  3 10:07:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:07:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:52 compute-0 nova_compute[351685]: 2025-10-03 10:07:52.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:07:52 compute-0 nova_compute[351685]: 2025-10-03 10:07:52.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:07:52 compute-0 podman[424856]: 2025-10-03 10:07:52.847556182 +0000 UTC m=+0.090381465 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:07:52 compute-0 podman[424857]: 2025-10-03 10:07:52.861797949 +0000 UTC m=+0.103860247 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Oct  3 10:07:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:07:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1715542441' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:07:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:07:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1715542441' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
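These two audit entries are client.openstack (the RBD client at 192.168.122.10) polling cluster capacity and the quota on the volumes pool. The same mon commands can be reproduced with the ceph CLI; a sketch, assuming a reachable cluster and a keyring for client.openstack:

    import json
    import subprocess

    # Reproduce the two mon commands dispatched above via the ceph CLI.
    df = json.loads(subprocess.check_output(
        ["ceph", "--name", "client.openstack", "df", "--format", "json"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "--name", "client.openstack",
         "osd", "pool", "get-quota", "volumes", "--format", "json"]))

    print("avail bytes:", df["stats"]["total_avail_bytes"])
    print("volumes quota_max_bytes:", quota["quota_max_bytes"])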
Oct  3 10:07:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016581277064205248 of space, bias 1.0, pg target 0.49743831192615745 quantized to 32 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:07:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
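The pg_autoscaler lines above follow one shape: the raw PG target is the pool's share of raw space times its bias times the cluster-wide PG budget, here 3 OSDs x the default mon_target_pg_per_osd of 100 = 300 (e.g. for 'vms': 0.0016581 x 1.0 x 300 ~= 0.4974), and the result is quantized to a power of two no lower than the pool's minimum. A sketch of that arithmetic (simplified; the real module also applies pg_num_max and a change threshold before resizing, and the per-pool minimums below are inferred from the "quantized to" outputs):

    # Reproduce the pg_autoscaler numbers logged above (simplified model).
    def raw_pg_target(usage_ratio, bias, osd_count=3, target_pg_per_osd=100):
        # pool's share of raw space * bias * cluster PG budget
        return usage_ratio * bias * osd_count * target_pg_per_osd

    def quantize(pg_target, pg_num_min):
        # round up to a power of two, but never below the pool's minimum
        n = 1
        while n < pg_target:
            n *= 2
        return max(n, pg_num_min)

    t = raw_pg_target(0.0016581277064205248, 1.0)   # pool 'vms'
    print(t)                # ~0.49743, matching "pg target 0.497..." above
    print(quantize(t, 32))  # 32, matching "quantized to 32"
    # cephfs.cephfs.meta: bias 4.0, inferred pg_num_min 16 -> "quantized to 16"
    print(quantize(raw_pg_target(5.087256625643029e-07, 4.0), 16))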
Oct  3 10:07:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:07:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:57 compute-0 nova_compute[351685]: 2025-10-03 10:07:57.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:07:57 compute-0 nova_compute[351685]: 2025-10-03 10:07:57.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:07:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:07:59 compute-0 podman[157165]: time="2025-10-03T10:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:07:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:07:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9048 "" "Go-http-client/1.1"
Oct  3 10:08:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:01 compute-0 openstack_network_exporter[367524]: ERROR   10:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:08:01 compute-0 openstack_network_exporter[367524]: ERROR   10:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:08:01 compute-0 openstack_network_exporter[367524]: ERROR   10:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:08:01 compute-0 openstack_network_exporter[367524]: ERROR   10:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:08:01 compute-0 openstack_network_exporter[367524]: ERROR   10:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
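This error burst repeats every 30 s: appctl-style calls locate a daemon through its control socket (conventionally <rundir>/<daemon>.<pid>.ctl), and on a compute node neither ovn-northd nor a local ovsdb-server runs, so no socket exists; the datapath errors are the same story for PMD stats on a non-DPDK datapath. A sketch of that socket lookup (run directories assumed from the exporter's volume mounts logged below):

    import glob

    # Look for daemon control sockets the way appctl-style tools do:
    # <rundir>/<daemon>.<pid>.ctl. Run directories are assumptions.
    for daemon, rundir in [("ovn-northd", "/run/ovn"),
                           ("ovsdb-server", "/run/openvswitch")]:
        hits = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", hits or "no control socket files found")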
Oct  3 10:08:02 compute-0 nova_compute[351685]: 2025-10-03 10:08:02.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:02 compute-0 nova_compute[351685]: 2025-10-03 10:08:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:03 compute-0 podman[424902]: 2025-10-03 10:08:03.832736921 +0000 UTC m=+0.085986904 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter)
Oct  3 10:08:03 compute-0 podman[424903]: 2025-10-03 10:08:03.861561415 +0000 UTC m=+0.109460887 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Oct  3 10:08:03 compute-0 podman[424904]: 2025-10-03 10:08:03.901813174 +0000 UTC m=+0.132713671 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 10:08:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  3 10:08:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 10:08:07 compute-0 nova_compute[351685]: 2025-10-03 10:08:07.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:07 compute-0 nova_compute[351685]: 2025-10-03 10:08:07.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 10:08:09 compute-0 podman[424965]: 2025-10-03 10:08:09.855412552 +0000 UTC m=+0.109292441 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 10:08:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 10:08:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:12 compute-0 nova_compute[351685]: 2025-10-03 10:08:12.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:12 compute-0 nova_compute[351685]: 2025-10-03 10:08:12.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:12 compute-0 podman[424986]: 2025-10-03 10:08:12.847012035 +0000 UTC m=+0.098851136 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:08:12 compute-0 podman[424988]: 2025-10-03 10:08:12.85773869 +0000 UTC m=+0.091855544 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid)
Oct  3 10:08:12 compute-0 podman[424987]: 2025-10-03 10:08:12.872466211 +0000 UTC m=+0.108824486 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:08:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 10:08:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 10:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:08:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Oct  3 10:08:17 compute-0 nova_compute[351685]: 2025-10-03 10:08:17.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:17 compute-0 nova_compute[351685]: 2025-10-03 10:08:17.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:18 compute-0 podman[425049]: 2025-10-03 10:08:18.834390259 +0000 UTC m=+0.092669929 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:08:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:22 compute-0 nova_compute[351685]: 2025-10-03 10:08:22.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:22 compute-0 nova_compute[351685]: 2025-10-03 10:08:22.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:23 compute-0 podman[425070]: 2025-10-03 10:08:23.833888956 +0000 UTC m=+0.089101294 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:08:23 compute-0 podman[425071]: 2025-10-03 10:08:23.885019644 +0000 UTC m=+0.119091136 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, version=9.4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 10:08:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:27 compute-0 nova_compute[351685]: 2025-10-03 10:08:27.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:27 compute-0 nova_compute[351685]: 2025-10-03 10:08:27.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:29 compute-0 podman[157165]: time="2025-10-03T10:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:08:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:08:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9048 "" "Go-http-client/1.1"
Oct  3 10:08:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:31 compute-0 openstack_network_exporter[367524]: ERROR   10:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:08:31 compute-0 openstack_network_exporter[367524]: ERROR   10:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:08:31 compute-0 openstack_network_exporter[367524]: ERROR   10:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:08:31 compute-0 openstack_network_exporter[367524]: ERROR   10:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:08:31 compute-0 openstack_network_exporter[367524]: ERROR   10:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:08:32 compute-0 nova_compute[351685]: 2025-10-03 10:08:32.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:32 compute-0 nova_compute[351685]: 2025-10-03 10:08:32.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:08:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:34.320 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 10:08:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:34.322 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  3 10:08:34 compute-0 nova_compute[351685]: 2025-10-03 10:08:34.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:34 compute-0 podman[425114]: 2025-10-03 10:08:34.843131484 +0000 UTC m=+0.093227195 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct  3 10:08:34 compute-0 podman[425113]: 2025-10-03 10:08:34.859813319 +0000 UTC m=+0.104242339 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 10:08:34 compute-0 podman[425115]: 2025-10-03 10:08:34.8926125 +0000 UTC m=+0.137426872 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  3 10:08:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct  3 10:08:35 compute-0 nova_compute[351685]: 2025-10-03 10:08:35.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:08:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:36.324 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
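The DbSetCommand above is ovsdbapp's generic "update columns on one record" operation against the OVN southbound Chassis_Private table. A minimal sketch of the same update through ovsdbapp's public API, assuming an already-connected southbound API object here called sb_api (illustrative, not the agent's actual code path; the if_exists keyword mirrors the logged command but its availability depends on the ovsdbapp version):

    # Hypothetical standalone equivalent of the logged transaction.
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_set(
            'Chassis_Private',
            '41fabae1-2dc7-46e2-b697-d9133d158399',            # record UUID from the log
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),
            if_exists=True))                                    # matches if_exists=True above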
Oct  3 10:08:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct  3 10:08:37 compute-0 nova_compute[351685]: 2025-10-03 10:08:37.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:37 compute-0 nova_compute[351685]: 2025-10-03 10:08:37.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:37 compute-0 nova_compute[351685]: 2025-10-03 10:08:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:08:37 compute-0 nova_compute[351685]: 2025-10-03 10:08:37.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.371 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.372 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.387 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.456 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.457 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
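The Acquiring/acquired/released triplets around "compute_resources" come from oslo.concurrency's lock helpers, which time how long a caller waited for and then held a named lock. A minimal sketch of the underlying pattern (nova's own code reaches this through decorator wrappers; the body here is a placeholder):

    from oslo_concurrency import lockutils

    # Same acquire/release bookkeeping as the log lines above: the
    # waited/held durations are measured around this critical section.
    with lockutils.lock('compute_resources'):
        pass  # resource-claim bookkeeping would run here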
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.467 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.467 2 INFO nova.compute.claims [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.608 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:08:38 compute-0 nova_compute[351685]: 2025-10-03 10:08:38.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:08:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s wr, 0 op/s
Oct  3 10:08:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:08:39 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1119369705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.031 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
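nova's RBD image backend sizes the cluster with the ceph df call logged above. A standalone sketch of the same probe, assuming ceph's documented JSON layout with a top-level "stats" object carrying total/avail byte counters:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('avail: %.1f GiB' % (stats['total_avail_bytes'] / 1024 ** 3))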
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.038 2 DEBUG nova.compute.provider_tree [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.070 2 DEBUG nova.scheduler.client.report [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
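Worked example: placement turns each inventory record above into an effective capacity of (total - reserved) * allocation_ratio, so the figures logged here admit roughly the following headroom:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2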
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.125 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.126 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.170 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.170 2 DEBUG nova.network.neutron [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.192 2 INFO nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.234 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.314 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.316 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.317 2 INFO nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Creating image(s)#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.374 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.412 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.449 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.457 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.523 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
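The two lines above show oslo's prlimit wrapper (capping address space at 1073741824 bytes = 1 GiB and CPU time at 30 s) around a qemu-img probe of the cached base image. The probe itself, minus the limits, is just:

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e'
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--force-share', '--output=json', base]))
    print(info['format'], info['virtual-size'])   # format string, size in bytes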
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.524 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "8123da205344dbbb79d5d821c9749dc540280b1e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.525 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.526 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "8123da205344dbbb79d5d821c9749dc540280b1e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.560 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.566 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e 10f21e57-50ad-48e0-a664-66fd8affbe73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.613 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.613 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.614 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:08:39 compute-0 nova_compute[351685]: 2025-10-03 10:08:39.996 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e 10f21e57-50ad-48e0-a664-66fd8affbe73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.126 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] resizing rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
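Between the rbd import at 10:08:39.566 and the resize logged above, the backend pushes the cached base image into the vms pool and grows it to the flavor's root size (1073741824 bytes = 1 GiB). Per the rbd_utils.py source path in the log, the resize runs in-process through the librbd Python binding rather than the CLI; a hedged standalone equivalent:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '10f21e57-50ad-48e0-a664-66fd8affbe73_disk') as image:
                image.resize(1073741824)   # bytes, matching the log line
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()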
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.754 2 DEBUG nova.objects.instance [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'migration_context' on Instance uuid 10f21e57-50ad-48e0-a664-66fd8affbe73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:08:40 compute-0 podman[425360]: 2025-10-03 10:08:40.803866521 +0000 UTC m=+0.063323479 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.817 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.855 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.863 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.925 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.926 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.927 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.928 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.959 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:40 compute-0 nova_compute[351685]: 2025-10-03 10:08:40.966 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:08:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 206 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 305 KiB/s wr, 1 op/s
Oct  3 10:08:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:41 compute-0 nova_compute[351685]: 2025-10-03 10:08:41.486 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:08:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:41.598 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:41.600 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:08:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:41.602 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:08:41 compute-0 nova_compute[351685]: 2025-10-03 10:08:41.697 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 10:08:41 compute-0 nova_compute[351685]: 2025-10-03 10:08:41.698 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Ensure instance console log exists: /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 10:08:41 compute-0 nova_compute[351685]: 2025-10-03 10:08:41.699 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:41 compute-0 nova_compute[351685]: 2025-10-03 10:08:41.699 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:08:41 compute-0 nova_compute[351685]: 2025-10-03 10:08:41.699 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.269 2 DEBUG nova.network.neutron [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Successfully updated port: b4892d0b-79ef-407a-9e1d-ac886b07daba _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.290 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.291 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquired lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.292 2 DEBUG nova.network.neutron [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.390 2 DEBUG nova.compute.manager [req-51043bdb-b1d7-44a5-83e2-21d0540ae955 req-252dbd78-1c57-425f-840b-f49d6ab4143d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received event network-changed-b4892d0b-79ef-407a-9e1d-ac886b07daba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.391 2 DEBUG nova.compute.manager [req-51043bdb-b1d7-44a5-83e2-21d0540ae955 req-252dbd78-1c57-425f-840b-f49d6ab4143d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Refreshing instance network info cache due to event network-changed-b4892d0b-79ef-407a-9e1d-ac886b07daba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.391 2 DEBUG oslo_concurrency.lockutils [req-51043bdb-b1d7-44a5-83e2-21d0540ae955 req-252dbd78-1c57-425f-840b-f49d6ab4143d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.452 2 DEBUG nova.network.neutron [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.522 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updating instance_info_cache with network_info: [{"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
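The network_info payload in the cache update above is plain JSON, one VIF per list entry with nested subnets and per-IP floating attachments. Pulling the addresses out of such an entry is straightforward (helper name is illustrative):

    def addresses(network_info):
        """Yield (fixed_ip, [floating_ips]) pairs from a nova network_info list."""
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield ip['address'], [f['address'] for f in ip.get('floating_ips', [])]

    # For the entry logged above: ('192.168.0.19', ['192.168.122.180'])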
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.545 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.546 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.546 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.546 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.588 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.588 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.588 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.588 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:08:42 compute-0 nova_compute[351685]: 2025-10-03 10:08:42.589 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:08:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 206 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 305 KiB/s wr, 1 op/s
Oct  3 10:08:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:08:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086076070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.084 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.229 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.230 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.230 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.239 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.239 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.239 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.246 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.246 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.247 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.452972) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486123453007, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1173, "num_deletes": 251, "total_data_size": 1773761, "memory_usage": 1800032, "flush_reason": "Manual Compaction"}
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486123481730, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1745936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24466, "largest_seqno": 25638, "table_properties": {"data_size": 1740267, "index_size": 3064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11948, "raw_average_key_size": 19, "raw_value_size": 1728979, "raw_average_value_size": 2862, "num_data_blocks": 137, "num_entries": 604, "num_filter_entries": 604, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759486007, "oldest_key_time": 1759486007, "file_creation_time": 1759486123, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 28866 microseconds, and 5292 cpu microseconds.
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.481832) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1745936 bytes OK
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.481859) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.484935) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.484962) EVENT_LOG_v1 {"time_micros": 1759486123484953, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.484987) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1768395, prev total WAL file size 1768395, number of live WAL files 2.
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.486307) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1705KB)], [56(7410KB)]
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486123486361, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9334135, "oldest_snapshot_seqno": -1}
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4670 keys, 7594727 bytes, temperature: kUnknown
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486123567008, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7594727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7563090, "index_size": 18855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 117074, "raw_average_key_size": 25, "raw_value_size": 7478128, "raw_average_value_size": 1601, "num_data_blocks": 778, "num_entries": 4670, "num_filter_entries": 4670, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759486123, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.567286) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7594727 bytes
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.602160) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.6 rd, 94.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.2 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.7) write-amplify(4.3) OK, records in: 5184, records dropped: 514 output_compression: NoCompression
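The amplification figures in the compaction summary above can be reproduced from the byte counts in the surrounding events: the newly flushed L0 table from job 29, input_data_size from the compaction_started event, and the L6 output file from job 30:

    l0_in    = 1_745_936   # file 58, the newly flushed L0 table
    total_in = 9_334_135   # input_data_size: file 58 + old L6 file 56
    out      = 7_594_727   # file 59, the compacted L6 output
    print(round(out / l0_in, 1))               # 4.3 -> write-amplify(4.3)
    print(round((total_in + out) / l0_in, 1))  # 9.7 -> read-write-amplify(9.7)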
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.602215) EVENT_LOG_v1 {"time_micros": 1759486123602191, "job": 30, "event": "compaction_finished", "compaction_time_micros": 80719, "compaction_time_cpu_micros": 20113, "output_level": 6, "num_output_files": 1, "total_output_size": 7594727, "num_input_records": 5184, "num_output_records": 4670, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486123603012, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486123605872, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.486108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.606065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.606074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.606078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.606082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:08:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:08:43.606086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
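The rocksdb EVENT_LOG_v1 records in this run are JSON after a fixed prefix, so flush/compaction activity in a mon log like this one can be summarised with a small filter (a sketch assuming journal lines shaped like the ones above; feeding this section through it would report jobs 29 and 30):

    import json
    import re
    import sys

    pat = re.compile(r'EVENT_LOG_v1 ({.*})')
    for line in sys.stdin:
        m = pat.search(line)
        if m:
            ev = json.loads(m.group(1))
            if ev.get('event') in ('flush_finished', 'compaction_finished'):
                print(ev['job'], ev['event'], ev.get('total_output_size'))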
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.705 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.706 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3459MB free_disk=59.885379791259766GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.706 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.707 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.819 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.819 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 5b008829-2c76-4e40-b9e6-0e3d73095522 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.819 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance cd0be179-1941-400f-a1e6-8ee6243ee71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.819 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 10f21e57-50ad-48e0-a664-66fd8affbe73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.820 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.820 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:08:43 compute-0 podman[425532]: 2025-10-03 10:08:43.867378957 +0000 UTC m=+0.105776958 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:08:43 compute-0 podman[425534]: 2025-10-03 10:08:43.875993843 +0000 UTC m=+0.105078847 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 10:08:43 compute-0 podman[425533]: 2025-10-03 10:08:43.899531437 +0000 UTC m=+0.133383703 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:08:43 compute-0 nova_compute[351685]: 2025-10-03 10:08:43.920 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:08:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:08:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2747678120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.406 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.422 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.444 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.477 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.478 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.643 2 DEBUG nova.network.neutron [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Updating instance_info_cache with network_info: [{"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.660 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.661 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.661 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.661 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.669 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Releasing lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.670 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Instance network_info: |[{"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.670 2 DEBUG oslo_concurrency.lockutils [req-51043bdb-b1d7-44a5-83e2-21d0540ae955 req-252dbd78-1c57-425f-840b-f49d6ab4143d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.671 2 DEBUG nova.network.neutron [req-51043bdb-b1d7-44a5-83e2-21d0540ae955 req-252dbd78-1c57-425f-840b-f49d6ab4143d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Refreshing network info cache for port b4892d0b-79ef-407a-9e1d-ac886b07daba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.675 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Start _get_guest_xml network_info=[{"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 1, 'encrypted': False, 'encryption_options': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.700 2 WARNING nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.705 2 DEBUG nova.virt.libvirt.host [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.706 2 DEBUG nova.virt.libvirt.host [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.709 2 DEBUG nova.virt.libvirt.host [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.709 2 DEBUG nova.virt.libvirt.host [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.710 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.710 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T10:00:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='ada739ee-222b-4269-8d29-62bea534173e',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:00:30Z,direct_url=<?>,disk_format='qcow2',id=37f03e8a-3aed-46a5-8219-fc87e355127e,min_disk=0,min_ram=0,name='cirros',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:00:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.710 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.711 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.711 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.711 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.711 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.712 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.712 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.712 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.712 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.712 2 DEBUG nova.virt.hardware [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.715 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:08:44 compute-0 nova_compute[351685]: 2025-10-03 10:08:44.735 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:08:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 234 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Oct  3 10:08:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:08:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583919322' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:08:45 compute-0 nova_compute[351685]: 2025-10-03 10:08:45.197 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:08:45 compute-0 nova_compute[351685]: 2025-10-03 10:08:45.198 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:08:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:08:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810544996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:08:45 compute-0 nova_compute[351685]: 2025-10-03 10:08:45.792 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:08:45 compute-0 nova_compute[351685]: 2025-10-03 10:08:45.827 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:08:45 compute-0 nova_compute[351685]: 2025-10-03 10:08:45.837 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:08:46
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'backups']
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:08:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:08:46 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595633580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.317 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.320 2 DEBUG nova.virt.libvirt.vif [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5',id=5,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-k650uwcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:08:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTQ4MDAwOTU1OTMwNDYzMTU3Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgY
mVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIg
dGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcW
Oct  3 10:08:46 compute-0 nova_compute[351685]: Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTQ4MDAwOTU1OTMwNDYzMTU3Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=10f21e57-50ad-48e0-a664-66fd8affbe73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.321 2 DEBUG nova.network.os_vif_util [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.322 2 DEBUG nova.network.os_vif_util [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:2d:c9,bridge_name='br-int',has_traffic_filtering=True,id=b4892d0b-79ef-407a-9e1d-ac886b07daba,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4892d0b-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.323 2 DEBUG nova.objects.instance [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'pci_devices' on Instance uuid 10f21e57-50ad-48e0-a664-66fd8affbe73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.341 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] End _get_guest_xml xml=<domain type="kvm">
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <uuid>10f21e57-50ad-48e0-a664-66fd8affbe73</uuid>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <name>instance-00000005</name>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <memory>524288</memory>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <metadata>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <nova:name>vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5</nova:name>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 10:08:44</nova:creationTime>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <nova:flavor name="m1.small">
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <nova:memory>512</nova:memory>
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <nova:ephemeral>1</nova:ephemeral>
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <nova:user uuid="2f408449ba0f42fcb69f92dbf541f2e3">admin</nova:user>
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <nova:project uuid="ee75a4dc6ade43baab6ee923c9cf4cdf">admin</nova:project>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="37f03e8a-3aed-46a5-8219-fc87e355127e"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <nova:port uuid="b4892d0b-79ef-407a-9e1d-ac886b07daba">
Oct  3 10:08:46 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="192.168.0.248" ipVersion="4"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  </metadata>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <system>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <entry name="serial">10f21e57-50ad-48e0-a664-66fd8affbe73</entry>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <entry name="uuid">10f21e57-50ad-48e0-a664-66fd8affbe73</entry>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </system>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <os>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  </os>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <features>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <apic/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  </features>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  </clock>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  </cpu>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  <devices>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/10f21e57-50ad-48e0-a664-66fd8affbe73_disk">
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </source>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/10f21e57-50ad-48e0-a664-66fd8affbe73_disk.eph0">
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </source>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <target dev="vdb" bus="virtio"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/10f21e57-50ad-48e0-a664-66fd8affbe73_disk.config">
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </source>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:08:46 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:f6:2d:c9"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <target dev="tapb4892d0b-79"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </interface>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73/console.log" append="off"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </serial>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <video>
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </video>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </rng>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 10:08:46 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 10:08:46 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 10:08:46 compute-0 nova_compute[351685]:  </devices>
Oct  3 10:08:46 compute-0 nova_compute[351685]: </domain>
Oct  3 10:08:46 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
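[editor's note] The block ending here is the complete guest definition nova generated before defining the domain. Once the domain exists, the same XML can be read back from libvirt for comparison; a minimal libvirt-python sketch (the domain name instance-00000005 is inferred from the systemd-machined "qemu-5-instance-00000005" line later in this log, and is an assumption):

    import libvirt  # python3-libvirt bindings

    # Connect to the system libvirtd on the compute host.
    conn = libvirt.open('qemu:///system')
    try:
        # Nova names domains instance-XXXXXXXX; machined shows qemu-5-instance-00000005.
        dom = conn.lookupByName('instance-00000005')
        # XMLDesc(0) returns the live domain XML, comparable to the dump above.
        print(dom.XMLDesc(0))
    finally:
        conn.close()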
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.342 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Preparing to wait for external event network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.342 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.342 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.343 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.343 2 DEBUG nova.virt.libvirt.vif [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T10:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5',id=5,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-k650uwcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T10:08:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTQ4MDAwOTU1OTMwNDYzMTU3Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29
scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZW
QgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykp
Oct  3 10:08:46 compute-0 nova_compute[351685]: YXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTQ4MDAwOTU1OTMwNDYzMTU3Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=10f21e57-50ad-48e0-a664-66fd8affbe73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.344 2 DEBUG nova.network.os_vif_util [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.344 2 DEBUG nova.network.os_vif_util [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:2d:c9,bridge_name='br-int',has_traffic_filtering=True,id=b4892d0b-79ef-407a-9e1d-ac886b07daba,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4892d0b-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.345 2 DEBUG os_vif [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:2d:c9,bridge_name='br-int',has_traffic_filtering=True,id=b4892d0b-79ef-407a-9e1d-ac886b07daba,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4892d0b-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4892d0b-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.351 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4892d0b-79, col_values=(('external_ids', {'iface-id': 'b4892d0b-79ef-407a-9e1d-ac886b07daba', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:2d:c9', 'vm-uuid': '10f21e57-50ad-48e0-a664-66fd8affbe73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:46 compute-0 NetworkManager[45015]: <info>  [1759486126.3550] manager: (tapb4892d0b-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.362 2 INFO os_vif [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:2d:c9,bridge_name='br-int',has_traffic_filtering=True,id=b4892d0b-79ef-407a-9e1d-ac886b07daba,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4892d0b-79')#033[00m
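[editor's note] The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp operations that os-vif runs against the local ovsdb. A minimal sketch of the same sequence issued with ovsdbapp directly, under the assumption of the default local socket path and a 10 s timeout (neither appears in this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same shape as the logged transaction: add the tap port to br-int,
    # then attach the Neutron port id to the Interface via external_ids.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapb4892d0b-79', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapb4892d0b-79',
            ('external_ids',
             {'iface-id': 'b4892d0b-79ef-407a-9e1d-ac886b07daba'})))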
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.411 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.411 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.411 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.412 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No VIF found with MAC fa:16:3e:f6:2d:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.412 2 INFO nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Using config drive#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.441 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.451 2 DEBUG nova.network.neutron [req-51043bdb-b1d7-44a5-83e2-21d0540ae955 req-252dbd78-1c57-425f-840b-f49d6ab4143d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Updated VIF entry in instance network info cache for port b4892d0b-79ef-407a-9e1d-ac886b07daba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.452 2 DEBUG nova.network.neutron [req-51043bdb-b1d7-44a5-83e2-21d0540ae955 req-252dbd78-1c57-425f-840b-f49d6ab4143d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Updating instance_info_cache with network_info: [{"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.470 2 DEBUG oslo_concurrency.lockutils [req-51043bdb-b1d7-44a5-83e2-21d0540ae955 req-252dbd78-1c57-425f-840b-f49d6ab4143d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:08:46 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:08:46.320 2 DEBUG nova.virt.libvirt.vif [None req-1e79f1b5-3141-41 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  3 10:08:46 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:08:46.343 2 DEBUG nova.virt.libvirt.vif [None req-1e79f1b5-3141-41 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
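[editor's note] These two rsyslogd lines report that the ~8192-byte nova.virt.libvirt.vif messages (the huge Instance(...) dumps above) exceeded the configured 8096-byte limit and were truncated, which is consistent with the way the base64 user_data dump above appears split with gaps. If the full messages are wanted in syslog, the limit has to be raised early in /etc/rsyslog.conf, before any input module is loaded; for example (64k is an illustrative value, not taken from this host):

    # /etc/rsyslog.conf -- must precede module()/$ModLoad lines
    $MaxMessageSize 64k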
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.752 2 INFO nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Creating config drive at /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73/disk.config#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.767 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpegpf3we7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.905 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpegpf3we7" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.960 2 DEBUG nova.storage.rbd_utils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 10:08:46 compute-0 nova_compute[351685]: 2025-10-03 10:08:46.969 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73/disk.config 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:08:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.231 2 DEBUG oslo_concurrency.processutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73/disk.config 10f21e57-50ad-48e0-a664-66fd8affbe73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.232 2 INFO nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Deleting local config drive /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73/disk.config because it was imported into RBD.#033[00m
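[editor's note] After this point the config drive exists only as the RBD image vms/10f21e57-50ad-48e0-a664-66fd8affbe73_disk.config. A minimal python-rbd sketch to confirm the import, reusing the client id and conf path from the logged "rbd import" command:

    import rados
    import rbd

    # Same credentials as the logged "rbd import" command (--id openstack).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            names = rbd.RBD().list(ioctx)
            # Expect the freshly imported config-drive image in the pool.
            print('10f21e57-50ad-48e0-a664-66fd8affbe73_disk.config' in names)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()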
Oct  3 10:08:47 compute-0 NetworkManager[45015]: <info>  [1759486127.3037] manager: (tapb4892d0b-79): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct  3 10:08:47 compute-0 kernel: tapb4892d0b-79: entered promiscuous mode
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:47 compute-0 ovn_controller[88471]: 2025-10-03T10:08:47Z|00052|binding|INFO|Claiming lport b4892d0b-79ef-407a-9e1d-ac886b07daba for this chassis.
Oct  3 10:08:47 compute-0 ovn_controller[88471]: 2025-10-03T10:08:47Z|00053|binding|INFO|b4892d0b-79ef-407a-9e1d-ac886b07daba: Claiming fa:16:3e:f6:2d:c9 192.168.0.248
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.331 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:2d:c9 192.168.0.248'], port_security=['fa:16:3e:f6:2d:c9 192.168.0.248'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e77v6yplftfd-y3rllwivvz5w-5iuabdtdgdic-port-txgbpgprlo7w', 'neutron:cidrs': '192.168.0.248/24', 'neutron:device_id': '10f21e57-50ad-48e0-a664-66fd8affbe73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e77v6yplftfd-y3rllwivvz5w-5iuabdtdgdic-port-txgbpgprlo7w', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=b4892d0b-79ef-407a-9e1d-ac886b07daba) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.333 284328 INFO neutron.agent.ovn.metadata.agent [-] Port b4892d0b-79ef-407a-9e1d-ac886b07daba in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 bound to our chassis#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.334 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0#033[00m
Oct  3 10:08:47 compute-0 ovn_controller[88471]: 2025-10-03T10:08:47Z|00054|binding|INFO|Setting lport b4892d0b-79ef-407a-9e1d-ac886b07daba ovn-installed in OVS
Oct  3 10:08:47 compute-0 ovn_controller[88471]: 2025-10-03T10:08:47Z|00055|binding|INFO|Setting lport b4892d0b-79ef-407a-9e1d-ac886b07daba up in Southbound
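[editor's note] ovn-controller has now claimed the lport and marked it up in the Southbound DB. One way to confirm the binding from the node is to query Port_Binding; a sketch wrapping ovn-sbctl in subprocess (this assumes ovn-sbctl can reach the SB DB with its default connection settings, which many deployments override with --db):

    import subprocess

    LPORT = 'b4892d0b-79ef-407a-9e1d-ac886b07daba'

    # "up" should be true and "chassis" non-empty once the claim succeeds.
    out = subprocess.run(
        ['ovn-sbctl', '--bare', '--columns=up,chassis',
         'find', 'Port_Binding', f'logical_port={LPORT}'],
        capture_output=True, text=True, check=True)
    print(out.stdout)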
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.351 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe108a1-95dc-44a8-a264-daa5270d26d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:08:47 compute-0 systemd-machined[137653]: New machine qemu-5-instance-00000005.
Oct  3 10:08:47 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct  3 10:08:47 compute-0 systemd-udevd[425776]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.397 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[baa1c112-1b9f-4a06-b173-9d3deb5da8c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:08:47 compute-0 NetworkManager[45015]: <info>  [1759486127.4004] device (tapb4892d0b-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 10:08:47 compute-0 NetworkManager[45015]: <info>  [1759486127.4018] device (tapb4892d0b-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.403 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[22d62162-1cae-4b94-9852-c5fbefb35141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.441 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8989ae-f41e-4fbd-b9e6-fe5abd9a9f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.465 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[37bae372-9399-49c3-ac7b-aa38580de7b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 832, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 32866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425786, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.484 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[918c38fb-18a7-4f39-80df-a472662ccddd]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489551, 'tstamp': 489551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425788, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489554, 'tstamp': 489554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425788, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.486 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.493 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.493 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.494 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:08:47 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:08:47.494 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.768 2 DEBUG nova.compute.manager [req-828d5f0b-17cb-4aed-ba6a-cef3360a1b15 req-43f19cd1-b3ac-45e4-aed9-e8ccc990619a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received event network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.769 2 DEBUG oslo_concurrency.lockutils [req-828d5f0b-17cb-4aed-ba6a-cef3360a1b15 req-43f19cd1-b3ac-45e4-aed9-e8ccc990619a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.770 2 DEBUG oslo_concurrency.lockutils [req-828d5f0b-17cb-4aed-ba6a-cef3360a1b15 req-43f19cd1-b3ac-45e4-aed9-e8ccc990619a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.771 2 DEBUG oslo_concurrency.lockutils [req-828d5f0b-17cb-4aed-ba6a-cef3360a1b15 req-43f19cd1-b3ac-45e4-aed9-e8ccc990619a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:08:47 compute-0 nova_compute[351685]: 2025-10-03 10:08:47.771 2 DEBUG nova.compute.manager [req-828d5f0b-17cb-4aed-ba6a-cef3360a1b15 req-43f19cd1-b3ac-45e4-aed9-e8ccc990619a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Processing event network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 10:08:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.088 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486129.087584, 10f21e57-50ad-48e0-a664-66fd8affbe73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.089 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] VM Started (Lifecycle Event)#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.092 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.097 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.103 2 INFO nova.virt.libvirt.driver [-] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Instance spawned successfully.#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.103 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.111 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.116 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.125 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.126 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.126 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.126 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.127 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.127 2 DEBUG nova.virt.libvirt.driver [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.132 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.132 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486129.0877433, 10f21e57-50ad-48e0-a664-66fd8affbe73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.133 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] VM Paused (Lifecycle Event)#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.155 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.164 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486129.0971236, 10f21e57-50ad-48e0-a664-66fd8affbe73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.164 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] VM Resumed (Lifecycle Event)#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.182 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.188 2 INFO nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Took 9.87 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.189 2 DEBUG nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.191 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.241 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 10:08:49 compute-0 podman[425873]: 2025-10-03 10:08:49.252957233 +0000 UTC m=+0.121971576 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.288 2 INFO nova.compute.manager [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Took 10.86 seconds to build instance.
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.308 2 DEBUG oslo_concurrency.lockutils [None req-1e79f1b5-3141-41cf-9439-bd156b88c135 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.834 2 DEBUG nova.compute.manager [req-f21dbb50-fbf6-4243-ba53-3e1ae89b3037 req-8450715b-2775-4636-9a10-b029cdf1c11a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received event network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.835 2 DEBUG oslo_concurrency.lockutils [req-f21dbb50-fbf6-4243-ba53-3e1ae89b3037 req-8450715b-2775-4636-9a10-b029cdf1c11a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.836 2 DEBUG oslo_concurrency.lockutils [req-f21dbb50-fbf6-4243-ba53-3e1ae89b3037 req-8450715b-2775-4636-9a10-b029cdf1c11a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.836 2 DEBUG oslo_concurrency.lockutils [req-f21dbb50-fbf6-4243-ba53-3e1ae89b3037 req-8450715b-2775-4636-9a10-b029cdf1c11a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.837 2 DEBUG nova.compute.manager [req-f21dbb50-fbf6-4243-ba53-3e1ae89b3037 req-8450715b-2775-4636-9a10-b029cdf1c11a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] No waiting events found dispatching network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  3 10:08:49 compute-0 nova_compute[351685]: 2025-10-03 10:08:49.837 2 WARNING nova.compute.manager [req-f21dbb50-fbf6-4243-ba53-3e1ae89b3037 req-8450715b-2775-4636-9a10-b029cdf1c11a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received unexpected event network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba for instance with vm_state active and task_state None.
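[note] The warning above is benign: the network-vif-plugged event for port b4892d0b arrived after the instance was already active, so no waiter remained to consume it. The dispatch pattern reduces to a registry of per-(instance, event) waiters; a self-contained sketch with hypothetical names, not Nova's actual code:

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        ev = threading.Event()
        waiters[(instance_uuid, event_name)] = ev
        return ev  # the spawning thread blocks on ev.wait()

    def pop_instance_event(instance_uuid, event_name):
        ev = waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            # Late arrival: nobody is waiting any more (the WARNING above).
            print(f"Received unexpected event {event_name}")
        else:
            ev.set()  # wakes whoever blocked on ev.wait()

    pop_instance_event("10f21e57", "network-vif-plugged")  # no waiter: warning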
Oct  3 10:08:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:08:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:08:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:08:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:08:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:08:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:08:50 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5db4c981-e495-47ad-ad6d-8dc34dc687af does not exist
Oct  3 10:08:50 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d19b6b7f-9fc6-4b4a-849e-2ff0fc8bda12 does not exist
Oct  3 10:08:50 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cfd834a1-4318-4945-8b5e-33543222ed87 does not exist
Oct  3 10:08:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:08:50 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:08:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:08:50 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:08:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:08:50 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:08:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:08:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:08:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
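[note] The mon_command audit entries above are the cephadm mgr module driving the monitor ("config generate-minimal-conf", "auth get", "config-key set"). The same calls can be issued from the python3-rados bindings; a sketch assuming /etc/ceph/ceph.conf and an admin keyring on the node:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    for cmd in ({'prefix': 'config generate-minimal-conf'},
                {'prefix': 'auth get', 'entity': 'client.bootstrap-osd'}):
        # mon_command takes a JSON command string and an input buffer,
        # and returns (retcode, output bytes, status string)
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        print(cmd['prefix'], '->', ret, errs)
    cluster.shutdown()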
Oct  3 10:08:50 compute-0 podman[426138]: 2025-10-03 10:08:50.808479478 +0000 UTC m=+0.064182777 container create 619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_raman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:08:50 compute-0 systemd[1]: Started libpod-conmon-619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6.scope.
Oct  3 10:08:50 compute-0 podman[426138]: 2025-10-03 10:08:50.785718939 +0000 UTC m=+0.041422258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:08:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:08:50 compute-0 podman[426138]: 2025-10-03 10:08:50.938661626 +0000 UTC m=+0.194364945 container init 619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:08:50 compute-0 podman[426138]: 2025-10-03 10:08:50.949355519 +0000 UTC m=+0.205058818 container start 619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_raman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:08:50 compute-0 podman[426138]: 2025-10-03 10:08:50.954567995 +0000 UTC m=+0.210271294 container attach 619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_raman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:08:50 compute-0 elated_raman[426154]: 167 167
Oct  3 10:08:50 compute-0 systemd[1]: libpod-619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6.scope: Deactivated successfully.
Oct  3 10:08:50 compute-0 podman[426138]: 2025-10-03 10:08:50.958416219 +0000 UTC m=+0.214119518 container died 619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_raman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:08:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Oct  3 10:08:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf9e8b4b94a5791f1a06d2227181bc0e32b6818246ca60b167ffbef25523dff-merged.mount: Deactivated successfully.
Oct  3 10:08:51 compute-0 podman[426138]: 2025-10-03 10:08:51.01933845 +0000 UTC m=+0.275041749 container remove 619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:08:51 compute-0 systemd[1]: libpod-conmon-619e0f7c95fa3ea62585015596c73c4a8fbed3c5f6606c016547f7c5a57a12d6.scope: Deactivated successfully.
Oct  3 10:08:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:51 compute-0 podman[426177]: 2025-10-03 10:08:51.224530531 +0000 UTC m=+0.053944209 container create 5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_germain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:08:51 compute-0 systemd[1]: Started libpod-conmon-5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d.scope.
Oct  3 10:08:51 compute-0 podman[426177]: 2025-10-03 10:08:51.206609207 +0000 UTC m=+0.036022905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:08:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9430a27834442b60141702ae0ca945bde133b4ab2b9a926f1012e1141715e7ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9430a27834442b60141702ae0ca945bde133b4ab2b9a926f1012e1141715e7ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9430a27834442b60141702ae0ca945bde133b4ab2b9a926f1012e1141715e7ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9430a27834442b60141702ae0ca945bde133b4ab2b9a926f1012e1141715e7ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9430a27834442b60141702ae0ca945bde133b4ab2b9a926f1012e1141715e7ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:51 compute-0 podman[426177]: 2025-10-03 10:08:51.351294561 +0000 UTC m=+0.180708239 container init 5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_germain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 10:08:51 compute-0 nova_compute[351685]: 2025-10-03 10:08:51.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:51 compute-0 podman[426177]: 2025-10-03 10:08:51.368945635 +0000 UTC m=+0.198359303 container start 5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_germain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct  3 10:08:51 compute-0 podman[426177]: 2025-10-03 10:08:51.374517314 +0000 UTC m=+0.203930992 container attach 5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_germain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:08:52 compute-0 nova_compute[351685]: 2025-10-03 10:08:52.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:52 compute-0 zealous_germain[426192]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:08:52 compute-0 zealous_germain[426192]: --> relative data size: 1.0
Oct  3 10:08:52 compute-0 zealous_germain[426192]: --> All data devices are unavailable
Oct  3 10:08:52 compute-0 systemd[1]: libpod-5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d.scope: Deactivated successfully.
Oct  3 10:08:52 compute-0 systemd[1]: libpod-5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d.scope: Consumed 1.104s CPU time.
Oct  3 10:08:52 compute-0 podman[426221]: 2025-10-03 10:08:52.624118201 +0000 UTC m=+0.041786779 container died 5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 10:08:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9430a27834442b60141702ae0ca945bde133b4ab2b9a926f1012e1141715e7ae-merged.mount: Deactivated successfully.
Oct  3 10:08:52 compute-0 podman[426221]: 2025-10-03 10:08:52.698974838 +0000 UTC m=+0.116643406 container remove 5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 10:08:52 compute-0 systemd[1]: libpod-conmon-5823017f8f248a9b2f6a3c25178cbfadf2434d40ee2c1d5c37f23795fa346d5d.scope: Deactivated successfully.
Oct  3 10:08:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.1 MiB/s wr, 49 op/s
Oct  3 10:08:53 compute-0 podman[426375]: 2025-10-03 10:08:53.627664349 +0000 UTC m=+0.058644219 container create 30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 10:08:53 compute-0 podman[426375]: 2025-10-03 10:08:53.602410541 +0000 UTC m=+0.033390431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:08:53 compute-0 systemd[1]: Started libpod-conmon-30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c.scope.
Oct  3 10:08:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:08:53 compute-0 podman[426375]: 2025-10-03 10:08:53.764409338 +0000 UTC m=+0.195389228 container init 30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:08:53 compute-0 podman[426375]: 2025-10-03 10:08:53.784838643 +0000 UTC m=+0.215818513 container start 30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:08:53 compute-0 podman[426375]: 2025-10-03 10:08:53.788992595 +0000 UTC m=+0.219972485 container attach 30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:08:53 compute-0 distracted_chaplygin[426390]: 167 167
Oct  3 10:08:53 compute-0 systemd[1]: libpod-30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c.scope: Deactivated successfully.
Oct  3 10:08:53 compute-0 podman[426375]: 2025-10-03 10:08:53.794039227 +0000 UTC m=+0.225019097 container died 30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f705b09c43c71b1e9033d0bce00b1a06c23cc80183bdad8520c8443eb9208f3-merged.mount: Deactivated successfully.
Oct  3 10:08:53 compute-0 podman[426375]: 2025-10-03 10:08:53.863339236 +0000 UTC m=+0.294319106 container remove 30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:08:53 compute-0 systemd[1]: libpod-conmon-30141395cc202a57e89b800c4ea52d95678198e650ff90dd1e3bf1b5903d2f9c.scope: Deactivated successfully.
Oct  3 10:08:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:08:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527882352' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:08:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:08:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527882352' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:08:54 compute-0 podman[426409]: 2025-10-03 10:08:54.036356047 +0000 UTC m=+0.105626154 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, config_id=edpm, maintainer=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.component=ubi9-container, version=9.4)
Oct  3 10:08:54 compute-0 podman[426408]: 2025-10-03 10:08:54.03643713 +0000 UTC m=+0.113312040 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
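[note] The two health_status=healthy entries come from podman's periodic healthcheck timers running the configured test commands (/openstack/healthcheck ...). The same state can be read back on demand; a sketch using podman inspect, assuming the container names from the log:

    import json
    import subprocess

    for name in ('ceilometer_agent_ipmi', 'kepler', 'podman_exporter'):
        out = subprocess.run(
            ['podman', 'inspect', '--format', '{{json .State.Health}}', name],
            capture_output=True, text=True, check=True).stdout
        health = json.loads(out)
        # Status/FailingStreak mirror health_status / health_failing_streak
        print(name, health.get('Status'),
              'failing streak:', health.get('FailingStreak'))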
Oct  3 10:08:54 compute-0 podman[426453]: 2025-10-03 10:08:54.105019766 +0000 UTC m=+0.057564404 container create 25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:08:54 compute-0 systemd[1]: Started libpod-conmon-25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7.scope.
Oct  3 10:08:54 compute-0 podman[426453]: 2025-10-03 10:08:54.078890899 +0000 UTC m=+0.031435567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:08:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a736dd3bff9237ece0d6584f52383af88a08653b5f988fa0a4c23ec67b6eba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a736dd3bff9237ece0d6584f52383af88a08653b5f988fa0a4c23ec67b6eba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a736dd3bff9237ece0d6584f52383af88a08653b5f988fa0a4c23ec67b6eba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a736dd3bff9237ece0d6584f52383af88a08653b5f988fa0a4c23ec67b6eba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:54 compute-0 podman[426453]: 2025-10-03 10:08:54.245892497 +0000 UTC m=+0.198437125 container init 25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 10:08:54 compute-0 podman[426453]: 2025-10-03 10:08:54.260536636 +0000 UTC m=+0.213081254 container start 25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:08:54 compute-0 podman[426453]: 2025-10-03 10:08:54.265275288 +0000 UTC m=+0.217819926 container attach 25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:08:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 87 op/s
Oct  3 10:08:55 compute-0 condescending_wilson[426470]: {
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:    "0": [
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:        {
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "devices": [
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "/dev/loop3"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            ],
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_name": "ceph_lv0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_size": "21470642176",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "name": "ceph_lv0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "tags": {
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cluster_name": "ceph",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.crush_device_class": "",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.encrypted": "0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osd_id": "0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.type": "block",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.vdo": "0"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            },
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "type": "block",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "vg_name": "ceph_vg0"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:        }
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:    ],
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:    "1": [
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:        {
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "devices": [
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "/dev/loop4"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            ],
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_name": "ceph_lv1",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_size": "21470642176",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "name": "ceph_lv1",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "tags": {
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cluster_name": "ceph",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.crush_device_class": "",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.encrypted": "0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osd_id": "1",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.type": "block",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.vdo": "0"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            },
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "type": "block",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "vg_name": "ceph_vg1"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:        }
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:    ],
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:    "2": [
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:        {
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "devices": [
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "/dev/loop5"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            ],
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_name": "ceph_lv2",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_size": "21470642176",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "name": "ceph_lv2",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "tags": {
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.cluster_name": "ceph",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.crush_device_class": "",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.encrypted": "0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osd_id": "2",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.type": "block",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:                "ceph.vdo": "0"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            },
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "type": "block",
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:            "vg_name": "ceph_vg2"
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:        }
Oct  3 10:08:55 compute-0 condescending_wilson[426470]:    ]
Oct  3 10:08:55 compute-0 condescending_wilson[426470]: }
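[note] The JSON printed by the condescending_wilson one-shot container looks like `ceph-volume lvm list --format json` output: a map of OSD id to LV records, here osd.0-2 on /dev/loop3-5 under ceph_vg0-2. A short parsing sketch, assuming the block has been captured to osd_lvs.json:

    import json

    with open('osd_lvs.json') as f:   # the JSON block above, saved to a file
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")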
Oct  3 10:08:55 compute-0 systemd[1]: libpod-25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7.scope: Deactivated successfully.
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001926989219085759 of space, bias 1.0, pg target 0.5780967657257277 quantized to 32 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:08:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
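[note] The pg_autoscaler targets above are self-consistent with pg_target = capacity_ratio * bias * mon_target_pg_per_osd * num_osds, taking the default mon_target_pg_per_osd = 100 and the 3 OSDs listed earlier (a factor of 300); the result is then quantized to a power of two and only applied when it diverges enough from the current pg_num. A worked check against two of the logged pools:

    def pg_target(capacity_ratio, bias, pg_per_osd=100, num_osds=3):
        # assumed relation; pg_per_osd=100 is the Ceph default target
        return capacity_ratio * bias * pg_per_osd * num_osds

    print(pg_target(0.001926989219085759, 1.0))
    # -> 0.5780967657257277, matching pool 'vms' ("quantized to 32")
    print(pg_target(5.087256625643029e-07, 4.0))
    # -> 0.0006104707950771635, matching 'cephfs.cephfs.meta' ("quantized to 16")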
Oct  3 10:08:55 compute-0 podman[426479]: 2025-10-03 10:08:55.243805034 +0000 UTC m=+0.055617062 container died 25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3a736dd3bff9237ece0d6584f52383af88a08653b5f988fa0a4c23ec67b6eba-merged.mount: Deactivated successfully.
Oct  3 10:08:55 compute-0 podman[426479]: 2025-10-03 10:08:55.332161764 +0000 UTC m=+0.143973732 container remove 25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wilson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:08:55 compute-0 systemd[1]: libpod-conmon-25673724a10eb9a3267ca517c31fe3237724a592cd4817ccdc6d2a38d3b351b7.scope: Deactivated successfully.
Oct  3 10:08:56 compute-0 podman[426633]: 2025-10-03 10:08:56.221333299 +0000 UTC m=+0.068524535 container create 0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:08:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:08:56 compute-0 systemd[1]: Started libpod-conmon-0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6.scope.
Oct  3 10:08:56 compute-0 podman[426633]: 2025-10-03 10:08:56.19481771 +0000 UTC m=+0.042008966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:08:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:08:56 compute-0 podman[426633]: 2025-10-03 10:08:56.329677769 +0000 UTC m=+0.176869035 container init 0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 10:08:56 compute-0 podman[426633]: 2025-10-03 10:08:56.339202923 +0000 UTC m=+0.186394149 container start 0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct  3 10:08:56 compute-0 podman[426633]: 2025-10-03 10:08:56.343949125 +0000 UTC m=+0.191140411 container attach 0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:08:56 compute-0 pedantic_gagarin[426650]: 167 167
Oct  3 10:08:56 compute-0 systemd[1]: libpod-0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6.scope: Deactivated successfully.
Oct  3 10:08:56 compute-0 nova_compute[351685]: 2025-10-03 10:08:56.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:56 compute-0 podman[426655]: 2025-10-03 10:08:56.394496834 +0000 UTC m=+0.035030543 container died 0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-625b17fd66f789fe0c1fcd55f5e5e6fd8555e61043cf8b753a86fe682b91a142-merged.mount: Deactivated successfully.
Oct  3 10:08:56 compute-0 podman[426655]: 2025-10-03 10:08:56.441475679 +0000 UTC m=+0.082009368 container remove 0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:08:56 compute-0 systemd[1]: libpod-conmon-0c2ab39fc4967ec54d3d421d4abe439cba4ee0fca6005f029f8a1805aeb5c4b6.scope: Deactivated successfully.
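The create/init/start/attach/died/remove sequences above (condescending_wilson, pedantic_gagarin) are the short-lived exec containers cephadm spawns against the ceph image. A minimal sketch for reconstructing such lifecycles from this journal, relying only on the event format visible in these lines:

    import re
    from collections import defaultdict

    # Group the podman journal events by container ID to recover each
    # short-lived container's lifecycle (create -> init -> start -> attach ->
    # died -> remove). The pattern relies only on the event format shown here.
    EVENT_RE = re.compile(
        r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})"
    )

    def lifecycles(lines):
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events[m.group(2)].append(m.group(1))
        return dict(events)

    # e.g. lifecycles(open("/var/log/messages")) maps the full 64-hex ID of
    # pedantic_gagarin's container to
    # ["create", "init", "start", "attach", "died", "remove"]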
Oct  3 10:08:56 compute-0 podman[426677]: 2025-10-03 10:08:56.691786934 +0000 UTC m=+0.061764199 container create 463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 10:08:56 compute-0 podman[426677]: 2025-10-03 10:08:56.667443445 +0000 UTC m=+0.037420740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:08:56 compute-0 systemd[1]: Started libpod-conmon-463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b.scope.
Oct  3 10:08:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f5dbce622a41537bc364f6bda249605854866ac7ab3be3ecd9437aca7b572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f5dbce622a41537bc364f6bda249605854866ac7ab3be3ecd9437aca7b572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f5dbce622a41537bc364f6bda249605854866ac7ab3be3ecd9437aca7b572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9f5dbce622a41537bc364f6bda249605854866ac7ab3be3ecd9437aca7b572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
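The four xfs notices above are the kernel flagging mounts whose inode timestamps lack bigtime support; 0x7fffffff is the signed 32-bit time_t ceiling, which a two-line check converts to the familiar Y2038 date:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t value.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00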
Oct  3 10:08:56 compute-0 podman[426677]: 2025-10-03 10:08:56.86587941 +0000 UTC m=+0.235856765 container init 463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:08:56 compute-0 podman[426677]: 2025-10-03 10:08:56.879903979 +0000 UTC m=+0.249881244 container start 463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:08:56 compute-0 podman[426677]: 2025-10-03 10:08:56.884848757 +0000 UTC m=+0.254826022 container attach 463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:08:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 61 op/s
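These pgmap DBG lines recur every couple of seconds for the rest of the capture, so when skimming for capacity trends it is easier to parse them than to read them. A small sketch that matches only the fields shown in the line above (the trailing throughput figures are optional, since some pgmap lines omit them):

    import re

    # Pull the recurring pgmap figures out of the ceph-mgr lines above.
    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\d+ \w+) data, (?P<used>\d+ \w+) used, "
        r"(?P<avail>\d+ \w+) / (?P<total>\d+ \w+) avail"
    )

    line = ("pgmap v1243: 321 pgs: 321 active+clean; 234 MiB data, "
            "340 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 61 op/s")
    print(PGMAP_RE.search(line).groupdict())
    # {'ver': '1243', 'pgs': '321', 'data': '234 MiB', 'used': '340 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}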
Oct  3 10:08:57 compute-0 nova_compute[351685]: 2025-10-03 10:08:57.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:08:57 compute-0 unruffled_wright[426693]: {
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "osd_id": 1,
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "type": "bluestore"
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:    },
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "osd_id": 2,
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "type": "bluestore"
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:    },
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "osd_id": 0,
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:        "type": "bluestore"
Oct  3 10:08:57 compute-0 unruffled_wright[426693]:    }
Oct  3 10:08:57 compute-0 unruffled_wright[426693]: }
Oct  3 10:08:57 compute-0 systemd[1]: libpod-463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b.scope: Deactivated successfully.
Oct  3 10:08:57 compute-0 podman[426677]: 2025-10-03 10:08:57.929105098 +0000 UTC m=+1.299082373 container died 463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:08:57 compute-0 systemd[1]: libpod-463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b.scope: Consumed 1.029s CPU time.
Oct  3 10:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd9f5dbce622a41537bc364f6bda249605854866ac7ab3be3ecd9437aca7b572-merged.mount: Deactivated successfully.
Oct  3 10:08:58 compute-0 podman[426677]: 2025-10-03 10:08:58.228969161 +0000 UTC m=+1.598946426 container remove 463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:08:58 compute-0 systemd[1]: libpod-conmon-463ed94c4c88f78efe8699b240dbc249eb4ff1ed2056298f488ad30f306b748b.scope: Deactivated successfully.
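The JSON that unruffled_wright printed above (by its shape, a ceph-volume listing) maps each OSD UUID to its fsid, logical-volume device, id, and store type; the config-key set calls that follow are cephadm persisting that inventory. A short sketch that reduces the same JSON to an osd_id-to-device map:

    import json

    def osd_devices(text):
        """Reduce the inventory JSON above to an osd_id -> device map."""
        return {
            entry["osd_id"]: entry["device"]
            for entry in json.loads(text).values()
            if entry.get("type") == "bluestore"
        }

    # Applied to the output above:
    # {1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #  2: '/dev/mapper/ceph_vg2-ceph_lv2',
    #  0: '/dev/mapper/ceph_vg0-ceph_lv0'}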
Oct  3 10:08:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:08:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:08:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:08:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:08:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e23592f5-5794-4664-a118-7bba56d46211 does not exist
Oct  3 10:08:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 90e4ca4e-801c-430e-a488-050888c66b50 does not exist
Oct  3 10:08:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:08:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:08:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Oct  3 10:08:59 compute-0 podman[157165]: time="2025-10-03T10:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:08:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:08:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9046 "" "Go-http-client/1.1"
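The two GET lines above are the podman API service answering libpod REST queries over its unix socket; the podman_exporter config later in this log points clients at unix:///run/podman/podman.sock. A sketch of the same containers/json query using the third-party requests-unixsocket package (an assumption, not something this host is shown running; the socket path is percent-encoded into the URL, as that library expects):

    import requests_unixsocket  # third-party: pip install requests-unixsocket

    # Same query as the access-log lines above, against the socket the
    # podman_exporter config in this log points at (/run/podman/podman.sock).
    session = requests_unixsocket.Session()
    resp = session.get(
        "http+unix://%2Frun%2Fpodman%2Fpodman.sock"
        "/v4.9.3/libpod/containers/json?all=true&external=false"
    )
    for ctr in resp.json():
        print(ctr["Id"][:12], ctr["Names"], ctr["State"])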
Oct  3 10:09:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 58 op/s
Oct  3 10:09:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:01 compute-0 nova_compute[351685]: 2025-10-03 10:09:01.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:01 compute-0 openstack_network_exporter[367524]: ERROR   10:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:09:01 compute-0 openstack_network_exporter[367524]: ERROR   10:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:09:01 compute-0 openstack_network_exporter[367524]: ERROR   10:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:09:01 compute-0 openstack_network_exporter[367524]: ERROR   10:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:09:01 compute-0 openstack_network_exporter[367524]: ERROR   10:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:09:02 compute-0 nova_compute[351685]: 2025-10-03 10:09:02.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Oct  3 10:09:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Oct  3 10:09:05 compute-0 podman[426789]: 2025-10-03 10:09:05.845771232 +0000 UTC m=+0.101881093 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Oct  3 10:09:05 compute-0 podman[426790]: 2025-10-03 10:09:05.884964287 +0000 UTC m=+0.133033581 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:09:05 compute-0 podman[426788]: 2025-10-03 10:09:05.892977314 +0000 UTC m=+0.148424974 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
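Each health_status line above embeds the container's full edpm config_data as a Python dict literal inside the label dump. Because the literal uses single quotes and bare True, ast.literal_eval can parse it once the balanced-brace span is sliced out; a sketch, tolerant of nothing fancier than the lines shown here:

    import ast

    def extract_config_data(line):
        """Pull the config_data={...} dict out of a podman health_status line.

        The value is a Python dict literal, so ast.literal_eval handles it
        once we slice out the balanced-brace span following "config_data=".
        """
        start = line.index("config_data=") + len("config_data=")
        depth, end = 0, start
        for i, ch in enumerate(line[start:], start):
            depth += ch == "{"
            depth -= ch == "}"
            if depth == 0:
                end = i + 1
                break
        return ast.literal_eval(line[start:end])

    # e.g. extract_config_data(line)["healthcheck"]["mount"] for the
    # ovn_controller line above -> '/var/lib/openstack/healthchecks/ovn_controller'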
Oct  3 10:09:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:06 compute-0 nova_compute[351685]: 2025-10-03 10:09:06.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 9 op/s
Oct  3 10:09:07 compute-0 nova_compute[351685]: 2025-10-03 10:09:07.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:11 compute-0 nova_compute[351685]: 2025-10-03 10:09:11.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:11 compute-0 podman[426851]: 2025-10-03 10:09:11.795901349 +0000 UTC m=+0.063074650 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 10:09:12 compute-0 nova_compute[351685]: 2025-10-03 10:09:12.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:14 compute-0 podman[426870]: 2025-10-03 10:09:14.767542473 +0000 UTC m=+0.075848670 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:09:14 compute-0 podman[426876]: 2025-10-03 10:09:14.831855793 +0000 UTC m=+0.117263067 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:09:14 compute-0 podman[426871]: 2025-10-03 10:09:14.83366065 +0000 UTC m=+0.134958963 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct  3 10:09:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:09:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:16 compute-0 nova_compute[351685]: 2025-10-03 10:09:16.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:17 compute-0 nova_compute[351685]: 2025-10-03 10:09:17.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:17 compute-0 ovn_controller[88471]: 2025-10-03T10:09:17Z|00056|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  3 10:09:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:19 compute-0 podman[426934]: 2025-10-03 10:09:19.872203122 +0000 UTC m=+0.132036240 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:09:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:21 compute-0 nova_compute[351685]: 2025-10-03 10:09:21.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:22 compute-0 nova_compute[351685]: 2025-10-03 10:09:22.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:24 compute-0 podman[426954]: 2025-10-03 10:09:24.857166901 +0000 UTC m=+0.110619974 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:09:24 compute-0 podman[426955]: 2025-10-03 10:09:24.85714532 +0000 UTC m=+0.113266979 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=, name=ubi9, config_id=edpm, maintainer=Red Hat, Inc.)
Oct  3 10:09:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 op/s
Oct  3 10:09:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:26 compute-0 nova_compute[351685]: 2025-10-03 10:09:26.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:26 compute-0 ovn_controller[88471]: 2025-10-03T10:09:26Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:2d:c9 192.168.0.248
Oct  3 10:09:26 compute-0 ovn_controller[88471]: 2025-10-03T10:09:26Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:2d:c9 192.168.0.248
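The pinctrl pair above is OVN's built-in DHCP responder completing an offer/ack exchange for one guest MAC and address. A matcher for these lines, tested against the DHCPACK above:

    import re

    # Extract (event, MAC, IP) from ovn-controller's pinctrl DHCP lines.
    DHCP_RE = re.compile(
        r"\|pinctrl\(ovn_pinctrl0\)\|INFO\|(DHCPOFFER|DHCPACK) "
        r"([0-9a-f:]{17}) (\d+\.\d+\.\d+\.\d+)"
    )

    m = DHCP_RE.search(
        "2025-10-03T10:09:26Z|00011|pinctrl(ovn_pinctrl0)|INFO|"
        "DHCPACK fa:16:3e:f6:2d:c9 192.168.0.248"
    )
    print(m.groups())  # ('DHCPACK', 'fa:16:3e:f6:2d:c9', '192.168.0.248')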
Oct  3 10:09:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 234 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 op/s
Oct  3 10:09:27 compute-0 nova_compute[351685]: 2025-10-03 10:09:27.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 252 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 979 KiB/s wr, 24 op/s
Oct  3 10:09:29 compute-0 podman[157165]: time="2025-10-03T10:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:09:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:09:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9050 "" "Go-http-client/1.1"
Oct  3 10:09:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct  3 10:09:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:31 compute-0 nova_compute[351685]: 2025-10-03 10:09:31.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:31 compute-0 openstack_network_exporter[367524]: ERROR   10:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:09:31 compute-0 openstack_network_exporter[367524]: ERROR   10:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:09:31 compute-0 openstack_network_exporter[367524]: ERROR   10:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:09:31 compute-0 openstack_network_exporter[367524]: ERROR   10:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:09:31 compute-0 openstack_network_exporter[367524]: ERROR   10:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
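The exporter error block above repeats every 30 seconds: it cannot find ovs-appctl-style control sockets for ovsdb-server or ovn-northd, so the dpif-netdev calls fail as well (ovn-northd does not run on a compute node, so that part is expected). A host-side spot check of the two directories this container mounts, per its config_data earlier in the log; the .ctl glob patterns are an assumption about where the daemons would place their sockets:

    import glob

    # Look for appctl control sockets in the host paths the exporter mounts
    # (/var/run/openvswitch -> /run/openvswitch, /var/lib/openvswitch/ovn -> /run/ovn).
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")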
Oct  3 10:09:32 compute-0 nova_compute[351685]: 2025-10-03 10:09:32.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct  3 10:09:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Oct  3 10:09:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:36 compute-0 nova_compute[351685]: 2025-10-03 10:09:36.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:36 compute-0 podman[426997]: 2025-10-03 10:09:36.844122359 +0000 UTC m=+0.093249478 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:09:36 compute-0 podman[426996]: 2025-10-03 10:09:36.866871717 +0000 UTC m=+0.129306612 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:09:36 compute-0 podman[426998]: 2025-10-03 10:09:36.880856405 +0000 UTC m=+0.131474631 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  3 10:09:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct  3 10:09:37 compute-0 nova_compute[351685]: 2025-10-03 10:09:37.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:38 compute-0 nova_compute[351685]: 2025-10-03 10:09:38.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:38 compute-0 nova_compute[351685]: 2025-10-03 10:09:38.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:09:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Oct  3 10:09:39 compute-0 nova_compute[351685]: 2025-10-03 10:09:39.648 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:09:39 compute-0 nova_compute[351685]: 2025-10-03 10:09:39.649 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:09:39 compute-0 nova_compute[351685]: 2025-10-03 10:09:39.649 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
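The Acquiring/Acquired pair above shows the periodic _heal_instance_info_cache task serializing on a per-instance named lock before refreshing that instance's network info cache. A minimal sketch of the pattern with the same oslo library the log cites (the body is a placeholder, not nova's actual refresh code):

    from oslo_concurrency import lockutils  # the library the log lines cite

    # nova serializes per-instance cache refreshes on a named lock; the name
    # below mirrors the one in the log, and the body is only a placeholder.
    with lockutils.lock("refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a"):
        pass  # refresh the instance's network info cache here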
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.883 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.883 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.883 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
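The run of "Registering pollster" lines above shows ceilometer's polling manager submitting every pollster in the [pollsters] source to one shared ThreadPoolExecutor; with a single worker thread (per the "with [1] threads" message) they effectively run one after another. A rough sketch of that dispatch pattern, with poll_one() as a hypothetical stand-in for ceilometer's per-pollster discovery-and-sample work:

    from concurrent.futures import ThreadPoolExecutor

    def poll_one(pollster, cache, discovery_cache):
        # Hypothetical stand-in: run the pollster's discovery method,
        # then build samples for each discovered instance.
        pass

    pollsters = []            # stevedore extensions in the real agent
    cache, discovery_cache = {}, {}

    # One executor shared by all pollsters in the source; max_workers=1
    # matches the single worker thread reported in the log.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll_one, p, cache, discovery_cache)
                   for p in pollsters]
        for f in futures:
            f.result()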
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.889 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 10f21e57-50ad-48e0-a664-66fd8affbe73 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 10:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:40.890 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/10f21e57-50ad-48e0-a664-66fd8affbe73 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ef276854e7699a1234d40a89e1ecb6415a6c739b2915f53a3c625cf782ff31fe" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct  3 10:09:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 541 KiB/s wr, 33 op/s
Oct  3 10:09:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:09:41.599 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:09:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:09:41.600 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:09:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:09:41.600 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.664 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updating instance_info_cache with network_info: [{"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.685 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.686 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
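The sequence from "Starting heal instance info cache" to "Updated the network info_cache" is one pass of nova-compute's _heal_instance_info_cache periodic task: take the per-instance refresh_cache lock, rebuild network_info from Neutron, persist it, release. A minimal sketch of the locking pattern with oslo.concurrency (the two helper functions are hypothetical stand-ins, and oslo.concurrency is assumed installed):

    from oslo_concurrency import lockutils

    def fetch_network_info(instance_uuid):
        # Hypothetical stand-in for nova.network.neutron's forced refresh.
        return []

    def save_info_cache(instance_uuid, nw_info):
        # Hypothetical stand-in for writing instance_info_cache.
        pass

    def heal_instance_info_cache(instance_uuid):
        # lockutils emits the Acquiring/Acquired/Releasing lock lines seen
        # in the log around the body of this context manager.
        with lockutils.lock("refresh_cache-%s" % instance_uuid):
            save_info_cache(instance_uuid, fetch_network_info(instance_uuid))

    heal_instance_info_cache("cd0be179-1941-400f-a1e6-8ee6243ee71a")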
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.686 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.687 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.687 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.687 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.711 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.711 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.712 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.712 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:09:41 compute-0 nova_compute[351685]: 2025-10-03 10:09:41.712 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
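Before reporting resources, the tracker sizes its Ceph-backed storage by shelling out to ceph df, as the processutils line above shows. A small sketch of the same call and of reading the cluster totals from its JSON, assuming the ceph CLI and the openstack keyring referenced in the log are present:

    import json
    from oslo_concurrency import processutils

    # The command from the log line above; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")

    stats = json.loads(out)["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])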
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.776 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Fri, 03 Oct 2025 10:09:40 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-6e065bb9-5dba-4c01-84ca-ef439b5dbac4 x-openstack-request-id: req-6e065bb9-5dba-4c01-84ca-ef439b5dbac4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.776 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "10f21e57-50ad-48e0-a664-66fd8affbe73", "name": "vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5", "status": "ACTIVE", "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "user_id": "2f408449ba0f42fcb69f92dbf541f2e3", "metadata": {"metering.server_group": "09b6fef3-eb54-4e45-9716-a57b7d592bd8"}, "hostId": "b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85", "image": {"id": "37f03e8a-3aed-46a5-8219-fc87e355127e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/37f03e8a-3aed-46a5-8219-fc87e355127e"}]}, "flavor": {"id": "ada739ee-222b-4269-8d29-62bea534173e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ada739ee-222b-4269-8d29-62bea534173e"}]}, "created": "2025-10-03T10:08:37Z", "updated": "2025-10-03T10:08:49Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.248", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f6:2d:c9"}, {"version": 4, "addr": "192.168.122.215", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f6:2d:c9"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/10f21e57-50ad-48e0-a664-66fd8affbe73"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/10f21e57-50ad-48e0-a664-66fd8affbe73"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-03T10:08:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.776 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/10f21e57-50ad-48e0-a664-66fd8affbe73 used request id req-6e065bb9-5dba-4c01-84ca-ef439b5dbac4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
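The REQ/RESP pair above is ceilometer's discovery fetching one server record through python-novaclient, pinned to microversion 2.1 (the X-OpenStack-Nova-API-Version header in the request). A minimal sketch of an equivalent lookup; the auth URL and credentials below are placeholders, and only the microversion and instance UUID come from the log:

    from keystoneauth1 import loading, session
    from novaclient import client

    # Authenticate against Keystone; every value here is a placeholder.
    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url="https://keystone.example.com:5000/v3",
        username="ceilometer", password="secret",
        project_name="service",
        user_domain_id="default", project_domain_id="default")
    sess = session.Session(auth=auth)

    # "2.1" becomes the X-OpenStack-Nova-API-Version header seen in the log.
    nova = client.Client("2.1", session=sess)
    server = nova.servers.get("10f21e57-50ad-48e0-a664-66fd8affbe73")
    print(server.name, server.status)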
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.777 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '10f21e57-50ad-48e0-a664-66fd8affbe73', 'name': 'vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.782 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cd0be179-1941-400f-a1e6-8ee6243ee71a', 'name': 'vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.786 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '5b008829-2c76-4e40-b9e6-0e3d73095522', 'name': 'vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.789 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.790 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.790 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.790 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.790 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.791 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:09:41.790445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.797 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 10f21e57-50ad-48e0-a664-66fd8affbe73 / tapb4892d0b-79 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.797 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.803 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.809 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.814 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
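The per-instance network.outgoing.packets.drop values above come from libvirt vNIC counters, and the earlier "No delta meter predecessor" line simply means the first reading for that tap device has no prior value to diff against. A sketch of reading the raw counter with libvirt-python, assuming access to qemu:///system; the domain and tap device names are taken from the log:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000005")

    # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                         tx_bytes, tx_packets, tx_errs, tx_drop).
    stats = dom.interfaceStats("tapb4892d0b-79")
    print("network.outgoing.packets.drop =", stats[7])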
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.815 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.815 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.815 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.816 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.816 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.817 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:09:41.815725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.817 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.818 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:09:41.818029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.837 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.838 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.838 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.865 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.867 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.868 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.888 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.889 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.890 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.910 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.910 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.911 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
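Each instance above reports three disk.device.capacity samples, one per attached block device: two 1 GiB values matching the flavor's 1 GB root and 1 GB ephemeral disks, plus a much smaller third device that is most likely the config drive (the server record shows config_drive "True"). libvirt exposes these figures through blockInfo; a sketch, with the domain and device names assumed for illustration:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000005")

    for dev in ("vda", "vdb", "hda"):   # root, ephemeral, config drive (assumed)
        # blockInfo returns [capacity, allocation, physical] in bytes.
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "disk.device.capacity =", capacity)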
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.913 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.914 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:09:41.913996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.948 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.949 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.949 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.985 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.985 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:41.986 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.020 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.020 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.021 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.055 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.056 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.latency volume: 1220219266 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:09:42.055901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.056 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.latency volume: 209689103 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.057 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.latency volume: 160346833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.057 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 1480162541 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.058 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 246885128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.058 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 161615200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.058 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 1250055753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.059 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 207399736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.059 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.latency volume: 144385577 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.063 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:09:42.063090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.063 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.064 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.064 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.064 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.065 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.065 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.066 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.066 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.069 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.070 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:09:42.070133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.070 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.071 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.071 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.071 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.072 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.072 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.073 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.073 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.075 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.076 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:09:42.076631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.077 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.077 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.078 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.078 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.079 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.079 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.079 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.080 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.080 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.081 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.081 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.083 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.084 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:09:42.083988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.084 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.085 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.085 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.086 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.086 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.087 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.087 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.087 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.090 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.091 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:09:42.091014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.118 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.153 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:09:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79832204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.174 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.187 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.204 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.206 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.207 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.latency volume: 16569151662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.207 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.latency volume: 33411420 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:09:42.206675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.207 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.208 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 7225300586 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.208 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 33230824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.208 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.209 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 13700045630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.209 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 25497777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.209 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.210 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.210 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.210 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.212 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.212 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.212 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:09:42.212634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.213 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.213 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.214 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.214 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.214 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.215 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.215 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.215 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.216 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.216 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.216 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.217 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.218 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.218 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.218 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.218 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:09:42.218421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.219 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.219 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.219 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.220 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.221 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.221 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.221 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5>]
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.222 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-03T10:09:42.221450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.222 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.223 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:09:42.223173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.224 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.224 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.224 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.224 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.225 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.225 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.225 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:09:42.225209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.226 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets volume: 59 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.226 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.227 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.227 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.227 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.227 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:09:42.228348) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.231 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.231 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.231 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.231 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.232 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:09:42.232309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.233 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.233 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.234 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.234 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.235 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.235 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.235 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.235 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.236 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.236 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/cpu volume: 36220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.236 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/cpu volume: 36420000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.237 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/cpu volume: 161270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.237 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 39220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.238 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.238 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:09:42.235945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.239 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.239 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:09:42.239663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.241 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.241 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.241 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
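Each polling cycle above first checks whether the meter's source defines a coordination group; with a group name of [None], as here, the agent polls every local instance itself. When a group is configured, ceilometer instead partitions resources across agents with a tooz hash ring, roughly as in this sketch (assumes the tooz library ceilometer uses for coordination; the agent names are illustrative, the instance UUIDs are the ones polled above):

    # Sketch: how a coordination group would split instances across agents.
    from tooz import hashring

    agents = ['compute-0', 'compute-1']            # hypothetical ring members
    ring = hashring.HashRing(agents)
    for uuid in ('10f21e57-50ad-48e0-a664-66fd8affbe73',
                 'cd0be179-1941-400f-a1e6-8ee6243ee71a'):
        # An agent polls an instance only when the ring maps it to that agent.
        print(uuid, ring.get_nodes(uuid.encode()))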
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.243 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:09:42.243482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.243 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.244 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.244 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.245 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.bytes volume: 7460 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.245 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.246 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.247 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.247 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.247 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.247 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:09:42.247635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.248 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.bytes.delta volume: 535 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.249 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.249 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.249 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.250 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.250 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.250 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.250 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:09:42.250625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.251 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/memory.usage volume: 49.5390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.251 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/memory.usage volume: 49.109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.251 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/memory.usage volume: 48.9296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.252 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.252 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.253 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.253 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-03T10:09:42.253358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.253 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.253 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.254 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5>]
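The ERROR above is ceilometer's permanent blacklisting at work: the libvirt inspector exposes only cumulative interface counters, not precomputed rates, so the rate pollster raises PollsterPermanentError and the manager stops offering it those resources rather than retrying every interval. A minimal sketch of that contract, based on the plugin_base module named in the message (the pollster class here is hypothetical):

    # Sketch: how a pollster triggers the "Prevent pollster ... anymore!" path.
    from ceilometer.polling import plugin_base

    class OutgoingBytesRateSketch(plugin_base.PollsterBase):
        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # No rate data available from the inspector: blacklist these
            # resources permanently instead of failing on every cycle.
            raise plugin_base.PollsterPermanentError(resources)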
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.254 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.254 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.254 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.255 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.255 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.bytes volume: 1954 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.255 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.incoming.bytes volume: 8790 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.256 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:09:42.254935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.256 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2604 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.257 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.257 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.257 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.258 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.258 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:09:42.257838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.259 14 DEBUG ceilometer.compute.pollsters [-] 5b008829-2c76-4e40-b9e6-0e3d73095522/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.259 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.260 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.260 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.260 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.261 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.262 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.263 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.263 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.263 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:09:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:09:42.263 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
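The _stats_to_sample DEBUG lines in this polling task all share one shape, <instance-uuid>/<meter> volume: <value>, which makes the readings easy to recover from a saved journal. A small sketch, assuming this log has been captured to a file named compute-0.log (a hypothetical name):

    # Sketch: collect per-instance meter volumes from _stats_to_sample lines.
    import re
    from collections import defaultdict

    PATTERN = re.compile(
        r'(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})'
        r'/(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+)')

    volumes = defaultdict(dict)                    # {uuid: {meter: value}}
    with open('compute-0.log') as fh:              # hypothetical capture
        for line in fh:
            if '_stats_to_sample' in line:
                m = PATTERN.search(line)
                if m:
                    volumes[m['uuid']][m['meter']] = float(m['volume'])

    for uuid, meters in sorted(volumes.items()):
        print(uuid, meters.get('cpu'), meters.get('memory.usage'))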
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.308 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.308 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.308 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.314 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.314 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.315 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.321 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.321 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.321 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.327 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.327 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.327 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:09:42 compute-0 podman[427082]: 2025-10-03 10:09:42.346831997 +0000 UTC m=+0.093942289 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.732 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.733 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3245MB free_disk=59.85564422607422GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.733 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.734 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.808 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.809 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 5b008829-2c76-4e40-b9e6-0e3d73095522 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.809 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance cd0be179-1941-400f-a1e6-8ee6243ee71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.809 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 10f21e57-50ad-48e0-a664-66fd8affbe73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.809 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.809 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:09:42 compute-0 nova_compute[351685]: 2025-10-03 10:09:42.876 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:09:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Oct  3 10:09:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:09:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569697266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:09:43 compute-0 nova_compute[351685]: 2025-10-03 10:09:43.370 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
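The ceph df probe above is how nova's RBD image backend measures pool capacity for the resource tracker. The same call can be made standalone (a sketch assuming the host's /etc/ceph/ceph.conf and the client.openstack keyring, exactly as in the logged command):

    # Sketch: the capacity probe nova just ran, as a standalone script.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('bytes total/avail:',
          stats['total_bytes'], stats['total_avail_bytes'])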
Oct  3 10:09:43 compute-0 nova_compute[351685]: 2025-10-03 10:09:43.379 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:09:43 compute-0 nova_compute[351685]: 2025-10-03 10:09:43.392 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
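For each resource class in the inventory above, placement admits allocations up to (total - reserved) * allocation_ratio. A quick arithmetic sketch over the logged figures:

    # Capacity implied by the inventory record logged above.
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

Those capacities square with the Final resource view above: four instances at 1 vCPU / 512MB / 2GB plus the 512MB host reservation account for used_vcpus=4, used_ram=2560MB and used_disk=8GB.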
Oct  3 10:09:43 compute-0 nova_compute[351685]: 2025-10-03 10:09:43.414 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:09:43 compute-0 nova_compute[351685]: 2025-10-03 10:09:43.415 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:09:44 compute-0 nova_compute[351685]: 2025-10-03 10:09:44.458 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:44 compute-0 nova_compute[351685]: 2025-10-03 10:09:44.458 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:44 compute-0 nova_compute[351685]: 2025-10-03 10:09:44.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Oct  3 10:09:45 compute-0 nova_compute[351685]: 2025-10-03 10:09:45.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:45 compute-0 nova_compute[351685]: 2025-10-03 10:09:45.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:09:45 compute-0 podman[427123]: 2025-10-03 10:09:45.821600263 +0000 UTC m=+0.074392163 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:09:45 compute-0 podman[427125]: 2025-10-03 10:09:45.830592952 +0000 UTC m=+0.080696316 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:09:45 compute-0 podman[427124]: 2025-10-03 10:09:45.84898912 +0000 UTC m=+0.103431253 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
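The three health_status=healthy events at 10:09:45 are podman's healthcheck timers firing for node_exporter, iscsid and multipathd; each runs the /openstack/healthcheck test mounted into the container per the config_data shown above. The same checks can be run by hand (a sketch; the container names are the ones in the events above):

    # Sketch: invoke the same healthchecks podman's timers just ran.
    import subprocess

    for name in ('node_exporter', 'iscsid', 'multipathd'):
        rc = subprocess.call(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)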
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:09:46
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms']
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
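"prepared 0/10 changes" means the upmap optimizer walked the pools listed above and, with PGs already balanced within the 0.05 max-misplaced budget, queued none of its up-to-10 allowed remappings this round. The balancer's state can be checked directly (a sketch using the standard CLI):

    # Sketch: query the state behind the balancer log lines above.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ['ceph', 'balancer', 'status', '--format=json']))
    print(status['active'], status['mode'])   # expect: True upmap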
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:09:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:09:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:09:46 compute-0 nova_compute[351685]: 2025-10-03 10:09:46.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Oct  3 10:09:47 compute-0 nova_compute[351685]: 2025-10-03 10:09:47.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Oct  3 10:09:50 compute-0 podman[427184]: 2025-10-03 10:09:50.86311189 +0000 UTC m=+0.117432811 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:09:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:51 compute-0 nova_compute[351685]: 2025-10-03 10:09:51.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:52 compute-0 nova_compute[351685]: 2025-10-03 10:09:52.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
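The recurring [POLLIN] on fd 25 lines are routine: ovsdbapp's OVSDB-IDL loop blocks in python-ovs's Poller, and __log_wakeup DEBUG-logs whichever file descriptor woke it (fd 25 here being the OVSDB connection). A self-contained sketch of the same wakeup path, assuming the ovs package from the path cited above:

    # Sketch: reproduce an ovs.poller wakeup like the __log_wakeup lines.
    import select
    import socket

    import ovs.poller

    a, b = socket.socketpair()
    b.send(b'x')                         # make one end readable: POLLIN
    p = ovs.poller.Poller()
    p.fd_wait(a.fileno(), select.POLLIN)
    p.block()                            # returns once the fd is readable;
    print('woke on fd', a.fileno())      # debug vlog would show "[POLLIN] on fd N"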
Oct  3 10:09:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:09:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384745904' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:09:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:09:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384745904' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
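
The two audited mon_commands above are a storage-capacity poll issued as client.openstack from 192.168.122.10, most likely the Cinder RBD driver's periodic stats query. A minimal sketch replaying the same pair through the ceph CLI (assuming a local ceph.conf and client keyring; the pool name "volumes" is taken directly from the audit line):

import json
import subprocess

def mon_command(args):
    # `ceph ... --format json` sends the same mon_command payload
    # ({"prefix": ..., "format": "json"}) seen in the audit log above.
    out = subprocess.check_output(["ceph", *args, "--format", "json"])
    return json.loads(out)

df = mon_command(["df"])                                 # {"prefix":"df"}
quota = mon_command(["osd", "pool", "get-quota", "volumes"])
print(df["stats"]["total_avail_bytes"], quota["quota_max_bytes"])
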
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022110489109201017 of space, bias 1.0, pg target 0.6633146732760306 quantized to 32 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:09:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
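
The _maybe_adjust pass above prints one "pg target" per pool, and the arithmetic is recoverable from the lines themselves: pg target = usage_ratio * bias * (target PGs per OSD * OSD count). With the assumed default of 100 PGs per OSD and the 3 OSDs on this host, the products match the logged values exactly; the "quantized to" figure is that target rounded to a power of two, clamped by the pool's pg_num_min, and only applied when it differs enough from the current pg_num. A small check under those assumptions:

# Reconstructing the pg_autoscaler arithmetic from the lines above.
TARGET_PG_PER_OSD = 100   # mon_target_pg_per_osd (assumed default)
NUM_OSDS = 3              # osd.0..osd.2 live on this host (see lvm list below)

def pg_target(usage_ratio, bias):
    return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

# Pool 'vms': "using 0.0022110489109201017 of space, bias 1.0,
# pg target 0.6633146732760306"
assert abs(pg_target(0.0022110489109201017, 1.0) - 0.6633146732760306) < 1e-12
# Pool 'cephfs.cephfs.meta': bias 4.0 inflates the target for metadata pools.
assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12
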
Oct  3 10:09:55 compute-0 podman[427203]: 2025-10-03 10:09:55.83910058 +0000 UTC m=+0.099014013 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:09:55 compute-0 podman[427204]: 2025-10-03 10:09:55.897173699 +0000 UTC m=+0.138977411 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Oct  3 10:09:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:09:56 compute-0 nova_compute[351685]: 2025-10-03 10:09:56.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:09:57 compute-0 nova_compute[351685]: 2025-10-03 10:09:57.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:09:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:09:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:09:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:09:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:09:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:09:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:09:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:09:59 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8b08406c-396a-46ad-bda3-1bb8e11aee6b does not exist
Oct  3 10:09:59 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 266b9e0a-de74-4233-a6f2-6ab97e29c935 does not exist
Oct  3 10:09:59 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 371291a2-f67e-4a9a-8ebf-445421d55fdf does not exist
Oct  3 10:09:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:09:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:09:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:09:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:09:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:09:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:09:59 compute-0 podman[157165]: time="2025-10-03T10:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:09:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:09:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9057 "" "Go-http-client/1.1"
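
The two GET requests above are prometheus-podman-exporter scraping the libpod REST API over the podman socket. A minimal sketch of the same call, assuming the socket path /run/podman/podman.sock that podman_exporter mounts (per its config_data earlier in this log):

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # http.client speaks HTTP over any connected socket; we only override
    # connect() to dial the AF_UNIX podman socket instead of TCP.
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
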
Oct  3 10:10:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:10:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:10:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:10:00 compute-0 podman[427512]: 2025-10-03 10:10:00.457015282 +0000 UTC m=+0.057247894 container create da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:10:00 compute-0 systemd[1]: Started libpod-conmon-da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123.scope.
Oct  3 10:10:00 compute-0 podman[427512]: 2025-10-03 10:10:00.42978496 +0000 UTC m=+0.030017522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:10:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:10:00 compute-0 podman[427512]: 2025-10-03 10:10:00.594774074 +0000 UTC m=+0.195006606 container init da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:10:00 compute-0 podman[427512]: 2025-10-03 10:10:00.603948828 +0000 UTC m=+0.204181340 container start da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:10:00 compute-0 podman[427512]: 2025-10-03 10:10:00.608628477 +0000 UTC m=+0.208861009 container attach da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:10:00 compute-0 agitated_lichterman[427528]: 167 167
Oct  3 10:10:00 compute-0 systemd[1]: libpod-da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123.scope: Deactivated successfully.
Oct  3 10:10:00 compute-0 podman[427512]: 2025-10-03 10:10:00.61464133 +0000 UTC m=+0.214873852 container died da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1188855dd98482e5d280928f21b5fbe9f3d42e32a7851292eef568300ce1191a-merged.mount: Deactivated successfully.
Oct  3 10:10:00 compute-0 podman[427512]: 2025-10-03 10:10:00.693508836 +0000 UTC m=+0.293741368 container remove da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:10:00 compute-0 systemd[1]: libpod-conmon-da7ac9722d1f2af2fa7551a97287677cb49cb7320fe2c4a13152109889860123.scope: Deactivated successfully.
Oct  3 10:10:00 compute-0 podman[427550]: 2025-10-03 10:10:00.947555911 +0000 UTC m=+0.089502517 container create 179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 10:10:00 compute-0 podman[427550]: 2025-10-03 10:10:00.907503368 +0000 UTC m=+0.049450054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:10:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:10:01 compute-0 systemd[1]: Started libpod-conmon-179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081.scope.
Oct  3 10:10:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97c9e1aa259f2c1cd127336c7b2f43427d54b0fa056db0d2e388d2cdfdf160d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97c9e1aa259f2c1cd127336c7b2f43427d54b0fa056db0d2e388d2cdfdf160d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97c9e1aa259f2c1cd127336c7b2f43427d54b0fa056db0d2e388d2cdfdf160d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97c9e1aa259f2c1cd127336c7b2f43427d54b0fa056db0d2e388d2cdfdf160d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97c9e1aa259f2c1cd127336c7b2f43427d54b0fa056db0d2e388d2cdfdf160d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:01 compute-0 podman[427550]: 2025-10-03 10:10:01.101519992 +0000 UTC m=+0.243466638 container init 179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:10:01 compute-0 podman[427550]: 2025-10-03 10:10:01.121065688 +0000 UTC m=+0.263012304 container start 179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:10:01 compute-0 podman[427550]: 2025-10-03 10:10:01.12738851 +0000 UTC m=+0.269335156 container attach 179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 10:10:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:01 compute-0 nova_compute[351685]: 2025-10-03 10:10:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:10:01 compute-0 openstack_network_exporter[367524]: ERROR   10:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:10:01 compute-0 openstack_network_exporter[367524]: ERROR   10:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:10:01 compute-0 openstack_network_exporter[367524]: ERROR   10:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:10:01 compute-0 openstack_network_exporter[367524]: ERROR   10:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:10:01 compute-0 openstack_network_exporter[367524]: ERROR   10:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:10:02 compute-0 nova_compute[351685]: 2025-10-03 10:10:02.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:10:02 compute-0 agitated_lovelace[427565]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:10:02 compute-0 agitated_lovelace[427565]: --> relative data size: 1.0
Oct  3 10:10:02 compute-0 agitated_lovelace[427565]: --> All data devices are unavailable
Oct  3 10:10:02 compute-0 systemd[1]: libpod-179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081.scope: Deactivated successfully.
Oct  3 10:10:02 compute-0 systemd[1]: libpod-179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081.scope: Consumed 1.125s CPU time.
Oct  3 10:10:02 compute-0 podman[427550]: 2025-10-03 10:10:02.334527398 +0000 UTC m=+1.476474054 container died 179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d97c9e1aa259f2c1cd127336c7b2f43427d54b0fa056db0d2e388d2cdfdf160d-merged.mount: Deactivated successfully.
Oct  3 10:10:02 compute-0 podman[427550]: 2025-10-03 10:10:02.412614409 +0000 UTC m=+1.554561015 container remove 179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:10:02 compute-0 systemd[1]: libpod-conmon-179c1e55ec2df6a19d53e6a5fe0faa809095abb6ff32f0fa6604220e36317081.scope: Deactivated successfully.
Oct  3 10:10:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:10:03 compute-0 podman[427744]: 2025-10-03 10:10:03.291466713 +0000 UTC m=+0.053040160 container create f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:10:03 compute-0 systemd[1]: Started libpod-conmon-f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b.scope.
Oct  3 10:10:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:10:03 compute-0 podman[427744]: 2025-10-03 10:10:03.269396976 +0000 UTC m=+0.030970443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:10:03 compute-0 podman[427744]: 2025-10-03 10:10:03.387686945 +0000 UTC m=+0.149260432 container init f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:10:03 compute-0 podman[427744]: 2025-10-03 10:10:03.397688714 +0000 UTC m=+0.159262171 container start f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:10:03 compute-0 podman[427744]: 2025-10-03 10:10:03.402816628 +0000 UTC m=+0.164390075 container attach f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct  3 10:10:03 compute-0 crazy_chatterjee[427757]: 167 167
Oct  3 10:10:03 compute-0 systemd[1]: libpod-f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b.scope: Deactivated successfully.
Oct  3 10:10:03 compute-0 podman[427744]: 2025-10-03 10:10:03.41098465 +0000 UTC m=+0.172558087 container died f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-67711595ddec32c7947820a599b55bc7194fc34bc61649d6513a04eb4805c2a9-merged.mount: Deactivated successfully.
Oct  3 10:10:03 compute-0 podman[427744]: 2025-10-03 10:10:03.465126254 +0000 UTC m=+0.226699711 container remove f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatterjee, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:10:03 compute-0 systemd[1]: libpod-conmon-f709b75e979a4542249fbe44b85c19365952e90eac87f68238cc0d1bc2a4974b.scope: Deactivated successfully.
Oct  3 10:10:03 compute-0 podman[427783]: 2025-10-03 10:10:03.705708438 +0000 UTC m=+0.059155975 container create be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:10:03 compute-0 systemd[1]: Started libpod-conmon-be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c.scope.
Oct  3 10:10:03 compute-0 podman[427783]: 2025-10-03 10:10:03.682740293 +0000 UTC m=+0.036187860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:10:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b908555b43a8f599f73e239c30ebb1d55daf943cc6c4c8aa21848ccc8269e3fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b908555b43a8f599f73e239c30ebb1d55daf943cc6c4c8aa21848ccc8269e3fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b908555b43a8f599f73e239c30ebb1d55daf943cc6c4c8aa21848ccc8269e3fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b908555b43a8f599f73e239c30ebb1d55daf943cc6c4c8aa21848ccc8269e3fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:03 compute-0 podman[427783]: 2025-10-03 10:10:03.80818978 +0000 UTC m=+0.161637377 container init be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 10:10:03 compute-0 podman[427783]: 2025-10-03 10:10:03.830771333 +0000 UTC m=+0.184218870 container start be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:10:03 compute-0 podman[427783]: 2025-10-03 10:10:03.834870995 +0000 UTC m=+0.188318542 container attach be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:10:04 compute-0 stoic_liskov[427799]: {
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:    "0": [
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:        {
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "devices": [
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "/dev/loop3"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            ],
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_name": "ceph_lv0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_size": "21470642176",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "name": "ceph_lv0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "tags": {
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cluster_name": "ceph",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.crush_device_class": "",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.encrypted": "0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osd_id": "0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.type": "block",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.vdo": "0"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            },
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "type": "block",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "vg_name": "ceph_vg0"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:        }
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:    ],
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:    "1": [
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:        {
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "devices": [
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "/dev/loop4"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            ],
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_name": "ceph_lv1",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_size": "21470642176",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "name": "ceph_lv1",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "tags": {
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cluster_name": "ceph",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.crush_device_class": "",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.encrypted": "0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osd_id": "1",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.type": "block",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.vdo": "0"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            },
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "type": "block",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "vg_name": "ceph_vg1"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:        }
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:    ],
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:    "2": [
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:        {
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "devices": [
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "/dev/loop5"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            ],
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_name": "ceph_lv2",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_size": "21470642176",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "name": "ceph_lv2",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "tags": {
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.cluster_name": "ceph",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.crush_device_class": "",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.encrypted": "0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osd_id": "2",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.type": "block",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:                "ceph.vdo": "0"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            },
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "type": "block",
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:            "vg_name": "ceph_vg2"
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:        }
Oct  3 10:10:04 compute-0 stoic_liskov[427799]:    ]
Oct  3 10:10:04 compute-0 stoic_liskov[427799]: }
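
The JSON block printed by stoic_liskov above has the shape of `ceph-volume lvm list --format json`: a map from OSD id to its backing logical volumes, here osd.0..osd.2 on /dev/loop3..5. A short parse of a captured copy (the file name osd_lvm.json is illustrative):

import json

with open("osd_lvm.json") as f:   # the blob above, saved to a file
    lvm = json.load(f)

for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
# e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid 25b10821-...)
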
Oct  3 10:10:04 compute-0 systemd[1]: libpod-be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c.scope: Deactivated successfully.
Oct  3 10:10:04 compute-0 podman[427808]: 2025-10-03 10:10:04.789781595 +0000 UTC m=+0.033194724 container died be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b908555b43a8f599f73e239c30ebb1d55daf943cc6c4c8aa21848ccc8269e3fd-merged.mount: Deactivated successfully.
Oct  3 10:10:04 compute-0 podman[427808]: 2025-10-03 10:10:04.85738873 +0000 UTC m=+0.100801849 container remove be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_liskov, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:10:04 compute-0 systemd[1]: libpod-conmon-be9b8ca75e9e323d1a54713c7bed2f5363f0dc9a6c4f81df4cf5b63fde94e39c.scope: Deactivated successfully.
Oct  3 10:10:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:10:05 compute-0 podman[427962]: 2025-10-03 10:10:05.764921682 +0000 UTC m=+0.065577160 container create ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sutherland, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 10:10:05 compute-0 systemd[1]: Started libpod-conmon-ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a.scope.
Oct  3 10:10:05 compute-0 podman[427962]: 2025-10-03 10:10:05.73297633 +0000 UTC m=+0.033631828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:10:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:10:05 compute-0 podman[427962]: 2025-10-03 10:10:05.879300296 +0000 UTC m=+0.179955794 container init ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:10:05 compute-0 podman[427962]: 2025-10-03 10:10:05.888022054 +0000 UTC m=+0.188677552 container start ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:10:05 compute-0 suspicious_sutherland[427978]: 167 167
Oct  3 10:10:05 compute-0 podman[427962]: 2025-10-03 10:10:05.894355358 +0000 UTC m=+0.195010856 container attach ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sutherland, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 10:10:05 compute-0 systemd[1]: libpod-ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a.scope: Deactivated successfully.
Oct  3 10:10:05 compute-0 podman[427962]: 2025-10-03 10:10:05.896056173 +0000 UTC m=+0.196711651 container died ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sutherland, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a7ad78537b00e5575de8e33102ce5ddd7b8b2e7ec259a8f866aac26461b3f8-merged.mount: Deactivated successfully.
Oct  3 10:10:05 compute-0 podman[427962]: 2025-10-03 10:10:05.937656885 +0000 UTC m=+0.238312363 container remove ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_sutherland, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:10:05 compute-0 systemd[1]: libpod-conmon-ae0f4e9ea2190e41b76f0fefc0d6d7aacad1bf0e52f5462a5feea342cd1fe66a.scope: Deactivated successfully.
Oct  3 10:10:06 compute-0 podman[428002]: 2025-10-03 10:10:06.139280502 +0000 UTC m=+0.051888774 container create 25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:10:06 compute-0 podman[428002]: 2025-10-03 10:10:06.116781141 +0000 UTC m=+0.029389433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:10:06 compute-0 systemd[1]: Started libpod-conmon-25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458.scope.
Oct  3 10:10:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:06 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55dd113db0cba41c9a5636721b13be3a5e92360c52ad74784daeb7beec26869/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55dd113db0cba41c9a5636721b13be3a5e92360c52ad74784daeb7beec26869/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55dd113db0cba41c9a5636721b13be3a5e92360c52ad74784daeb7beec26869/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55dd113db0cba41c9a5636721b13be3a5e92360c52ad74784daeb7beec26869/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:10:06 compute-0 podman[428002]: 2025-10-03 10:10:06.282381924 +0000 UTC m=+0.194990206 container init 25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:10:06 compute-0 podman[428002]: 2025-10-03 10:10:06.301743054 +0000 UTC m=+0.214351326 container start 25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:10:06 compute-0 podman[428002]: 2025-10-03 10:10:06.308408787 +0000 UTC m=+0.221017059 container attach 25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:10:06 compute-0 nova_compute[351685]: 2025-10-03 10:10:06.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:10:07 compute-0 nova_compute[351685]: 2025-10-03 10:10:07.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]: {
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "osd_id": 1,
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "type": "bluestore"
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:    },
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "osd_id": 2,
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "type": "bluestore"
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:    },
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "osd_id": 0,
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:        "type": "bluestore"
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]:    }
Oct  3 10:10:07 compute-0 xenodochial_diffie[428019]: }
Oct  3 10:10:07 compute-0 systemd[1]: libpod-25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458.scope: Deactivated successfully.
Oct  3 10:10:07 compute-0 systemd[1]: libpod-25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458.scope: Consumed 1.049s CPU time.
Oct  3 10:10:07 compute-0 podman[428002]: 2025-10-03 10:10:07.366654877 +0000 UTC m=+1.279263139 container died 25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b55dd113db0cba41c9a5636721b13be3a5e92360c52ad74784daeb7beec26869-merged.mount: Deactivated successfully.
Oct  3 10:10:07 compute-0 podman[428002]: 2025-10-03 10:10:07.438014012 +0000 UTC m=+1.350622284 container remove 25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:10:07 compute-0 systemd[1]: libpod-conmon-25aa22455fc27fecf9487fbcff676138549807529f2ae5a380800b25cabb0458.scope: Deactivated successfully.
Oct  3 10:10:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:10:07 compute-0 podman[428054]: 2025-10-03 10:10:07.490336887 +0000 UTC m=+0.093495644 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct  3 10:10:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:10:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:10:07 compute-0 podman[428053]: 2025-10-03 10:10:07.495145671 +0000 UTC m=+0.098774703 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc.)
Oct  3 10:10:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:10:07 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5aa58053-5032-4703-add2-0835165f4cfd does not exist
Oct  3 10:10:07 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3dae1142-acf1-4a4a-aeb6-cbd171b71e0d does not exist
Oct  3 10:10:07 compute-0 podman[428055]: 2025-10-03 10:10:07.531303699 +0000 UTC m=+0.128982451 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:10:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:10:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:10:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Oct  3 10:10:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:11 compute-0 nova_compute[351685]: 2025-10-03 10:10:11.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:12 compute-0 nova_compute[351685]: 2025-10-03 10:10:12.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:12 compute-0 podman[428174]: 2025-10-03 10:10:12.819366963 +0000 UTC m=+0.082548115 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:10:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:10:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:16 compute-0 nova_compute[351685]: 2025-10-03 10:10:16.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:16 compute-0 podman[428195]: 2025-10-03 10:10:16.84343443 +0000 UTC m=+0.084446715 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  3 10:10:16 compute-0 podman[428193]: 2025-10-03 10:10:16.852085197 +0000 UTC m=+0.105136407 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:10:16 compute-0 podman[428194]: 2025-10-03 10:10:16.863836174 +0000 UTC m=+0.115455539 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct  3 10:10:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:17 compute-0 nova_compute[351685]: 2025-10-03 10:10:17.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:21 compute-0 nova_compute[351685]: 2025-10-03 10:10:21.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:21 compute-0 podman[428256]: 2025-10-03 10:10:21.833323269 +0000 UTC m=+0.083752433 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm)
Oct  3 10:10:22 compute-0 nova_compute[351685]: 2025-10-03 10:10:22.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:26 compute-0 nova_compute[351685]: 2025-10-03 10:10:26.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:26 compute-0 podman[428277]: 2025-10-03 10:10:26.829937945 +0000 UTC m=+0.085151088 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:10:26 compute-0 podman[428278]: 2025-10-03 10:10:26.860798953 +0000 UTC m=+0.112911027 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, name=ubi9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, version=9.4)
Oct  3 10:10:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:27 compute-0 nova_compute[351685]: 2025-10-03 10:10:27.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:29 compute-0 podman[157165]: time="2025-10-03T10:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:10:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:10:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9056 "" "Go-http-client/1.1"
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.275 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "5b008829-2c76-4e40-b9e6-0e3d73095522" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.275 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.276 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.276 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.276 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.278 2 INFO nova.compute.manager [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Terminating instance#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.279 2 DEBUG nova.compute.manager [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 10:10:30 compute-0 kernel: tapd601bb86-72 (unregistering): left promiscuous mode
Oct  3 10:10:30 compute-0 NetworkManager[45015]: <info>  [1759486230.4247] device (tapd601bb86-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 10:10:30 compute-0 ovn_controller[88471]: 2025-10-03T10:10:30Z|00057|binding|INFO|Releasing lport d601bb86-7265-40b5-ac1c-42d005514cfd from this chassis (sb_readonly=0)
Oct  3 10:10:30 compute-0 ovn_controller[88471]: 2025-10-03T10:10:30Z|00058|binding|INFO|Setting lport d601bb86-7265-40b5-ac1c-42d005514cfd down in Southbound
Oct  3 10:10:30 compute-0 ovn_controller[88471]: 2025-10-03T10:10:30Z|00059|binding|INFO|Removing iface tapd601bb86-72 ovn-installed in OVS
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.447 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:23:11 192.168.0.19'], port_security=['fa:16:3e:4c:23:11 192.168.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e77v6yplftfd-pkavijefxpwp-6g6pxdaavpud-port-rqxwqbtnumad', 'neutron:cidrs': '192.168.0.19/24', 'neutron:device_id': '5b008829-2c76-4e40-b9e6-0e3d73095522', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e77v6yplftfd-pkavijefxpwp-6g6pxdaavpud-port-rqxwqbtnumad', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d601bb86-7265-40b5-ac1c-42d005514cfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.448 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d601bb86-7265-40b5-ac1c-42d005514cfd in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 unbound from our chassis#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.450 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.466 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[acd0c3bc-2f63-46e9-9c2b-c846483dc690]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:10:30 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct  3 10:10:30 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 3min 28.087s CPU time.
Oct  3 10:10:30 compute-0 systemd-machined[137653]: Machine qemu-2-instance-00000002 terminated.
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.495 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[3acee854-c681-4ad6-b77e-1311b58d055a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.500 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[35b277a6-7a4b-4e14-806a-e6eb5046b936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.519 2 INFO nova.virt.libvirt.driver [-] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Instance destroyed successfully.#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.519 2 DEBUG nova.objects.instance [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'resources' on Instance uuid 5b008829-2c76-4e40-b9e6-0e3d73095522 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.530 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[bccdb0df-1d5d-4f6b-8a73-6bd88d4dbb9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.534 2 DEBUG nova.virt.libvirt.vif [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T10:02:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-pkavijefxpwp-6g6pxdaavpud-vnf-i3ubmryl4tho',id=2,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-03T10:03:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-u8cxzf0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T10:03:06Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTIyMzI1NTkzMjMwOTc4NTQ3MTU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MjIzMjU1OTMyMzA5Nzg1NDcxNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlsc
y5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTIyMzI1NTkzMjMwOTc4NTQ3MTU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yMjMyNTU5MzIzMDk3ODU0NzE1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4w
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.534 2 DEBUG nova.virt.libvirt.vif [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] instance=Instance(... head of the dump lost to rsyslogd's 8096-byte cap, see the "message too long" entry below ...,user_data=<base64 MIME multipart; it decodes to (1) an Apache-licensed heat-cfntools "heat-provision" Python script that logs to /var/log/heat-provision.log (mode 0600), chmods /var/lib/heat-cfntools/cfn-userdata to 0700, runs it via subprocess, and touches /var/lib/heat-cfntools/provision-finished on success; (2) attachment "cfn-metadata-server": https://heat-cfnapi-internal.openstack.svc:8000/v1/; (3) attachment "cfn-boto-cfg": [Boto] debug=0, is_secure=0, https_validate_certificates=1, cfn_region_name=heat, cfn_region_endpoint=heat-cfnapi-internal.openstack.svc>,user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=5b008829-2c76-4e40-b9e6-0e3d73095522,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", ... identical to the "Converting VIF" JSON in the next entry ...} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.535 2 DEBUG nova.network.os_vif_util [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "d601bb86-7265-40b5-ac1c-42d005514cfd", "address": "fa:16:3e:4c:23:11", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd601bb86-72", "ovs_interfaceid": "d601bb86-7265-40b5-ac1c-42d005514cfd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.535 2 DEBUG nova.network.os_vif_util [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4c:23:11,bridge_name='br-int',has_traffic_filtering=True,id=d601bb86-7265-40b5-ac1c-42d005514cfd,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd601bb86-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.536 2 DEBUG os_vif [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:23:11,bridge_name='br-int',has_traffic_filtering=True,id=d601bb86-7265-40b5-ac1c-42d005514cfd,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd601bb86-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
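The three entries above show the hand-off from nova's own VIF model to the os-vif library: nova_to_osvif_vif() builds a VIFOpenVSwitch object, then os_vif.unplug() runs the ovs plugin against it. A minimal consumer-side sketch of that sequence, reusing the IDs from this log (the instance name is a made-up placeholder, and a real call needs a reachable ovsdb):

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # load the ovs/linux-bridge/... plugins once per process

    vif = vif_obj.VIFOpenVSwitch(
        id="d601bb86-7265-40b5-ac1c-42d005514cfd",
        address="fa:16:3e:4c:23:11",
        bridge_name="br-int",
        vif_name="tapd601bb86-72",
    )
    info = instance_info.InstanceInfo(
        uuid="5b008829-2c76-4e40-b9e6-0e3d73095522",
        name="instance-00000006",  # placeholder; not taken from this log
    )
    os_vif.unplug(vif, info)  # emits the "Unplugging vif ..." entry above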
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd601bb86-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.543 2 INFO os_vif [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4c:23:11,bridge_name='br-int',has_traffic_filtering=True,id=d601bb86-7265-40b5-ac1c-42d005514cfd,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd601bb86-72')#033[00m
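The DelPortCommand transaction above is ovsdbapp's IDL equivalent of `ovs-vsctl --if-exists del-port br-int tapd601bb86-72`: a single-command transaction committed against the local ovsdb-server. A minimal sketch of issuing it directly with ovsdbapp, assuming the default db.sock path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True matches the log: the txn is a no-op if the port is gone
    api.del_port("tapd601bb86-72", bridge="br-int", if_exists=True).execute(
        check_error=True)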
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.547 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[52afae62-e2d1-4907-b374-60ecf7f1e4f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 832, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 39813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428343, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.561 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8c24e7-cfef-47e2-96ec-c54f850f79ee]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489551, 'tstamp': 489551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428351, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489554, 'tstamp': 489554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428351, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
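The privsep replies above are netlink messages in pyroute2's format: attributes arrive as an 'attrs' list of [name, value] pairs, not as a dict. A small helper for pulling one attribute out, mirroring pyroute2's get_attr(); `reply` is a hypothetical variable bound to the second RTM_NEWADDR message above:

    def get_attr(msg, name, default=None):
        """Return the first value of a netlink attribute from an 'attrs' list."""
        for key, value in msg.get("attrs", []):
            if key == name:
                return value
        return default

    # Yields '169.254.169.254': the link-local metadata address served in the
    # ovnmeta-67eed0ac... namespace on interface tap67eed0ac-d1.
    address = get_attr(reply, "IFA_ADDRESS")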
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.564 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.568 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.568 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.570 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:10:30 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:30.571 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.904 2 DEBUG nova.compute.manager [req-d5746930-b8d7-4f87-9ea5-f451f70964ab req-200ab65c-049f-4a71-b169-390eb338802a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received event network-vif-unplugged-d601bb86-7265-40b5-ac1c-42d005514cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.905 2 DEBUG oslo_concurrency.lockutils [req-d5746930-b8d7-4f87-9ea5-f451f70964ab req-200ab65c-049f-4a71-b169-390eb338802a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.906 2 DEBUG oslo_concurrency.lockutils [req-d5746930-b8d7-4f87-9ea5-f451f70964ab req-200ab65c-049f-4a71-b169-390eb338802a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.906 2 DEBUG oslo_concurrency.lockutils [req-d5746930-b8d7-4f87-9ea5-f451f70964ab req-200ab65c-049f-4a71-b169-390eb338802a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.907 2 DEBUG nova.compute.manager [req-d5746930-b8d7-4f87-9ea5-f451f70964ab req-200ab65c-049f-4a71-b169-390eb338802a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] No waiting events found dispatching network-vif-unplugged-d601bb86-7265-40b5-ac1c-42d005514cfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:10:30 compute-0 nova_compute[351685]: 2025-10-03 10:10:30.907 2 DEBUG nova.compute.manager [req-d5746930-b8d7-4f87-9ea5-f451f70964ab req-200ab65c-049f-4a71-b169-390eb338802a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received event network-vif-unplugged-d601bb86-7265-40b5-ac1c-42d005514cfd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  3 10:10:30 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:10:30.534 2 DEBUG nova.virt.libvirt.vif [None req-25f84277-6cc2-4b [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
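This rsyslogd complaint explains the mangled nova VIF dump at the top of this excerpt: records over the configured 8096 bytes (rsyslog's default) are cut off. If the full dumps matter, the cap can be raised with rsyslog's global maxMessageSize parameter; a sketch for /etc/rsyslog.conf with an assumed value, placed near the top of the config before inputs are loaded:

    # raise the per-message cap from the 8096-byte default
    global(maxMessageSize="64k")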
Oct  3 10:10:31 compute-0 nova_compute[351685]: 2025-10-03 10:10:31.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:31.001 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:10:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:31.002 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 10:10:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:31 compute-0 openstack_network_exporter[367524]: ERROR   10:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:10:31 compute-0 openstack_network_exporter[367524]: ERROR   10:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:10:31 compute-0 openstack_network_exporter[367524]: ERROR   10:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:10:31 compute-0 openstack_network_exporter[367524]: ERROR   10:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:10:31 compute-0 openstack_network_exporter[367524]: ERROR   10:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
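These exporter errors mean no ovs-appctl/ovn-appctl control sockets were found under the runtime directories the container mounts (/run/openvswitch and /run/ovn, per the openstack_network_exporter volume list later in this excerpt); the ovn-northd failures are expected on a compute node, since northd runs on the control plane. A minimal sketch of the existence check that is failing here, assuming the conventional <daemon>.<pid>.ctl socket naming:

    import glob
    import os

    def find_ctl_sockets(rundir: str, daemon: str) -> list:
        """Return control sockets such as /run/openvswitch/ovsdb-server.123.ctl."""
        return glob.glob(os.path.join(rundir, daemon + ".*.ctl"))

    print(find_ctl_sockets("/run/openvswitch", "ovsdb-server"))  # [] -> the error above
    print(find_ctl_sockets("/run/ovn", "ovn-northd"))            # [] on a compute host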
Oct  3 10:10:31 compute-0 nova_compute[351685]: 2025-10-03 10:10:31.659 2 INFO nova.virt.libvirt.driver [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Deleting instance files /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522_del#033[00m
Oct  3 10:10:31 compute-0 nova_compute[351685]: 2025-10-03 10:10:31.660 2 INFO nova.virt.libvirt.driver [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Deletion of /var/lib/nova/instances/5b008829-2c76-4e40-b9e6-0e3d73095522_del complete#033[00m
Oct  3 10:10:31 compute-0 nova_compute[351685]: 2025-10-03 10:10:31.720 2 INFO nova.compute.manager [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Took 1.44 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 10:10:31 compute-0 nova_compute[351685]: 2025-10-03 10:10:31.722 2 DEBUG oslo.service.loopingcall [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 10:10:31 compute-0 nova_compute[351685]: 2025-10-03 10:10:31.723 2 DEBUG nova.compute.manager [-] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 10:10:31 compute-0 nova_compute[351685]: 2025-10-03 10:10:31.723 2 DEBUG nova.network.neutron [-] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 10:10:32 compute-0 nova_compute[351685]: 2025-10-03 10:10:32.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.041 2 DEBUG nova.compute.manager [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received event network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.042 2 DEBUG oslo_concurrency.lockutils [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.042 2 DEBUG oslo_concurrency.lockutils [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.042 2 DEBUG oslo_concurrency.lockutils [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.042 2 DEBUG nova.compute.manager [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] No waiting events found dispatching network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.042 2 WARNING nova.compute.manager [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received unexpected event network-vif-plugged-d601bb86-7265-40b5-ac1c-42d005514cfd for instance with vm_state active and task_state deleting.#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.042 2 DEBUG nova.compute.manager [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Received event network-changed-d601bb86-7265-40b5-ac1c-42d005514cfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.043 2 DEBUG nova.compute.manager [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Refreshing instance network info cache due to event network-changed-d601bb86-7265-40b5-ac1c-42d005514cfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.043 2 DEBUG oslo_concurrency.lockutils [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.043 2 DEBUG oslo_concurrency.lockutils [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.043 2 DEBUG nova.network.neutron [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Refreshing network info cache for port d601bb86-7265-40b5-ac1c-42d005514cfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.108 2 DEBUG nova.network.neutron [-] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.129 2 INFO nova.compute.manager [-] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Took 1.41 seconds to deallocate network for instance.#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.169 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.170 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.271 2 INFO nova.network.neutron [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Port d601bb86-7265-40b5-ac1c-42d005514cfd from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.272 2 DEBUG nova.network.neutron [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.279 2 DEBUG oslo_concurrency.processutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.301 2 DEBUG oslo_concurrency.lockutils [req-21d88be5-4c0f-45b3-b9c9-8a2382903ce0 req-2f98578e-208c-49be-8a86-ef7e246170cc 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-5b008829-2c76-4e40-b9e6-0e3d73095522" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:10:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:10:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1247122335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.796 2 DEBUG oslo_concurrency.processutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
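Nova's resource audit sizes its Ceph-backed storage by shelling out to the exact command logged above and parsing the JSON. A standalone sketch of the same probe, reusing the --id/--conf arguments from the log:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]  # cluster-wide totals, in bytes
    # The pgmap entries' "60 GiB / 60 GiB avail" come from these counters.
    print("%.1f GiB free of %.1f GiB" % (
        stats["total_avail_bytes"] / 2**30, stats["total_bytes"] / 2**30))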
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.806 2 DEBUG nova.compute.provider_tree [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:10:33 compute-0 nova_compute[351685]: 2025-10-03 10:10:33.990 2 DEBUG nova.scheduler.client.report [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
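The inventory above is what the placement service schedules against; for each resource class the usable capacity is (total - reserved) * allocation_ratio. Checking the logged numbers:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2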
Oct  3 10:10:34 compute-0 nova_compute[351685]: 2025-10-03 10:10:34.261 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:10:34 compute-0 nova_compute[351685]: 2025-10-03 10:10:34.417 2 INFO nova.scheduler.client.report [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Deleted allocations for instance 5b008829-2c76-4e40-b9e6-0e3d73095522#033[00m
Oct  3 10:10:35 compute-0 nova_compute[351685]: 2025-10-03 10:10:35.001 2 DEBUG oslo_concurrency.lockutils [None req-25f84277-6cc2-4b0c-827a-2de9a6886c68 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "5b008829-2c76-4e40-b9e6-0e3d73095522" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:10:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 218 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Oct  3 10:10:35 compute-0 nova_compute[351685]: 2025-10-03 10:10:35.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:36 compute-0 nova_compute[351685]: 2025-10-03 10:10:36.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:36 compute-0 nova_compute[351685]: 2025-10-03 10:10:36.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  3 10:10:36 compute-0 nova_compute[351685]: 2025-10-03 10:10:36.753 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  3 10:10:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:10:37 compute-0 nova_compute[351685]: 2025-10-03 10:10:37.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:37 compute-0 nova_compute[351685]: 2025-10-03 10:10:37.748 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:37 compute-0 podman[428387]: 2025-10-03 10:10:37.822635213 +0000 UTC m=+0.075911801 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct  3 10:10:37 compute-0 podman[428386]: 2025-10-03 10:10:37.855016021 +0000 UTC m=+0.109501418 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Oct  3 10:10:37 compute-0 podman[428388]: 2025-10-03 10:10:37.867458348 +0000 UTC m=+0.111618445 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  3 10:10:39 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:39.004 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:10:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:10:39 compute-0 nova_compute[351685]: 2025-10-03 10:10:39.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:39 compute-0 nova_compute[351685]: 2025-10-03 10:10:39.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:10:39 compute-0 nova_compute[351685]: 2025-10-03 10:10:39.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:10:39 compute-0 nova_compute[351685]: 2025-10-03 10:10:39.946 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:10:39 compute-0 nova_compute[351685]: 2025-10-03 10:10:39.946 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:10:39 compute-0 nova_compute[351685]: 2025-10-03 10:10:39.946 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:10:39 compute-0 nova_compute[351685]: 2025-10-03 10:10:39.946 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:10:40 compute-0 nova_compute[351685]: 2025-10-03 10:10:40.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:40 compute-0 nova_compute[351685]: 2025-10-03 10:10:40.984 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
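The network_info cache entry above is a list of VIF dicts, each nesting network -> subnets -> ips -> floating_ips. A small flattener that recovers the address pairs; `network_info` is a hypothetical variable bound to the JSON list logged above:

    def list_addresses(network_info):
        """Yield (fixed_ip, [floating_ips]) pairs from a nova network_info list."""
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    yield ip["address"], floats

    # For the entry above: ('192.168.0.158', ['192.168.122.250'])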
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.035 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.035 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:10:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:10:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:41.600 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:10:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:41.601 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:10:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:10:41.601 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.782 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.783 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.784 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.785 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:10:41 compute-0 nova_compute[351685]: 2025-10-03 10:10:41.786 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:10:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954984422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.283 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.479 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.480 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.481 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.490 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.491 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.492 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.500 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.501 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.501 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.871 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.872 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3451MB free_disk=59.88885498046875GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.873 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:10:42 compute-0 nova_compute[351685]: 2025-10-03 10:10:42.874 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.012 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.013 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance cd0be179-1941-400f-a1e6-8ee6243ee71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.013 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 10f21e57-50ad-48e0-a664-66fd8affbe73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.013 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.014 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
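
The resource tracker's arithmetic in the two entries above can be checked by hand: free_vcpus=5 is total_vcpus=8 minus the 3 VCPUs allocated to the instances listed earlier, used_ram=2048MB is three 512MB instance allocations plus the 512MB the host reserves (the same figure appears as the MEMORY_MB "reserved" value in the placement inventory a few lines below), and used_disk=6GB is three 2GB root disks. A minimal sketch with the values copied from the log (the dict layout is illustrative, not Nova's internal structure):

    # Values copied from the log lines above; not Nova code.
    instances = [{"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 2}] * 3
    reserved_host_memory_mb = 512  # matches MEMORY_MB "reserved" in the inventory below

    total_vcpus = 8
    used_vcpus = sum(i["VCPU"] for i in instances)      # 3
    free_vcpus = total_vcpus - used_vcpus               # 5, as logged

    used_ram_mb = reserved_host_memory_mb + sum(i["MEMORY_MB"] for i in instances)
    used_disk_gb = sum(i["DISK_GB"] for i in instances)
    print(free_vcpus, used_ram_mb, used_disk_gb)        # 5 2048 6
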
Oct  3 10:10:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.080 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:10:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:10:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516766500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.522 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
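
The ceph df subprocess above is how the driver sizes shared storage: the 0.443s call returns cluster totals as JSON, from which the DISK_GB inventory is derived. The same query can be reproduced by hand; a sketch using the exact command from the log (the stats keys are the standard ceph df JSON fields):

    import json
    import subprocess

    # The same command oslo.concurrency logged above (returned 0 in 0.443s).
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)

    # Cluster-wide totals, in bytes.
    stats = df["stats"]
    print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])
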
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.530 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.553 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
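
The inventory dict in the entry above is everything placement needs to compute schedulable capacity: per resource class, capacity = (total - reserved) * allocation_ratio. With the logged values that yields 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk. A worked check:

    # Inventory exactly as logged above (min_unit/max_unit/step_size omitted).
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
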
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.555 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.555 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
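
The acquire/release pair bracketing the update above (waited 0.001s, held 0.681s) is oslo.concurrency's lockutils serializing access to the "compute_resources" state. A minimal sketch of the same pattern using the public API; the lock name matches the log, the guarded body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Runs under the same kind of in-process lock the log shows being
        # acquired, held for 0.681s, and then released.
        pass

    update_available_resource()
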
Oct  3 10:10:43 compute-0 nova_compute[351685]: 2025-10-03 10:10:43.555 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:43 compute-0 podman[428494]: 2025-10-03 10:10:43.802532414 +0000 UTC m=+0.062757740 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:10:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:10:45 compute-0 nova_compute[351685]: 2025-10-03 10:10:45.518 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759486230.517225, 5b008829-2c76-4e40-b9e6-0e3d73095522 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:10:45 compute-0 nova_compute[351685]: 2025-10-03 10:10:45.518 2 INFO nova.compute.manager [-] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] VM Stopped (Lifecycle Event)#033[00m
Oct  3 10:10:45 compute-0 nova_compute[351685]: 2025-10-03 10:10:45.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:45 compute-0 nova_compute[351685]: 2025-10-03 10:10:45.605 2 DEBUG nova.compute.manager [None req-1c81cfce-563b-4bf8-bcc6-cf6b0f0c7ce3 - - - - - -] [instance: 5b008829-2c76-4e40-b9e6-0e3d73095522] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:10:45 compute-0 nova_compute[351685]: 2025-10-03 10:10:45.606 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:45 compute-0 nova_compute[351685]: 2025-10-03 10:10:45.607 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:45 compute-0 nova_compute[351685]: 2025-10-03 10:10:45.607 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:10:46
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'vms', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.mgr', 'images']
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
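
The balancer pass above ran in upmap mode with a 5% max-misplaced budget and prepared 0 of an allowed 10 changes, meaning PG placement across the listed pools is already as even as upmap can make it. The same state can be confirmed from the CLI; a sketch (assuming the standard ceph balancer status JSON, which carries at least "active" and "mode"):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status["active"], status["mode"])   # expect: True upmap
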
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:10:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:10:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:10:46 compute-0 nova_compute[351685]: 2025-10-03 10:10:46.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 341 B/s wr, 12 op/s
Oct  3 10:10:47 compute-0 nova_compute[351685]: 2025-10-03 10:10:47.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:47 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  3 10:10:47 compute-0 podman[428513]: 2025-10-03 10:10:47.41339508 +0000 UTC m=+0.113311431 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:10:47 compute-0 podman[428515]: 2025-10-03 10:10:47.419559656 +0000 UTC m=+0.101508541 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:10:47 compute-0 podman[428514]: 2025-10-03 10:10:47.4343603 +0000 UTC m=+0.123041411 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
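
The health_status=healthy events above are podman's healthcheck timers firing: each container's config_data mounts /var/lib/openstack/healthchecks/<name> into the container and runs /openstack/healthcheck inside it. The same check can be invoked manually; a sketch (container name taken from the log):

    import subprocess

    # Runs the container's configured healthcheck once; exit status 0 means healthy.
    rc = subprocess.call(["podman", "healthcheck", "run", "iscsid"])
    print("healthy" if rc == 0 else "unhealthy")
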
Oct  3 10:10:47 compute-0 nova_compute[351685]: 2025-10-03 10:10:47.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:49 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  3 10:10:50 compute-0 nova_compute[351685]: 2025-10-03 10:10:50.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:52 compute-0 nova_compute[351685]: 2025-10-03 10:10:52.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:52 compute-0 podman[428572]: 2025-10-03 10:10:52.81792584 +0000 UTC m=+0.078239747 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:10:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:10:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2517331285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:10:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:10:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2517331285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
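
The audited mon commands above are a client-side capacity poll: client.openstack, connecting from 192.168.122.10 (plausibly the control-plane Cinder RBD driver, unlike the compute-local poll at 192.168.122.100 earlier), asks for cluster df and then for the volumes pool quota. The quota half can be reproduced with the standard CLI; a sketch (get-quota's JSON carries quota_max_bytes and quota_max_objects, where 0 means no quota set):

    import json
    import subprocess

    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))
    print(quota["quota_max_bytes"], quota["quota_max_objects"])  # 0 0 -> unlimited
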
Oct  3 10:10:54 compute-0 nova_compute[351685]: 2025-10-03 10:10:54.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:10:54 compute-0 nova_compute[351685]: 2025-10-03 10:10:54.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016574282086344989 of space, bias 1.0, pg target 0.49722846259034964 quantized to 32 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:10:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
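
The pg_autoscaler entries above are internally consistent: each pool's raw pg target is its used-capacity ratio times its bias times a cluster-wide PG budget, then quantized to a power of two. Back-solving from any of the logged pairs gives a budget of 300 PGs, which would correspond to the default mon_target_pg_per_osd=100 across 3 OSDs (that split is an inference; the 300 itself falls straight out of the ratios). A check against four of the pools:

    PG_BUDGET = 300  # inferred: pg_target / (ratio * bias) for every pool logged above

    pools = [  # (name, used-capacity ratio, bias) as logged
        (".mgr",               7.185749983720779e-06,  1.0),
        ("vms",                0.0016574282086344989,  1.0),
        ("images",             0.00025334537995702286, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0),
    ]
    for name, ratio, bias in pools:
        print(name, ratio * bias * PG_BUDGET)
    # .mgr 0.0021557..., vms 0.4972284..., images 0.0760036...,
    # cephfs.cephfs.meta 0.0006104... -- the logged "pg target" values
    # before quantization.
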
Oct  3 10:10:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:10:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5922 writes, 26K keys, 5922 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5922 writes, 5922 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1325 writes, 6022 keys, 1325 commit groups, 1.0 writes per commit group, ingest: 8.69 MB, 0.01 MB/s#012Interval WAL: 1325 writes, 1325 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     66.7      0.44              0.11        15    0.029       0      0       0.0       0.0#012  L6      1/0    7.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    106.7     87.4      1.15              0.32        14    0.082     64K   7731       0.0       0.0#012 Sum      1/0    7.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     77.2     81.7      1.59              0.43        29    0.055     64K   7731       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    116.2    116.6      0.33              0.13         8    0.042     20K   2562       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    106.7     87.4      1.15              0.32        14    0.082     64K   7731       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     68.1      0.43              0.11        14    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.6 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 308.00 MB usage: 13.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000102 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(844,12.76 MB,4.14186%) FilterBlock(30,184.80 KB,0.0585928%) IndexBlock(30,340.20 KB,0.107867%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  3 10:10:55 compute-0 nova_compute[351685]: 2025-10-03 10:10:55.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:10:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:57 compute-0 nova_compute[351685]: 2025-10-03 10:10:57.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:10:57 compute-0 podman[428593]: 2025-10-03 10:10:57.803635287 +0000 UTC m=+0.064652561 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:10:57 compute-0 podman[428594]: 2025-10-03 10:10:57.821047115 +0000 UTC m=+0.078826955 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, release=1214.1726694543, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:10:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:10:59 compute-0 podman[157165]: time="2025-10-03T10:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:10:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:10:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9048 "" "Go-http-client/1.1"
Oct  3 10:11:00 compute-0 nova_compute[351685]: 2025-10-03 10:11:00.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:01 compute-0 openstack_network_exporter[367524]: ERROR   10:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:11:01 compute-0 openstack_network_exporter[367524]: ERROR   10:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:11:01 compute-0 openstack_network_exporter[367524]: ERROR   10:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:11:01 compute-0 openstack_network_exporter[367524]: ERROR   10:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:11:01 compute-0 openstack_network_exporter[367524]: ERROR   10:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:11:02 compute-0 nova_compute[351685]: 2025-10-03 10:11:02.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:05 compute-0 ovn_controller[88471]: 2025-10-03T10:11:05Z|00060|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct  3 10:11:05 compute-0 nova_compute[351685]: 2025-10-03 10:11:05.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:07 compute-0 nova_compute[351685]: 2025-10-03 10:11:07.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:08 compute-0 podman[428687]: 2025-10-03 10:11:08.0249428 +0000 UTC m=+0.089801607 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute)
Oct  3 10:11:08 compute-0 podman[428686]: 2025-10-03 10:11:08.048518046 +0000 UTC m=+0.117819625 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, architecture=x86_64)
Oct  3 10:11:08 compute-0 podman[428688]: 2025-10-03 10:11:08.061390227 +0000 UTC m=+0.125023104 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:11:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:11:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:11:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:11:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:11:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:11:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:11:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a80965d2-4dce-4296-8be2-2797d79098f0 does not exist
Oct  3 10:11:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5961f456-76e5-403d-9a20-5f5b059f556c does not exist
Oct  3 10:11:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 01b8c104-8d45-4415-9d12-280cc716da5d does not exist
Oct  3 10:11:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:11:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:11:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:11:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:11:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:11:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:11:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:11:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:11:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:11:09 compute-0 podman[428968]: 2025-10-03 10:11:09.53068249 +0000 UTC m=+0.030949691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:11:09 compute-0 podman[428968]: 2025-10-03 10:11:09.910656769 +0000 UTC m=+0.410923950 container create ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:11:10 compute-0 systemd[1]: Started libpod-conmon-ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a.scope.
Oct  3 10:11:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:11:10 compute-0 podman[428968]: 2025-10-03 10:11:10.496602592 +0000 UTC m=+0.996869793 container init ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:11:10 compute-0 podman[428968]: 2025-10-03 10:11:10.506099387 +0000 UTC m=+1.006366608 container start ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:11:10 compute-0 serene_blackwell[428984]: 167 167
Oct  3 10:11:10 compute-0 systemd[1]: libpod-ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a.scope: Deactivated successfully.
Oct  3 10:11:10 compute-0 conmon[428984]: conmon ae035be3e636903c4bd6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a.scope/container/memory.events
Oct  3 10:11:10 compute-0 nova_compute[351685]: 2025-10-03 10:11:10.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:10 compute-0 podman[428968]: 2025-10-03 10:11:10.587197784 +0000 UTC m=+1.087465045 container attach ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 10:11:10 compute-0 podman[428968]: 2025-10-03 10:11:10.589981463 +0000 UTC m=+1.090248654 container died ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:11:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-13c3004d0e97c31dc413df63d32bfbe206a7f68bf0f444659a1f4fc72cb08a51-merged.mount: Deactivated successfully.
Oct  3 10:11:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:11 compute-0 podman[428968]: 2025-10-03 10:11:11.302841181 +0000 UTC m=+1.803108402 container remove ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:11:11 compute-0 systemd[1]: libpod-conmon-ae035be3e636903c4bd673c25ef6c57b6a0e76ee7bd8c1f6cefd3af873b36b9a.scope: Deactivated successfully.
Oct  3 10:11:11 compute-0 podman[429007]: 2025-10-03 10:11:11.591100562 +0000 UTC m=+0.061173270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:11:11 compute-0 podman[429007]: 2025-10-03 10:11:11.717758049 +0000 UTC m=+0.187830697 container create 85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gauss, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 10:11:11 compute-0 systemd[1]: Started libpod-conmon-85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1.scope.
Oct  3 10:11:11 compute-0 systemd[1]: Started libcrun container.
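
The pull/create/init/start/attach/died/remove sequence repeated throughout this window is the lifecycle of the short-lived ceph-volume helper containers that cephadm launches; systemd wraps each one in a libpod-conmon-*.scope plus a libcrun scope. The same chain can be replayed from podman's event log. A minimal sketch in Python, assuming podman 4.x on this host and that the events are still inside the retention window (JSON field names vary slightly across podman versions, hence the case-tolerant lookups):

    import json
    import subprocess

    # Dump recent podman events, one JSON object per line (the Go-template
    # form of --format), then keep only the container lifecycle states that
    # appear in this log.
    out = subprocess.run(
        ["podman", "events", "--since", "10m", "--stream=false",
         "--format", "{{json .}}"],
        capture_output=True, text=True, check=True,
    ).stdout

    WANTED = {"pull", "create", "init", "start", "attach", "died", "remove"}
    for raw in out.splitlines():
        ev = json.loads(raw)
        status = ev.get("Status") or ev.get("status")
        name = ev.get("Name") or ev.get("name") or ""
        when = ev.get("Time") or ev.get("time")
        if status in WANTED:
            print(when, status, name)
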
Oct  3 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1205031e663d0dca0679748861069c7866efd1d6b6480582ba47cdf6d9ae4cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1205031e663d0dca0679748861069c7866efd1d6b6480582ba47cdf6d9ae4cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1205031e663d0dca0679748861069c7866efd1d6b6480582ba47cdf6d9ae4cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1205031e663d0dca0679748861069c7866efd1d6b6480582ba47cdf6d9ae4cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1205031e663d0dca0679748861069c7866efd1d6b6480582ba47cdf6d9ae4cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
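
These xfs warnings record the y2038 limit: without the bigtime feature, an xfs inode timestamp tops out at 0x7fffffff seconds after the epoch, the largest 32-bit signed value. The date behind the hex constant is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1, the largest 32-bit signed epoch second
    limit = 0x7FFFFFFF
    print(limit)                                        # 2147483647
    print(datetime.fromtimestamp(limit, timezone.utc))  # 2038-01-19 03:14:07+00:00
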
Oct  3 10:11:12 compute-0 podman[429007]: 2025-10-03 10:11:12.028099076 +0000 UTC m=+0.498171784 container init 85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:11:12 compute-0 podman[429007]: 2025-10-03 10:11:12.037466017 +0000 UTC m=+0.507538675 container start 85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gauss, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:11:12 compute-0 podman[429007]: 2025-10-03 10:11:12.074088729 +0000 UTC m=+0.544161397 container attach 85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gauss, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:11:12 compute-0 nova_compute[351685]: 2025-10-03 10:11:12.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:13 compute-0 determined_gauss[429023]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:11:13 compute-0 determined_gauss[429023]: --> relative data size: 1.0
Oct  3 10:11:13 compute-0 determined_gauss[429023]: --> All data devices are unavailable
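
The three arrow lines are ceph-volume's drive-group report: the spec passed in three LVM data devices and no physical ones, and since all three LVs already carry OSDs the report concludes "All data devices are unavailable", so there is nothing new to create. The same report can be requested by hand; a sketch, assuming the cephadm CLI is on the host and that `cephadm ceph-volume` is the intended way to run it in a container (the LV paths are the ones listed later in this log):

    import subprocess

    # Ask ceph-volume to report, without applying, what it would do with
    # these three LVs; --format json gives machine-readable output.
    report = subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "batch", "--report",
         "--format", "json",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(report)
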
Oct  3 10:11:13 compute-0 systemd[1]: libpod-85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1.scope: Deactivated successfully.
Oct  3 10:11:13 compute-0 systemd[1]: libpod-85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1.scope: Consumed 1.176s CPU time.
Oct  3 10:11:13 compute-0 podman[429052]: 2025-10-03 10:11:13.348486641 +0000 UTC m=+0.045257010 container died 85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gauss, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1205031e663d0dca0679748861069c7866efd1d6b6480582ba47cdf6d9ae4cc-merged.mount: Deactivated successfully.
Oct  3 10:11:13 compute-0 podman[429052]: 2025-10-03 10:11:13.889051342 +0000 UTC m=+0.585821711 container remove 85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:11:13 compute-0 systemd[1]: libpod-conmon-85c90849c07dad4f179e5a04c2a6c5a10a4fd6349680086adea7f13b710cbce1.scope: Deactivated successfully.
Oct  3 10:11:14 compute-0 podman[429065]: 2025-10-03 10:11:14.052885998 +0000 UTC m=+0.094021651 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
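
The ovn_metadata_agent health_status event above embeds the container's full edpm_ansible config as a Python-literal dict (config_data={...}), single quotes and True included, so json.loads will not parse it but ast.literal_eval will. A minimal extraction sketch; the brace matching is ad hoc (it ignores braces inside strings, which happens to be safe for these lines), and `line` stands for one raw journal line:

    import ast

    def extract_config_data(line: str) -> dict:
        """Pull the config_data={...} dict out of a podman health_status line."""
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # Python-literal dict: single quotes, True/False
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced braces in config_data")

    cfg = extract_config_data(line)
    print(cfg["healthcheck"]["test"])   # /openstack/healthcheck
    print(cfg["image"])
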
Oct  3 10:11:14 compute-0 podman[429221]: 2025-10-03 10:11:14.924756959 +0000 UTC m=+0.092666218 container create 1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:11:14 compute-0 podman[429221]: 2025-10-03 10:11:14.876843935 +0000 UTC m=+0.044753304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:11:14 compute-0 systemd[1]: Started libpod-conmon-1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac.scope.
Oct  3 10:11:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:11:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:15 compute-0 podman[429221]: 2025-10-03 10:11:15.090180767 +0000 UTC m=+0.258090076 container init 1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:11:15 compute-0 podman[429221]: 2025-10-03 10:11:15.102909374 +0000 UTC m=+0.270818643 container start 1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:11:15 compute-0 nice_allen[429237]: 167 167
Oct  3 10:11:15 compute-0 systemd[1]: libpod-1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac.scope: Deactivated successfully.
Oct  3 10:11:15 compute-0 podman[429221]: 2025-10-03 10:11:15.150185728 +0000 UTC m=+0.318095087 container attach 1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 10:11:15 compute-0 podman[429221]: 2025-10-03 10:11:15.15087353 +0000 UTC m=+0.318782839 container died 1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f8d58e62b6927cf17ad86d63c9fd94a748cb14980bbe6c1af0b86b24e13853-merged.mount: Deactivated successfully.
Oct  3 10:11:15 compute-0 podman[429221]: 2025-10-03 10:11:15.257428783 +0000 UTC m=+0.425338052 container remove 1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:11:15 compute-0 systemd[1]: libpod-conmon-1d60bc1a28b7be677b532011d59af32821356080a7b7c9f246e56c7023ed73ac.scope: Deactivated successfully.
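
nice_allen ran to completion printing only "167 167": the uid and gid of the ceph user baked into RHEL-family ceph images. cephadm probes the image for this pair before chowning host directories such as /var/lib/ceph; whether it uses exactly stat for the probe is an assumption here, but the observable output matches. A reproduction sketch:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Owner of /var/lib/ceph inside the image; RHEL-family ceph builds
    # create the ceph user and group as 167, matching the "167 167" above.
    print(subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip())
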
Oct  3 10:11:15 compute-0 podman[429261]: 2025-10-03 10:11:15.507523712 +0000 UTC m=+0.080486649 container create 75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:11:15 compute-0 podman[429261]: 2025-10-03 10:11:15.478572735 +0000 UTC m=+0.051535762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:11:15 compute-0 systemd[1]: Started libpod-conmon-75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66.scope.
Oct  3 10:11:15 compute-0 nova_compute[351685]: 2025-10-03 10:11:15.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708866bf279ec130d5b17fe65dbcacc75e4f90c9bc45dd705bbcd979424adb89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708866bf279ec130d5b17fe65dbcacc75e4f90c9bc45dd705bbcd979424adb89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708866bf279ec130d5b17fe65dbcacc75e4f90c9bc45dd705bbcd979424adb89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708866bf279ec130d5b17fe65dbcacc75e4f90c9bc45dd705bbcd979424adb89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:15 compute-0 podman[429261]: 2025-10-03 10:11:15.654630003 +0000 UTC m=+0.227592960 container init 75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 10:11:15 compute-0 podman[429261]: 2025-10-03 10:11:15.668559189 +0000 UTC m=+0.241522126 container start 75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 10:11:15 compute-0 podman[429261]: 2025-10-03 10:11:15.682365761 +0000 UTC m=+0.255328758 container attach 75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:11:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:16 compute-0 pensive_pike[429278]: {
Oct  3 10:11:16 compute-0 pensive_pike[429278]:    "0": [
Oct  3 10:11:16 compute-0 pensive_pike[429278]:        {
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "devices": [
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "/dev/loop3"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            ],
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_name": "ceph_lv0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_size": "21470642176",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "name": "ceph_lv0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "tags": {
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cluster_name": "ceph",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.crush_device_class": "",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.encrypted": "0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osd_id": "0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.type": "block",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.vdo": "0"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            },
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "type": "block",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "vg_name": "ceph_vg0"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:        }
Oct  3 10:11:16 compute-0 pensive_pike[429278]:    ],
Oct  3 10:11:16 compute-0 pensive_pike[429278]:    "1": [
Oct  3 10:11:16 compute-0 pensive_pike[429278]:        {
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "devices": [
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "/dev/loop4"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            ],
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_name": "ceph_lv1",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_size": "21470642176",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "name": "ceph_lv1",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "tags": {
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cluster_name": "ceph",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.crush_device_class": "",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.encrypted": "0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osd_id": "1",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.type": "block",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.vdo": "0"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            },
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "type": "block",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "vg_name": "ceph_vg1"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:        }
Oct  3 10:11:16 compute-0 pensive_pike[429278]:    ],
Oct  3 10:11:16 compute-0 pensive_pike[429278]:    "2": [
Oct  3 10:11:16 compute-0 pensive_pike[429278]:        {
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "devices": [
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "/dev/loop5"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            ],
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_name": "ceph_lv2",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_size": "21470642176",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "name": "ceph_lv2",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "tags": {
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.cluster_name": "ceph",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.crush_device_class": "",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.encrypted": "0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osd_id": "2",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.type": "block",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:                "ceph.vdo": "0"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            },
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "type": "block",
Oct  3 10:11:16 compute-0 pensive_pike[429278]:            "vg_name": "ceph_vg2"
Oct  3 10:11:16 compute-0 pensive_pike[429278]:        }
Oct  3 10:11:16 compute-0 pensive_pike[429278]:    ]
Oct  3 10:11:16 compute-0 pensive_pike[429278]: }
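
The JSON that pensive_pike printed has the shape of `ceph-volume lvm list --format json`: a mapping from OSD id to the logical volumes backing it, with the authoritative metadata carried in the ceph.* LV tags. A parsing sketch, with osd_lvm.json as a hypothetical capture of that output:

    import json

    with open("osd_lvm.json") as f:
        lvm = json.load(f)

    for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            tags = e["tags"]
            print(f"osd.{osd_id}: lv={e['lv_path']}"
                  f" devices={','.join(e['devices'])}"
                  f" osd_fsid={tags['ceph.osd_fsid']}"
                  f" encrypted={tags['ceph.encrypted']}")

Run against the dump above, this prints three lines, one per OSD, each backed by a single loop device (/dev/loop3, /dev/loop4, /dev/loop5).
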
Oct  3 10:11:16 compute-0 systemd[1]: libpod-75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66.scope: Deactivated successfully.
Oct  3 10:11:16 compute-0 podman[429261]: 2025-10-03 10:11:16.497503165 +0000 UTC m=+1.070466152 container died 75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-708866bf279ec130d5b17fe65dbcacc75e4f90c9bc45dd705bbcd979424adb89-merged.mount: Deactivated successfully.
Oct  3 10:11:16 compute-0 podman[429261]: 2025-10-03 10:11:16.680402622 +0000 UTC m=+1.253365579 container remove 75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:11:16 compute-0 systemd[1]: libpod-conmon-75b81e91a8db711b1b9a1425ba186ac7d47b36197cb89e52c35d5e396d166a66.scope: Deactivated successfully.
Oct  3 10:11:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.135 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.172 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.173 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid cd0be179-1941-400f-a1e6-8ee6243ee71a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.173 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid 10f21e57-50ad-48e0-a664-66fd8affbe73 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.174 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.174 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.174 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.175 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.175 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.175 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.296 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.297 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:11:17 compute-0 nova_compute[351685]: 2025-10-03 10:11:17.310 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
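
The oslo_concurrency.lockutils lines show nova's per-instance serialization inside _sync_power_states: one named lock per instance UUID, acquired, held for roughly 0.12s while the driver's power state is compared with the database, then released, with waited/held durations logged. Reduced to the stdlib, the pattern is a keyed mutex with timing around the critical section; a sketch of the idea, not nova's actual code:

    import threading
    import time
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)   # one lock per instance UUID

    def query_driver_power_state_and_sync(uuid: str, work) -> None:
        lock = _locks[uuid]
        t0 = time.monotonic()
        with lock:                         # "Acquiring lock ..." / "acquired"
            waited = time.monotonic() - t0
            t1 = time.monotonic()
            work()                         # compare driver vs DB power state
            held = time.monotonic() - t1
        print(f'Lock "{uuid}" waited {waited:.3f}s, held {held:.3f}s')

    query_driver_power_state_and_sync(
        "b43db93c-a4fe-46e9-8418-eedf4f5c135a", lambda: time.sleep(0.12))
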
Oct  3 10:11:17 compute-0 podman[429435]: 2025-10-03 10:11:17.703218246 +0000 UTC m=+0.077668288 container create bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pascal, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:11:17 compute-0 podman[429435]: 2025-10-03 10:11:17.668408082 +0000 UTC m=+0.042858104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:11:17 compute-0 systemd[1]: Started libpod-conmon-bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82.scope.
Oct  3 10:11:17 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:11:17 compute-0 podman[429435]: 2025-10-03 10:11:17.841673861 +0000 UTC m=+0.216123873 container init bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:11:17 compute-0 podman[429450]: 2025-10-03 10:11:17.85318406 +0000 UTC m=+0.104615752 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:11:17 compute-0 podman[429435]: 2025-10-03 10:11:17.855151142 +0000 UTC m=+0.229601114 container start bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:11:17 compute-0 podman[429435]: 2025-10-03 10:11:17.860722311 +0000 UTC m=+0.235172313 container attach bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pascal, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:11:17 compute-0 strange_pascal[429475]: 167 167
Oct  3 10:11:17 compute-0 systemd[1]: libpod-bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82.scope: Deactivated successfully.
Oct  3 10:11:17 compute-0 podman[429435]: 2025-10-03 10:11:17.86473596 +0000 UTC m=+0.239185942 container died bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 10:11:17 compute-0 podman[429451]: 2025-10-03 10:11:17.875701601 +0000 UTC m=+0.122026520 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 10:11:17 compute-0 podman[429448]: 2025-10-03 10:11:17.878929733 +0000 UTC m=+0.131348637 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e1fce73c3d182738a02018978a4ec988242e3f869489a86e241237b24c86ac7-merged.mount: Deactivated successfully.
Oct  3 10:11:17 compute-0 podman[429435]: 2025-10-03 10:11:17.926588769 +0000 UTC m=+0.301038751 container remove bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_pascal, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:11:17 compute-0 systemd[1]: libpod-conmon-bc7c211c14433ced92f346811a55b4d950b14dc91b9f5c7558e6662423fd3a82.scope: Deactivated successfully.
Oct  3 10:11:18 compute-0 podman[429536]: 2025-10-03 10:11:18.164004673 +0000 UTC m=+0.060335723 container create 9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:11:18 compute-0 systemd[1]: Started libpod-conmon-9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9.scope.
Oct  3 10:11:18 compute-0 podman[429536]: 2025-10-03 10:11:18.144290142 +0000 UTC m=+0.040621212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:11:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d71e9f4eb808294ccb954fd12cafff52bd2ece4ff42e92995cb687c868e49cb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d71e9f4eb808294ccb954fd12cafff52bd2ece4ff42e92995cb687c868e49cb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d71e9f4eb808294ccb954fd12cafff52bd2ece4ff42e92995cb687c868e49cb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d71e9f4eb808294ccb954fd12cafff52bd2ece4ff42e92995cb687c868e49cb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:11:18 compute-0 podman[429536]: 2025-10-03 10:11:18.339961438 +0000 UTC m=+0.236292538 container init 9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:11:18 compute-0 podman[429536]: 2025-10-03 10:11:18.355702402 +0000 UTC m=+0.252033462 container start 9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:11:18 compute-0 podman[429536]: 2025-10-03 10:11:18.362069356 +0000 UTC m=+0.258400436 container attach 9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:11:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:19 compute-0 musing_driscoll[429552]: {
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "osd_id": 1,
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "type": "bluestore"
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:    },
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "osd_id": 2,
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "type": "bluestore"
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:    },
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "osd_id": 0,
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:        "type": "bluestore"
Oct  3 10:11:19 compute-0 musing_driscoll[429552]:    }
Oct  3 10:11:19 compute-0 musing_driscoll[429552]: }
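The one-shot ceph container above (musing_driscoll) prints a JSON map of this node's three BlueStore OSDs keyed by osd_uuid; the shape matches ceph-volume raw list output, which cephadm runs when it scans host devices (an inference from the payload, not stated in the log). A minimal sketch of consuming it, assuming the syslog prefixes are stripped first; only the osd.0 entry is reproduced, the other two follow the same shape:

    import json

    # Payload as logged above with the syslog prefixes stripped; the osd.1
    # and osd.2 entries (same shape) are omitted for brevity.
    payload = '''
    {
        "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
            "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
            "type": "bluestore"
        }
    }
    '''

    for uuid, meta in sorted(json.loads(payload).items(),
                             key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} "
              f"({meta['type']}, fsid {meta['ceph_fsid']})")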
Oct  3 10:11:19 compute-0 systemd[1]: libpod-9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9.scope: Deactivated successfully.
Oct  3 10:11:19 compute-0 systemd[1]: libpod-9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9.scope: Consumed 1.121s CPU time.
Oct  3 10:11:19 compute-0 podman[429586]: 2025-10-03 10:11:19.547590551 +0000 UTC m=+0.044955409 container died 9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:11:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d71e9f4eb808294ccb954fd12cafff52bd2ece4ff42e92995cb687c868e49cb7-merged.mount: Deactivated successfully.
Oct  3 10:11:19 compute-0 podman[429586]: 2025-10-03 10:11:19.700507499 +0000 UTC m=+0.197872327 container remove 9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:11:19 compute-0 systemd[1]: libpod-conmon-9c58cf459620029f37081af39613e03ccfb690a1742c78377f476a338c9e3fa9.scope: Deactivated successfully.
Oct  3 10:11:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:11:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:11:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:11:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:11:19 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev da75f50a-c93d-4faa-b4de-947b587142fb does not exist
Oct  3 10:11:19 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0dca1ee3-7e75-4a81-9bfc-2696086cc9da does not exist
Oct  3 10:11:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:11:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:11:20 compute-0 nova_compute[351685]: 2025-10-03 10:11:20.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:22 compute-0 nova_compute[351685]: 2025-10-03 10:11:22.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:23 compute-0 podman[429651]: 2025-10-03 10:11:23.863079943 +0000 UTC m=+0.119777857 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 10:11:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:25 compute-0 nova_compute[351685]: 2025-10-03 10:11:25.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:27 compute-0 nova_compute[351685]: 2025-10-03 10:11:27.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:28 compute-0 podman[429671]: 2025-10-03 10:11:28.852351449 +0000 UTC m=+0.091918704 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:11:28 compute-0 podman[429672]: 2025-10-03 10:11:28.899387575 +0000 UTC m=+0.134205539 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2024-09-18T21:23:30, config_id=edpm, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., container_name=kepler, name=ubi9)
Oct  3 10:11:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:29 compute-0 podman[157165]: time="2025-10-03T10:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:11:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:11:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9046 "" "Go-http-client/1.1"
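The two GET requests above are a client scraping the libpod REST API over the podman socket; per the podman_exporter config_data logged at 10:11:28 (CONTAINER_HOST=unix:///run/podman/podman.sock), that exporter is the likely caller. A stdlib-only sketch of the first call; the socket path and endpoint are taken from the log, the rest is illustrative:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")  # host only feeds the Host: header
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])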
Oct  3 10:11:30 compute-0 nova_compute[351685]: 2025-10-03 10:11:30.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:31 compute-0 openstack_network_exporter[367524]: ERROR   10:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:11:31 compute-0 openstack_network_exporter[367524]: ERROR   10:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:11:31 compute-0 openstack_network_exporter[367524]: ERROR   10:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:11:31 compute-0 openstack_network_exporter[367524]: ERROR   10:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:11:31 compute-0 openstack_network_exporter[367524]: ERROR   10:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:11:32 compute-0 nova_compute[351685]: 2025-10-03 10:11:32.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:35 compute-0 nova_compute[351685]: 2025-10-03 10:11:35.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:37 compute-0 nova_compute[351685]: 2025-10-03 10:11:37.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:38 compute-0 podman[429713]: 2025-10-03 10:11:38.853325636 +0000 UTC m=+0.099876319 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct  3 10:11:38 compute-0 podman[429712]: 2025-10-03 10:11:38.868303606 +0000 UTC m=+0.119988903 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7)
Oct  3 10:11:38 compute-0 podman[429714]: 2025-10-03 10:11:38.928260637 +0000 UTC m=+0.147947079 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:11:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:40 compute-0 nova_compute[351685]: 2025-10-03 10:11:40.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.884 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.884 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.884 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.893 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '10f21e57-50ad-48e0-a664-66fd8affbe73', 'name': 'vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.896 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cd0be179-1941-400f-a1e6-8ee6243ee71a', 'name': 'vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {'metering.server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.899 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
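Each instance dict above carries Nova metadata such as metering.server_group; ceilometer forwards only keys under its reserved namespace (metering. by default, via the reserved_metadata_namespace option) into sample metadata, which is why tagging server groups this way survives into the samples. A sketch of that filtering under the default namespace; the helper below is illustrative, not ceilometer's actual function:

    # Metadata as logged for instance 10f21e57-50ad-48e0-a664-66fd8affbe73.
    instance_metadata = {"metering.server_group": "09b6fef3-eb54-4e45-9716-a57b7d592bd8"}

    NAMESPACE = "metering."  # ceilometer default: reserved_metadata_namespace

    def reserved_user_metadata(metadata):
        """Keep only keys in the reserved namespace, stripping the prefix."""
        return {k[len(NAMESPACE):]: v
                for k, v in metadata.items() if k.startswith(NAMESPACE)}

    print(reserved_user_metadata(instance_metadata))
    # -> {'server_group': '09b6fef3-eb54-4e45-9716-a57b7d592bd8'}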
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:11:40.900653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.907 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.911 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.917 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.918 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.918 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.918 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.918 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.919 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:11:40.918403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:11:40.920011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.943 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.943 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.944 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.964 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.965 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.965 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.998 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
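The disk.device.capacity samples above are raw bytes per device: 1073741824 B is exactly 2**30 B = 1 GiB, matching the m1.small flavor (disk=1, ephemeral=1) in the discovery dumps, while the small third value (583680 or 485376 B) is likely a config drive, inferred only from its size. A quick unit check:

    # Capacity samples (bytes) for instance 10f21e57-... as logged above.
    for vol in (1073741824, 1073741824, 583680):
        print(f"{vol:>10} B = {vol / 2**30:.6f} GiB")
    # 1073741824 B = 1.000000 GiB -- the flavor's 1 GiB root/ephemeral disks.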
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.004 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:11:41.004146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.040 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.041 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.041 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.093 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.094 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.095 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.152 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.152 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.153 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
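
[Editor's note] The block above shows one complete pollster cycle as it appears throughout this log: discovery of local instances, a coordination check against a hash ring (none configured here, hence "[None]"), a heartbeat update, one sample per instance and per device, and a closing INFO line. The following is a minimal sketch of that control flow under a deliberately simplified agent model; Pollster, run_pollster, and the discovery lambda are hypothetical stand-ins, not ceilometer's actual classes.

    from datetime import datetime, timezone

    class Pollster:
        def __init__(self, name, coordination_group=None):
            self.name = name
            self.coordination_group = coordination_group  # [None] in the logs above

        def get_samples(self, resources):
            # Real pollsters read libvirt stats; placeholders stand in here.
            for resource in resources:
                yield {"resource": resource, "meter": self.name, "volume": 0}

    def run_pollster(pollster, discover_local_instances, heartbeats):
        # 1. Discovery: find the local instances to poll.
        resources = discover_local_instances()
        # 2. Coordination check: skipped when no group is configured,
        #    matching "coordination group name [None]" above.
        if pollster.coordination_group is not None:
            raise NotImplementedError("hash-ring partitioning not sketched here")
        # 3. Heartbeat: record that this pollster ran.
        heartbeats[pollster.name] = datetime.now(timezone.utc)
        # 4. Poll: one sample per resource (per device for disk meters).
        return list(pollster.get_samples(resources))

    heartbeats = {}
    samples = run_pollster(
        Pollster("disk.device.read.bytes"),
        lambda: [
            "10f21e57-50ad-48e0-a664-66fd8affbe73",
            "cd0be179-1941-400f-a1e6-8ee6243ee71a",
            "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
        ],
        heartbeats,
    )
    print(len(samples), heartbeats["disk.device.read.bytes"])
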
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.154 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.154 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.154 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.latency volume: 1220219266 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:11:41.154320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.155 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.latency volume: 209689103 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.155 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.latency volume: 160346833 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.155 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 1480162541 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.155 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 246885128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.156 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.latency volume: 161615200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.156 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.156 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.156 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
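
[Editor's note] The read-latency volumes are several orders of magnitude larger than the byte counters, which is consistent with cumulative nanosecond totals (libvirt reports rd_total_times in ns). Treating that as an assumption, a quick sanity check is to divide by the matching disk.device.read.requests counters from the block that follows:

    NS_PER_MS = 1_000_000

    read_latency_ns = 1_220_219_266   # first device of 10f21e57-50ad-48e0-a664-66fd8affbe73
    read_requests = 840               # same device, from the read.requests block below

    avg_ms = read_latency_ns / read_requests / NS_PER_MS
    print(f"avg read latency ~= {avg_ms:.2f} ms")   # ~1.45 ms per read
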
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.157 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.158 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.158 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.158 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:11:41.158083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.159 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.159 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.159 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.159 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.160 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.160 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.160 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.162 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.162 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:11:41.161943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.162 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.163 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.163 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.163 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.163 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.164 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.164 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
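
[Editor's note] The repeated value 1073741824 is exactly 2**30 bytes, i.e. a 1 GiB virtual disk, so each instance appears to have two 1 GiB devices plus a small third one (583680 or 485376 bytes), plausibly a config-drive-like device, though that is a guess. A one-line arithmetic check:

    GIB = 2 ** 30
    for volume in (1073741824, 583680, 485376):
        print(f"{volume} bytes = {volume / GIB:.6f} GiB")
    # 1073741824 bytes = 1.000000 GiB
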
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.165 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.165 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.165 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.165 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.166 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:11:41.165231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.166 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.166 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.166 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.167 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.167 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.167 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.168 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.168 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:11:41.168387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.168 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.169 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.169 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 41771008 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.169 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.169 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.170 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.170 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.170 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.171 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.171 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.171 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.171 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:11:41.171513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.197 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.218 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.242 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.243 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
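
[Editor's note] All three instances report power.state volume 1, which matches nova's power-state encoding where 1 means RUNNING. The mapping below is reproduced from memory of nova.compute.power_state and should be treated as an assumption, not an authoritative reference:

    # Assumed nova power-state codes; verify against nova.compute.power_state.
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES.get(1))  # RUNNING, as sampled for all three instances
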
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.243 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.243 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.244 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.244 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.latency volume: 16807456468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.244 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.latency volume: 33411420 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:11:41.243985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.245 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.245 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 7225300586 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.246 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 33230824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.246 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.246 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.247 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.247 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.248 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.248 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.248 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.248 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.248 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.249 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.249 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.249 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.250 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.250 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 228 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:11:41.249098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.251 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.251 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.251 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.252 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.252 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.253 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.253 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.254 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.254 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.254 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:11:41.254378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.255 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.255 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
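
[Editor's note] Unlike the cumulative meters above, the .delta suffix indicates the change since the previous poll. A minimal sketch of that pattern, assuming a per-resource cache of the last cumulative reading (to_delta and _last_reading are hypothetical names, not ceilometer's API; the cumulative inputs below are illustrative):

    _last_reading = {}

    def to_delta(resource_id, cumulative):
        previous = _last_reading.get(resource_id)
        _last_reading[resource_id] = cumulative
        if previous is None or cumulative < previous:  # first poll or counter reset
            return None
        return cumulative - previous

    print(to_delta("10f21e57-50ad-48e0-a664-66fd8affbe73", 1000))  # None (first poll)
    print(to_delta("10f21e57-50ad-48e0-a664-66fd8affbe73", 1126))  # 126, as logged above
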
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.256 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.256 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.256 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.257 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.257 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:11:41.257458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.258 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.258 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.259 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.259 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.259 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.259 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.260 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 29 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.260 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.261 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.261 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:11:41.259326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:11:41.261586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.263 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.263 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.264 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.264 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.264 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.265 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.265 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.265 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.266 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/cpu volume: 37750000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.266 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/cpu volume: 37850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.266 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 40640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.266 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:11:41.263830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:11:41.265873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
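
[Editor's note] The cpu volumes are cumulative guest CPU time in nanoseconds (the meter's unit is ns), so 37750000000 ns is roughly 37.75 s of CPU time consumed since boot. Assuming that interpretation, a utilisation percentage can be derived from two successive readings in the style of the classic cpu_util transform; the helper below is a sketch, not ceilometer's pipeline code, and the second reading is illustrative:

    def cpu_util_percent(cpu_ns_prev, cpu_ns_now, wall_seconds, vcpus=1):
        # Fraction of available CPU time consumed between two polls.
        return (cpu_ns_now - cpu_ns_prev) / (wall_seconds * 1e9 * vcpus) * 100

    # Illustrative only: if a later poll of the same instance reported
    # 38_050_000_000 ns after 600 s of wall time on one vCPU:
    print(cpu_util_percent(37_750_000_000, 38_050_000_000, wall_seconds=600))  # 0.05
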
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.267 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.268 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.268 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.269 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:11:41.267947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
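[editor's note] The packet-error counters come from per-interface statistics. libvirt exposes them as one 8-tuple per tap device; a sketch (the device name is the one cached for instance cd0be179 further down in this log):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed URI
    dom = conn.lookupByUUIDString("cd0be179-1941-400f-a1e6-8ee6243ee71a")

    # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                         tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = dom.interfaceStats("tap13472a1d-91")
    print("network.incoming.packets.error =", stats[2])  # 0 in the samples above

    conn.close()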
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.270 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.270 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.270 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:11:41.270486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.271 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.271 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.272 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.272 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:11:41.272866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.273 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.bytes.delta volume: 422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.273 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.274 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
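[editor's note] network.outgoing.bytes is a cumulative counter, while the .delta meter is the increase between two successive polls. A self-contained illustration of that derivation; the module-level dict is a stand-in for ceilometer's internal per-resource cache, and 1906 is the implied previous reading (2328 - 422):

    _prev = {}  # previous cumulative reading per (instance, meter)

    def delta_sample(instance_id, meter, cumulative):
        """Return the increase since the last poll (None on the first poll)."""
        key = (instance_id, meter)
        last = _prev.get(key)
        _prev[key] = cumulative
        return None if last is None else cumulative - last

    uuid = "10f21e57-50ad-48e0-a664-66fd8affbe73"
    delta_sample(uuid, "network.outgoing.bytes", 1906)         # earlier poll
    print(delta_sample(uuid, "network.outgoing.bytes", 2328))  # -> 422, as logged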
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.275 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.275 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.276 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:11:41.275904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.276 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/memory.usage volume: 49.109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.277 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
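[editor's note] memory.usage is reported in MiB with sub-MiB precision (49.04296875 MiB is exactly 50220 KiB / 1024). One plausible derivation from libvirt's balloon statistics, hedged because the exact formula varies by driver and version:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed URI
    dom = conn.lookupByUUIDString("10f21e57-50ad-48e0-a664-66fd8affbe73")

    stats = dom.memoryStats()  # all values in KiB
    # One common definition of "usage": what the balloon driver reports as
    # available minus unused, i.e. memory the guest is actually consuming.
    print((stats["available"] - stats["unused"]) / 1024.0)

    conn.close()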
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.278 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.278 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.279 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.279 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.incoming.bytes volume: 1912 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.280 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.incoming.bytes volume: 2038 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.280 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.281 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.281 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.281 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.281 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.282 14 DEBUG ceilometer.compute.pollsters [-] 10f21e57-50ad-48e0-a664-66fd8affbe73/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.282 14 DEBUG ceilometer.compute.pollsters [-] cd0be179-1941-400f-a1e6-8ee6243ee71a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.282 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.283 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.284 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:11:41.279396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:11:41.281861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.285 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.286 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:11:41.287 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
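[editor's note] Several pollsters above log that they are "not configured in a source for polling that requires coordination", so no hashring applies and this agent polls every local instance itself. When coordination is enabled, resources are partitioned across agents over a consistent-hash ring; a toy illustration of the partitioning idea (ceilometer's real implementation uses tooz, not this code):

    import hashlib

    def ring_owner(members, resource_id):
        """Toy consistent-hash ring: map a resource onto exactly one member."""
        def point(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)
        ring = sorted((point(m), m) for m in members)
        target = point(resource_id)
        for p, member in ring:
            if target <= p:
                return member
        return ring[0][1]  # wrap around the ring

    agents = ["compute-0", "compute-1"]
    for uuid in ("10f21e57-50ad-48e0-a664-66fd8affbe73",
                 "b43db93c-a4fe-46e9-8418-eedf4f5c135a"):
        print(uuid, "->", ring_owner(agents, uuid))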
Oct  3 10:11:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:11:41.601 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:11:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:11:41.601 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:11:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:11:41.602 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
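[editor's note] The Acquiring/acquired/released triplet is oslo.concurrency's standard lock logging; the "inner" frames indicate the decorator form. The same pattern in application code, as a minimal sketch:

    from oslo_concurrency import lockutils

    # Decorator form (produces the "inner ... lockutils.py" frames above):
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # body runs with the named lock held

    # Equivalent context-manager form:
    with lockutils.lock("_check_child_processes"):
        pass

    check_child_processes()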
Oct  3 10:11:41 compute-0 nova_compute[351685]: 2025-10-03 10:11:41.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:41 compute-0 nova_compute[351685]: 2025-10-03 10:11:41.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:11:42 compute-0 nova_compute[351685]: 2025-10-03 10:11:42.058 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:11:42 compute-0 nova_compute[351685]: 2025-10-03 10:11:42.058 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:11:42 compute-0 nova_compute[351685]: 2025-10-03 10:11:42.059 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:11:42 compute-0 nova_compute[351685]: 2025-10-03 10:11:42.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.812 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updating instance_info_cache with network_info: [{"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.832 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.833 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
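[editor's note] The network_info blob written to the cache above is valid JSON (double quotes, true/null). Extracting the fixed and floating addresses from such a structure, using a minimal excerpt of the values just logged:

    # minimal excerpt of the VIF structure cached above
    vif = {
        "devname": "tap13472a1d-91",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.177",
                     "floating_ips": [{"address": "192.168.122.209"}]}],
        }]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])
            for fip in ip.get("floating_ips", []):
                print("floating:", fip["address"])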
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.833 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.834 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.834 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.834 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.835 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.863 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.864 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.864 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.865 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:11:43 compute-0 nova_compute[351685]: 2025-10-03 10:11:43.865 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:11:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:11:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/965045891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.299 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
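[editor's note] The "Running cmd"/"returned" pair is oslo.concurrency's processutils, which nova's resource audit uses to shell out for Ceph capacity. A sketch of the same call plus parsing its JSON output, assuming the client.openstack keyring is readable:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    df = json.loads(out)
    # cluster-wide totals sit under "stats"; per-pool detail under "pools"
    print(df["stats"]["total_bytes"], len(df["pools"]))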
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.407 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.408 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.408 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.415 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.416 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.417 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.424 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.424 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.425 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:11:44 compute-0 podman[429799]: 2025-10-03 10:11:44.798482856 +0000 UTC m=+0.102572015 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.831 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.833 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3433MB free_disk=59.88885498046875GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.833 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:11:44 compute-0 nova_compute[351685]: 2025-10-03 10:11:44.834 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.056 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.057 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance cd0be179-1941-400f-a1e6-8ee6243ee71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.057 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 10f21e57-50ad-48e0-a664-66fd8affbe73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.060 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.061 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:11:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.247 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:11:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205048629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.700 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.709 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.730 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.731 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:11:45 compute-0 nova_compute[351685]: 2025-10-03 10:11:45.732 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
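[editor's note] What placement can schedule is derived from the inventory logged above, not from the raw "Final resource view": usable capacity per resource class is conventionally (total - reserved) * allocation_ratio. The arithmetic with the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
    # The hypervisor view's free_vcpus=5 (8 total - 3 used) ignores the 4.0
    # allocation ratio that placement applies on top.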
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:11:46
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', 'images']
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:11:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:11:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:11:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:47 compute-0 nova_compute[351685]: 2025-10-03 10:11:47.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:11:47 compute-0 nova_compute[351685]: 2025-10-03 10:11:47.628 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:47 compute-0 nova_compute[351685]: 2025-10-03 10:11:47.629 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:48 compute-0 nova_compute[351685]: 2025-10-03 10:11:48.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:11:48 compute-0 podman[429840]: 2025-10-03 10:11:48.836307214 +0000 UTC m=+0.088218467 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:11:48 compute-0 podman[429839]: 2025-10-03 10:11:48.845747826 +0000 UTC m=+0.104556659 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:11:48 compute-0 podman[429845]: 2025-10-03 10:11:48.863333499 +0000 UTC m=+0.092711340 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
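[editor's note] Each podman line above is a health_status event: the healthcheck configured for the container ('test': '/openstack/healthcheck') ran and reported healthy, with health_failing_streak=0. The same check can be triggered by hand; a sketch via subprocess, using the container name from the first event:

    import subprocess

    # Runs the container's configured healthcheck once; exit code 0 == healthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True)
    if result.returncode == 0:
        print("healthy")
    else:
        print("unhealthy:", result.stderr)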
Oct  3 10:11:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:49 compute-0 nova_compute[351685]: 2025-10-03 10:11:49.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:11:50 compute-0 nova_compute[351685]: 2025-10-03 10:11:50.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:52 compute-0 nova_compute[351685]: 2025-10-03 10:11:52.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:11:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4035771261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:11:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:11:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4035771261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:11:54 compute-0 podman[429902]: 2025-10-03 10:11:54.887123006 +0000 UTC m=+0.133821676 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016574282086344989 of space, bias 1.0, pg target 0.49722846259034964 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:11:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:11:55 compute-0 nova_compute[351685]: 2025-10-03 10:11:55.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:11:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:57 compute-0 nova_compute[351685]: 2025-10-03 10:11:57.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:11:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:11:59 compute-0 podman[157165]: time="2025-10-03T10:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:11:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:11:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9057 "" "Go-http-client/1.1"
Oct  3 10:11:59 compute-0 podman[429922]: 2025-10-03 10:11:59.838648282 +0000 UTC m=+0.103210447 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:11:59 compute-0 podman[429923]: 2025-10-03 10:11:59.880323837 +0000 UTC m=+0.132152414 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, release-0.7.12=, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:12:00 compute-0 nova_compute[351685]: 2025-10-03 10:12:00.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:01 compute-0 openstack_network_exporter[367524]: ERROR   10:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:12:01 compute-0 openstack_network_exporter[367524]: ERROR   10:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:12:01 compute-0 openstack_network_exporter[367524]: ERROR   10:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:12:01 compute-0 openstack_network_exporter[367524]: ERROR   10:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:12:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:12:01 compute-0 openstack_network_exporter[367524]: ERROR   10:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:12:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:12:02 compute-0 nova_compute[351685]: 2025-10-03 10:12:02.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:05 compute-0 nova_compute[351685]: 2025-10-03 10:12:05.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:07 compute-0 nova_compute[351685]: 2025-10-03 10:12:07.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:09 compute-0 podman[429965]: 2025-10-03 10:12:09.860543641 +0000 UTC m=+0.101536833 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct  3 10:12:09 compute-0 podman[429967]: 2025-10-03 10:12:09.909363433 +0000 UTC m=+0.136932856 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  3 10:12:09 compute-0 podman[429966]: 2025-10-03 10:12:09.911609976 +0000 UTC m=+0.146075009 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 10:12:10 compute-0 nova_compute[351685]: 2025-10-03 10:12:10.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:12 compute-0 nova_compute[351685]: 2025-10-03 10:12:12.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:15 compute-0 nova_compute[351685]: 2025-10-03 10:12:15.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:15 compute-0 podman[430029]: 2025-10-03 10:12:15.853376624 +0000 UTC m=+0.106392208 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  3 10:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:12:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:17 compute-0 nova_compute[351685]: 2025-10-03 10:12:17.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:19 compute-0 podman[430050]: 2025-10-03 10:12:19.832689249 +0000 UTC m=+0.096670037 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:12:19 compute-0 podman[430049]: 2025-10-03 10:12:19.846745019 +0000 UTC m=+0.102390391 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:12:19 compute-0 podman[430051]: 2025-10-03 10:12:19.85616269 +0000 UTC m=+0.103410113 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  3 10:12:20 compute-0 nova_compute[351685]: 2025-10-03 10:12:20.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:12:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:12:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:12:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:12:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:12:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:12:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 48a2fa8a-9e98-4b7b-8aa7-0af03af92839 does not exist
Oct  3 10:12:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev deba4950-6dbb-46b9-a2a9-5a0209643264 does not exist
Oct  3 10:12:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 22765d48-cf20-43e8-8c47-7678a3a901fa does not exist
Oct  3 10:12:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:12:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:12:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:12:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:12:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:12:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:12:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:12:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:12:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:12:22 compute-0 podman[430377]: 2025-10-03 10:12:22.084944383 +0000 UTC m=+0.117875416 container create e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:12:22 compute-0 podman[430377]: 2025-10-03 10:12:22.027680309 +0000 UTC m=+0.060611432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:12:22 compute-0 systemd[1]: Started libpod-conmon-e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2.scope.
Oct  3 10:12:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:12:22 compute-0 nova_compute[351685]: 2025-10-03 10:12:22.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:22 compute-0 podman[430377]: 2025-10-03 10:12:22.206290388 +0000 UTC m=+0.239221481 container init e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_raman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:12:22 compute-0 podman[430377]: 2025-10-03 10:12:22.215581976 +0000 UTC m=+0.248513039 container start e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:12:22 compute-0 podman[430377]: 2025-10-03 10:12:22.222614421 +0000 UTC m=+0.255545484 container attach e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:12:22 compute-0 optimistic_raman[430393]: 167 167
Oct  3 10:12:22 compute-0 systemd[1]: libpod-e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2.scope: Deactivated successfully.
Oct  3 10:12:22 compute-0 podman[430377]: 2025-10-03 10:12:22.225752822 +0000 UTC m=+0.258683915 container died e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_raman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f0b7de77df1845c9bb0f15d29452e6504f7e1c42a4edea600e3c40b29a864cf-merged.mount: Deactivated successfully.
Oct  3 10:12:22 compute-0 podman[430377]: 2025-10-03 10:12:22.412782241 +0000 UTC m=+0.445713284 container remove e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:12:22 compute-0 systemd[1]: libpod-conmon-e3a3fa3de8e20c77984a38423d4894658f9abf7a76a47481174708ba07b958d2.scope: Deactivated successfully.
Oct  3 10:12:22 compute-0 podman[430417]: 2025-10-03 10:12:22.684517013 +0000 UTC m=+0.086885634 container create a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 10:12:22 compute-0 podman[430417]: 2025-10-03 10:12:22.645502873 +0000 UTC m=+0.047871514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:12:22 compute-0 systemd[1]: Started libpod-conmon-a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64.scope.
Oct  3 10:12:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d036d466311de9256ac684afeb281cad09577e3356a024733598121c26f2f0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d036d466311de9256ac684afeb281cad09577e3356a024733598121c26f2f0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d036d466311de9256ac684afeb281cad09577e3356a024733598121c26f2f0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d036d466311de9256ac684afeb281cad09577e3356a024733598121c26f2f0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d036d466311de9256ac684afeb281cad09577e3356a024733598121c26f2f0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:22 compute-0 podman[430417]: 2025-10-03 10:12:22.941879494 +0000 UTC m=+0.344248135 container init a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lehmann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:12:22 compute-0 podman[430417]: 2025-10-03 10:12:22.950340825 +0000 UTC m=+0.352709466 container start a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lehmann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:12:22 compute-0 podman[430417]: 2025-10-03 10:12:22.964314383 +0000 UTC m=+0.366683024 container attach a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:12:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:24 compute-0 objective_lehmann[430432]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:12:24 compute-0 objective_lehmann[430432]: --> relative data size: 1.0
Oct  3 10:12:24 compute-0 objective_lehmann[430432]: --> All data devices are unavailable
Oct  3 10:12:24 compute-0 systemd[1]: libpod-a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64.scope: Deactivated successfully.
Oct  3 10:12:24 compute-0 podman[430417]: 2025-10-03 10:12:24.175875361 +0000 UTC m=+1.578243982 container died a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:12:24 compute-0 systemd[1]: libpod-a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64.scope: Consumed 1.139s CPU time.
Oct  3 10:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d036d466311de9256ac684afeb281cad09577e3356a024733598121c26f2f0b-merged.mount: Deactivated successfully.
Oct  3 10:12:24 compute-0 podman[430417]: 2025-10-03 10:12:24.513666909 +0000 UTC m=+1.916035530 container remove a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lehmann, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:12:24 compute-0 systemd[1]: libpod-conmon-a2b61187e27939766c27029df40ea1accc39eed5fa05ae9b672bb4062a718f64.scope: Deactivated successfully.
Oct  3 10:12:25 compute-0 podman[430571]: 2025-10-03 10:12:25.049909782 +0000 UTC m=+0.092882566 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:12:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:25 compute-0 podman[430630]: 2025-10-03 10:12:25.425159929 +0000 UTC m=+0.042303566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:12:25 compute-0 podman[430630]: 2025-10-03 10:12:25.546864367 +0000 UTC m=+0.164007984 container create 092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:12:25 compute-0 nova_compute[351685]: 2025-10-03 10:12:25.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:25 compute-0 systemd[1]: Started libpod-conmon-092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9.scope.
Oct  3 10:12:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:12:25 compute-0 podman[430630]: 2025-10-03 10:12:25.805559781 +0000 UTC m=+0.422703428 container init 092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:12:25 compute-0 podman[430630]: 2025-10-03 10:12:25.819071913 +0000 UTC m=+0.436215530 container start 092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:12:25 compute-0 agitated_lumiere[430645]: 167 167
Oct  3 10:12:25 compute-0 systemd[1]: libpod-092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9.scope: Deactivated successfully.
Oct  3 10:12:25 compute-0 podman[430630]: 2025-10-03 10:12:25.859010233 +0000 UTC m=+0.476153940 container attach 092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:12:25 compute-0 podman[430630]: 2025-10-03 10:12:25.860131749 +0000 UTC m=+0.477275406 container died 092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-15ac34819c6ed8314abc91f98439db6afdcbfedfba99b3a935f1a1d3c982dae0-merged.mount: Deactivated successfully.
Oct  3 10:12:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:26 compute-0 podman[430630]: 2025-10-03 10:12:26.414491171 +0000 UTC m=+1.031634788 container remove 092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:12:26 compute-0 systemd[1]: libpod-conmon-092538887fda69affae7ac8c5caeac80d2d3e8c8cc598190007e5829a9d0c8c9.scope: Deactivated successfully.
Oct  3 10:12:26 compute-0 podman[430670]: 2025-10-03 10:12:26.640661744 +0000 UTC m=+0.041760029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:12:26 compute-0 podman[430670]: 2025-10-03 10:12:26.748641301 +0000 UTC m=+0.149739516 container create 54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:12:26 compute-0 systemd[1]: Started libpod-conmon-54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8.scope.
Oct  3 10:12:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a188f6f8573ddbd793392d50dd9f3795c0a1ff492ee0ec35718983a8885662/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a188f6f8573ddbd793392d50dd9f3795c0a1ff492ee0ec35718983a8885662/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a188f6f8573ddbd793392d50dd9f3795c0a1ff492ee0ec35718983a8885662/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a188f6f8573ddbd793392d50dd9f3795c0a1ff492ee0ec35718983a8885662/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:27 compute-0 podman[430670]: 2025-10-03 10:12:27.068227156 +0000 UTC m=+0.469325361 container init 54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:12:27 compute-0 podman[430670]: 2025-10-03 10:12:27.089194998 +0000 UTC m=+0.490293183 container start 54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:12:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:27 compute-0 podman[430670]: 2025-10-03 10:12:27.100118917 +0000 UTC m=+0.501217132 container attach 54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:12:27 compute-0 nova_compute[351685]: 2025-10-03 10:12:27.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:27 compute-0 stoic_shaw[430686]: {
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:    "0": [
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:        {
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "devices": [
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "/dev/loop3"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            ],
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_name": "ceph_lv0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_size": "21470642176",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "name": "ceph_lv0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "tags": {
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cluster_name": "ceph",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.crush_device_class": "",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.encrypted": "0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osd_id": "0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.type": "block",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.vdo": "0"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            },
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "type": "block",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "vg_name": "ceph_vg0"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:        }
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:    ],
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:    "1": [
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:        {
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "devices": [
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "/dev/loop4"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            ],
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_name": "ceph_lv1",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_size": "21470642176",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "name": "ceph_lv1",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "tags": {
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cluster_name": "ceph",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.crush_device_class": "",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.encrypted": "0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osd_id": "1",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.type": "block",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.vdo": "0"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            },
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "type": "block",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "vg_name": "ceph_vg1"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:        }
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:    ],
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:    "2": [
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:        {
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "devices": [
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "/dev/loop5"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            ],
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_name": "ceph_lv2",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_size": "21470642176",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "name": "ceph_lv2",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "tags": {
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.cluster_name": "ceph",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.crush_device_class": "",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.encrypted": "0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osd_id": "2",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.type": "block",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:                "ceph.vdo": "0"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            },
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "type": "block",
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:            "vg_name": "ceph_vg2"
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:        }
Oct  3 10:12:27 compute-0 stoic_shaw[430686]:    ]
Oct  3 10:12:27 compute-0 stoic_shaw[430686]: }
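The JSON block printed by the stoic_shaw container is keyed by OSD id ("0", "1", "2"), one LVM record per OSD carrying the backing device, the logical-volume path, and the ceph.* tags. A short sketch of how such output could be summarized, assuming it has been captured to a file (the name lvm_list.json is hypothetical):

    import json

    # ceph-volume style output: {"0": [{...}], "1": [{...}], ...}
    with open("lvm_list.json") as fh:  # hypothetical capture of the block above
        osds = json.load(fh)

    for osd_id, records in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for rec in records:
            tags = rec["tags"]
            print(f"osd.{osd_id}: lv={rec['lv_path']}"
                  f" devices={','.join(rec['devices'])}"
                  f" osd_fsid={tags['ceph.osd_fsid']}"
                  f" encrypted={tags['ceph.encrypted']}")

Against the data above this prints three lines, e.g. osd.0 backed by /dev/loop3 via /dev/ceph_vg0/ceph_lv0.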
Oct  3 10:12:27 compute-0 systemd[1]: libpod-54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8.scope: Deactivated successfully.
Oct  3 10:12:27 compute-0 podman[430670]: 2025-10-03 10:12:27.916560973 +0000 UTC m=+1.317659178 container died 54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a188f6f8573ddbd793392d50dd9f3795c0a1ff492ee0ec35718983a8885662-merged.mount: Deactivated successfully.
Oct  3 10:12:28 compute-0 podman[430670]: 2025-10-03 10:12:28.393175216 +0000 UTC m=+1.794273411 container remove 54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:12:28 compute-0 systemd[1]: libpod-conmon-54dd1f602c89acf7e4021a1c3e70922f7683de47e051ebd399ea6b9b26edbce8.scope: Deactivated successfully.
Oct  3 10:12:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:29 compute-0 podman[430845]: 2025-10-03 10:12:29.160288683 +0000 UTC m=+0.071687317 container create b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:12:29 compute-0 systemd[1]: Started libpod-conmon-b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b.scope.
Oct  3 10:12:29 compute-0 podman[430845]: 2025-10-03 10:12:29.122276396 +0000 UTC m=+0.033675050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:12:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:12:29 compute-0 podman[430845]: 2025-10-03 10:12:29.283534459 +0000 UTC m=+0.194933123 container init b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 10:12:29 compute-0 podman[430845]: 2025-10-03 10:12:29.292128925 +0000 UTC m=+0.203527559 container start b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wozniak, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:12:29 compute-0 intelligent_wozniak[430861]: 167 167
Oct  3 10:12:29 compute-0 systemd[1]: libpod-b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b.scope: Deactivated successfully.
Oct  3 10:12:29 compute-0 podman[430845]: 2025-10-03 10:12:29.306169714 +0000 UTC m=+0.217568378 container attach b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wozniak, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:12:29 compute-0 podman[430845]: 2025-10-03 10:12:29.306619869 +0000 UTC m=+0.218018513 container died b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wozniak, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:12:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9ca9f4b5f44a35abdea1cebd593c04ed3be95a9c75bc0b9c0a60ea5388f8183-merged.mount: Deactivated successfully.
Oct  3 10:12:29 compute-0 podman[430845]: 2025-10-03 10:12:29.410307129 +0000 UTC m=+0.321705763 container remove b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 10:12:29 compute-0 systemd[1]: libpod-conmon-b777fb15193c0aa054de382bff6abb1047baa7e3533f85247b7916e4123ca67b.scope: Deactivated successfully.
Oct  3 10:12:29 compute-0 podman[430884]: 2025-10-03 10:12:29.614825269 +0000 UTC m=+0.055273552 container create 8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:12:29 compute-0 systemd[1]: Started libpod-conmon-8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e.scope.
Oct  3 10:12:29 compute-0 podman[430884]: 2025-10-03 10:12:29.593061021 +0000 UTC m=+0.033509334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:12:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23968c2c2cf0851645a5389cb1024b3bc3918a6d42eb804024caebe6b5d07e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23968c2c2cf0851645a5389cb1024b3bc3918a6d42eb804024caebe6b5d07e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23968c2c2cf0851645a5389cb1024b3bc3918a6d42eb804024caebe6b5d07e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f23968c2c2cf0851645a5389cb1024b3bc3918a6d42eb804024caebe6b5d07e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:12:29 compute-0 podman[430884]: 2025-10-03 10:12:29.7282236 +0000 UTC m=+0.168671903 container init 8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:12:29 compute-0 podman[430884]: 2025-10-03 10:12:29.74258324 +0000 UTC m=+0.183031523 container start 8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:12:29 compute-0 podman[157165]: time="2025-10-03T10:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:12:29 compute-0 podman[430884]: 2025-10-03 10:12:29.747660472 +0000 UTC m=+0.188108785 container attach 8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:12:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47833 "" "Go-http-client/1.1"
Oct  3 10:12:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9455 "" "Go-http-client/1.1"
Oct  3 10:12:30 compute-0 nova_compute[351685]: 2025-10-03 10:12:30.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:30 compute-0 jolly_lewin[430901]: {
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "osd_id": 1,
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "type": "bluestore"
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:    },
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "osd_id": 2,
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "type": "bluestore"
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:    },
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "osd_id": 0,
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:        "type": "bluestore"
Oct  3 10:12:30 compute-0 jolly_lewin[430901]:    }
Oct  3 10:12:30 compute-0 jolly_lewin[430901]: }
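The jolly_lewin block is the complementary view: keyed by osd_uuid, mapping each OSD to its device-mapper path, numeric osd_id, and store type (bluestore). A sketch that inverts it into an osd_id -> device map, under the same assumption that the JSON has been saved to a file (osd_list.json is hypothetical):

    import json

    with open("osd_list.json") as fh:  # hypothetical capture of the block above
        by_uuid = json.load(fh)

    # Invert: 0 -> ("/dev/mapper/ceph_vg0-ceph_lv0", "bluestore"), etc.
    by_id = {e["osd_id"]: (e["device"], e["type"]) for e in by_uuid.values()}
    for osd_id in sorted(by_id):
        device, store = by_id[osd_id]
        print(f"osd.{osd_id} -> {device} ({store})")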
Oct  3 10:12:30 compute-0 podman[430926]: 2025-10-03 10:12:30.830715826 +0000 UTC m=+0.083283438 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:12:30 compute-0 systemd[1]: libpod-8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e.scope: Deactivated successfully.
Oct  3 10:12:30 compute-0 systemd[1]: libpod-8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e.scope: Consumed 1.101s CPU time.
Oct  3 10:12:30 compute-0 podman[430884]: 2025-10-03 10:12:30.85955826 +0000 UTC m=+1.300006573 container died 8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:12:30 compute-0 podman[430929]: 2025-10-03 10:12:30.884398145 +0000 UTC m=+0.124419015 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64)
Oct  3 10:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f23968c2c2cf0851645a5389cb1024b3bc3918a6d42eb804024caebe6b5d07e6-merged.mount: Deactivated successfully.
Oct  3 10:12:30 compute-0 podman[430884]: 2025-10-03 10:12:30.934391516 +0000 UTC m=+1.374839799 container remove 8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:12:30 compute-0 systemd[1]: libpod-conmon-8ed6870c114cfde3cf81a7404681e5ccb84c8b9c42a3b7bfefe42662d4a0cc4e.scope: Deactivated successfully.
Oct  3 10:12:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:12:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:12:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:12:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:12:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 799b2e42-bc24-438e-96ea-55ef7ffa68c5 does not exist
Oct  3 10:12:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b9b3d71a-a65e-43e8-b8a2-28a26e2a9044 does not exist
Oct  3 10:12:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:31 compute-0 openstack_network_exporter[367524]: ERROR   10:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:12:31 compute-0 openstack_network_exporter[367524]: ERROR   10:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:12:31 compute-0 openstack_network_exporter[367524]: ERROR   10:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:12:31 compute-0 openstack_network_exporter[367524]: ERROR   10:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:12:31 compute-0 openstack_network_exporter[367524]: ERROR   10:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:12:31 compute-0 nova_compute[351685]: 2025-10-03 10:12:31.724 2 DEBUG nova.compute.manager [req-09073d5b-9d83-470a-abd2-571cf0486570 req-2edffacd-b529-4ef1-8d03-cc52f16f07f0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received event network-changed-13472a1d-91d3-44c2-8d02-1ced64234ab1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:12:31 compute-0 nova_compute[351685]: 2025-10-03 10:12:31.725 2 DEBUG nova.compute.manager [req-09073d5b-9d83-470a-abd2-571cf0486570 req-2edffacd-b529-4ef1-8d03-cc52f16f07f0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Refreshing instance network info cache due to event network-changed-13472a1d-91d3-44c2-8d02-1ced64234ab1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 10:12:31 compute-0 nova_compute[351685]: 2025-10-03 10:12:31.725 2 DEBUG oslo_concurrency.lockutils [req-09073d5b-9d83-470a-abd2-571cf0486570 req-2edffacd-b529-4ef1-8d03-cc52f16f07f0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:12:31 compute-0 nova_compute[351685]: 2025-10-03 10:12:31.726 2 DEBUG oslo_concurrency.lockutils [req-09073d5b-9d83-470a-abd2-571cf0486570 req-2edffacd-b529-4ef1-8d03-cc52f16f07f0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:12:31 compute-0 nova_compute[351685]: 2025-10-03 10:12:31.726 2 DEBUG nova.network.neutron [req-09073d5b-9d83-470a-abd2-571cf0486570 req-2edffacd-b529-4ef1-8d03-cc52f16f07f0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Refreshing network info cache for port 13472a1d-91d3-44c2-8d02-1ced64234ab1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 10:12:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:12:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:12:32 compute-0 nova_compute[351685]: 2025-10-03 10:12:32.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:32 compute-0 nova_compute[351685]: 2025-10-03 10:12:32.556 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:12:32 compute-0 nova_compute[351685]: 2025-10-03 10:12:32.557 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:12:32 compute-0 nova_compute[351685]: 2025-10-03 10:12:32.557 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:12:32 compute-0 nova_compute[351685]: 2025-10-03 10:12:32.557 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:12:32 compute-0 nova_compute[351685]: 2025-10-03 10:12:32.558 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:12:32 compute-0 nova_compute[351685]: 2025-10-03 10:12:32.559 2 INFO nova.compute.manager [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Terminating instance#033[00m
Oct  3 10:12:32 compute-0 nova_compute[351685]: 2025-10-03 10:12:32.560 2 DEBUG nova.compute.manager [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 10:12:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:33 compute-0 nova_compute[351685]: 2025-10-03 10:12:33.245 2 DEBUG nova.network.neutron [req-09073d5b-9d83-470a-abd2-571cf0486570 req-2edffacd-b529-4ef1-8d03-cc52f16f07f0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updated VIF entry in instance network info cache for port 13472a1d-91d3-44c2-8d02-1ced64234ab1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 10:12:33 compute-0 nova_compute[351685]: 2025-10-03 10:12:33.246 2 DEBUG nova.network.neutron [req-09073d5b-9d83-470a-abd2-571cf0486570 req-2edffacd-b529-4ef1-8d03-cc52f16f07f0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updating instance_info_cache with network_info: [{"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
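The network_info payload in the cache-update line above is a JSON list of VIFs, each nesting subnets and their ips. A sketch that pulls the fixed addresses out of such a structure; the literal below is a trimmed copy of the logged data, not the full payload:

    # Trimmed from the logged network_info; only the fields used here are kept.
    network_info = [{
        "id": "13472a1d-91d3-44c2-8d02-1ced64234ab1",
        "address": "fa:16:3e:3f:37:aa",
        "network": {"subnets": [{"cidr": "192.168.0.0/24",
                                 "ips": [{"address": "192.168.0.177", "type": "fixed"}]}]},
    }]

    fixed = [ip["address"]
             for vif in network_info
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]
             if ip["type"] == "fixed"]
    print(fixed)  # ['192.168.0.177']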
Oct  3 10:12:33 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:33.259 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 10:12:33 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:33.259 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 10:12:33 compute-0 nova_compute[351685]: 2025-10-03 10:12:33.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:33 compute-0 kernel: tap13472a1d-91 (unregistering): left promiscuous mode
Oct  3 10:12:33 compute-0 NetworkManager[45015]: <info>  [1759486353.6926] device (tap13472a1d-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 10:12:33 compute-0 nova_compute[351685]: 2025-10-03 10:12:33.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:33 compute-0 ovn_controller[88471]: 2025-10-03T10:12:33Z|00061|binding|INFO|Releasing lport 13472a1d-91d3-44c2-8d02-1ced64234ab1 from this chassis (sb_readonly=0)
Oct  3 10:12:33 compute-0 ovn_controller[88471]: 2025-10-03T10:12:33Z|00062|binding|INFO|Setting lport 13472a1d-91d3-44c2-8d02-1ced64234ab1 down in Southbound
Oct  3 10:12:33 compute-0 ovn_controller[88471]: 2025-10-03T10:12:33Z|00063|binding|INFO|Removing iface tap13472a1d-91 ovn-installed in OVS
Oct  3 10:12:33 compute-0 nova_compute[351685]: 2025-10-03 10:12:33.722 2 DEBUG oslo_concurrency.lockutils [req-09073d5b-9d83-470a-abd2-571cf0486570 req-2edffacd-b529-4ef1-8d03-cc52f16f07f0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-cd0be179-1941-400f-a1e6-8ee6243ee71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:12:33 compute-0 nova_compute[351685]: 2025-10-03 10:12:33.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:33 compute-0 nova_compute[351685]: 2025-10-03 10:12:33.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:33 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct  3 10:12:33 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 17.320s CPU time.
Oct  3 10:12:33 compute-0 systemd-machined[137653]: Machine qemu-4-instance-00000004 terminated.
Oct  3 10:12:33 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:33.937 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:37:aa 192.168.0.177'], port_security=['fa:16:3e:3f:37:aa 192.168.0.177'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e77v6yplftfd-tmqfhxgxqbpz-nlbkra67kned-port-rzcgch7wjejz', 'neutron:cidrs': '192.168.0.177/24', 'neutron:device_id': 'cd0be179-1941-400f-a1e6-8ee6243ee71a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e77v6yplftfd-tmqfhxgxqbpz-nlbkra67kned-port-rzcgch7wjejz', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=13472a1d-91d3-44c2-8d02-1ced64234ab1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 10:12:33 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:33.940 284328 INFO neutron.agent.ovn.metadata.agent [-] Port 13472a1d-91d3-44c2-8d02-1ced64234ab1 in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 unbound from our chassis
Oct  3 10:12:33 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:33.941 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0
Oct  3 10:12:33 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:33.966 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5e686a2e-33c6-4853-b1da-dc6894de2bf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:33 compute-0 nova_compute[351685]: 2025-10-03 10:12:33.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.012 2 INFO nova.virt.libvirt.driver [-] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Instance destroyed successfully.
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.013 2 DEBUG nova.objects.instance [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'resources' on Instance uuid cd0be179-1941-400f-a1e6-8ee6243ee71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.021 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3bf79e-3e1f-4e02-b706-14656c9f892b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.026 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[dee3ee0d-4e1d-4c67-a5f4-605683c68d66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.062 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[fb004457-d76c-418f-b7b2-3e9b5d70fa4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.067 2 DEBUG nova.virt.libvirt.vif [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T10:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-tmqfhxgxqbpz-nlbkra67kned-vnf-ck27xuhmg25j',id=4,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-03T10:06:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-3ggt3wig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T10:06:53Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTY4OTczNjQyNjQwMzAzNDE2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlsc
y5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4w
CiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgIC
Oct  3 10:12:34 compute-0 nova_compute[351685]: xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTY4OTczNjQyNjQwMzAzNDE2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTE2ODk3MzY0MjY0MDMwMzQxNjI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xNjg5NzM2NDI2NDAzMDM0MTYyPT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=cd0be179-1941-400f-a1e6-8ee6243ee71a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.069 2 DEBUG nova.network.os_vif_util [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "address": "fa:16:3e:3f:37:aa", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.177", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13472a1d-91", "ovs_interfaceid": "13472a1d-91d3-44c2-8d02-1ced64234ab1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.070 2 DEBUG nova.network.os_vif_util [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:37:aa,bridge_name='br-int',has_traffic_filtering=True,id=13472a1d-91d3-44c2-8d02-1ced64234ab1,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap13472a1d-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.071 2 DEBUG os_vif [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:37:aa,bridge_name='br-int',has_traffic_filtering=True,id=13472a1d-91d3-44c2-8d02-1ced64234ab1,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap13472a1d-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13472a1d-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.082 2 INFO os_vif [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:37:aa,bridge_name='br-int',has_traffic_filtering=True,id=13472a1d-91d3-44c2-8d02-1ced64234ab1,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap13472a1d-91')
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.085 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[9a41abe8-efab-47ab-95a8-8299a59fccd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 832, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 832, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 39813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431060, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.108 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[1aacbd9f-2329-4978-b44e-b2fcd3adb841]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489551, 'tstamp': 489551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431062, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489554, 'tstamp': 489554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431062, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.111 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.115 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.115 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.115 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 10:12:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:34.116 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  3 10:12:34 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:12:34.067 2 DEBUG nova.virt.libvirt.vif [None req-ea1713df-0542-4a [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.270 2 DEBUG nova.compute.manager [req-76d09ec9-9b88-4c20-9060-d7b04e6a545f req-06dee71b-cf84-4091-8d8e-2c97ed97044f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received event network-vif-unplugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.271 2 DEBUG oslo_concurrency.lockutils [req-76d09ec9-9b88-4c20-9060-d7b04e6a545f req-06dee71b-cf84-4091-8d8e-2c97ed97044f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.271 2 DEBUG oslo_concurrency.lockutils [req-76d09ec9-9b88-4c20-9060-d7b04e6a545f req-06dee71b-cf84-4091-8d8e-2c97ed97044f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.271 2 DEBUG oslo_concurrency.lockutils [req-76d09ec9-9b88-4c20-9060-d7b04e6a545f req-06dee71b-cf84-4091-8d8e-2c97ed97044f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.272 2 DEBUG nova.compute.manager [req-76d09ec9-9b88-4c20-9060-d7b04e6a545f req-06dee71b-cf84-4091-8d8e-2c97ed97044f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] No waiting events found dispatching network-vif-unplugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  3 10:12:34 compute-0 nova_compute[351685]: 2025-10-03 10:12:34.272 2 DEBUG nova.compute.manager [req-76d09ec9-9b88-4c20-9060-d7b04e6a545f req-06dee71b-cf84-4091-8d8e-2c97ed97044f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received event network-vif-unplugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  3 10:12:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 193 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 10 op/s
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.013 2 INFO nova.virt.libvirt.driver [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Deleting instance files /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a_del
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.014 2 INFO nova.virt.libvirt.driver [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Deletion of /var/lib/nova/instances/cd0be179-1941-400f-a1e6-8ee6243ee71a_del complete
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.083 2 INFO nova.compute.manager [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Took 3.52 seconds to destroy the instance on the hypervisor.
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.084 2 DEBUG oslo.service.loopingcall [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.085 2 DEBUG nova.compute.manager [-] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.085 2 DEBUG nova.network.neutron [-] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  3 10:12:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.466 2 DEBUG nova.compute.manager [req-b18e890e-cb87-4ea3-a2da-757218eac440 req-ee377767-9588-4c7c-837c-0a4955fd19da 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received event network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.467 2 DEBUG oslo_concurrency.lockutils [req-b18e890e-cb87-4ea3-a2da-757218eac440 req-ee377767-9588-4c7c-837c-0a4955fd19da 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.467 2 DEBUG oslo_concurrency.lockutils [req-b18e890e-cb87-4ea3-a2da-757218eac440 req-ee377767-9588-4c7c-837c-0a4955fd19da 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.468 2 DEBUG oslo_concurrency.lockutils [req-b18e890e-cb87-4ea3-a2da-757218eac440 req-ee377767-9588-4c7c-837c-0a4955fd19da 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.468 2 DEBUG nova.compute.manager [req-b18e890e-cb87-4ea3-a2da-757218eac440 req-ee377767-9588-4c7c-837c-0a4955fd19da 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] No waiting events found dispatching network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  3 10:12:36 compute-0 nova_compute[351685]: 2025-10-03 10:12:36.468 2 WARNING nova.compute.manager [req-b18e890e-cb87-4ea3-a2da-757218eac440 req-ee377767-9588-4c7c-837c-0a4955fd19da 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Received unexpected event network-vif-plugged-13472a1d-91d3-44c2-8d02-1ced64234ab1 for instance with vm_state active and task_state deleting.
Oct  3 10:12:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 193 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 10 op/s
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.250 2 DEBUG nova.network.neutron [-] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.270 2 INFO nova.compute.manager [-] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Took 1.19 seconds to deallocate network for instance.
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.322 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.323 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.423 2 DEBUG oslo_concurrency.processutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.532574) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486357532636, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2059, "num_deletes": 251, "total_data_size": 3507143, "memory_usage": 3554008, "flush_reason": "Manual Compaction"}
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486357559377, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3440997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25639, "largest_seqno": 27697, "table_properties": {"data_size": 3431453, "index_size": 6102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18828, "raw_average_key_size": 20, "raw_value_size": 3412648, "raw_average_value_size": 3649, "num_data_blocks": 271, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759486124, "oldest_key_time": 1759486124, "file_creation_time": 1759486357, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 26837 microseconds, and 7100 cpu microseconds.
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.559427) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3440997 bytes OK
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.559444) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.562123) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.562147) EVENT_LOG_v1 {"time_micros": 1759486357562140, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.562170) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3498503, prev total WAL file size 3498503, number of live WAL files 2.
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.563210) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3360KB)], [59(7416KB)]
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486357563359, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11035724, "oldest_snapshot_seqno": -1}
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5091 keys, 9275304 bytes, temperature: kUnknown
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486357680581, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9275304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9239194, "index_size": 22304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 126475, "raw_average_key_size": 24, "raw_value_size": 9145031, "raw_average_value_size": 1796, "num_data_blocks": 923, "num_entries": 5091, "num_filter_entries": 5091, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759486357, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.680852) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9275304 bytes
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.684007) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.1 rd, 79.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5605, records dropped: 514 output_compression: NoCompression
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.684031) EVENT_LOG_v1 {"time_micros": 1759486357684020, "job": 32, "event": "compaction_finished", "compaction_time_micros": 117302, "compaction_time_cpu_micros": 31857, "output_level": 6, "num_output_files": 1, "total_output_size": 9275304, "num_input_records": 5605, "num_output_records": 5091, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486357685049, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486357686604, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.563057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.686743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.686749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.686752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.686754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:12:37 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:12:37.686756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:12:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641972326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.904 2 DEBUG oslo_concurrency.processutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.913 2 DEBUG nova.compute.provider_tree [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.936 2 DEBUG nova.scheduler.client.report [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:12:37 compute-0 nova_compute[351685]: 2025-10-03 10:12:37.975 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:12:38 compute-0 nova_compute[351685]: 2025-10-03 10:12:38.004 2 INFO nova.scheduler.client.report [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Deleted allocations for instance cd0be179-1941-400f-a1e6-8ee6243ee71a
Oct  3 10:12:38 compute-0 nova_compute[351685]: 2025-10-03 10:12:38.176 2 DEBUG oslo_concurrency.lockutils [None req-ea1713df-0542-4adf-b08d-c5fb3138a152 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "cd0be179-1941-400f-a1e6-8ee6243ee71a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:12:39 compute-0 nova_compute[351685]: 2025-10-03 10:12:39.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 167 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 682 B/s wr, 34 op/s
Oct  3 10:12:40 compute-0 podman[431104]: 2025-10-03 10:12:40.811715274 +0000 UTC m=+0.070961954 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:12:40 compute-0 podman[431103]: 2025-10-03 10:12:40.820984491 +0000 UTC m=+0.080491538 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 10:12:40 compute-0 podman[431105]: 2025-10-03 10:12:40.863370349 +0000 UTC m=+0.116264055 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:12:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:12:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:41.601 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:12:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:41.602 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:12:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:41.602 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:12:42 compute-0 nova_compute[351685]: 2025-10-03 10:12:42.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:42.261 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 10:12:42 compute-0 nova_compute[351685]: 2025-10-03 10:12:42.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:42 compute-0 nova_compute[351685]: 2025-10-03 10:12:42.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:12:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:12:43 compute-0 nova_compute[351685]: 2025-10-03 10:12:43.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:43 compute-0 nova_compute[351685]: 2025-10-03 10:12:43.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:12:44 compute-0 nova_compute[351685]: 2025-10-03 10:12:44.081 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:12:44 compute-0 nova_compute[351685]: 2025-10-03 10:12:44.082 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:12:44 compute-0 nova_compute[351685]: 2025-10-03 10:12:44.082 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:12:44 compute-0 nova_compute[351685]: 2025-10-03 10:12:44.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.358 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Updating instance_info_cache with network_info: [{"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.399 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.399 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.399 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.756 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:12:45 compute-0 nova_compute[351685]: 2025-10-03 10:12:45.756 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:12:46
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'vms']
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:12:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:12:46 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601513467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:12:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.289 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.388 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.389 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.389 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.394 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.394 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.394 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:12:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:12:46 compute-0 podman[431192]: 2025-10-03 10:12:46.43800891 +0000 UTC m=+0.084140675 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true)
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.796 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.797 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3612MB free_disk=59.922019958496094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.798 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.798 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.876 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.876 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 10f21e57-50ad-48e0-a664-66fd8affbe73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.877 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.877 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.889 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.922 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.922 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.935 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 10:12:46 compute-0 nova_compute[351685]: 2025-10-03 10:12:46.966 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 10:12:47 compute-0 nova_compute[351685]: 2025-10-03 10:12:47.034 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:12:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct  3 10:12:47 compute-0 nova_compute[351685]: 2025-10-03 10:12:47.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:12:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365920781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:12:47 compute-0 nova_compute[351685]: 2025-10-03 10:12:47.580 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:12:47 compute-0 nova_compute[351685]: 2025-10-03 10:12:47.591 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:12:47 compute-0 nova_compute[351685]: 2025-10-03 10:12:47.607 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:12:47 compute-0 nova_compute[351685]: 2025-10-03 10:12:47.608 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:12:47 compute-0 nova_compute[351685]: 2025-10-03 10:12:47.608 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:12:48 compute-0 nova_compute[351685]: 2025-10-03 10:12:48.604 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:48 compute-0 nova_compute[351685]: 2025-10-03 10:12:48.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:49 compute-0 nova_compute[351685]: 2025-10-03 10:12:49.006 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759486354.0031767, cd0be179-1941-400f-a1e6-8ee6243ee71a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 10:12:49 compute-0 nova_compute[351685]: 2025-10-03 10:12:49.007 2 INFO nova.compute.manager [-] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] VM Stopped (Lifecycle Event)
Oct  3 10:12:49 compute-0 nova_compute[351685]: 2025-10-03 10:12:49.047 2 DEBUG nova.compute.manager [None req-b9b3959f-bb9c-4272-835a-206321d7f2c6 - - - - - -] [instance: cd0be179-1941-400f-a1e6-8ee6243ee71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 10:12:49 compute-0 nova_compute[351685]: 2025-10-03 10:12:49.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct  3 10:12:50 compute-0 nova_compute[351685]: 2025-10-03 10:12:50.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:12:50 compute-0 podman[431231]: 2025-10-03 10:12:50.849083221 +0000 UTC m=+0.100052175 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:12:50 compute-0 podman[431233]: 2025-10-03 10:12:50.865172046 +0000 UTC m=+0.112507044 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct  3 10:12:50 compute-0 podman[431232]: 2025-10-03 10:12:50.873212244 +0000 UTC m=+0.111857193 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  3 10:12:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:12:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.2 total, 600.0 interval
Cumulative writes: 7384 writes, 29K keys, 7384 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7384 writes, 1596 syncs, 4.63 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1247 writes, 4241 keys, 1247 commit groups, 1.0 writes per commit group, ingest: 4.34 MB, 0.01 MB/s
Interval WAL: 1247 writes, 516 syncs, 2.42 writes per sync, written: 0.00 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:12:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Oct  3 10:12:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:52 compute-0 nova_compute[351685]: 2025-10-03 10:12:52.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:12:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3542662391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:12:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:12:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3542662391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:12:54 compute-0 nova_compute[351685]: 2025-10-03 10:12:54.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011045705948427428 of space, bias 1.0, pg target 0.3313711784528228 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:12:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:12:55 compute-0 podman[431291]: 2025-10-03 10:12:55.843995154 +0000 UTC m=+0.096662157 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.007 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.009 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.010 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.011 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.011 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.014 2 INFO nova.compute.manager [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Terminating instance
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.016 2 DEBUG nova.compute.manager [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  3 10:12:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:12:56 compute-0 kernel: tapb4892d0b-79 (unregistering): left promiscuous mode
Oct  3 10:12:56 compute-0 NetworkManager[45015]: <info>  [1759486376.2968] device (tapb4892d0b-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 10:12:56 compute-0 ovn_controller[88471]: 2025-10-03T10:12:56Z|00064|binding|INFO|Releasing lport b4892d0b-79ef-407a-9e1d-ac886b07daba from this chassis (sb_readonly=0)
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:56 compute-0 ovn_controller[88471]: 2025-10-03T10:12:56Z|00065|binding|INFO|Setting lport b4892d0b-79ef-407a-9e1d-ac886b07daba down in Southbound
Oct  3 10:12:56 compute-0 ovn_controller[88471]: 2025-10-03T10:12:56Z|00066|binding|INFO|Removing iface tapb4892d0b-79 ovn-installed in OVS
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.330 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:2d:c9 192.168.0.248'], port_security=['fa:16:3e:f6:2d:c9 192.168.0.248'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-e77v6yplftfd-y3rllwivvz5w-5iuabdtdgdic-port-txgbpgprlo7w', 'neutron:cidrs': '192.168.0.248/24', 'neutron:device_id': '10f21e57-50ad-48e0-a664-66fd8affbe73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-e77v6yplftfd-y3rllwivvz5w-5iuabdtdgdic-port-txgbpgprlo7w', 'neutron:project_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c22f45fa-3e9c-4adb-8724-80552acd48b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c829e9c2-c63b-44e6-806c-cc11bdf56258, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=b4892d0b-79ef-407a-9e1d-ac886b07daba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.331 284328 INFO neutron.agent.ovn.metadata.agent [-] Port b4892d0b-79ef-407a-9e1d-ac886b07daba in datapath 67eed0ac-d131-40ed-a5fe-0484d04236a0 unbound from our chassis
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.332 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67eed0ac-d131-40ed-a5fe-0484d04236a0
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.355 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[065f5dec-a20b-4637-97a8-bf00ccf13b3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:56 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.391 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[0f33fc63-6fdf-4710-ae3c-72933951d8ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:56 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 1min 7.433s CPU time.
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.395 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[4448b8e6-fd45-4881-8e6c-3a11eabd64e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 10:12:56 compute-0 systemd-machined[137653]: Machine qemu-5-instance-00000005 terminated.
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.428 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[11f78a7b-7ac3-41e1-9fac-e9ca419ac09f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.446 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2d691b1f-1c40-4f22-8ca1-f872bf5562f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67eed0ac-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0b:cc:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 19, 'rx_bytes': 832, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 19, 'rx_bytes': 832, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489539, 'reachable_time': 39813, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431323, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.464 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b24adda6-3eb9-4e76-a90d-2e7a229c89a6]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489551, 'tstamp': 489551}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431329, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67eed0ac-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489554, 'tstamp': 489554}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431329, 'error': None, 'target': 'ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
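The two privsep replies above are netlink dumps executed inside the ovnmeta-67eed0ac-... namespace: an RTM_NEWLINK record for tap67eed0ac-d1 and RTM_NEWADDR records for 192.168.0.2/24 plus the 169.254.169.254/32 metadata address. The attribute lists are pyroute2 message objects serialized back through oslo.privsep; a minimal sketch of reproducing the same query directly, assuming root privileges and that the namespace still exists:

    # Sketch: dump the link and address state the agent logged above.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-67eed0ac-d131-40ed-a5fe-0484d04236a0')
    try:
        for link in ns.get_links():
            # IFLA_* attributes match the [['IFLA_IFNAME', ...], ...] pairs in the log
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr():
            # RTM_NEWADDR entries: both the subnet IP and the metadata IP show up here
            print(addr.get_attr('IFA_ADDRESS'), '/', addr['prefixlen'])
    finally:
        ns.close()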
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.465 2 INFO nova.virt.libvirt.driver [-] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Instance destroyed successfully.#033[00m
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.465 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67eed0ac-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.466 2 DEBUG nova.objects.instance [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'resources' on Instance uuid 10f21e57-50ad-48e0-a664-66fd8affbe73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.473 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67eed0ac-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.473 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.474 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67eed0ac-d0, col_values=(('external_ids', {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:12:56 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:12:56.474 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
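The DelPortCommand/AddPortCommand/DbSetCommand records above are ovsdbapp IDL transactions: the metadata agent drops the tap port from br-ex, ensures it is on br-int, and sets external_ids:iface-id so ovn-controller can bind it. "Transaction caused no change" means the database already matched the requested state, so the commit was a no-op. A sketch of the same three commands through ovsdbapp's Open_vSwitch API, with the socket endpoint and timeout as illustrative assumptions:

    # Sketch: the ovsdbapp calls behind the logged transactions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap67eed0ac-d0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap67eed0ac-d0', may_exist=True))
        txn.add(api.db_set('Interface', 'tap67eed0ac-d0',
                           ('external_ids',
                            {'iface-id': 'e79720f4-8084-4b6f-a8ef-933cf0e7b8bf'})))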
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.484 2 DEBUG nova.virt.libvirt.vif [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T10:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-yplftfd-y3rllwivvz5w-5iuabdtdgdic-vnf-ze2ouau6iet5',id=5,image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-03T10:08:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='09b6fef3-eb54-4e45-9716-a57b7d592bd8'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ee75a4dc6ade43baab6ee923c9cf4cdf',ramdisk_id='',reservation_id='r-k650uwcj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='37f03e8a-3aed-46a5-8219-fc87e355127e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T10:08:49Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTQ4MDAwOTU1OTMwNDYzMTU3Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlsc
y5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4w
CiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgIC
Oct  3 10:12:56 compute-0 nova_compute[351685]: xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTQ4MDAwOTU1OTMwNDYzMTU3Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU0ODAwMDk1NTkzMDQ2MzE1Nzc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NDgwMDA5NTU5MzA0NjMxNTc3PT0tLQo=',user_id='2f408449ba0f42fcb69f92dbf541f2e3',uuid=10f21e57-50ad-48e0-a664-66fd8affbe73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.485 2 DEBUG nova.network.os_vif_util [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converting VIF {"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.486 2 DEBUG nova.network.os_vif_util [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:2d:c9,bridge_name='br-int',has_traffic_filtering=True,id=b4892d0b-79ef-407a-9e1d-ac886b07daba,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4892d0b-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.486 2 DEBUG os_vif [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:2d:c9,bridge_name='br-int',has_traffic_filtering=True,id=b4892d0b-79ef-407a-9e1d-ac886b07daba,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4892d0b-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.488 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4892d0b-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.494 2 INFO os_vif [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:2d:c9,bridge_name='br-int',has_traffic_filtering=True,id=b4892d0b-79ef-407a-9e1d-ac886b07daba,network=Network(67eed0ac-d131-40ed-a5fe-0484d04236a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb4892d0b-79')#033[00m
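The unplug sequence above is the standard os-vif flow: nova converts its own VIF dict into an os_vif VIFOpenVSwitch object (nova_to_osvif_vif), then the library's ovs plugin issues the DelPortCommand against br-int. A sketch of the library-level call; the object construction here is illustrative, not how nova assembles it internally:

    # Sketch: what "Unplugging vif VIFOpenVSwitch(...)" corresponds to in os-vif.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # registers the ovs plugin, among others
    net = network.Network(id='67eed0ac-d131-40ed-a5fe-0484d04236a0',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='b4892d0b-79ef-407a-9e1d-ac886b07daba',
        address='fa:16:3e:f6:2d:c9',
        network=net,
        vif_name='tapb4892d0b-79',
        bridge_name='br-int')
    inst = instance_info.InstanceInfo(
        uuid='10f21e57-50ad-48e0-a664-66fd8affbe73',
        name='instance-00000005')  # name is illustrative
    os_vif.unplug(ovs_vif, inst)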
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.569 2 DEBUG nova.compute.manager [req-a3d9150f-125e-45a8-bff3-660036001893 req-1daad795-74b5-4d8c-819a-280f25a827e1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received event network-vif-unplugged-b4892d0b-79ef-407a-9e1d-ac886b07daba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.570 2 DEBUG oslo_concurrency.lockutils [req-a3d9150f-125e-45a8-bff3-660036001893 req-1daad795-74b5-4d8c-819a-280f25a827e1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.570 2 DEBUG oslo_concurrency.lockutils [req-a3d9150f-125e-45a8-bff3-660036001893 req-1daad795-74b5-4d8c-819a-280f25a827e1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.571 2 DEBUG oslo_concurrency.lockutils [req-a3d9150f-125e-45a8-bff3-660036001893 req-1daad795-74b5-4d8c-819a-280f25a827e1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.571 2 DEBUG nova.compute.manager [req-a3d9150f-125e-45a8-bff3-660036001893 req-1daad795-74b5-4d8c-819a-280f25a827e1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] No waiting events found dispatching network-vif-unplugged-b4892d0b-79ef-407a-9e1d-ac886b07daba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:12:56 compute-0 nova_compute[351685]: 2025-10-03 10:12:56.572 2 DEBUG nova.compute.manager [req-a3d9150f-125e-45a8-bff3-660036001893 req-1daad795-74b5-4d8c-819a-280f25a827e1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received event network-vif-unplugged-b4892d0b-79ef-407a-9e1d-ac886b07daba for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
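The Acquiring/acquired/"released" triple around pop_instance_event is oslo.concurrency's named-lock pattern: external events for one instance serialize on a "<uuid>-events" lock so event dispatch cannot race with code waiting on those events. The same primitive in isolation:

    # Sketch: the named-lock pattern from the lockutils lines above.
    from oslo_concurrency import lockutils

    uuid = '10f21e57-50ad-48e0-a664-66fd8affbe73'
    with lockutils.lock(f'{uuid}-events'):
        # critical section: pop/dispatch the instance's pending events;
        # the "waited"/"held" timings in the log bracket this block
        pass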
Oct  3 10:12:56 compute-0 rsyslogd[187556]: message too long (8192) with configured size 8096, begin of message is: 2025-10-03 10:12:56.484 2 DEBUG nova.virt.libvirt.vif [None req-907ea1e1-6214-43 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
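This rsyslogd line explains the fragmented unplug record above: the incoming message was 8192 bytes against a configured ceiling of 8096, so the overflow was emitted separately (rsyslog error 2445, per the URL in the message). When whole records matter more than the default cap, the limit can be raised; a hedged rsyslog.conf excerpt, with 16k as an arbitrary example value:

    # rsyslog.conf -- raise the per-message cap (must precede input/module loads)
    global(maxMessageSize="16k")
    # legacy-format equivalent:
    # $MaxMessageSize 16k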
Oct  3 10:12:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:12:57 compute-0 nova_compute[351685]: 2025-10-03 10:12:57.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:12:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:12:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 8307 writes, 32K keys, 8307 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8307 writes, 1877 syncs, 4.43 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1068 writes, 3546 keys, 1068 commit groups, 1.0 writes per commit group, ingest: 3.27 MB, 0.01 MB/s#012Interval WAL: 1068 writes, 439 syncs, 2.43 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.295 2 INFO nova.virt.libvirt.driver [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Deleting instance files /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73_del#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.296 2 INFO nova.virt.libvirt.driver [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Deletion of /var/lib/nova/instances/10f21e57-50ad-48e0-a664-66fd8affbe73_del complete#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.374 2 INFO nova.compute.manager [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Took 2.36 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.374 2 DEBUG oslo.service.loopingcall [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.375 2 DEBUG nova.compute.manager [-] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.375 2 DEBUG nova.network.neutron [-] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.682 2 DEBUG nova.compute.manager [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received event network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.683 2 DEBUG oslo_concurrency.lockutils [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.684 2 DEBUG oslo_concurrency.lockutils [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.684 2 DEBUG oslo_concurrency.lockutils [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.685 2 DEBUG nova.compute.manager [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] No waiting events found dispatching network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.686 2 WARNING nova.compute.manager [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received unexpected event network-vif-plugged-b4892d0b-79ef-407a-9e1d-ac886b07daba for instance with vm_state active and task_state deleting.#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.687 2 DEBUG nova.compute.manager [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Received event network-changed-b4892d0b-79ef-407a-9e1d-ac886b07daba external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.688 2 DEBUG nova.compute.manager [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Refreshing instance network info cache due to event network-changed-b4892d0b-79ef-407a-9e1d-ac886b07daba. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.688 2 DEBUG oslo_concurrency.lockutils [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.689 2 DEBUG oslo_concurrency.lockutils [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:12:58 compute-0 nova_compute[351685]: 2025-10-03 10:12:58.690 2 DEBUG nova.network.neutron [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Refreshing network info cache for port b4892d0b-79ef-407a-9e1d-ac886b07daba _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 10:12:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 101 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 12 op/s
Oct  3 10:12:59 compute-0 podman[157165]: time="2025-10-03T10:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:12:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:12:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9045 "" "Go-http-client/1.1"
Oct  3 10:13:00 compute-0 nova_compute[351685]: 2025-10-03 10:13:00.416 2 DEBUG nova.network.neutron [-] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:13:00 compute-0 nova_compute[351685]: 2025-10-03 10:13:00.438 2 INFO nova.compute.manager [-] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Took 2.06 seconds to deallocate network for instance.#033[00m
Oct  3 10:13:00 compute-0 nova_compute[351685]: 2025-10-03 10:13:00.475 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:13:00 compute-0 nova_compute[351685]: 2025-10-03 10:13:00.476 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:13:00 compute-0 nova_compute[351685]: 2025-10-03 10:13:00.543 2 DEBUG oslo_concurrency.processutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:13:00 compute-0 nova_compute[351685]: 2025-10-03 10:13:00.684 2 DEBUG nova.network.neutron [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Updated VIF entry in instance network info cache for port b4892d0b-79ef-407a-9e1d-ac886b07daba. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 10:13:00 compute-0 nova_compute[351685]: 2025-10-03 10:13:00.685 2 DEBUG nova.network.neutron [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Updating instance_info_cache with network_info: [{"id": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "address": "fa:16:3e:f6:2d:c9", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.248", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4892d0b-79", "ovs_interfaceid": "b4892d0b-79ef-407a-9e1d-ac886b07daba", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:13:00 compute-0 nova_compute[351685]: 2025-10-03 10:13:00.717 2 DEBUG oslo_concurrency.lockutils [req-00a789c4-d2b1-4533-9442-111d88951ac3 req-c9f031df-b4b2-4cbe-986a-0ca30474bba2 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-10f21e57-50ad-48e0-a664-66fd8affbe73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:13:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:13:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846935576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:13:01 compute-0 nova_compute[351685]: 2025-10-03 10:13:01.004 2 DEBUG oslo_concurrency.processutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
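For Ceph-backed storage the resource tracker shells out to ceph df rather than reading local disk stats; the 0.460s round trip is the mon_command dispatch visible in the ceph-mon lines above. A sketch of the same call and the top-level fields of its JSON output:

    # Sketch: the "ceph df --format=json" call from the log, parsed.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('total bytes:', stats['total_bytes'],
          'avail bytes:', stats['total_avail_bytes'])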
Oct  3 10:13:01 compute-0 nova_compute[351685]: 2025-10-03 10:13:01.013 2 DEBUG nova.compute.provider_tree [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:13:01 compute-0 nova_compute[351685]: 2025-10-03 10:13:01.034 2 DEBUG nova.scheduler.client.report [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
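The inventory dict shows how schedulable capacity is derived on the placement side: for each resource class, capacity = (total - reserved) * allocation_ratio. Worked out for the values logged above:

    # Capacity implied by the logged inventory: (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, int(cap))  # VCPU 32, MEMORY_MB 7167, DISK_GB 52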
Oct  3 10:13:01 compute-0 nova_compute[351685]: 2025-10-03 10:13:01.061 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:13:01 compute-0 nova_compute[351685]: 2025-10-03 10:13:01.081 2 INFO nova.scheduler.client.report [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Deleted allocations for instance 10f21e57-50ad-48e0-a664-66fd8affbe73#033[00m
Oct  3 10:13:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Oct  3 10:13:01 compute-0 nova_compute[351685]: 2025-10-03 10:13:01.155 2 DEBUG oslo_concurrency.lockutils [None req-907ea1e1-6214-4373-aa7a-b933dedcc8fd 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "10f21e57-50ad-48e0-a664-66fd8affbe73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:13:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:01 compute-0 openstack_network_exporter[367524]: ERROR   10:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:13:01 compute-0 openstack_network_exporter[367524]: ERROR   10:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:13:01 compute-0 openstack_network_exporter[367524]: ERROR   10:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:13:01 compute-0 openstack_network_exporter[367524]: ERROR   10:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:13:01 compute-0 openstack_network_exporter[367524]: ERROR   10:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:13:01 compute-0 nova_compute[351685]: 2025-10-03 10:13:01.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:01 compute-0 podman[431377]: 2025-10-03 10:13:01.825340798 +0000 UTC m=+0.083251037 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:13:01 compute-0 podman[431378]: 2025-10-03 10:13:01.844992437 +0000 UTC m=+0.093280898 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, name=ubi9, config_id=edpm, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:13:02 compute-0 nova_compute[351685]: 2025-10-03 10:13:02.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Oct  3 10:13:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:13:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6787 writes, 27K keys, 6787 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6787 writes, 1379 syncs, 4.92 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 530 writes, 1536 keys, 530 commit groups, 1.0 writes per commit group, ingest: 1.10 MB, 0.00 MB/s#012Interval WAL: 530 writes, 250 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:13:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:13:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 10:13:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:06 compute-0 nova_compute[351685]: 2025-10-03 10:13:06.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:13:07 compute-0 nova_compute[351685]: 2025-10-03 10:13:07.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Oct  3 10:13:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Oct  3 10:13:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:11 compute-0 nova_compute[351685]: 2025-10-03 10:13:11.460 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759486376.4593832, 10f21e57-50ad-48e0-a664-66fd8affbe73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:13:11 compute-0 nova_compute[351685]: 2025-10-03 10:13:11.460 2 INFO nova.compute.manager [-] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] VM Stopped (Lifecycle Event)#033[00m
Oct  3 10:13:11 compute-0 nova_compute[351685]: 2025-10-03 10:13:11.497 2 DEBUG nova.compute.manager [None req-229e5735-f7ea-4a67-9fa4-a3444326d7d2 - - - - - -] [instance: 10f21e57-50ad-48e0-a664-66fd8affbe73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
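The Stopped lifecycle event lands well after the guest was destroyed because nova's libvirt driver deliberately delays stop events and re-checks power state before acting, hence the "Checking state" line right after "VM Stopped". A sketch of the underlying probe with libvirt-python, using the default nova-compute connection URI; by this point the domain is already undefined, so the error branch is the expected path:

    # Sketch: the power-state check behind "_get_power_state".
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('10f21e57-50ad-48e0-a664-66fd8affbe73')
        state, reason = dom.state()
        print(state == libvirt.VIR_DOMAIN_SHUTOFF)
    except libvirt.libvirtError:
        # Domain already undefined: nova treats the instance as powered off/gone.
        print('domain gone')
    finally:
        conn.close()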
Oct  3 10:13:11 compute-0 nova_compute[351685]: 2025-10-03 10:13:11.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:11 compute-0 podman[431417]: 2025-10-03 10:13:11.813405873 +0000 UTC m=+0.075028903 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 10:13:11 compute-0 podman[431416]: 2025-10-03 10:13:11.813471835 +0000 UTC m=+0.073355659 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Oct  3 10:13:11 compute-0 podman[431418]: 2025-10-03 10:13:11.848478017 +0000 UTC m=+0.098248317 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:13:12 compute-0 nova_compute[351685]: 2025-10-03 10:13:12.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
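The recurring `DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup` lines are nova-compute's OVSDB IDL connection waking up inside the `ovs` Python library's poll loop whenever its monitor socket becomes readable. A minimal sketch of that wait, assuming the same `ovs` package that ships /usr/lib64/python3.9/site-packages/ovs/poller.py; the socket and 5-second timer are illustrative:

    import socket
    import ovs.poller  # the module that logs "__log_wakeup" at poller.py:263

    sock = socket.socket()                            # stand-in for the OVSDB monitor socket
    poller = ovs.poller.Poller()
    poller.fd_wait(sock.fileno(), ovs.poller.POLLIN)  # wake when the fd is readable
    poller.timer_wait(5000)                           # or after 5000 ms, whichever is first
    poller.block()                                    # returns on readability or timeout;
                                                      # vlog records the wakeup at DEBUG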
Oct  3 10:13:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Oct  3 10:13:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Oct  3 10:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:13:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
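`_set_new_cache_sizes` is the monitor's periodic rebalance of its OSDMap and key/value caches against its memory target; all four figures are bytes. The arithmetic in the line checks out: a ~0.95 GiB budget split into 332 MiB inc-map, 332 MiB full-map, and 308 MiB RocksDB-cache allocations that sum to essentially the whole budget.

    cache_size = 1020054731              # bytes, ~0.95 GiB total budget
    inc_alloc = full_alloc = 348127232   # exactly 332 MiB each (inc and full OSDMap caches)
    kv_alloc = 322961408                 # exactly 308 MiB (RocksDB block cache)
    print(cache_size / 2**30)                                # ~0.95 GiB
    print((inc_alloc + full_alloc + kv_alloc) / cache_size)  # ~0.999, sums to the budget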
Oct  3 10:13:16 compute-0 nova_compute[351685]: 2025-10-03 10:13:16.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:16 compute-0 podman[431477]: 2025-10-03 10:13:16.857888919 +0000 UTC m=+0.105036465 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:13:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:17 compute-0 nova_compute[351685]: 2025-10-03 10:13:17.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:21 compute-0 nova_compute[351685]: 2025-10-03 10:13:21.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:21 compute-0 podman[431498]: 2025-10-03 10:13:21.872584051 +0000 UTC m=+0.117571207 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
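The `--collector.systemd.unit-include` value in node_exporter's config_data is a regular expression (node_exporter anchors include/exclude patterns to the full unit name); the doubled backslash is just Python-dict escaping for a literal `\.service`. A quick check of which units it selects:

    import re
    # pattern from the config_data above; '\\.' in the dict is one escaped dot
    pat = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["edpm_ovn_controller.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(pat.fullmatch(unit)))   # True, True, True, False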
Oct  3 10:13:21 compute-0 podman[431499]: 2025-10-03 10:13:21.874999328 +0000 UTC m=+0.115085266 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct  3 10:13:21 compute-0 podman[431500]: 2025-10-03 10:13:21.876993262 +0000 UTC m=+0.110651395 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:13:22 compute-0 nova_compute[351685]: 2025-10-03 10:13:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:26 compute-0 nova_compute[351685]: 2025-10-03 10:13:26.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:26 compute-0 podman[431554]: 2025-10-03 10:13:26.844980827 +0000 UTC m=+0.096186551 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 10:13:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:27 compute-0 nova_compute[351685]: 2025-10-03 10:13:27.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:28 compute-0 systemd-logind[798]: New session 64 of user zuul.
Oct  3 10:13:28 compute-0 systemd[1]: Started Session 64 of User zuul.
Oct  3 10:13:29 compute-0 python3[431752]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
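This Ansible `ansible.legacy.command` task (run under the zuul session opened two lines up) shells out to poll the ceilometer_agent_compute container's status; `#012` is rsyslog's octal escape for the newline embedded in the raw params. Re-running the same probe by hand would look like this (a sketch; the printed output is hypothetical):

    import subprocess
    # the same pipeline the task ran with _uses_shell=True
    cmd = 'podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute'
    res = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    print(res.stdout.strip())  # e.g. "ceilometer_agent_compute Up 2 hours (healthy)"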
Oct  3 10:13:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:29 compute-0 podman[157165]: time="2025-10-03T10:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:13:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:13:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9057 "" "Go-http-client/1.1"
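The two `podman[157165]` access-log lines are the libpod REST service answering container list and stats queries over its unix socket — the poll loop of prometheus-podman-exporter, which (per its config_data further down) is pointed at `unix:///run/podman/podman.sock`. A stdlib-only client for the same endpoint, assuming that default root socket path:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough HTTP-over-unix-socket to query the libpod API."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])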
Oct  3 10:13:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:31 compute-0 openstack_network_exporter[367524]: ERROR   10:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:13:31 compute-0 openstack_network_exporter[367524]: ERROR   10:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:13:31 compute-0 openstack_network_exporter[367524]: ERROR   10:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:13:31 compute-0 openstack_network_exporter[367524]: ERROR   10:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:13:31 compute-0 openstack_network_exporter[367524]: ERROR   10:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
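These openstack_network_exporter errors mean its appctl-style helper found no `*.ctl` control sockets for ovsdb-server or ovn-northd under the directories mounted into the container (`/var/run/openvswitch` and `/var/lib/openvswitch/ovn` per its config_data earlier); ovn-northd would normally run on a controller, not a compute host, and the `dpif-netdev/*` failures are consistent with this node using the kernel datapath rather than a userspace (netdev) one. A quick host-side look at what the exporter can actually see, using the host sides of its volume mounts:

    import glob
    # lists whatever control sockets exist on the host side of the exporter's mounts;
    # empty lists would reproduce the "no control socket files found" errors above
    print(glob.glob("/var/run/openvswitch/*.ctl"))      # ovsdb-server / ovs-vswitchd
    print(glob.glob("/var/lib/openvswitch/ovn/*.ctl"))  # ovn-northd (absent on a compute node)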
Oct  3 10:13:31 compute-0 nova_compute[351685]: 2025-10-03 10:13:31.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:32 compute-0 ovn_controller[88471]: 2025-10-03T10:13:32Z|00067|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  3 10:13:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:13:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:13:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:13:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:13:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:13:32 compute-0 nova_compute[351685]: 2025-10-03 10:13:32.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:13:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1d5b72c4-31b5-4f8f-9b17-9db94d8c5036 does not exist
Oct  3 10:13:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7ab31800-8b58-45e8-84d7-14714daf268d does not exist
Oct  3 10:13:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cb51a2d4-6647-47c4-a7b7-81943dfd36e6 does not exist
Oct  3 10:13:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:13:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:13:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:13:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:13:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:13:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:13:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:13:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:13:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
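Each `handle_command mon_command({...})` line above is the monitor dispatching a JSON-framed command from the cephadm mgr module: `config generate-minimal-conf`, `auth get` for the admin and bootstrap-osd keys, and an `osd tree` filtered to destroyed OSDs. The same wire format can be driven from Python through librados; a sketch, assuming a reachable cluster, the default conf path, and an admin keyring:

    import json
    import rados  # python3-rados bindings over librados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")  # (status, output buffer, status text)
    print(ret, json.loads(outbuf) if outbuf else outs)
    cluster.shutdown()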
Oct  3 10:13:32 compute-0 podman[431945]: 2025-10-03 10:13:32.537404142 +0000 UTC m=+0.097431402 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:13:32 compute-0 podman[431946]: 2025-10-03 10:13:32.606218822 +0000 UTC m=+0.162786301 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Oct  3 10:13:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:13:33 compute-0 podman[432099]: 2025-10-03 10:13:33.20282912 +0000 UTC m=+0.070120714 container create ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 10:13:33 compute-0 systemd[1]: Started libpod-conmon-ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852.scope.
Oct  3 10:13:33 compute-0 podman[432099]: 2025-10-03 10:13:33.170907145 +0000 UTC m=+0.038198789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:13:33 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:13:33 compute-0 podman[432099]: 2025-10-03 10:13:33.318543888 +0000 UTC m=+0.185835472 container init ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pasteur, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:13:33 compute-0 podman[432099]: 2025-10-03 10:13:33.339147399 +0000 UTC m=+0.206438963 container start ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pasteur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:13:33 compute-0 podman[432099]: 2025-10-03 10:13:33.343619963 +0000 UTC m=+0.210911557 container attach ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pasteur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:13:33 compute-0 sad_pasteur[432114]: 167 167
Oct  3 10:13:33 compute-0 systemd[1]: libpod-ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852.scope: Deactivated successfully.
Oct  3 10:13:33 compute-0 podman[432099]: 2025-10-03 10:13:33.350491014 +0000 UTC m=+0.217782588 container died ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pasteur, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 10:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8c4ec58f5aff255cd634b3098ca5d422312de7355aefe7126e65d30742b5b8c-merged.mount: Deactivated successfully.
Oct  3 10:13:33 compute-0 podman[432099]: 2025-10-03 10:13:33.476702489 +0000 UTC m=+0.343994063 container remove ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_pasteur, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:13:33 compute-0 systemd[1]: libpod-conmon-ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852.scope: Deactivated successfully.
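The create → init → start → attach → died → remove burst around the randomly named `sad_pasteur` container is cephadm launching a short-lived utility container from the pinned `quay.io/ceph/ceph` image; the `167 167` it prints is plausibly the uid/gid pair of the `ceph` user in those images. For log analysis, the lifecycle events can be pulled out of such lines with a small parser (the regex and sample line are illustrative):

    import re

    EVENT = re.compile(
        r"container (?P<action>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64}) \(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)")

    line = ("container died ced25ee920c2566cc188c5dc33425c92fd20d16ac1e84a1509ffa52e03e63852 "
            "(image=quay.io/ceph/ceph@sha256:1b9158ce..., name=sad_pasteur, ceph=True)")
    m = EVENT.search(line)
    print(m.group("action"), m.group("name"))  # died sad_pasteur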
Oct  3 10:13:33 compute-0 podman[432139]: 2025-10-03 10:13:33.728900351 +0000 UTC m=+0.062103366 container create eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:13:33 compute-0 systemd[1]: Started libpod-conmon-eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d.scope.
Oct  3 10:13:33 compute-0 podman[432139]: 2025-10-03 10:13:33.706791761 +0000 UTC m=+0.039994806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:13:33 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e72f1c28231a864fadc9d2b6cd0dffbaa6a56478c08c77eda40f887e5fdd186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e72f1c28231a864fadc9d2b6cd0dffbaa6a56478c08c77eda40f887e5fdd186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e72f1c28231a864fadc9d2b6cd0dffbaa6a56478c08c77eda40f887e5fdd186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e72f1c28231a864fadc9d2b6cd0dffbaa6a56478c08c77eda40f887e5fdd186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e72f1c28231a864fadc9d2b6cd0dffbaa6a56478c08c77eda40f887e5fdd186/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:33 compute-0 podman[432139]: 2025-10-03 10:13:33.862938897 +0000 UTC m=+0.196142042 container init eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 10:13:33 compute-0 podman[432139]: 2025-10-03 10:13:33.899279715 +0000 UTC m=+0.232482730 container start eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:13:33 compute-0 podman[432139]: 2025-10-03 10:13:33.913669188 +0000 UTC m=+0.246872253 container attach eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:13:35 compute-0 ecstatic_visvesvaraya[432154]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:13:35 compute-0 ecstatic_visvesvaraya[432154]: --> relative data size: 1.0
Oct  3 10:13:35 compute-0 ecstatic_visvesvaraya[432154]: --> All data devices are unavailable
Oct  3 10:13:35 compute-0 systemd[1]: libpod-eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d.scope: Deactivated successfully.
Oct  3 10:13:35 compute-0 systemd[1]: libpod-eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d.scope: Consumed 1.060s CPU time.
Oct  3 10:13:35 compute-0 podman[432183]: 2025-10-03 10:13:35.071817417 +0000 UTC m=+0.032809006 container died eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e72f1c28231a864fadc9d2b6cd0dffbaa6a56478c08c77eda40f887e5fdd186-merged.mount: Deactivated successfully.
Oct  3 10:13:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 2 op/s
Oct  3 10:13:35 compute-0 podman[432183]: 2025-10-03 10:13:35.137340171 +0000 UTC m=+0.098331740 container remove eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:13:35 compute-0 systemd[1]: libpod-conmon-eaed909e60adca91b190d189af5cd3656eb255b4da0142adeba6f14b309ae15d.scope: Deactivated successfully.
Oct  3 10:13:36 compute-0 podman[432334]: 2025-10-03 10:13:36.080626457 +0000 UTC m=+0.081604533 container create f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:13:36 compute-0 podman[432334]: 2025-10-03 10:13:36.040647012 +0000 UTC m=+0.041625168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:13:36 compute-0 systemd[1]: Started libpod-conmon-f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1.scope.
Oct  3 10:13:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:13:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:36 compute-0 podman[432334]: 2025-10-03 10:13:36.319406829 +0000 UTC m=+0.320385015 container init f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:13:36 compute-0 podman[432334]: 2025-10-03 10:13:36.338358127 +0000 UTC m=+0.339336253 container start f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:13:36 compute-0 silly_swartz[432350]: 167 167
Oct  3 10:13:36 compute-0 systemd[1]: libpod-f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1.scope: Deactivated successfully.
Oct  3 10:13:36 compute-0 podman[432334]: 2025-10-03 10:13:36.421803308 +0000 UTC m=+0.422781424 container attach f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:13:36 compute-0 podman[432334]: 2025-10-03 10:13:36.422576923 +0000 UTC m=+0.423555029 container died f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:13:36 compute-0 nova_compute[351685]: 2025-10-03 10:13:36.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0dc753fe4a46217fd2989f73d8b8e45a3ada200444eb36af90e814730bd266-merged.mount: Deactivated successfully.
Oct  3 10:13:36 compute-0 podman[432334]: 2025-10-03 10:13:36.812018866 +0000 UTC m=+0.812996992 container remove f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:13:36 compute-0 systemd[1]: libpod-conmon-f85b27e50b59a7a1cd12222cb1cb9bcf22785c674691bfae8a2c8caf0b3be8a1.scope: Deactivated successfully.
Oct  3 10:13:37 compute-0 podman[432376]: 2025-10-03 10:13:37.04933675 +0000 UTC m=+0.074069890 container create 15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:13:37 compute-0 podman[432376]: 2025-10-03 10:13:37.014894173 +0000 UTC m=+0.039627333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:13:37 compute-0 systemd[1]: Started libpod-conmon-15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb.scope.
Oct  3 10:13:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 2 op/s
Oct  3 10:13:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f7f73564edd6701d3223e42448858c039696b1e380b5c2d0df5f8853dea87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f7f73564edd6701d3223e42448858c039696b1e380b5c2d0df5f8853dea87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f7f73564edd6701d3223e42448858c039696b1e380b5c2d0df5f8853dea87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f7f73564edd6701d3223e42448858c039696b1e380b5c2d0df5f8853dea87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:37 compute-0 podman[432376]: 2025-10-03 10:13:37.192481459 +0000 UTC m=+0.217214579 container init 15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:13:37 compute-0 podman[432376]: 2025-10-03 10:13:37.204993792 +0000 UTC m=+0.229726932 container start 15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:13:37 compute-0 podman[432376]: 2025-10-03 10:13:37.212059338 +0000 UTC m=+0.236792478 container attach 15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  3 10:13:37 compute-0 nova_compute[351685]: 2025-10-03 10:13:37.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
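The JSON that `zen_ganguly` emits next is `ceph-volume lvm list --format json` output: a map of OSD id → logical volumes, whose LVM tags (`ceph.osd_fsid`, `ceph.cluster_fsid`, `ceph.osd_id`, ...) are what lets ceph-volume reassemble the OSDs at activation time. Parsing the same report (a sketch; assumes it runs where ceph-volume is available, e.g. inside a cephadm shell):

    import json, subprocess

    raw = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}", lv["lv_path"], lv["tags"]["ceph.osd_fsid"])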
Oct  3 10:13:38 compute-0 zen_ganguly[432392]: {
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:    "0": [
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:        {
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "devices": [
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "/dev/loop3"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            ],
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_name": "ceph_lv0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_size": "21470642176",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "name": "ceph_lv0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "tags": {
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cluster_name": "ceph",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.crush_device_class": "",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.encrypted": "0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osd_id": "0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.type": "block",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.vdo": "0"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            },
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "type": "block",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "vg_name": "ceph_vg0"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:        }
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:    ],
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:    "1": [
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:        {
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "devices": [
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "/dev/loop4"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            ],
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_name": "ceph_lv1",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_size": "21470642176",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "name": "ceph_lv1",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "tags": {
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cluster_name": "ceph",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.crush_device_class": "",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.encrypted": "0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osd_id": "1",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.type": "block",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.vdo": "0"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            },
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "type": "block",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "vg_name": "ceph_vg1"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:        }
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:    ],
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:    "2": [
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:        {
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "devices": [
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "/dev/loop5"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            ],
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_name": "ceph_lv2",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_size": "21470642176",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "name": "ceph_lv2",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "tags": {
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.cluster_name": "ceph",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.crush_device_class": "",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.encrypted": "0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osd_id": "2",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.type": "block",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:                "ceph.vdo": "0"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            },
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "type": "block",
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:            "vg_name": "ceph_vg2"
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:        }
Oct  3 10:13:38 compute-0 zen_ganguly[432392]:    ]
Oct  3 10:13:38 compute-0 zen_ganguly[432392]: }
Oct  3 10:13:38 compute-0 systemd[1]: libpod-15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb.scope: Deactivated successfully.
Oct  3 10:13:38 compute-0 podman[432376]: 2025-10-03 10:13:38.099103107 +0000 UTC m=+1.123836217 container died 15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0f7f73564edd6701d3223e42448858c039696b1e380b5c2d0df5f8853dea87-merged.mount: Deactivated successfully.
Oct  3 10:13:38 compute-0 podman[432376]: 2025-10-03 10:13:38.181646139 +0000 UTC m=+1.206379279 container remove 15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:13:38 compute-0 systemd[1]: libpod-conmon-15e2778ed85baf7e9bdd4181fdf8aa0426fcfab384b0aa702288bd66e289dabb.scope: Deactivated successfully.
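[editor's note] The JSON dump printed by the zen_ganguly container above matches the shape of `ceph-volume lvm list --format json` output: a map from OSD id to a list of logical-volume records carrying the ceph.* LV tags. The invoking command is not captured in the log, so that reading is inferred from the structure. A minimal parsing sketch, assuming the dump has been saved to a local file (the filename is hypothetical):

    import json

    # Parse a `ceph-volume lvm list --format json`-style dump of the shape
    # {"<osd_id>": [{"lv_path": ..., "devices": [...], "tags": {...}}, ...]}.
    def osd_device_map(path):
        with open(path) as f:
            listing = json.load(f)
        result = {}
        for osd_id, lvs in listing.items():
            for lv in lvs:
                if lv.get("type") == "block":
                    result[int(osd_id)] = {
                        "lv_path": lv["lv_path"],
                        "backing_devices": lv["devices"],
                        "osd_fsid": lv["tags"].get("ceph.osd_fsid"),
                    }
        return result

    # For the listing above this yields e.g.
    # {1: {"lv_path": "/dev/ceph_vg1/ceph_lv1",
    #      "backing_devices": ["/dev/loop4"], ...}, ...}
    print(osd_device_map("lvm_list.json"))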
Oct  3 10:13:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct  3 10:13:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct  3 10:13:38 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct  3 10:13:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 85 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 773 KiB/s wr, 4 op/s
Oct  3 10:13:39 compute-0 podman[432548]: 2025-10-03 10:13:39.197111123 +0000 UTC m=+0.075147935 container create a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 10:13:39 compute-0 podman[432548]: 2025-10-03 10:13:39.16712217 +0000 UTC m=+0.045159022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:13:39 compute-0 systemd[1]: Started libpod-conmon-a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4.scope.
Oct  3 10:13:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:13:39 compute-0 podman[432548]: 2025-10-03 10:13:39.321503 +0000 UTC m=+0.199539902 container init a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:13:39 compute-0 podman[432548]: 2025-10-03 10:13:39.333790224 +0000 UTC m=+0.211827046 container start a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:13:39 compute-0 podman[432548]: 2025-10-03 10:13:39.338424884 +0000 UTC m=+0.216461786 container attach a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:13:39 compute-0 serene_jennings[432564]: 167 167
Oct  3 10:13:39 compute-0 systemd[1]: libpod-a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4.scope: Deactivated successfully.
Oct  3 10:13:39 compute-0 podman[432548]: 2025-10-03 10:13:39.341128111 +0000 UTC m=+0.219164923 container died a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 10:13:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f753504dc3ff462c118537be5249ed4d6602466bf090fa381c91041c6229460b-merged.mount: Deactivated successfully.
Oct  3 10:13:39 compute-0 podman[432548]: 2025-10-03 10:13:39.414489927 +0000 UTC m=+0.292526729 container remove a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:13:39 compute-0 systemd[1]: libpod-conmon-a4f6547dcc752d8e5b914ec44f55ee49660fed67df3cd6e016b4ff225eba71d4.scope: Deactivated successfully.
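[editor's note] Each cephadm helper container in this log follows the same one-shot podman lifecycle: image pull, create, init, start, attach, died, remove, with systemd deactivating the matching libpod and conmon scopes once the process exits. A sketch of reproducing such a run from Python via subprocess; the ceph-volume subcommand is illustrative, since the log does not record which command each helper executed, and the /dev and ceph-config bind mounts that cephadm normally adds are omitted:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # `podman run --rm` produces the create/init/start/attach/died/remove
    # sequence seen above; the container is removed as soon as it exits.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)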
Oct  3 10:13:39 compute-0 podman[432588]: 2025-10-03 10:13:39.648645211 +0000 UTC m=+0.093078252 container create 228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:13:39 compute-0 podman[432588]: 2025-10-03 10:13:39.596535206 +0000 UTC m=+0.040968297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:13:39 compute-0 systemd[1]: Started libpod-conmon-228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20.scope.
Oct  3 10:13:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b94cba8aafb4dd52d084304c4a15415f339d4110878b3b35ff8374acb70726/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b94cba8aafb4dd52d084304c4a15415f339d4110878b3b35ff8374acb70726/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b94cba8aafb4dd52d084304c4a15415f339d4110878b3b35ff8374acb70726/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12b94cba8aafb4dd52d084304c4a15415f339d4110878b3b35ff8374acb70726/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:13:39 compute-0 podman[432588]: 2025-10-03 10:13:39.820560723 +0000 UTC m=+0.264993754 container init 228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:13:39 compute-0 podman[432588]: 2025-10-03 10:13:39.853191202 +0000 UTC m=+0.297624213 container start 228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:13:39 compute-0 podman[432588]: 2025-10-03 10:13:39.858193933 +0000 UTC m=+0.302627034 container attach 228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.884 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.885 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.885 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.891 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.892 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.892 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:13:40.892747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.900 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.902 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.902 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.902 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.902 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.903 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.903 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:13:40.902763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:13:40.903912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.926 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.926 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.927 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.931 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.935 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:13:40.934067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]: {
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "osd_id": 1,
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "type": "bluestore"
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:    },
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "osd_id": 2,
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "type": "bluestore"
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:    },
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "osd_id": 0,
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:        "type": "bluestore"
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]:    }
Oct  3 10:13:40 compute-0 vigorous_keldysh[432605]: }
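[editor's note] This second dump, keyed by OSD uuid with a dm device and a bluestore type, is consistent with `ceph-volume raw list --format json` output (again inferred; the command line is not in the log). A small sketch, assuming both dumps were saved locally under hypothetical filenames, that cross-checks the osd_id/osd_fsid pairing between the two listings:

    import json

    def check_consistency(lvm_path, raw_path):
        with open(lvm_path) as f:
            lvm = json.load(f)   # {"<osd_id>": [lv records]}
        with open(raw_path) as f:
            raw = json.load(f)   # {"<osd_uuid>": {"osd_id": ..., "device": ...}}
        lvm_pairs = {
            (int(osd_id), lv["tags"]["ceph.osd_fsid"])
            for osd_id, lvs in lvm.items()
            for lv in lvs if lv.get("type") == "block"
        }
        raw_pairs = {(rec["osd_id"], uuid) for uuid, rec in raw.items()}
        return lvm_pairs == raw_pairs

    # For the dumps above this returns True: OSDs 0, 1 and 2 carry the
    # same osd_fsid values in both listings.
    print(check_consistency("lvm_list.json", "raw_list.json"))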
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.996 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:13:40.996558) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.999 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:13:40.999170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.000 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.001 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.001 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.001 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:13:41.001579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.004 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 systemd[1]: libpod-228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20.scope: Deactivated successfully.
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
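
Note the PID column in these entries: the polling worker logs as process 14, while process 12 logs the matching "Updated heartbeat for ..." a few milliseconds later, so heartbeats are recorded by a separate status worker rather than by the poller itself. A rough sketch of that hand-off pattern, with threads and a queue standing in for the two agent processes (illustrative only, not ceilometer's code):

    import queue
    import threading
    from datetime import datetime, timezone

    heartbeats = {}          # pollster name -> last heartbeat timestamp
    updates = queue.Queue()  # polling side pushes, status side pops

    def record_heartbeat(pollster):
        # called by the polling worker after each pollster run
        updates.put((pollster, datetime.now(timezone.utc)))

    def status_worker():
        while True:
            name, ts = updates.get()
            heartbeats[name] = ts
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")
            updates.task_done()

    threading.Thread(target=status_worker, daemon=True).start()
    record_heartbeat("disk.device.allocation")
    updates.join()           # block until the status worker has recorded it
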
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 systemd[1]: libpod-228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20.scope: Consumed 1.138s CPU time.
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:13:41.003962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:13:41.007087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:13:41.009218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.038 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
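
The power.state sample carries volume 1, which matches libvirt's VIR_DOMAIN_RUNNING state code for the instance. A hedged sketch of reading that state with the libvirt Python bindings (it assumes a local qemu:///system hypervisor and that the UUID from the log is a libvirt domain):

    import libvirt  # libvirt-python bindings

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")

    # dom.info() -> [state, maxMem (KiB), memory (KiB), nrVirtCpu, cpuTime (ns)]
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(state == libvirt.VIR_DOMAIN_RUNNING)  # True when the sample volume is 1
    conn.close()
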
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.039 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:13:41.039614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:13:41.042361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:13:41.044501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.045 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
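
Unlike the pollsters above, network.incoming.bytes.rate is skipped: the manager compares the discovery result against the resources it has already handled this cycle and bypasses a pollster that has nothing new to poll. A speculative sketch of that skip logic (all function and variable names are invented; only the log message text is taken from the entries above):

    def run_pollster(name, discover, polled_cache):
        resources = [r for r in discover() if r not in polled_cache.get(name, set())]
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return
        polled_cache.setdefault(name, set()).update(resources)
        print(f"Polling pollster {name} in the context of pollsters")

    cache = {"network.incoming.bytes.rate": {"b43db93c-a4fe-46e9-8418-eedf4f5c135a"}}
    run_pollster("network.incoming.bytes.rate",
                 lambda: ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"],
                 cache)
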
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.046 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.046 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.048 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.049 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:13:41.046415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:13:41.047644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:13:41.049204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.051 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:13:41.050734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 42260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:13:41.052424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
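
The cpu sample's volume, 42260000000, is cumulative guest CPU time in nanoseconds (about 42.26 s since the instance started). Turning two successive samples into a utilization percentage is simple arithmetic; the previous reading, the 120 s polling interval, and the single vCPU below are all assumptions for illustration:

    prev_ns, cur_ns = 41_000_000_000, 42_260_000_000  # cumulative CPU time, ns
    interval_s, vcpus = 120, 1                        # assumed period and vCPU count

    used_s = (cur_ns - prev_ns) / 1e9
    utilization_pct = 100.0 * used_s / (interval_s * vcpus)
    print(f"{used_s:.2f}s of CPU over {interval_s}s -> {utilization_pct:.2f}%")
    # 1.26s of CPU over 120s -> 1.05%
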
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.054 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:13:41.053947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.057 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:13:41.055163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:13:41.056222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
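
memory.usage is reported in MB, and the odd-looking 48.84765625 is just an integer number of KiB divided by 1024 (50020 KiB here), consistent with libvirt reporting memory counters in KiB. Exactly which counters ceilometer combines varies by hypervisor and version; the rss field below is only an example:

    rss_kib = 50020              # e.g. dom.memoryStats()["rss"] from libvirt, in KiB
    usage_mb = rss_kib / 1024.0
    print(usage_mb)              # 48.84765625, the volume seen in the log line above
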
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.058 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:13:41.057881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:13:41.058911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:13:41.059799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
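
The burst of "Finished processing pollster [...]" lines closes out one polling task: the manager has iterated its whole pollster list, run discovery (local_instances) where needed, emitted samples, and now logs completion for every pollster, including the skipped rate ones. A compressed, self-contained sketch of such a loop (all class and function names are invented, not ceilometer's):

    def publish(sample):
        # stand-in for ceilometer's publisher pipeline
        print("publishing", sample)

    class EchoPollster:
        name = "disk.device.usage"

        def get_samples(self, resources):
            for resource in resources:
                yield (self.name, resource, 1073741824)

    def execute_polling_task(pollsters, discover):
        resources = discover()                 # e.g. the local_instances discovery
        for pollster in pollsters:
            try:
                for sample in pollster.get_samples(resources):
                    publish(sample)
            finally:
                print(f"Finished processing pollster [{pollster.name}].")

    execute_polling_task([EchoPollster()],
                         lambda: ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"])
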
Oct  3 10:13:41 compute-0 podman[432639]: 2025-10-03 10:13:41.10533262 +0000 UTC m=+0.065502825 container died 228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:13:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 93 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Oct  3 10:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-12b94cba8aafb4dd52d084304c4a15415f339d4110878b3b35ff8374acb70726-merged.mount: Deactivated successfully.
Oct  3 10:13:41 compute-0 podman[432639]: 2025-10-03 10:13:41.200943522 +0000 UTC m=+0.161113697 container remove 228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_keldysh, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:13:41 compute-0 systemd[1]: libpod-conmon-228edcd23ce610c14eb709a9cfbe3a9fc383029b20cfc5fd24c44e32978d4c20.scope: Deactivated successfully.
Oct  3 10:13:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:13:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:13:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:13:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:13:41 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 443d5189-a858-4236-bc3f-6db0c5cc98ad does not exist
Oct  3 10:13:41 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 69bedd7c-02b3-434a-adac-93c33b9b192b does not exist
Oct  3 10:13:41 compute-0 nova_compute[351685]: 2025-10-03 10:13:41.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:13:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:13:41.602 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:13:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:13:41.603 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:13:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:13:41.603 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
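
The three ovn_metadata_agent lines are oslo.concurrency's standard lock tracing: an acquire attempt, "acquired ... waited 0.001s", then "released ... held 0.001s". Code typically gets these messages from the lockutils.synchronized decorator; a minimal sketch, where the lock name is taken from the log but the function body is a stand-in:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)  # lockutils traces acquire/release at DEBUG

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Invented stand-in; the real method is
        # neutron.agent.linux.external_process.ProcessMonitor._check_child_processes.
        pass

    _check_child_processes()
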
Oct  3 10:13:42 compute-0 nova_compute[351685]: 2025-10-03 10:13:42.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:13:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:13:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:13:42 compute-0 podman[432705]: 2025-10-03 10:13:42.833050949 +0000 UTC m=+0.089252369 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Oct  3 10:13:42 compute-0 podman[432704]: 2025-10-03 10:13:42.855723187 +0000 UTC m=+0.111812423 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6)
Oct  3 10:13:42 compute-0 podman[432706]: 2025-10-03 10:13:42.89968771 +0000 UTC m=+0.142458828 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  3 10:13:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 93 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Oct  3 10:13:43 compute-0 nova_compute[351685]: 2025-10-03 10:13:43.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:43 compute-0 nova_compute[351685]: 2025-10-03 10:13:43.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:13:44 compute-0 nova_compute[351685]: 2025-10-03 10:13:44.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:44 compute-0 nova_compute[351685]: 2025-10-03 10:13:44.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:13:44 compute-0 nova_compute[351685]: 2025-10-03 10:13:44.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:13:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Oct  3 10:13:45 compute-0 nova_compute[351685]: 2025-10-03 10:13:45.772 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:13:45 compute-0 nova_compute[351685]: 2025-10-03 10:13:45.773 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:13:45 compute-0 nova_compute[351685]: 2025-10-03 10:13:45.774 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:13:45 compute-0 nova_compute[351685]: 2025-10-03 10:13:45.775 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:13:46
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root']
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:13:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.337 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.337 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.357 2 DEBUG nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.431 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.432 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:13:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.444 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.444 2 INFO nova.compute.claims [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Claim successful on node compute-0.ctlplane.example.com
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:13:46 compute-0 nova_compute[351685]: 2025-10-03 10:13:46.576 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:13:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2657197116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.084 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.093 2 DEBUG nova.compute.provider_tree [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.107 2 DEBUG nova.scheduler.client.report [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.133 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.134 2 DEBUG nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  3 10:13:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.191 2 DEBUG nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.203 2 INFO nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.237 2 DEBUG nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.334 2 DEBUG nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.336 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.337 2 INFO nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Creating image(s)
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.373 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.431 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.481 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.489 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "e7f67a70e606c08bfea45c9da4c170e96d463110" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.491 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "e7f67a70e606c08bfea45c9da4c170e96d463110" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.719 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.730 2 DEBUG nova.virt.libvirt.imagebackend [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image locations are: [{'url': 'rbd://9b4e8c9a-5555-5510-a631-4742a1182561/images/934d9c92-fbd6-4cca-90f6-600b09f043df/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://9b4e8c9a-5555-5510-a631-4742a1182561/images/934d9c92-fbd6-4cca-90f6-600b09f043df/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.740 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.740 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.741 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.742 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.742 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.742 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.771 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.771 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.772 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:13:47 compute-0 nova_compute[351685]: 2025-10-03 10:13:47.772 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:47 compute-0 podman[432842]: 2025-10-03 10:13:47.823115429 +0000 UTC m=+0.082011326 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:13:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:13:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2992036928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.415 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.490 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.490 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.491 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.608 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.684 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110.part --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.685 2 DEBUG nova.virt.images [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] 934d9c92-fbd6-4cca-90f6-600b09f043df was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.686 2 DEBUG nova.privsep.utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.687 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110.part /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.915 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.917 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3858MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.917 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.918 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.992 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.993 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.993 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:13:48 compute-0 nova_compute[351685]: 2025-10-03 10:13:48.994 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.037 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 93 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 781 KiB/s wr, 14 op/s
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.167 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110.part /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110.converted" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.172 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.239 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110.converted --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.242 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "e7f67a70e606c08bfea45c9da4c170e96d463110" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.271 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.278 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:13:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1834462073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.575 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.585 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.625 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.655 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.655 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:13:49 compute-0 nova_compute[351685]: 2025-10-03 10:13:49.904 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.015 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] resizing rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.397 2 DEBUG nova.objects.instance [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'migration_context' on Instance uuid 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.453 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.490 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.496 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.560 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.561 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.562 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.562 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.597 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:50 compute-0 nova_compute[351685]: 2025-10-03 10:13:50.604 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 98 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 696 KiB/s rd, 965 KiB/s wr, 19 op/s
Oct  3 10:13:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.455 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.850s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.611 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.612 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Ensure instance console log exists: /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.613 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.614 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.614 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.616 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:13:33Z,direct_url=<?>,disk_format='qcow2',id=934d9c92-fbd6-4cca-90f6-600b09f043df,min_disk=0,min_ram=0,name='fvt_testing_image',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:13:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '934d9c92-fbd6-4cca-90f6-600b09f043df'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 1, 'encrypted': False, 'encryption_options': None, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.623 2 WARNING nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.630 2 DEBUG nova.virt.libvirt.host [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.630 2 DEBUG nova.virt.libvirt.host [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.635 2 DEBUG nova.virt.libvirt.host [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.635 2 DEBUG nova.virt.libvirt.host [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
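[annotation] The two probes above look for a usable cpu controller first under cgroup v1, then under the unified (v2) hierarchy, which is what this host has. A stand-alone check for the v2 case, assuming the standard /sys/fs/cgroup mount point:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On a unified-hierarchy host, cgroup.controllers lists what the
        # kernel exposes, e.g. "cpuset cpu io memory hugetlb pids ...".
        try:
            return 'cpu' in Path(root, 'cgroup.controllers').read_text().split()
        except FileNotFoundError:
            return False  # file absent => not a cgroup-v2 mount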
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.636 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.636 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T10:13:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cc83cc1e-3f89-4bcf-92e9-02f6e61b74a5',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-10-03T10:13:33Z,direct_url=<?>,disk_format='qcow2',id=934d9c92-fbd6-4cca-90f6-600b09f043df,min_disk=0,min_ram=0,name='fvt_testing_image',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-10-03T10:13:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.637 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.637 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.637 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.638 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.638 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.638 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.639 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.639 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.639 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.640 2 DEBUG nova.virt.hardware [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
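[annotation] The topology search above reduces to factoring the vCPU count: with no flavor or image constraints (preferred 0:0:0, limits 65536 each), a 1-vCPU guest admits only sockets=1, cores=1, threads=1. A toy re-derivation, not nova's exact code path:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every (sockets, cores, threads) triple whose product equals vcpus.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"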
Oct  3 10:13:51 compute-0 nova_compute[351685]: 2025-10-03 10:13:51.643 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:13:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427793463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.141 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.143 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:13:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:13:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479064794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.612 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
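[annotation] nova shells out to "ceph mon dump" (several times in this build) to learn the monitor map before wiring up RBD disks. A stand-alone equivalent of that call, reusing the client id and conf path from the log; the 'mons' list is the usual shape of the JSON mon map:

    import json
    import subprocess

    def mon_addresses(conf='/etc/ceph/ceph.conf', client='openstack'):
        # Same command the log shows oslo.concurrency executing.
        out = subprocess.check_output(
            ['ceph', 'mon', 'dump', '--format=json', '--id', client, '--conf', conf])
        # One entry per monitor; this cluster shows a single mon on 192.168.122.100.
        return [m.get('addr') for m in json.loads(out).get('mons', [])]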
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.644 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.652 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.687 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.688 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:52 compute-0 nova_compute[351685]: 2025-10-03 10:13:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:13:52 compute-0 podman[433222]: 2025-10-03 10:13:52.850785207 +0000 UTC m=+0.097833945 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:13:52 compute-0 podman[433223]: 2025-10-03 10:13:52.875872823 +0000 UTC m=+0.123674695 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:13:52 compute-0 podman[433224]: 2025-10-03 10:13:52.88670083 +0000 UTC m=+0.132799927 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
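[annotation] Each podman health_status event above embeds the container's config_data as a Python dict literal, so the healthcheck command can be pulled out mechanically. A tiny extractor, with field names copied from the events:

    import ast

    def healthcheck_test(config_data_literal):
        # config_data in these events is a dict literal; ast.literal_eval
        # loads it without executing anything.
        cfg = ast.literal_eval(config_data_literal)
        return cfg.get('healthcheck', {}).get('test')

    print(healthcheck_test("{'healthcheck': {'test': '/openstack/healthcheck'}}"))
    # -> /openstack/healthcheck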
Oct  3 10:13:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 98 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 688 KiB/s rd, 281 KiB/s wr, 7 op/s
Oct  3 10:13:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 10:13:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203181291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.188 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.192 2 DEBUG nova.objects.instance [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.217 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] End _get_guest_xml xml=<domain type="kvm">
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <uuid>3f8a8352-bb52-4cb1-baf2-968c6b0d5e08</uuid>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <name>instance-00000006</name>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <memory>524288</memory>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <metadata>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <nova:name>fvt_testing_server</nova:name>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 10:13:51</nova:creationTime>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <nova:flavor name="fvt_testing_flavor">
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <nova:memory>512</nova:memory>
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <nova:ephemeral>1</nova:ephemeral>
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <nova:user uuid="2f408449ba0f42fcb69f92dbf541f2e3">admin</nova:user>
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <nova:project uuid="ee75a4dc6ade43baab6ee923c9cf4cdf">admin</nova:project>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="934d9c92-fbd6-4cca-90f6-600b09f043df"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <nova:ports/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  </metadata>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <system>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <entry name="serial">3f8a8352-bb52-4cb1-baf2-968c6b0d5e08</entry>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <entry name="uuid">3f8a8352-bb52-4cb1-baf2-968c6b0d5e08</entry>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </system>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <os>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  </os>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <features>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <apic/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  </features>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  </clock>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  </cpu>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  <devices>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk">
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      </source>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.eph0">
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      </source>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <target dev="vdb" bus="virtio"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.config">
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      </source>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 10:13:53 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      </auth>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </disk>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08/console.log" append="off"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </serial>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <video>
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </video>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </rng>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 10:13:53 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 10:13:53 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 10:13:53 compute-0 nova_compute[351685]:  </devices>
Oct  3 10:13:53 compute-0 nova_compute[351685]: </domain>
Oct  3 10:13:53 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
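[annotation] The <domain> document dumped between "Start _get_guest_xml" and here is plain libvirt XML and can be inspected with the standard library. A small reader for the fields that matter to this boot (the file name is hypothetical; it holds the dump above):

    import xml.etree.ElementTree as ET

    dom = ET.parse('instance-00000006.xml').getroot()
    print(dom.findtext('name'))                 # instance-00000006
    print(dom.find('os/type').get('machine'))   # q35
    for disk in dom.findall('devices/disk'):    # three RBD-backed devices
        src = disk.find('source')
        print(disk.get('device'), src.get('protocol'), src.get('name'))
        # disk  rbd vms/3f8a8352-..._disk
        # disk  rbd vms/3f8a8352-..._disk.eph0
        # cdrom rbd vms/3f8a8352-..._disk.config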
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.287 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.288 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.289 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.290 2 INFO nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Using config drive
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.342 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.753 2 INFO nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Creating config drive at /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08/disk.config
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.758 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv7owlppp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.886 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv7owlppp" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.935 2 DEBUG nova.storage.rbd_utils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] rbd image 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 10:13:53 compute-0 nova_compute[351685]: 2025-10-03 10:13:53.942 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08/disk.config 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:13:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:13:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1001921291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:13:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:13:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1001921291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:13:54 compute-0 nova_compute[351685]: 2025-10-03 10:13:54.210 2 DEBUG oslo_concurrency.processutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08/disk.config 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:13:54 compute-0 nova_compute[351685]: 2025-10-03 10:13:54.212 2 INFO nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Deleting local config drive /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08/disk.config because it was imported into RBD.
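[annotation] Lines 10:13:53.753 through 10:13:54.212 are the whole config-drive round trip: build an ISO9660 image locally, import it into the Ceph "vms" pool, then drop the local copy. A condensed sketch of the same three steps (the metadata directory is hypothetical; the log used a throwaway tempdir, /tmp/tmpv7owlppp):

    import os
    import subprocess

    inst = '3f8a8352-bb52-4cb1-baf2-968c6b0d5e08'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # 1. Build the config drive (volume label config-2, Joliet + Rock Ridge).
    subprocess.check_call(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                           '-allow-multidot', '-l', '-quiet', '-J', '-r',
                           '-V', 'config-2', '/tmp/metadata_dir'])
    # 2. Import it as an RBD image next to the instance's other disks.
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', iso,
                           f'{inst}_disk.config', '--image-format=2',
                           '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    # 3. Once in RBD, the local file is redundant.
    os.unlink(iso)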
Oct  3 10:13:54 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  3 10:13:54 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  3 10:13:54 compute-0 systemd-machined[137653]: New machine qemu-6-instance-00000006.
Oct  3 10:13:54 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct  3 10:13:55 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct  3 10:13:55 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 48 op/s
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008189211353128867 of space, bias 1.0, pg target 0.245676340593866 quantized to 32 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005066271692062251 of space, bias 1.0, pg target 0.15198815076186756 quantized to 32 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:13:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
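[annotation] Every pg_autoscaler line above fits one formula: pg_target = usage_ratio x bias x PG budget, then quantized to a power of two (with a floor, which is why the near-empty pools still sit at 32). The budget that reproduces these numbers is 300, plausibly mon_target_pg_per_osd (100) x 3 OSDs; that factor is inferred from the arithmetic, not stated in the log:

    budget = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

    print(0.0008189211353128867 * 1.0 * budget)
    # ~0.245676340593866  -> the 'vms' line's "pg target"
    print(5.087256625643029e-07 * 4.0 * budget)
    # ~0.0006104707950771635 -> the 'cephfs.cephfs.meta' line (bias 4.0)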
Oct  3 10:13:55 compute-0 nova_compute[351685]: 2025-10-03 10:13:55.790 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486435.7903283, 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 10:13:55 compute-0 nova_compute[351685]: 2025-10-03 10:13:55.791 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] VM Resumed (Lifecycle Event)
Oct  3 10:13:55 compute-0 nova_compute[351685]: 2025-10-03 10:13:55.794 2 DEBUG nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  3 10:13:55 compute-0 nova_compute[351685]: 2025-10-03 10:13:55.794 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  3 10:13:55 compute-0 nova_compute[351685]: 2025-10-03 10:13:55.800 2 INFO nova.virt.libvirt.driver [-] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Instance spawned successfully.
Oct  3 10:13:55 compute-0 nova_compute[351685]: 2025-10-03 10:13:55.800 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  3 10:13:55 compute-0 nova_compute[351685]: 2025-10-03 10:13:55.896 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 10:13:55 compute-0 nova_compute[351685]: 2025-10-03 10:13:55.902 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.011 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.012 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759486435.7933338, 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.013 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] VM Started (Lifecycle Event)
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.023 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.024 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.024 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.025 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.026 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.027 2 DEBUG nova.virt.libvirt.driver [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.038 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.046 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.082 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] During sync_power_state the instance has a pending task (spawning). Skip.
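[annotation] In both "Synchronizing instance power state" lines the database still says power_state 0 while libvirt reports 1; those values are NOSTATE and RUNNING, so the handler correctly defers to the in-flight spawn task instead of "correcting" the record. A lookup mirroring the constants as defined in nova.compute.power_state:

    # Integer values mirror nova.compute.power_state.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_state, vm_state = 0, 1  # the pair logged above
    print(f'DB={POWER_STATES[db_state]} VM={POWER_STATES[vm_state]}')
    # -> DB=NOSTATE VM=RUNNING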
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.101 2 INFO nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Took 8.77 seconds to spawn the instance on the hypervisor.
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.102 2 DEBUG nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.166 2 INFO nova.compute.manager [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Took 9.76 seconds to build instance.
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.185 2 DEBUG oslo_concurrency.lockutils [None req-7785e39c-e49e-4f8d-8b66-79b574b8c183 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:13:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:13:56 compute-0 nova_compute[351685]: 2025-10-03 10:13:56.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:13:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 48 op/s
Oct  3 10:13:57 compute-0 nova_compute[351685]: 2025-10-03 10:13:57.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:13:57 compute-0 podman[433472]: 2025-10-03 10:13:57.828946284 +0000 UTC m=+0.090939763 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Oct  3 10:13:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 67 op/s
Oct  3 10:13:59 compute-0 podman[157165]: time="2025-10-03T10:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:13:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:13:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9063 "" "Go-http-client/1.1"
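[annotation] The two GETs above are the podman exporter walking libpod's REST API over the podman socket. The same endpoint can be queried with a unix-socket HTTP connection; a minimal client, with the socket path taken from the podman_exporter config logged earlier:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over AF_UNIX; the 'localhost' host header is a dummy."""
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(len(json.loads(conn.getresponse().read())), 'containers')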
Oct  3 10:14:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Oct  3 10:14:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:14:01 compute-0 openstack_network_exporter[367524]: ERROR   10:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:14:01 compute-0 openstack_network_exporter[367524]: ERROR   10:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:14:01 compute-0 openstack_network_exporter[367524]: ERROR   10:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:14:01 compute-0 openstack_network_exporter[367524]: ERROR   10:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:14:01 compute-0 openstack_network_exporter[367524]: ERROR   10:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
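[annotation] The exporter errors above all reduce to one condition: no <daemon>.<pid>.ctl control sockets under the OVS/OVN run directory, so its ovs-appctl-style calls have nothing to target. A quick probe for the same condition, default rundir assumed:

    import glob

    def find_ctl_sockets(rundir='/var/run/openvswitch'):
        # ovs-appctl addresses daemons via <name>.<pid>.ctl unix sockets.
        return glob.glob(f'{rundir}/*.ctl')

    if not find_ctl_sockets():
        print('no control socket files found for the ovs db server')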
Oct  3 10:14:01 compute-0 nova_compute[351685]: 2025-10-03 10:14:01.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:02 compute-0 nova_compute[351685]: 2025-10-03 10:14:02.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:02 compute-0 podman[433493]: 2025-10-03 10:14:02.830073958 +0000 UTC m=+0.086191280 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-type=git, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9)
Oct  3 10:14:02 compute-0 podman[433492]: 2025-10-03 10:14:02.842285581 +0000 UTC m=+0.100866882 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:14:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct  3 10:14:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Oct  3 10:14:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.314831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486446314891, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 963, "num_deletes": 250, "total_data_size": 1327931, "memory_usage": 1357144, "flush_reason": "Manual Compaction"}
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486446380225, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 815841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27698, "largest_seqno": 28660, "table_properties": {"data_size": 812007, "index_size": 1486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10257, "raw_average_key_size": 20, "raw_value_size": 803719, "raw_average_value_size": 1626, "num_data_blocks": 67, "num_entries": 494, "num_filter_entries": 494, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759486358, "oldest_key_time": 1759486358, "file_creation_time": 1759486446, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 65638 microseconds, and 3709 cpu microseconds.
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.380470) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 815841 bytes OK
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.380499) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.388881) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.388913) EVENT_LOG_v1 {"time_micros": 1759486446388904, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.388933) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1323328, prev total WAL file size 1323328, number of live WAL files 2.
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.389824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(796KB)], [62(9057KB)]
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486446389895, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10091145, "oldest_snapshot_seqno": -1}
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5107 keys, 7290758 bytes, temperature: kUnknown
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486446542533, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7290758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7258185, "index_size": 18703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 126991, "raw_average_key_size": 24, "raw_value_size": 7167439, "raw_average_value_size": 1403, "num_data_blocks": 775, "num_entries": 5107, "num_filter_entries": 5107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759486446, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.542741) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7290758 bytes
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.555787) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.1 rd, 47.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 8.8 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(21.3) write-amplify(8.9) OK, records in: 5585, records dropped: 478 output_compression: NoCompression
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.555825) EVENT_LOG_v1 {"time_micros": 1759486446555810, "job": 34, "event": "compaction_finished", "compaction_time_micros": 152705, "compaction_time_cpu_micros": 21949, "output_level": 6, "num_output_files": 1, "total_output_size": 7290758, "num_input_records": 5585, "num_output_records": 5107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486446556327, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486446559083, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.389684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.559358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.559364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.559366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.559368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:14:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:14:06.559370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:14:06 compute-0 nova_compute[351685]: 2025-10-03 10:14:06.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 170 B/s wr, 55 op/s
Oct  3 10:14:07 compute-0 nova_compute[351685]: 2025-10-03 10:14:07.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 170 B/s wr, 55 op/s
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.046 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.046 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.047 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.048 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.048 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.050 2 INFO nova.compute.manager [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Terminating instance#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.051 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "refresh_cache-3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.051 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquired lock "refresh_cache-3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.051 2 DEBUG nova.network.neutron [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.294 2 DEBUG nova.network.neutron [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.628 2 DEBUG nova.network.neutron [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.689 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Releasing lock "refresh_cache-3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:14:10 compute-0 nova_compute[351685]: 2025-10-03 10:14:10.690 2 DEBUG nova.compute.manager [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 10:14:10 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  3 10:14:10 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 16.510s CPU time.
Oct  3 10:14:10 compute-0 systemd-machined[137653]: Machine qemu-6-instance-00000006 terminated.
Oct  3 10:14:11 compute-0 nova_compute[351685]: 2025-10-03 10:14:11.120 2 INFO nova.virt.libvirt.driver [-] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Instance destroyed successfully.#033[00m
Oct  3 10:14:11 compute-0 nova_compute[351685]: 2025-10-03 10:14:11.121 2 DEBUG nova.objects.instance [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lazy-loading 'resources' on Instance uuid 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:14:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 36 op/s
Oct  3 10:14:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:14:11 compute-0 nova_compute[351685]: 2025-10-03 10:14:11.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:12 compute-0 nova_compute[351685]: 2025-10-03 10:14:12.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 10 op/s
Oct  3 10:14:13 compute-0 podman[433553]: 2025-10-03 10:14:13.876487492 +0000 UTC m=+0.123650383 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct  3 10:14:13 compute-0 podman[433554]: 2025-10-03 10:14:13.890495652 +0000 UTC m=+0.135597318 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:14:13 compute-0 podman[433552]: 2025-10-03 10:14:13.899689637 +0000 UTC m=+0.147859181 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct  3 10:14:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 340 KiB/s rd, 170 B/s wr, 17 op/s
Oct  3 10:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:14:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  3 10:14:16 compute-0 nova_compute[351685]: 2025-10-03 10:14:16.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 126 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 170 B/s wr, 7 op/s
Oct  3 10:14:17 compute-0 nova_compute[351685]: 2025-10-03 10:14:17.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:18 compute-0 podman[433617]: 2025-10-03 10:14:18.846184114 +0000 UTC m=+0.098264217 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:14:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 113 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 511 B/s wr, 10 op/s
Oct  3 10:14:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct  3 10:14:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct  3 10:14:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 95 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Oct  3 10:14:21 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct  3 10:14:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:14:21 compute-0 nova_compute[351685]: 2025-10-03 10:14:21.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:22 compute-0 nova_compute[351685]: 2025-10-03 10:14:22.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 95 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct  3 10:14:23 compute-0 podman[433638]: 2025-10-03 10:14:23.850924836 +0000 UTC m=+0.097428772 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  3 10:14:23 compute-0 podman[433639]: 2025-10-03 10:14:23.871527917 +0000 UTC m=+0.109880901 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:14:23 compute-0 podman[433637]: 2025-10-03 10:14:23.889829625 +0000 UTC m=+0.135360630 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:14:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 95 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  3 10:14:26 compute-0 nova_compute[351685]: 2025-10-03 10:14:26.119 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759486451.1169257, 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 10:14:26 compute-0 nova_compute[351685]: 2025-10-03 10:14:26.119 2 INFO nova.compute.manager [-] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] VM Stopped (Lifecycle Event)#033[00m
Oct  3 10:14:26 compute-0 nova_compute[351685]: 2025-10-03 10:14:26.236 2 DEBUG nova.compute.manager [None req-d189e699-8ad1-4de3-9c81-0cda0d289155 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 10:14:26 compute-0 nova_compute[351685]: 2025-10-03 10:14:26.244 2 DEBUG nova.compute.manager [None req-d189e699-8ad1-4de3-9c81-0cda0d289155 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 10:14:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:14:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct  3 10:14:26 compute-0 nova_compute[351685]: 2025-10-03 10:14:26.397 2 INFO nova.compute.manager [None req-d189e699-8ad1-4de3-9c81-0cda0d289155 - - - - - -] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Oct  3 10:14:26 compute-0 nova_compute[351685]: 2025-10-03 10:14:26.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct  3 10:14:27 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct  3 10:14:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 95 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1023 B/s wr, 31 op/s
Oct  3 10:14:27 compute-0 nova_compute[351685]: 2025-10-03 10:14:27.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:28 compute-0 systemd[1]: session-64.scope: Deactivated successfully.
Oct  3 10:14:28 compute-0 systemd[1]: session-64.scope: Consumed 1.153s CPU time.
Oct  3 10:14:28 compute-0 systemd-logind[798]: Session 64 logged out. Waiting for processes to exit.
Oct  3 10:14:28 compute-0 systemd-logind[798]: Removed session 64.
Oct  3 10:14:28 compute-0 podman[433698]: 2025-10-03 10:14:28.567626389 +0000 UTC m=+0.105519511 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 10:14:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 95 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 255 B/s wr, 17 op/s
Oct  3 10:14:29 compute-0 podman[157165]: time="2025-10-03T10:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:14:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:14:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9061 "" "Go-http-client/1.1"
Oct  3 10:14:30 compute-0 nova_compute[351685]: 2025-10-03 10:14:30.573 2 INFO nova.virt.libvirt.driver [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Deleting instance files /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_del#033[00m
Oct  3 10:14:30 compute-0 nova_compute[351685]: 2025-10-03 10:14:30.574 2 INFO nova.virt.libvirt.driver [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Deletion of /var/lib/nova/instances/3f8a8352-bb52-4cb1-baf2-968c6b0d5e08_del complete#033[00m
Oct  3 10:14:30 compute-0 nova_compute[351685]: 2025-10-03 10:14:30.744 2 INFO nova.compute.manager [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Took 20.05 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 10:14:30 compute-0 nova_compute[351685]: 2025-10-03 10:14:30.745 2 DEBUG oslo.service.loopingcall [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 10:14:30 compute-0 nova_compute[351685]: 2025-10-03 10:14:30.746 2 DEBUG nova.compute.manager [-] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 10:14:30 compute-0 nova_compute[351685]: 2025-10-03 10:14:30.746 2 DEBUG nova.network.neutron [-] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 10:14:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 87 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct  3 10:14:31 compute-0 openstack_network_exporter[367524]: ERROR   10:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:14:31 compute-0 openstack_network_exporter[367524]: ERROR   10:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:14:31 compute-0 openstack_network_exporter[367524]: ERROR   10:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:14:31 compute-0 openstack_network_exporter[367524]: ERROR   10:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:14:31 compute-0 openstack_network_exporter[367524]: ERROR   10:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:14:31 compute-0 nova_compute[351685]: 2025-10-03 10:14:31.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:14:31 compute-0 nova_compute[351685]: 2025-10-03 10:14:31.782 2 DEBUG nova.network.neutron [-] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 10:14:31 compute-0 nova_compute[351685]: 2025-10-03 10:14:31.806 2 DEBUG nova.network.neutron [-] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:14:31 compute-0 nova_compute[351685]: 2025-10-03 10:14:31.845 2 INFO nova.compute.manager [-] [instance: 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08] Took 1.10 seconds to deallocate network for instance.#033[00m
Oct  3 10:14:31 compute-0 nova_compute[351685]: 2025-10-03 10:14:31.919 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:14:31 compute-0 nova_compute[351685]: 2025-10-03 10:14:31.921 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:14:32 compute-0 nova_compute[351685]: 2025-10-03 10:14:32.020 2 DEBUG oslo_concurrency.processutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:14:32 compute-0 nova_compute[351685]: 2025-10-03 10:14:32.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:14:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141801770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:14:32 compute-0 nova_compute[351685]: 2025-10-03 10:14:32.549 2 DEBUG oslo_concurrency.processutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:14:32 compute-0 nova_compute[351685]: 2025-10-03 10:14:32.559 2 DEBUG nova.compute.provider_tree [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:14:32 compute-0 nova_compute[351685]: 2025-10-03 10:14:32.573 2 DEBUG nova.scheduler.client.report [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:14:32 compute-0 nova_compute[351685]: 2025-10-03 10:14:32.594 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:14:32 compute-0 nova_compute[351685]: 2025-10-03 10:14:32.632 2 INFO nova.scheduler.client.report [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Deleted allocations for instance 3f8a8352-bb52-4cb1-baf2-968c6b0d5e08#033[00m
Oct  3 10:14:32 compute-0 nova_compute[351685]: 2025-10-03 10:14:32.686 2 DEBUG oslo_concurrency.lockutils [None req-2d0170a7-2fb9-4856-85bb-b5d5ebf4a3bb 2f408449ba0f42fcb69f92dbf541f2e3 ee75a4dc6ade43baab6ee923c9cf4cdf - - default default] Lock "3f8a8352-bb52-4cb1-baf2-968c6b0d5e08" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 22.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:14:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 87 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct  3 10:14:33 compute-0 podman[433739]: 2025-10-03 10:14:33.85814837 +0000 UTC m=+0.099745016 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:14:33 compute-0 podman[433740]: 2025-10-03 10:14:33.889133355 +0000 UTC m=+0.128557691 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 10:14:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.1 KiB/s wr, 60 op/s
Oct  3 10:14:36 compute-0 nova_compute[351685]: 2025-10-03 10:14:36.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:14:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.1 KiB/s wr, 60 op/s
Oct  3 10:14:37 compute-0 nova_compute[351685]: 2025-10-03 10:14:37.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:14:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 1.7 KiB/s wr, 69 op/s
Oct  3 10:14:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 KiB/s wr, 87 op/s
Oct  3 10:14:41 compute-0 nova_compute[351685]: 2025-10-03 10:14:41.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:14:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:14:41.603 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:14:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:14:41.604 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:14:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:14:41.605 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:14:41 compute-0 nova_compute[351685]: 2025-10-03 10:14:41.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:14:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:14:42 compute-0 nova_compute[351685]: 2025-10-03 10:14:42.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 86 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 684 KiB/s wr, 77 op/s
Oct  3 10:14:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:14:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:14:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:14:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:14:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:14:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e2813ac7-2d29-4450-afb3-847f13232575 does not exist
Oct  3 10:14:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev df652698-e788-4b7a-8d35-4d8c4b5ca1fb does not exist
Oct  3 10:14:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 33066a87-2dfd-4f4e-a03c-c588db8de5c2 does not exist
Oct  3 10:14:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:14:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:14:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:14:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:14:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:14:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:14:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:14:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:14:44 compute-0 podman[434168]: 2025-10-03 10:14:44.305156982 +0000 UTC m=+0.070167485 container create 8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 10:14:44 compute-0 systemd[1]: Started libpod-conmon-8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0.scope.
Oct  3 10:14:44 compute-0 podman[434168]: 2025-10-03 10:14:44.274863658 +0000 UTC m=+0.039874171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:14:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:14:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct  3 10:14:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct  3 10:14:44 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct  3 10:14:44 compute-0 podman[434168]: 2025-10-03 10:14:44.410949951 +0000 UTC m=+0.175960484 container init 8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:14:44 compute-0 podman[434168]: 2025-10-03 10:14:44.425418166 +0000 UTC m=+0.190428659 container start 8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:14:44 compute-0 podman[434168]: 2025-10-03 10:14:44.42992595 +0000 UTC m=+0.194936433 container attach 8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mclean, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:14:44 compute-0 relaxed_mclean[434197]: 167 167
Oct  3 10:14:44 compute-0 systemd[1]: libpod-8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0.scope: Deactivated successfully.
Oct  3 10:14:44 compute-0 podman[434168]: 2025-10-03 10:14:44.433568097 +0000 UTC m=+0.198578580 container died 8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mclean, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:14:44 compute-0 podman[434182]: 2025-10-03 10:14:44.463112796 +0000 UTC m=+0.105416777 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, config_id=edpm, container_name=openstack_network_exporter)
Oct  3 10:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fea55359b9f5073869b91dadfbf7509eed733192c1225c6ba91d9e67322c0730-merged.mount: Deactivated successfully.
Oct  3 10:14:44 compute-0 podman[434168]: 2025-10-03 10:14:44.486625322 +0000 UTC m=+0.251635795 container remove 8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mclean, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:14:44 compute-0 podman[434185]: 2025-10-03 10:14:44.495763956 +0000 UTC m=+0.137826189 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute)
Oct  3 10:14:44 compute-0 podman[434186]: 2025-10-03 10:14:44.501045065 +0000 UTC m=+0.139497703 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:14:44 compute-0 systemd[1]: libpod-conmon-8d75cb6b5a7b970b74287cda2194b10d9acb66e3d40befeb8eee78209cfcded0.scope: Deactivated successfully.
Oct  3 10:14:44 compute-0 podman[434267]: 2025-10-03 10:14:44.712697755 +0000 UTC m=+0.082880793 container create c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:14:44 compute-0 systemd[1]: Started libpod-conmon-c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4.scope.
Oct  3 10:14:44 compute-0 podman[434267]: 2025-10-03 10:14:44.682572197 +0000 UTC m=+0.052755265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:14:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7abb8e40552d271f3b734ad39b5ae0b12092e5b7ca563efe809551530dbb6a7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7abb8e40552d271f3b734ad39b5ae0b12092e5b7ca563efe809551530dbb6a7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7abb8e40552d271f3b734ad39b5ae0b12092e5b7ca563efe809551530dbb6a7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7abb8e40552d271f3b734ad39b5ae0b12092e5b7ca563efe809551530dbb6a7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7abb8e40552d271f3b734ad39b5ae0b12092e5b7ca563efe809551530dbb6a7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:44 compute-0 podman[434267]: 2025-10-03 10:14:44.828752734 +0000 UTC m=+0.198935832 container init c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:14:44 compute-0 podman[434267]: 2025-10-03 10:14:44.85664307 +0000 UTC m=+0.226826118 container start c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 10:14:44 compute-0 podman[434267]: 2025-10-03 10:14:44.862395855 +0000 UTC m=+0.232578943 container attach c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 10:14:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 86 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 820 KiB/s wr, 53 op/s
Oct  3 10:14:45 compute-0 nova_compute[351685]: 2025-10-03 10:14:45.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:45 compute-0 nova_compute[351685]: 2025-10-03 10:14:45.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:14:45 compute-0 nova_compute[351685]: 2025-10-03 10:14:45.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:14:46 compute-0 gallant_noyce[434283]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:14:46 compute-0 gallant_noyce[434283]: --> relative data size: 1.0
Oct  3 10:14:46 compute-0 gallant_noyce[434283]: --> All data devices are unavailable
Oct  3 10:14:46 compute-0 systemd[1]: libpod-c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4.scope: Deactivated successfully.
Oct  3 10:14:46 compute-0 systemd[1]: libpod-c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4.scope: Consumed 1.120s CPU time.
Oct  3 10:14:46 compute-0 podman[434267]: 2025-10-03 10:14:46.039544893 +0000 UTC m=+1.409727971 container died c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:14:46
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'images', 'volumes', 'default.rgw.meta', 'vms']
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7abb8e40552d271f3b734ad39b5ae0b12092e5b7ca563efe809551530dbb6a7f-merged.mount: Deactivated successfully.
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:14:46 compute-0 podman[434267]: 2025-10-03 10:14:46.121581009 +0000 UTC m=+1.491764047 container remove c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:14:46 compute-0 systemd[1]: libpod-conmon-c9ac9636e4be4c807fafe92e8de92a30198be4bbe44ef4209c04b09950a950a4.scope: Deactivated successfully.
Oct  3 10:14:46 compute-0 nova_compute[351685]: 2025-10-03 10:14:46.340 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:14:46 compute-0 nova_compute[351685]: 2025-10-03 10:14:46.341 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:14:46 compute-0 nova_compute[351685]: 2025-10-03 10:14:46.341 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:14:46 compute-0 nova_compute[351685]: 2025-10-03 10:14:46.341 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:14:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:14:46 compute-0 nova_compute[351685]: 2025-10-03 10:14:46.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:14:47 compute-0 podman[434464]: 2025-10-03 10:14:46.949161727 +0000 UTC m=+0.025227231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:14:47 compute-0 podman[434464]: 2025-10-03 10:14:47.09616831 +0000 UTC m=+0.172233804 container create d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_maxwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:14:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 86 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 820 KiB/s wr, 53 op/s
Oct  3 10:14:47 compute-0 systemd[1]: Started libpod-conmon-d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661.scope.
Oct  3 10:14:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:14:47 compute-0 nova_compute[351685]: 2025-10-03 10:14:47.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:47 compute-0 podman[434464]: 2025-10-03 10:14:47.314912218 +0000 UTC m=+0.390977712 container init d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:14:47 compute-0 podman[434464]: 2025-10-03 10:14:47.332807903 +0000 UTC m=+0.408873397 container start d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_maxwell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:14:47 compute-0 vigilant_maxwell[434480]: 167 167
Oct  3 10:14:47 compute-0 podman[434464]: 2025-10-03 10:14:47.343131444 +0000 UTC m=+0.419196978 container attach d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_maxwell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:14:47 compute-0 systemd[1]: libpod-d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661.scope: Deactivated successfully.
Oct  3 10:14:47 compute-0 conmon[434480]: conmon d8670ec7528dba696606 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661.scope/container/memory.events
Oct  3 10:14:47 compute-0 podman[434464]: 2025-10-03 10:14:47.349324273 +0000 UTC m=+0.425389817 container died d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_maxwell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f88124d0a840691df47394d3ba18571f47f098364f529614bcbd384c1b7a24-merged.mount: Deactivated successfully.
Oct  3 10:14:47 compute-0 podman[434464]: 2025-10-03 10:14:47.411956496 +0000 UTC m=+0.488021980 container remove d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:14:47 compute-0 systemd[1]: libpod-conmon-d8670ec7528dba696606ca107edae777abb2400e4e84c1ffad2aca2cf19f4661.scope: Deactivated successfully.
Oct  3 10:14:47 compute-0 podman[434503]: 2025-10-03 10:14:47.656503243 +0000 UTC m=+0.078997099 container create f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:14:47 compute-0 systemd[1]: Started libpod-conmon-f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b.scope.
Oct  3 10:14:47 compute-0 podman[434503]: 2025-10-03 10:14:47.624120902 +0000 UTC m=+0.046614788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:14:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135cd4cae54fce0cb632b2e71f418b7b29cca24b4994b2e9182c0a0da383d7b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135cd4cae54fce0cb632b2e71f418b7b29cca24b4994b2e9182c0a0da383d7b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135cd4cae54fce0cb632b2e71f418b7b29cca24b4994b2e9182c0a0da383d7b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135cd4cae54fce0cb632b2e71f418b7b29cca24b4994b2e9182c0a0da383d7b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:47 compute-0 podman[434503]: 2025-10-03 10:14:47.773956156 +0000 UTC m=+0.196450002 container init f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:14:47 compute-0 podman[434503]: 2025-10-03 10:14:47.794464814 +0000 UTC m=+0.216958670 container start f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 10:14:47 compute-0 podman[434503]: 2025-10-03 10:14:47.800307192 +0000 UTC m=+0.222801038 container attach f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:14:47 compute-0 nova_compute[351685]: 2025-10-03 10:14:47.988 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.012 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.013 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.014 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.015 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.016 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.019 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.020 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:14:48 compute-0 naughty_kalam[434519]: {
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:    "0": [
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:        {
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "devices": [
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "/dev/loop3"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            ],
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_name": "ceph_lv0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_size": "21470642176",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "name": "ceph_lv0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "tags": {
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cluster_name": "ceph",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.crush_device_class": "",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.encrypted": "0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osd_id": "0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.type": "block",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.vdo": "0"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            },
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "type": "block",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "vg_name": "ceph_vg0"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:        }
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:    ],
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:    "1": [
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:        {
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "devices": [
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "/dev/loop4"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            ],
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_name": "ceph_lv1",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_size": "21470642176",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "name": "ceph_lv1",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "tags": {
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cluster_name": "ceph",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.crush_device_class": "",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.encrypted": "0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osd_id": "1",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.type": "block",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.vdo": "0"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            },
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "type": "block",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "vg_name": "ceph_vg1"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:        }
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:    ],
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:    "2": [
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:        {
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "devices": [
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "/dev/loop5"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            ],
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_name": "ceph_lv2",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_size": "21470642176",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "name": "ceph_lv2",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "tags": {
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.cluster_name": "ceph",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.crush_device_class": "",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.encrypted": "0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osd_id": "2",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.type": "block",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:                "ceph.vdo": "0"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            },
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "type": "block",
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:            "vg_name": "ceph_vg2"
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:        }
Oct  3 10:14:48 compute-0 naughty_kalam[434519]:    ]
Oct  3 10:14:48 compute-0 naughty_kalam[434519]: }
Oct  3 10:14:48 compute-0 systemd[1]: libpod-f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b.scope: Deactivated successfully.
Oct  3 10:14:48 compute-0 podman[434503]: 2025-10-03 10:14:48.62909994 +0000 UTC m=+1.051593816 container died f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-135cd4cae54fce0cb632b2e71f418b7b29cca24b4994b2e9182c0a0da383d7b8-merged.mount: Deactivated successfully.
Oct  3 10:14:48 compute-0 podman[434503]: 2025-10-03 10:14:48.720122004 +0000 UTC m=+1.142615860 container remove f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:48 compute-0 systemd[1]: libpod-conmon-f08fae7a5ca422b89f43c72708bc951a9139e81a84fb64a03d38b78021e5e99b.scope: Deactivated successfully.
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.750 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.750 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.750 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.751 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:14:48 compute-0 nova_compute[351685]: 2025-10-03 10:14:48.751 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:14:49 compute-0 podman[434610]: 2025-10-03 10:14:49.040952842 +0000 UTC m=+0.074339689 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:14:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.6 MiB/s wr, 40 op/s
Oct  3 10:14:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:14:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936764586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.259 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.340 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.341 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.341 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:14:49 compute-0 podman[434718]: 2025-10-03 10:14:49.566844707 +0000 UTC m=+0.063503751 container create c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:14:49 compute-0 systemd[1]: Started libpod-conmon-c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260.scope.
Oct  3 10:14:49 compute-0 podman[434718]: 2025-10-03 10:14:49.533751044 +0000 UTC m=+0.030410108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:14:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:14:49 compute-0 podman[434718]: 2025-10-03 10:14:49.663952587 +0000 UTC m=+0.160611641 container init c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.668 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.669 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3827MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.669 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.670 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:14:49 compute-0 podman[434718]: 2025-10-03 10:14:49.67277393 +0000 UTC m=+0.169432964 container start c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 10:14:49 compute-0 podman[434718]: 2025-10-03 10:14:49.680334984 +0000 UTC m=+0.176994018 container attach c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:14:49 compute-0 cranky_mirzakhani[434733]: 167 167
Oct  3 10:14:49 compute-0 systemd[1]: libpod-c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260.scope: Deactivated successfully.
Oct  3 10:14:49 compute-0 podman[434718]: 2025-10-03 10:14:49.681768389 +0000 UTC m=+0.178427423 container died c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f60703b9f4166575aeb3640afc44ae26e1683c4753cfbe694dd0eab9d4579761-merged.mount: Deactivated successfully.
Oct  3 10:14:49 compute-0 podman[434718]: 2025-10-03 10:14:49.739490663 +0000 UTC m=+0.236149697 container remove c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mirzakhani, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:14:49 compute-0 systemd[1]: libpod-conmon-c23aca86667192022d395989fc40a7d86d17f0851eea6d7ca4ab18510abda260.scope: Deactivated successfully.
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.764 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.765 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.765 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:14:49 compute-0 nova_compute[351685]: 2025-10-03 10:14:49.812 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:14:49 compute-0 podman[434757]: 2025-10-03 10:14:49.930347455 +0000 UTC m=+0.048572520 container create ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:14:49 compute-0 systemd[1]: Started libpod-conmon-ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25.scope.
Oct  3 10:14:50 compute-0 podman[434757]: 2025-10-03 10:14:49.90899791 +0000 UTC m=+0.027222995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:14:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036708236b2616c2e51d25ecfe5cad780da3b8e6f2c466bfa3aeabe0e00e9b9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036708236b2616c2e51d25ecfe5cad780da3b8e6f2c466bfa3aeabe0e00e9b9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036708236b2616c2e51d25ecfe5cad780da3b8e6f2c466bfa3aeabe0e00e9b9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036708236b2616c2e51d25ecfe5cad780da3b8e6f2c466bfa3aeabe0e00e9b9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:14:50 compute-0 podman[434757]: 2025-10-03 10:14:50.053059238 +0000 UTC m=+0.171284313 container init ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 10:14:50 compute-0 podman[434757]: 2025-10-03 10:14:50.067894575 +0000 UTC m=+0.186119640 container start ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:14:50 compute-0 podman[434757]: 2025-10-03 10:14:50.072480093 +0000 UTC m=+0.190705178 container attach ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:14:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:14:50 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081589369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:14:50 compute-0 nova_compute[351685]: 2025-10-03 10:14:50.279 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:14:50 compute-0 nova_compute[351685]: 2025-10-03 10:14:50.290 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:14:50 compute-0 nova_compute[351685]: 2025-10-03 10:14:50.311 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:14:50 compute-0 nova_compute[351685]: 2025-10-03 10:14:50.333 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:14:50 compute-0 nova_compute[351685]: 2025-10-03 10:14:50.334 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]: {
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "osd_id": 1,
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "type": "bluestore"
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:    },
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "osd_id": 2,
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "type": "bluestore"
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:    },
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "osd_id": 0,
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:        "type": "bluestore"
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]:    }
Oct  3 10:14:51 compute-0 adoring_heyrovsky[434792]: }
Oct  3 10:14:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Oct  3 10:14:51 compute-0 systemd[1]: libpod-ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25.scope: Deactivated successfully.
Oct  3 10:14:51 compute-0 systemd[1]: libpod-ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25.scope: Consumed 1.125s CPU time.
Oct  3 10:14:51 compute-0 podman[434827]: 2025-10-03 10:14:51.255274892 +0000 UTC m=+0.046066051 container died ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 10:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-036708236b2616c2e51d25ecfe5cad780da3b8e6f2c466bfa3aeabe0e00e9b9e-merged.mount: Deactivated successfully.
Oct  3 10:14:51 compute-0 podman[434827]: 2025-10-03 10:14:51.347847777 +0000 UTC m=+0.138638936 container remove ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:14:51 compute-0 systemd[1]: libpod-conmon-ae5c7bb83ec10c3721bb0d34b0ba7930375e3f15137f43e0653be2cf8f3a8e25.scope: Deactivated successfully.
Oct  3 10:14:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:14:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:14:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:51 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 789f4ea3-6123-420e-8e8a-254910bce26f does not exist
Oct  3 10:14:51 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a51c76d1-b4b5-4360-a391-ffcb9c2361ca does not exist
Oct  3 10:14:51 compute-0 nova_compute[351685]: 2025-10-03 10:14:51.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:14:52 compute-0 nova_compute[351685]: 2025-10-03 10:14:52.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct  3 10:14:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:14:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct  3 10:14:52 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct  3 10:14:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 884 KiB/s wr, 26 op/s
Oct  3 10:14:53 compute-0 nova_compute[351685]: 2025-10-03 10:14:53.334 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:53 compute-0 nova_compute[351685]: 2025-10-03 10:14:53.335 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:14:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:14:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2719890600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:14:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:14:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2719890600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:14:54 compute-0 podman[434892]: 2025-10-03 10:14:54.831151569 +0000 UTC m=+0.085691485 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:14:54 compute-0 podman[434893]: 2025-10-03 10:14:54.841030116 +0000 UTC m=+0.096116169 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:14:54 compute-0 podman[434894]: 2025-10-03 10:14:54.870487843 +0000 UTC m=+0.120649948 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 774 KiB/s wr, 24 op/s
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005064999877905841 of space, bias 1.0, pg target 0.15194999633717524 quantized to 32 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:14:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:14:56 compute-0 nova_compute[351685]: 2025-10-03 10:14:56.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:14:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct  3 10:14:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct  3 10:14:56 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct  3 10:14:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 93 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 895 B/s wr, 18 op/s
Oct  3 10:14:57 compute-0 nova_compute[351685]: 2025-10-03 10:14:57.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:14:58 compute-0 podman[434953]: 2025-10-03 10:14:58.83342074 +0000 UTC m=+0.092272845 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:14:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Oct  3 10:14:59 compute-0 podman[157165]: time="2025-10-03T10:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:14:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:14:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9059 "" "Go-http-client/1.1"
Oct  3 10:15:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Oct  3 10:15:01 compute-0 openstack_network_exporter[367524]: ERROR   10:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:15:01 compute-0 openstack_network_exporter[367524]: ERROR   10:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:15:01 compute-0 openstack_network_exporter[367524]: ERROR   10:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:15:01 compute-0 openstack_network_exporter[367524]: ERROR   10:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:15:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:15:01 compute-0 openstack_network_exporter[367524]: ERROR   10:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:15:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:15:01 compute-0 nova_compute[351685]: 2025-10-03 10:15:01.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:02 compute-0 nova_compute[351685]: 2025-10-03 10:15:02.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:03 compute-0 systemd-logind[798]: New session 65 of user zuul.
Oct  3 10:15:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 716 B/s wr, 12 op/s
Oct  3 10:15:03 compute-0 systemd[1]: Started Session 65 of User zuul.
Oct  3 10:15:04 compute-0 podman[435124]: 2025-10-03 10:15:04.04108393 +0000 UTC m=+0.066078173 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, version=9.4, vcs-type=git)
Oct  3 10:15:04 compute-0 podman[435123]: 2025-10-03 10:15:04.053362965 +0000 UTC m=+0.081235901 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:15:04 compute-0 python3[435192]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 10:15:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 716 B/s wr, 10 op/s
Oct  3 10:15:06 compute-0 nova_compute[351685]: 2025-10-03 10:15:06.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 678 B/s wr, 9 op/s
Oct  3 10:15:07 compute-0 nova_compute[351685]: 2025-10-03 10:15:07.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 597 B/s wr, 8 op/s
Oct  3 10:15:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:11 compute-0 nova_compute[351685]: 2025-10-03 10:15:11.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:11 compute-0 python3[435404]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 10:15:12 compute-0 nova_compute[351685]: 2025-10-03 10:15:12.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:14 compute-0 podman[435443]: 2025-10-03 10:15:14.775200473 +0000 UTC m=+0.074653630 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7)
Oct  3 10:15:14 compute-0 podman[435444]: 2025-10-03 10:15:14.788343625 +0000 UTC m=+0.078624187 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute)
Oct  3 10:15:14 compute-0 podman[435445]: 2025-10-03 10:15:14.824549048 +0000 UTC m=+0.113857338 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 10:15:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:15:16 compute-0 nova_compute[351685]: 2025-10-03 10:15:16.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:17 compute-0 nova_compute[351685]: 2025-10-03 10:15:17.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:19 compute-0 podman[435504]: 2025-10-03 10:15:19.876370194 +0000 UTC m=+0.119882443 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 10:15:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:21 compute-0 python3[435698]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 10:15:21 compute-0 nova_compute[351685]: 2025-10-03 10:15:21.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:22 compute-0 nova_compute[351685]: 2025-10-03 10:15:22.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:25 compute-0 podman[435740]: 2025-10-03 10:15:25.873848909 +0000 UTC m=+0.117325131 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:15:25 compute-0 podman[435738]: 2025-10-03 10:15:25.875156261 +0000 UTC m=+0.121712282 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:15:25 compute-0 podman[435739]: 2025-10-03 10:15:25.886997062 +0000 UTC m=+0.128575442 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:15:26 compute-0 nova_compute[351685]: 2025-10-03 10:15:26.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:27 compute-0 nova_compute[351685]: 2025-10-03 10:15:27.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:29 compute-0 podman[157165]: time="2025-10-03T10:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:15:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:15:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9056 "" "Go-http-client/1.1"
Oct  3 10:15:29 compute-0 podman[435796]: 2025-10-03 10:15:29.910204516 +0000 UTC m=+0.170698205 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:15:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:31 compute-0 openstack_network_exporter[367524]: ERROR   10:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:15:31 compute-0 openstack_network_exporter[367524]: ERROR   10:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:15:31 compute-0 openstack_network_exporter[367524]: ERROR   10:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:15:31 compute-0 openstack_network_exporter[367524]: ERROR   10:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:15:31 compute-0 openstack_network_exporter[367524]: ERROR   10:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:15:31 compute-0 nova_compute[351685]: 2025-10-03 10:15:31.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:32 compute-0 nova_compute[351685]: 2025-10-03 10:15:32.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:34 compute-0 podman[435817]: 2025-10-03 10:15:34.835409712 +0000 UTC m=+0.090474367 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Oct  3 10:15:34 compute-0 podman[435816]: 2025-10-03 10:15:34.858405231 +0000 UTC m=+0.107947909 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:15:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:36 compute-0 python3[436030]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  3 10:15:36 compute-0 nova_compute[351685]: 2025-10-03 10:15:36.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:37 compute-0 nova_compute[351685]: 2025-10-03 10:15:37.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.885 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.886 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.886 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.895 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.896 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.896 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.896 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.897 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:15:40.896879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.903 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.905 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.906 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.906 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.906 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:15:40.906386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.908 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:15:40.908874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.945 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.946 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.946 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.947 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.947 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.948 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:40.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:15:40.948179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
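
The read-latency volumes above are far too large to be per-request figures; they read as cumulative totals in nanoseconds since the domain started (libvirt's block-stats read-time counter), so a consumer normally differences two successive polls. A hedged example follows, pairing the first-device latency above (1351272306 ns) with the first-device request count logged in the next cycle (840); the pairing and the previous-poll readings are assumptions made for illustration.

    def avg_read_latency_ms(prev_ns, cur_ns, prev_reqs, cur_reqs):
        """Average per-request read latency across one polling interval, in ms."""
        d_reqs = cur_reqs - prev_reqs
        return 0.0 if d_reqs <= 0 else (cur_ns - prev_ns) / d_reqs / 1e6

    # Current readings from this capture; the "previous" values are hypothetical:
    print(avg_read_latency_ms(1_341_272_306, 1_351_272_306, 830, 840))  # -> 1.0
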
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.021 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:15:41.019409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:15:41.021808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
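
Two worker columns interleave in this capture: "14" performs discovery, coordination checks, and sample generation, while "12" emits the "Updated heartbeat for ..." status lines a few milliseconds later, sometimes out of order with the samples (see 10:15:41.022 above, where the read.latency and read.requests heartbeats land between two read.requests samples). The producer/consumer split below is inferred from that interleaving; the queue is an assumption, not ceilometer's actual mechanism.

    import datetime
    import queue
    import threading

    hb_queue = queue.Queue()

    def status_writer():
        # Plays the role of worker "12": drain heartbeats and report them.
        while True:
            meter, ts = hb_queue.get()
            if meter is None:
                break
            print(f"Updated heartbeat for {meter} ({ts.isoformat()})")

    t = threading.Thread(target=status_writer, daemon=True)
    t.start()
    # Worker "14" side: record a heartbeat, then keep polling.
    hb_queue.put(("disk.device.read.requests",
                  datetime.datetime.now(datetime.timezone.utc)))
    hb_queue.put((None, None))  # sentinel to stop the writer
    t.join()
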
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.024 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.024 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.027 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:15:41.024680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
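
disk.device.usage and disk.device.allocation report identical triples here (1073741824 bytes, i.e. 1 GiB, for the first two devices and 485376 bytes for the third), consistent with fully allocated volumes. libvirt exposes exactly this kind of per-device triple; the sketch below shows the raw query, assuming qemu:///system access and a device named "vda", and the mapping of blockInfo fields onto the two meters is an assumption rather than a statement about ceilometer's pollsters.

    import libvirt  # python3-libvirt bindings

    def disk_sizes(dom, dev):
        # virDomain.blockInfo() returns (capacity, allocation, physical) in bytes.
        capacity, allocation, physical = dom.blockInfo(dev)
        return {"capacity": capacity, "allocation": allocation, "physical": physical}

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    print(disk_sizes(dom, "vda"))  # 1073741824 == 1 GiB for a fully used 1 GiB disk
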
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.029 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.030 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:15:41.027479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.032 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.032 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:15:41.030075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:15:41.032723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
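
power.state is sampled as volume 1, which lines up with libvirt's domain-state enum, where VIR_DOMAIN_RUNNING == 1. The enum values below are libvirt's documented ones; whether ceilometer forwards the raw enum or remaps it is left open here as an assumption.

    # libvirt virDomainState values (as returned by virDomainGetState):
    LIBVIRT_STATES = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }
    print(LIBVIRT_STATES[1])  # -> "running", matching the volume logged above
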
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.063 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.065 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.065 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.066 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.066 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:15:41.062980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:15:41.065521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.067 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:15:41.067913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
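
Two derived-meter behaviours appear back to back here. network.incoming.bytes.delta polls as 0 because the underlying cumulative counter (logged as 2856 later in this same cycle) has not moved since the previous poll, and network.incoming.bytes.rate is skipped outright with "no new resources found this cycle"; a rate needs two cached readings divided by the interval, so it cannot fire without remembered resources. A minimal cache sketch for the delta case follows; the cache shape is an assumption, not ceilometer's implementation.

    _prev = {}

    def cumulative_to_delta(resource, meter, value):
        """Change since the previous poll, or None on the first cycle."""
        key = (resource, meter)
        last = _prev.get(key)
        _prev[key] = value
        return None if last is None else max(value - last, 0)  # guard counter resets

    rid = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"
    cumulative_to_delta(rid, "network.incoming.bytes", 2856)         # first cycle: None
    print(cumulative_to_delta(rid, "network.incoming.bytes", 2856))  # -> 0, as logged
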
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.069 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.069 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.070 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.071 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:15:41.069864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.071 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.072 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:15:41.071358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.073 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.073 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.073 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.074 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.074 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 43860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
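
The single cpu sample (43860000000) reads as cumulative guest CPU time, which matches the nanosecond unit ceilometer documents for the cpu meter. Utilization over an interval then follows from two readings; the previous-poll value, the 60 s interval, and the single vCPU below are invented for illustration.

    def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus):
        """Share of available CPU time consumed over one polling interval."""
        return (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

    print(cpu_util_percent(43_200_000_000, 43_860_000_000, 60, 1))  # -> 1.1 (%)
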
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:15:41.072752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:15:41.073727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:15:41.074777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.077 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.077 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.077 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:15:41.076087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:15:41.077664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.079 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.080 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.080 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
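
memory.usage is the one fractional sample in this capture: 48.84765625, in MB per ceilometer's convention. The fraction is just a binary-unit division, since 48.84765625 * 1024 = 50020 exactly; in other words, the figure behind it is a whole number of KiB. Which libvirt memoryStats() field supplies that KiB figure is an assumption left open here.

    kib = 50020        # implied KiB figure; its exact source is assumed
    print(kib / 1024)  # -> 48.84765625, matching the volume logged above
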
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.081 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.081 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.081 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.082 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.082 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.082 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:15:41.078850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:15:41.079982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:15:41.081501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:15:41.082516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:15:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:15:41.604 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:15:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:15:41.605 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:15:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:15:41.605 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:15:41 compute-0 nova_compute[351685]: 2025-10-03 10:15:41.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:42 compute-0 nova_compute[351685]: 2025-10-03 10:15:42.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:44 compute-0 nova_compute[351685]: 2025-10-03 10:15:44.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:45 compute-0 nova_compute[351685]: 2025-10-03 10:15:45.755 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:45 compute-0 nova_compute[351685]: 2025-10-03 10:15:45.755 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:45 compute-0 nova_compute[351685]: 2025-10-03 10:15:45.756 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:15:45 compute-0 nova_compute[351685]: 2025-10-03 10:15:45.756 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:45 compute-0 nova_compute[351685]: 2025-10-03 10:15:45.756 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 10:15:45 compute-0 podman[436071]: 2025-10-03 10:15:45.863207059 +0000 UTC m=+0.118174628 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:15:45 compute-0 podman[436072]: 2025-10-03 10:15:45.878099987 +0000 UTC m=+0.110708278 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 10:15:45 compute-0 podman[436073]: 2025-10-03 10:15:45.901981194 +0000 UTC m=+0.146298450 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:15:46
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:15:46 compute-0 nova_compute[351685]: 2025-10-03 10:15:46.423 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:15:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:15:46 compute-0 nova_compute[351685]: 2025-10-03 10:15:46.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:47 compute-0 nova_compute[351685]: 2025-10-03 10:15:47.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:47 compute-0 nova_compute[351685]: 2025-10-03 10:15:47.399 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:47 compute-0 nova_compute[351685]: 2025-10-03 10:15:47.399 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:15:47 compute-0 nova_compute[351685]: 2025-10-03 10:15:47.399 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:15:47 compute-0 nova_compute[351685]: 2025-10-03 10:15:47.814 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:15:47 compute-0 nova_compute[351685]: 2025-10-03 10:15:47.814 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:15:47 compute-0 nova_compute[351685]: 2025-10-03 10:15:47.815 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:15:47 compute-0 nova_compute[351685]: 2025-10-03 10:15:47.815 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:15:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.467 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.553 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.553 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.554 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.555 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:50 compute-0 podman[436132]: 2025-10-03 10:15:50.851075378 +0000 UTC m=+0.096259294 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.879 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.879 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.880 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.880 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:15:50 compute-0 nova_compute[351685]: 2025-10-03 10:15:50.881 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:15:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:15:51 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/90636983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:15:51 compute-0 nova_compute[351685]: 2025-10-03 10:15:51.459 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:15:51 compute-0 nova_compute[351685]: 2025-10-03 10:15:51.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:51.999 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.000 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.000 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.368 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.369 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3881MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.369 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.369 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.675 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.676 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.677 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:15:52 compute-0 nova_compute[351685]: 2025-10-03 10:15:52.724 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:15:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 10:15:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:15:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:15:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:15:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:15:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:15:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:15:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:15:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4722d1f3-67fb-4a63-bad7-2d4cd4f83b6b does not exist
Oct  3 10:15:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f7a76d40-5094-4f8d-bb7c-0d510cbca4bb does not exist
Oct  3 10:15:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 75f2e9be-651c-4119-916f-c7802dadc019 does not exist
Oct  3 10:15:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:15:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:15:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:15:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:15:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:15:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:15:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:15:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554217800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:15:53 compute-0 nova_compute[351685]: 2025-10-03 10:15:53.308 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:15:53 compute-0 nova_compute[351685]: 2025-10-03 10:15:53.317 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:15:53 compute-0 nova_compute[351685]: 2025-10-03 10:15:53.613 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:15:53 compute-0 nova_compute[351685]: 2025-10-03 10:15:53.614 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:15:53 compute-0 nova_compute[351685]: 2025-10-03 10:15:53.615 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:15:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:15:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:15:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:15:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:15:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:15:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2956353152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:15:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:15:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2956353152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:15:54 compute-0 podman[436463]: 2025-10-03 10:15:53.963331167 +0000 UTC m=+0.051451794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:15:54 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  3 10:15:54 compute-0 podman[436463]: 2025-10-03 10:15:54.448808515 +0000 UTC m=+0.536929102 container create aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:15:54 compute-0 systemd[1]: Started libpod-conmon-aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043.scope.
Oct  3 10:15:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:15:54 compute-0 podman[436463]: 2025-10-03 10:15:54.817979606 +0000 UTC m=+0.906100233 container init aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 10:15:54 compute-0 podman[436463]: 2025-10-03 10:15:54.836915655 +0000 UTC m=+0.925036242 container start aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct  3 10:15:54 compute-0 sleepy_khorana[436479]: 167 167
Oct  3 10:15:54 compute-0 systemd[1]: libpod-aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043.scope: Deactivated successfully.
Oct  3 10:15:54 compute-0 podman[436463]: 2025-10-03 10:15:54.98492554 +0000 UTC m=+1.073046177 container attach aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 10:15:54 compute-0 podman[436463]: 2025-10-03 10:15:54.986004725 +0000 UTC m=+1.074125272 container died aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:15:55 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:15:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6464bc4ae1351c32f3f30b369d96f6cc75cebe7f5a52a07a23b961035c8407c-merged.mount: Deactivated successfully.
Oct  3 10:15:55 compute-0 nova_compute[351685]: 2025-10-03 10:15:55.613 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:55 compute-0 nova_compute[351685]: 2025-10-03 10:15:55.615 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:15:56 compute-0 podman[436463]: 2025-10-03 10:15:56.397605855 +0000 UTC m=+2.485726442 container remove aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khorana, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:15:56 compute-0 systemd[1]: libpod-conmon-aa6a83dfdcd728d6951fde74edfd86d6ac08ea1f87582301f736bbcced5e3043.scope: Deactivated successfully.
Oct  3 10:15:56 compute-0 podman[436498]: 2025-10-03 10:15:56.61006448 +0000 UTC m=+0.089363992 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:15:56 compute-0 nova_compute[351685]: 2025-10-03 10:15:56.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:56 compute-0 podman[436501]: 2025-10-03 10:15:56.629291959 +0000 UTC m=+0.103661172 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 10:15:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:15:56 compute-0 podman[436500]: 2025-10-03 10:15:56.635424565 +0000 UTC m=+0.116807034 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  3 10:15:56 compute-0 podman[436545]: 2025-10-03 10:15:56.642064308 +0000 UTC m=+0.037146344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:15:56 compute-0 podman[436545]: 2025-10-03 10:15:56.875507739 +0000 UTC m=+0.270589765 container create 0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:15:57 compute-0 systemd[1]: Started libpod-conmon-0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98.scope.
Oct  3 10:15:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31619aecc0cbca50fd63b6c653a9675236a383f43c5404f726adef2f5911a11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31619aecc0cbca50fd63b6c653a9675236a383f43c5404f726adef2f5911a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31619aecc0cbca50fd63b6c653a9675236a383f43c5404f726adef2f5911a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31619aecc0cbca50fd63b6c653a9675236a383f43c5404f726adef2f5911a11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31619aecc0cbca50fd63b6c653a9675236a383f43c5404f726adef2f5911a11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:15:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:57 compute-0 podman[436545]: 2025-10-03 10:15:57.317159297 +0000 UTC m=+0.712241423 container init 0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct  3 10:15:57 compute-0 podman[436545]: 2025-10-03 10:15:57.331437496 +0000 UTC m=+0.726519532 container start 0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 10:15:57 compute-0 nova_compute[351685]: 2025-10-03 10:15:57.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:15:57 compute-0 podman[436545]: 2025-10-03 10:15:57.442193504 +0000 UTC m=+0.837275630 container attach 0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:15:58 compute-0 tender_wilbur[436580]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:15:58 compute-0 tender_wilbur[436580]: --> relative data size: 1.0
Oct  3 10:15:58 compute-0 tender_wilbur[436580]: --> All data devices are unavailable
Oct  3 10:15:58 compute-0 systemd[1]: libpod-0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98.scope: Deactivated successfully.
Oct  3 10:15:58 compute-0 systemd[1]: libpod-0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98.scope: Consumed 1.203s CPU time.
Oct  3 10:15:58 compute-0 podman[436609]: 2025-10-03 10:15:58.689912112 +0000 UTC m=+0.058930315 container died 0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31619aecc0cbca50fd63b6c653a9675236a383f43c5404f726adef2f5911a11-merged.mount: Deactivated successfully.
Oct  3 10:15:59 compute-0 podman[436609]: 2025-10-03 10:15:59.119200983 +0000 UTC m=+0.488219176 container remove 0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 10:15:59 compute-0 systemd[1]: libpod-conmon-0e2077e28e7ccf58726d2dc4044da88cb2b41ca8584e974892fe9427eec55e98.scope: Deactivated successfully.
Oct  3 10:15:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:15:59 compute-0 podman[157165]: time="2025-10-03T10:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:15:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:15:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9051 "" "Go-http-client/1.1"
Oct  3 10:16:00 compute-0 podman[436760]: 2025-10-03 10:16:00.064985158 +0000 UTC m=+0.112455573 container create 9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 10:16:00 compute-0 podman[436760]: 2025-10-03 10:15:59.982636243 +0000 UTC m=+0.030106688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:16:00 compute-0 systemd[1]: Started libpod-conmon-9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b.scope.
Oct  3 10:16:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:16:00 compute-0 podman[436774]: 2025-10-03 10:16:00.227705776 +0000 UTC m=+0.107871596 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 10:16:00 compute-0 podman[436760]: 2025-10-03 10:16:00.325184227 +0000 UTC m=+0.372654682 container init 9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:16:00 compute-0 podman[436760]: 2025-10-03 10:16:00.352355451 +0000 UTC m=+0.399825916 container start 9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:16:00 compute-0 gifted_archimedes[436792]: 167 167
Oct  3 10:16:00 compute-0 systemd[1]: libpod-9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b.scope: Deactivated successfully.
Oct  3 10:16:00 compute-0 podman[436760]: 2025-10-03 10:16:00.425210022 +0000 UTC m=+0.472680527 container attach 9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:16:00 compute-0 podman[436760]: 2025-10-03 10:16:00.42612253 +0000 UTC m=+0.473592985 container died 9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-dde457e8b7da77fd083511d827ca6ab128fd63f6bc2f9cabe4ced1cc31266841-merged.mount: Deactivated successfully.
Oct  3 10:16:00 compute-0 podman[436760]: 2025-10-03 10:16:00.97015518 +0000 UTC m=+1.017625625 container remove 9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:16:01 compute-0 systemd[1]: libpod-conmon-9dd271d48179f67bc1616bdda86f20670bc2d207450f567e792b755f933ed95b.scope: Deactivated successfully.
Oct  3 10:16:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:01 compute-0 podman[436818]: 2025-10-03 10:16:01.255367563 +0000 UTC m=+0.055920778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:16:01 compute-0 podman[436818]: 2025-10-03 10:16:01.358473975 +0000 UTC m=+0.159027100 container create 3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lalande, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:16:01 compute-0 openstack_network_exporter[367524]: ERROR   10:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:16:01 compute-0 openstack_network_exporter[367524]: ERROR   10:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:16:01 compute-0 openstack_network_exporter[367524]: ERROR   10:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:16:01 compute-0 openstack_network_exporter[367524]: ERROR   10:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:16:01 compute-0 openstack_network_exporter[367524]: ERROR   10:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:16:01 compute-0 systemd[1]: Started libpod-conmon-3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70.scope.
Oct  3 10:16:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7eb1658f53835bdb431f215c1a61aace87dd36c9ad892b10e37b86aa9bcc60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7eb1658f53835bdb431f215c1a61aace87dd36c9ad892b10e37b86aa9bcc60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7eb1658f53835bdb431f215c1a61aace87dd36c9ad892b10e37b86aa9bcc60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7eb1658f53835bdb431f215c1a61aace87dd36c9ad892b10e37b86aa9bcc60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:16:01 compute-0 podman[436818]: 2025-10-03 10:16:01.623029115 +0000 UTC m=+0.423582310 container init 3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:16:01 compute-0 nova_compute[351685]: 2025-10-03 10:16:01.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:01 compute-0 podman[436818]: 2025-10-03 10:16:01.643698289 +0000 UTC m=+0.444251444 container start 3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lalande, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:16:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:01 compute-0 podman[436818]: 2025-10-03 10:16:01.759957545 +0000 UTC m=+0.560510760 container attach 3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 10:16:02 compute-0 nova_compute[351685]: 2025-10-03 10:16:02.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]: {
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:    "0": [
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:        {
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "devices": [
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "/dev/loop3"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            ],
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_name": "ceph_lv0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_size": "21470642176",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "name": "ceph_lv0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "tags": {
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cluster_name": "ceph",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.crush_device_class": "",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.encrypted": "0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osd_id": "0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.type": "block",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.vdo": "0"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            },
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "type": "block",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "vg_name": "ceph_vg0"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:        }
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:    ],
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:    "1": [
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:        {
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "devices": [
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "/dev/loop4"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            ],
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_name": "ceph_lv1",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_size": "21470642176",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "name": "ceph_lv1",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "tags": {
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cluster_name": "ceph",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.crush_device_class": "",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.encrypted": "0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osd_id": "1",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.type": "block",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.vdo": "0"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            },
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "type": "block",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "vg_name": "ceph_vg1"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:        }
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:    ],
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:    "2": [
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:        {
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "devices": [
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "/dev/loop5"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            ],
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_name": "ceph_lv2",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_size": "21470642176",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "name": "ceph_lv2",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "tags": {
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.cluster_name": "ceph",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.crush_device_class": "",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.encrypted": "0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osd_id": "2",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.type": "block",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:                "ceph.vdo": "0"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            },
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "type": "block",
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:            "vg_name": "ceph_vg2"
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:        }
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]:    ]
Oct  3 10:16:02 compute-0 thirsty_lalande[436835]: }
Oct  3 10:16:02 compute-0 systemd[1]: libpod-3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70.scope: Deactivated successfully.
Oct  3 10:16:02 compute-0 podman[436818]: 2025-10-03 10:16:02.579780853 +0000 UTC m=+1.380333988 container died 3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lalande, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a7eb1658f53835bdb431f215c1a61aace87dd36c9ad892b10e37b86aa9bcc60-merged.mount: Deactivated successfully.
Oct  3 10:16:02 compute-0 podman[436818]: 2025-10-03 10:16:02.944526152 +0000 UTC m=+1.745079267 container remove 3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 10:16:02 compute-0 systemd[1]: libpod-conmon-3723608a5099ff5b54527b17bde2c623d93aecfe6f7d97a5eb46c6eefe96ca70.scope: Deactivated successfully.
Oct  3 10:16:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:03 compute-0 nova_compute[351685]: 2025-10-03 10:16:03.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:03 compute-0 nova_compute[351685]: 2025-10-03 10:16:03.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 10:16:03 compute-0 podman[436993]: 2025-10-03 10:16:03.914913468 +0000 UTC m=+0.110521431 container create da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:16:03 compute-0 podman[436993]: 2025-10-03 10:16:03.839987991 +0000 UTC m=+0.035595964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:16:04 compute-0 systemd[1]: Started libpod-conmon-da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767.scope.
Oct  3 10:16:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:16:04 compute-0 podman[436993]: 2025-10-03 10:16:04.12257272 +0000 UTC m=+0.318180753 container init da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:16:04 compute-0 podman[436993]: 2025-10-03 10:16:04.139098291 +0000 UTC m=+0.334706254 container start da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:16:04 compute-0 ecstatic_black[437009]: 167 167
Oct  3 10:16:04 compute-0 systemd[1]: libpod-da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767.scope: Deactivated successfully.
Oct  3 10:16:04 compute-0 podman[436993]: 2025-10-03 10:16:04.167906536 +0000 UTC m=+0.363514589 container attach da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:16:04 compute-0 podman[436993]: 2025-10-03 10:16:04.168876217 +0000 UTC m=+0.364484200 container died da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f17a218d28ae5433e0523b8a8196dd383d2a83b3361a4018295e15a8f95d82d8-merged.mount: Deactivated successfully.
Oct  3 10:16:04 compute-0 podman[436993]: 2025-10-03 10:16:04.477540474 +0000 UTC m=+0.673148467 container remove da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_black, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:16:04 compute-0 systemd[1]: libpod-conmon-da2237a0d68c7219b30e726c152a0c9cd08aa7b04997612d2a63e23e6f6e6767.scope: Deactivated successfully.
Oct  3 10:16:04 compute-0 podman[437034]: 2025-10-03 10:16:04.758514192 +0000 UTC m=+0.115899176 container create 91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 10:16:04 compute-0 podman[437034]: 2025-10-03 10:16:04.676584169 +0000 UTC m=+0.033969183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:16:04 compute-0 systemd[1]: Started libpod-conmon-91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64.scope.
Oct  3 10:16:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec41f6192e80cff1fed41c457cf2850c01b5b84dca245325f36418e6174d67b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec41f6192e80cff1fed41c457cf2850c01b5b84dca245325f36418e6174d67b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec41f6192e80cff1fed41c457cf2850c01b5b84dca245325f36418e6174d67b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec41f6192e80cff1fed41c457cf2850c01b5b84dca245325f36418e6174d67b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:16:04 compute-0 podman[437034]: 2025-10-03 10:16:04.990739312 +0000 UTC m=+0.348124386 container init 91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:16:05 compute-0 podman[437034]: 2025-10-03 10:16:05.000928339 +0000 UTC m=+0.358313333 container start 91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:16:05 compute-0 podman[437034]: 2025-10-03 10:16:05.053889161 +0000 UTC m=+0.411274185 container attach 91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:16:05 compute-0 podman[437052]: 2025-10-03 10:16:05.116132691 +0000 UTC m=+0.193316382 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:16:05 compute-0 podman[437051]: 2025-10-03 10:16:05.143353975 +0000 UTC m=+0.215875756 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 10:16:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:06 compute-0 naughty_bassi[437048]: {
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "osd_id": 1,
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "type": "bluestore"
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:    },
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "osd_id": 2,
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "type": "bluestore"
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:    },
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "osd_id": 0,
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:        "type": "bluestore"
Oct  3 10:16:06 compute-0 naughty_bassi[437048]:    }
Oct  3 10:16:06 compute-0 naughty_bassi[437048]: }
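The JSON block printed by the short-lived naughty_bassi container above is a per-OSD inventory keyed by osd_uuid; the shape (ceph_fsid, device, osd_id, type) matches ceph-volume raw list output. A minimal sketch for flattening it, assuming only the fields visible above:

    import json

    def list_osds(raw_json: str):
        # Flatten the per-OSD mapping (keyed by osd_uuid) into sorted
        # (osd_id, device, type) tuples for a quick inventory report.
        osds = json.loads(raw_json)
        return sorted(
            (meta["osd_id"], meta["device"], meta["type"])
            for meta in osds.values()
        )

    # With the output above this yields:
    # [(0, '/dev/mapper/ceph_vg0-ceph_lv0', 'bluestore'),
    #  (1, '/dev/mapper/ceph_vg1-ceph_lv1', 'bluestore'),
    #  (2, '/dev/mapper/ceph_vg2-ceph_lv2', 'bluestore')]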
Oct  3 10:16:06 compute-0 systemd[1]: libpod-91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64.scope: Deactivated successfully.
Oct  3 10:16:06 compute-0 systemd[1]: libpod-91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64.scope: Consumed 1.146s CPU time.
Oct  3 10:16:06 compute-0 podman[437034]: 2025-10-03 10:16:06.154585014 +0000 UTC m=+1.511970038 container died 91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:16:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec41f6192e80cff1fed41c457cf2850c01b5b84dca245325f36418e6174d67b3-merged.mount: Deactivated successfully.
Oct  3 10:16:06 compute-0 podman[437034]: 2025-10-03 10:16:06.46381538 +0000 UTC m=+1.821200374 container remove 91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bassi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:16:06 compute-0 systemd[1]: libpod-conmon-91976ec4b4a1555fbc805ccae47881c39448f47f728791572982191be1928e64.scope: Deactivated successfully.
Oct  3 10:16:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:16:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:16:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:16:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:16:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev eca62ff5-1323-49b5-8369-b56972ea481c does not exist
Oct  3 10:16:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b931da17-e5b2-416c-adf6-ecc119a43c38 does not exist
Oct  3 10:16:06 compute-0 nova_compute[351685]: 2025-10-03 10:16:06.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:07 compute-0 nova_compute[351685]: 2025-10-03 10:16:07.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:16:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:16:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:11 compute-0 nova_compute[351685]: 2025-10-03 10:16:11.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:12 compute-0 nova_compute[351685]: 2025-10-03 10:16:12.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:16:16 compute-0 nova_compute[351685]: 2025-10-03 10:16:16.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:16 compute-0 podman[437187]: 2025-10-03 10:16:16.81095802 +0000 UTC m=+0.073262295 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7)
Oct  3 10:16:16 compute-0 podman[437188]: 2025-10-03 10:16:16.813717998 +0000 UTC m=+0.074410572 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true)
Oct  3 10:16:16 compute-0 podman[437189]: 2025-10-03 10:16:16.880928848 +0000 UTC m=+0.139309017 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 10:16:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:17 compute-0 nova_compute[351685]: 2025-10-03 10:16:17.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.752580) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486577752715, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1622, "num_deletes": 509, "total_data_size": 2109178, "memory_usage": 2143448, "flush_reason": "Manual Compaction"}
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486577771791, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2065773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28661, "largest_seqno": 30282, "table_properties": {"data_size": 2058590, "index_size": 3745, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 17795, "raw_average_key_size": 18, "raw_value_size": 2042136, "raw_average_value_size": 2179, "num_data_blocks": 168, "num_entries": 937, "num_filter_entries": 937, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759486446, "oldest_key_time": 1759486446, "file_creation_time": 1759486577, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19249 microseconds, and 6466 cpu microseconds.
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.771852) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2065773 bytes OK
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.771879) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.773830) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.773884) EVENT_LOG_v1 {"time_micros": 1759486577773874, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.773912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2101023, prev total WAL file size 2101023, number of live WAL files 2.
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.775152) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2017KB)], [65(7119KB)]
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486577775293, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9356531, "oldest_snapshot_seqno": -1}
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5010 keys, 7523265 bytes, temperature: kUnknown
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486577834468, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7523265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7490275, "index_size": 19376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126884, "raw_average_key_size": 25, "raw_value_size": 7400090, "raw_average_value_size": 1477, "num_data_blocks": 796, "num_entries": 5010, "num_filter_entries": 5010, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759486577, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.834931) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7523265 bytes
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.837018) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.3 rd, 126.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(8.2) write-amplify(3.6) OK, records in: 6044, records dropped: 1034 output_compression: NoCompression
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.837034) EVENT_LOG_v1 {"time_micros": 1759486577837027, "job": 36, "event": "compaction_finished", "compaction_time_micros": 59475, "compaction_time_cpu_micros": 21726, "output_level": 6, "num_output_files": 1, "total_output_size": 7523265, "num_input_records": 6044, "num_output_records": 5010, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486577838692, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486577841084, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.774886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.841578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.841585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.841587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.841589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:16:17 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:16:17.841591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
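The mon's embedded rocksdb interleaves human-readable lines with machine-readable EVENT_LOG_v1 JSON records (flush_started, table_file_creation, compaction_finished, and so on). A small sketch, assuming only the fields visible in the records above, that extracts compaction statistics from such log lines:

    import json
    import re

    # EVENT_LOG_v1 records end the line, so a greedy match to the last
    # brace captures the whole JSON object despite nested braces.
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    def compaction_summary(lines):
        # Report record counts and duration per finished compaction job,
        # e.g. "job 36: 6044 -> 5010 records in 0.059s" for the run above.
        for ev in rocksdb_events(lines):
            if ev.get("event") == "compaction_finished":
                print(
                    f"job {ev['job']}: {ev['num_input_records']} -> "
                    f"{ev['num_output_records']} records in "
                    f"{ev['compaction_time_micros'] / 1e6:.3f}s"
                )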
Oct  3 10:16:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:21 compute-0 nova_compute[351685]: 2025-10-03 10:16:21.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:21 compute-0 podman[437255]: 2025-10-03 10:16:21.857795633 +0000 UTC m=+0.100847091 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  3 10:16:22 compute-0 nova_compute[351685]: 2025-10-03 10:16:22.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:26 compute-0 nova_compute[351685]: 2025-10-03 10:16:26.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:26 compute-0 podman[437273]: 2025-10-03 10:16:26.860889002 +0000 UTC m=+0.110575664 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:16:26 compute-0 podman[437275]: 2025-10-03 10:16:26.882415623 +0000 UTC m=+0.122721243 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 10:16:26 compute-0 podman[437274]: 2025-10-03 10:16:26.896102103 +0000 UTC m=+0.132398884 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  3 10:16:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:27 compute-0 nova_compute[351685]: 2025-10-03 10:16:27.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:29 compute-0 podman[157165]: time="2025-10-03T10:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:16:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:16:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9057 "" "Go-http-client/1.1"
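These GET requests hit podman's libpod REST API over its unix socket (mounted at /run/podman/podman.sock per the podman_exporter config_data later in this log). A sketch of the same containers/json query using only the Python standard library; the response field names (Id, Names, State) follow the libpod v4 API and should be treated as assumptions:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client speaks HTTP over any socket; point it at a unix
        # socket instead of TCP by overriding connect().
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for ctr in json.loads(resp.read()):
        print(ctr["Id"][:12], ctr["Names"][0], ctr.get("State"))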
Oct  3 10:16:30 compute-0 podman[437333]: 2025-10-03 10:16:30.883405108 +0000 UTC m=+0.133727147 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:16:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:31 compute-0 openstack_network_exporter[367524]: ERROR   10:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:16:31 compute-0 openstack_network_exporter[367524]: ERROR   10:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:16:31 compute-0 openstack_network_exporter[367524]: ERROR   10:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:16:31 compute-0 openstack_network_exporter[367524]: ERROR   10:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:16:31 compute-0 openstack_network_exporter[367524]: ERROR   10:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
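These errors are expected on a compute node: ovn-northd runs on the control plane, not here, and the exporter locates daemons via their *.ctl control-socket files. A sketch of the same presence check, using the host-side paths from the exporter's volume mounts above; the glob pattern is an assumption about how the lookup works:

    import glob

    # *.ctl appctl control sockets are created per daemon (named like
    # <daemon>.<pid>.ctl); an empty glob reproduces the "no control
    # socket files found" condition logged above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "no control socket files found")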
Oct  3 10:16:31 compute-0 nova_compute[351685]: 2025-10-03 10:16:31.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:32 compute-0 nova_compute[351685]: 2025-10-03 10:16:32.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:35 compute-0 podman[437353]: 2025-10-03 10:16:35.823036546 +0000 UTC m=+0.086521840 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:16:35 compute-0 podman[437354]: 2025-10-03 10:16:35.837949905 +0000 UTC m=+0.094081323 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Oct  3 10:16:36 compute-0 systemd[1]: session-65.scope: Deactivated successfully.
Oct  3 10:16:36 compute-0 systemd[1]: session-65.scope: Consumed 4.662s CPU time.
Oct  3 10:16:36 compute-0 systemd-logind[798]: Session 65 logged out. Waiting for processes to exit.
Oct  3 10:16:36 compute-0 systemd-logind[798]: Removed session 65.
Oct  3 10:16:36 compute-0 nova_compute[351685]: 2025-10-03 10:16:36.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:37 compute-0 nova_compute[351685]: 2025-10-03 10:16:37.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:16:41.605 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:16:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:16:41.605 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:16:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:16:41.606 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
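The three lockutils lines above show the standard oslo.concurrency pattern: a named lock is acquired, the wait time is logged, and the hold time is logged on release. A toy stdlib equivalent, illustrative only and not the oslo code:

    import threading
    import time

    _locks = {}

    class timed_lock:
        # Log how long a named lock was waited on, then how long it was
        # held, mirroring the acquire/release DEBUG lines above.
        def __init__(self, name):
            self.name = name
            self.lock = _locks.setdefault(name, threading.Lock())

        def __enter__(self):
            start = time.monotonic()
            self.lock.acquire()
            self.acquired = time.monotonic()
            print(f'Lock "{self.name}" acquired :: waited {self.acquired - start:.3f}s')
            return self

        def __exit__(self, *exc):
            held = time.monotonic() - self.acquired
            self.lock.release()
            print(f'Lock "{self.name}" released :: held {held:.3f}s')

    with timed_lock("_check_child_processes"):
        pass  # stand-in for ProcessMonitor._check_child_processes()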
Oct  3 10:16:41 compute-0 nova_compute[351685]: 2025-10-03 10:16:41.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:42 compute-0 nova_compute[351685]: 2025-10-03 10:16:42.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:45 compute-0 nova_compute[351685]: 2025-10-03 10:16:45.764 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:16:46
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.mgr', '.rgw.root', 'backups', 'default.rgw.control']
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:16:46 compute-0 nova_compute[351685]: 2025-10-03 10:16:46.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:46 compute-0 nova_compute[351685]: 2025-10-03 10:16:46.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:46 compute-0 nova_compute[351685]: 2025-10-03 10:16:46.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:16:46 compute-0 nova_compute[351685]: 2025-10-03 10:16:46.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:16:46 compute-0 ceph-mgr[192071]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3262515590
Oct  3 10:16:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:47 compute-0 nova_compute[351685]: 2025-10-03 10:16:47.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:47 compute-0 nova_compute[351685]: 2025-10-03 10:16:47.726 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:16:47 compute-0 nova_compute[351685]: 2025-10-03 10:16:47.727 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:16:47 compute-0 nova_compute[351685]: 2025-10-03 10:16:47.727 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:16:47 compute-0 nova_compute[351685]: 2025-10-03 10:16:47.727 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:16:47 compute-0 podman[437393]: 2025-10-03 10:16:47.86241371 +0000 UTC m=+0.115255334 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc.)
Oct  3 10:16:47 compute-0 podman[437394]: 2025-10-03 10:16:47.904918375 +0000 UTC m=+0.152925343 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct  3 10:16:47 compute-0 podman[437398]: 2025-10-03 10:16:47.905495704 +0000 UTC m=+0.132497848 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct  3 10:16:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.899 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.922 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.923 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.923 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.924 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.924 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.924 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.924 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.925 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.958 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.958 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.958 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.959 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:16:51 compute-0 nova_compute[351685]: 2025-10-03 10:16:51.959 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:16:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2235104827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.424 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.528 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.528 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.529 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:16:52 compute-0 podman[437479]: 2025-10-03 10:16:52.822963119 +0000 UTC m=+0.085762976 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.884 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.885 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3853MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.885 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:16:52 compute-0 nova_compute[351685]: 2025-10-03 10:16:52.886 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.015 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.015 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.016 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.155 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:16:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:16:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784532041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.652 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.660 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.678 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.679 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:16:53 compute-0 nova_compute[351685]: 2025-10-03 10:16:53.680 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:16:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:16:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4235388920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:16:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:16:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4235388920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:16:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:16:55 compute-0 nova_compute[351685]: 2025-10-03 10:16:55.485 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:55 compute-0 nova_compute[351685]: 2025-10-03 10:16:55.486 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:55 compute-0 nova_compute[351685]: 2025-10-03 10:16:55.486 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:16:56 compute-0 nova_compute[351685]: 2025-10-03 10:16:56.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:16:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:57 compute-0 nova_compute[351685]: 2025-10-03 10:16:57.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:16:57 compute-0 podman[437520]: 2025-10-03 10:16:57.825541532 +0000 UTC m=+0.083970109 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:16:57 compute-0 podman[437521]: 2025-10-03 10:16:57.849022007 +0000 UTC m=+0.098405173 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:16:57 compute-0 podman[437522]: 2025-10-03 10:16:57.85037536 +0000 UTC m=+0.085269570 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  3 10:16:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:16:59 compute-0 podman[157165]: time="2025-10-03T10:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:16:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:16:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9054 "" "Go-http-client/1.1"
Oct  3 10:17:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:01 compute-0 openstack_network_exporter[367524]: ERROR   10:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:17:01 compute-0 openstack_network_exporter[367524]: ERROR   10:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:17:01 compute-0 openstack_network_exporter[367524]: ERROR   10:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:17:01 compute-0 openstack_network_exporter[367524]: ERROR   10:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:17:01 compute-0 openstack_network_exporter[367524]: ERROR   10:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:17:01 compute-0 nova_compute[351685]: 2025-10-03 10:17:01.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:01 compute-0 podman[437581]: 2025-10-03 10:17:01.857391694 +0000 UTC m=+0.110868202 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:17:02 compute-0 nova_compute[351685]: 2025-10-03 10:17:02.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:06 compute-0 nova_compute[351685]: 2025-10-03 10:17:06.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:06 compute-0 podman[437601]: 2025-10-03 10:17:06.853763559 +0000 UTC m=+0.109551121 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:17:06 compute-0 podman[437602]: 2025-10-03 10:17:06.907231097 +0000 UTC m=+0.146166568 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-container, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Oct  3 10:17:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:07 compute-0 nova_compute[351685]: 2025-10-03 10:17:07.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:17:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:17:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:17:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:17:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:17:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b82f6586-5289-4683-ac5e-6ab81cc6fa3e does not exist
Oct  3 10:17:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e48130e7-d896-4942-ba57-252f6ff83cce does not exist
Oct  3 10:17:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 645761d5-61a9-40b1-ba82-0848a86bee53 does not exist
Oct  3 10:17:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:17:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:17:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:17:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:17:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:17:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:17:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:17:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:17:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  3 10:17:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  3 10:17:09 compute-0 podman[437907]: 2025-10-03 10:17:09.064143093 +0000 UTC m=+0.030319695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:09 compute-0 podman[437907]: 2025-10-03 10:17:09.297013814 +0000 UTC m=+0.263190416 container create 4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:17:09 compute-0 systemd[1]: Started libpod-conmon-4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e.scope.
Oct  3 10:17:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:09 compute-0 podman[437907]: 2025-10-03 10:17:09.697115478 +0000 UTC m=+0.663292150 container init 4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:17:09 compute-0 podman[437907]: 2025-10-03 10:17:09.71709534 +0000 UTC m=+0.683271962 container start 4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:17:09 compute-0 vigorous_napier[437923]: 167 167
Oct  3 10:17:09 compute-0 systemd[1]: libpod-4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e.scope: Deactivated successfully.
Oct  3 10:17:09 compute-0 podman[437907]: 2025-10-03 10:17:09.77281522 +0000 UTC m=+0.738991902 container attach 4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:17:09 compute-0 podman[437907]: 2025-10-03 10:17:09.773706849 +0000 UTC m=+0.739883471 container died 4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:17:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3074605305aef4cf8487bf5dd2f587f2c365866236b32cda4eb87501a3243ef1-merged.mount: Deactivated successfully.
Oct  3 10:17:10 compute-0 podman[437907]: 2025-10-03 10:17:10.16560604 +0000 UTC m=+1.131782632 container remove 4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:17:10 compute-0 systemd[1]: libpod-conmon-4f21dfb959372e857be452fa459198158e84c5f0ef37de292f3e28d4968cde4e.scope: Deactivated successfully.
Oct  3 10:17:10 compute-0 podman[437946]: 2025-10-03 10:17:10.458743417 +0000 UTC m=+0.085998734 container create 8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:17:10 compute-0 podman[437946]: 2025-10-03 10:17:10.409904278 +0000 UTC m=+0.037159595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:10 compute-0 systemd[1]: Started libpod-conmon-8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a.scope.
Oct  3 10:17:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed516f9e9cbc724b639af85fb07d0cc461fc105d63f645357e3190597bf8089/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed516f9e9cbc724b639af85fb07d0cc461fc105d63f645357e3190597bf8089/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed516f9e9cbc724b639af85fb07d0cc461fc105d63f645357e3190597bf8089/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed516f9e9cbc724b639af85fb07d0cc461fc105d63f645357e3190597bf8089/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed516f9e9cbc724b639af85fb07d0cc461fc105d63f645357e3190597bf8089/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:10 compute-0 podman[437946]: 2025-10-03 10:17:10.732788212 +0000 UTC m=+0.360043569 container init 8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:17:10 compute-0 podman[437946]: 2025-10-03 10:17:10.745326945 +0000 UTC m=+0.372582302 container start 8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:17:10 compute-0 podman[437946]: 2025-10-03 10:17:10.761959529 +0000 UTC m=+0.389214856 container attach 8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:17:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:11 compute-0 nova_compute[351685]: 2025-10-03 10:17:11.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:11 compute-0 interesting_kilby[437962]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:17:11 compute-0 interesting_kilby[437962]: --> relative data size: 1.0
Oct  3 10:17:11 compute-0 interesting_kilby[437962]: --> All data devices are unavailable
Oct  3 10:17:11 compute-0 systemd[1]: libpod-8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a.scope: Deactivated successfully.
Oct  3 10:17:11 compute-0 systemd[1]: libpod-8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a.scope: Consumed 1.160s CPU time.
Oct  3 10:17:11 compute-0 podman[437946]: 2025-10-03 10:17:11.976876472 +0000 UTC m=+1.604131829 container died 8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:17:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed516f9e9cbc724b639af85fb07d0cc461fc105d63f645357e3190597bf8089-merged.mount: Deactivated successfully.
Oct  3 10:17:12 compute-0 nova_compute[351685]: 2025-10-03 10:17:12.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:12 compute-0 podman[437946]: 2025-10-03 10:17:12.486912217 +0000 UTC m=+2.114167574 container remove 8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_kilby, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  3 10:17:12 compute-0 systemd[1]: libpod-conmon-8eeebc978e9869967e9279631368713a385a223a09a040ac062949c311153b2a.scope: Deactivated successfully.
Oct  3 10:17:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:13 compute-0 podman[438141]: 2025-10-03 10:17:13.344082947 +0000 UTC m=+0.041567277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:13 compute-0 podman[438141]: 2025-10-03 10:17:13.446397234 +0000 UTC m=+0.143881514 container create 0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cerf, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:17:13 compute-0 systemd[1]: Started libpod-conmon-0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa.scope.
Oct  3 10:17:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:13 compute-0 podman[438141]: 2025-10-03 10:17:13.559352533 +0000 UTC m=+0.256836803 container init 0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cerf, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:17:13 compute-0 podman[438141]: 2025-10-03 10:17:13.570066668 +0000 UTC m=+0.267550908 container start 0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cerf, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:17:13 compute-0 podman[438141]: 2025-10-03 10:17:13.574926584 +0000 UTC m=+0.272410854 container attach 0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:17:13 compute-0 systemd[1]: libpod-0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa.scope: Deactivated successfully.
Oct  3 10:17:13 compute-0 hardcore_cerf[438157]: 167 167
Oct  3 10:17:13 compute-0 conmon[438157]: conmon 0723cb00f732d4d17e40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa.scope/container/memory.events
Oct  3 10:17:13 compute-0 podman[438141]: 2025-10-03 10:17:13.57948495 +0000 UTC m=+0.276969190 container died 0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cerf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 10:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0fa41735fba7a641fc68ca3edfa6be41dbd9ecbcdde37fe3577b57ccdb32147-merged.mount: Deactivated successfully.
Oct  3 10:17:13 compute-0 podman[438141]: 2025-10-03 10:17:13.644514689 +0000 UTC m=+0.341998939 container remove 0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 10:17:13 compute-0 systemd[1]: libpod-conmon-0723cb00f732d4d17e40978dcc16c80c2239cc2684370103a7b7c2aa693dc8fa.scope: Deactivated successfully.
Oct  3 10:17:13 compute-0 podman[438180]: 2025-10-03 10:17:13.880942175 +0000 UTC m=+0.069171364 container create 9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:17:13 compute-0 systemd[1]: Started libpod-conmon-9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e.scope.
Oct  3 10:17:13 compute-0 podman[438180]: 2025-10-03 10:17:13.852423459 +0000 UTC m=+0.040652728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00b9a395ac5c542182d5d287e4b5b355707f3c3e2f6982e6a8b4b7a8993be27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00b9a395ac5c542182d5d287e4b5b355707f3c3e2f6982e6a8b4b7a8993be27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00b9a395ac5c542182d5d287e4b5b355707f3c3e2f6982e6a8b4b7a8993be27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a00b9a395ac5c542182d5d287e4b5b355707f3c3e2f6982e6a8b4b7a8993be27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:13 compute-0 podman[438180]: 2025-10-03 10:17:13.997767338 +0000 UTC m=+0.185996547 container init 9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dewdney, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 10:17:14 compute-0 podman[438180]: 2025-10-03 10:17:14.0077984 +0000 UTC m=+0.196027589 container start 9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:17:14 compute-0 podman[438180]: 2025-10-03 10:17:14.012303506 +0000 UTC m=+0.200532715 container attach 9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]: {
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:    "0": [
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:        {
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "devices": [
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "/dev/loop3"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            ],
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_name": "ceph_lv0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_size": "21470642176",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "name": "ceph_lv0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "tags": {
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cluster_name": "ceph",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.crush_device_class": "",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.encrypted": "0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osd_id": "0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.type": "block",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.vdo": "0"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            },
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "type": "block",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "vg_name": "ceph_vg0"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:        }
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:    ],
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:    "1": [
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:        {
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "devices": [
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "/dev/loop4"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            ],
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_name": "ceph_lv1",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_size": "21470642176",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "name": "ceph_lv1",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "tags": {
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cluster_name": "ceph",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.crush_device_class": "",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.encrypted": "0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osd_id": "1",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.type": "block",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.vdo": "0"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            },
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "type": "block",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "vg_name": "ceph_vg1"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:        }
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:    ],
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:    "2": [
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:        {
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "devices": [
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "/dev/loop5"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            ],
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_name": "ceph_lv2",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_size": "21470642176",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "name": "ceph_lv2",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "tags": {
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.cluster_name": "ceph",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.crush_device_class": "",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.encrypted": "0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osd_id": "2",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.type": "block",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:                "ceph.vdo": "0"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            },
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "type": "block",
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:            "vg_name": "ceph_vg2"
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:        }
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]:    ]
Oct  3 10:17:14 compute-0 dazzling_dewdney[438196]: }
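[annotation] The JSON that dazzling_dewdney printed has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the authoritative metadata duplicated in the LVM tags. A short sketch that reduces it to an osd -> device table; the sample is trimmed to the fields actually used, with values copied from the log:

```python
#!/usr/bin/env python3
"""Sketch: condense the id-keyed ceph-volume listing above into one line
per OSD. Field names and values are taken verbatim from the log."""
import json

sample = """
{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
         "tags": {"ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"}}],
  "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"],
         "tags": {"ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190"}}],
  "2": [{"lv_path": "/dev/ceph_vg2/ceph_lv2", "devices": ["/dev/loop5"],
         "tags": {"ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0"}}]
}
"""

for osd_id, lvs in sorted(json.loads(sample).items()):
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(backing {','.join(lv['devices'])}, "
              f"fsid {lv['tags']['ceph.osd_fsid']})")
```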
Oct  3 10:17:14 compute-0 systemd[1]: libpod-9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e.scope: Deactivated successfully.
Oct  3 10:17:14 compute-0 podman[438180]: 2025-10-03 10:17:14.90420103 +0000 UTC m=+1.092430219 container died 9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dewdney, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:17:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a00b9a395ac5c542182d5d287e4b5b355707f3c3e2f6982e6a8b4b7a8993be27-merged.mount: Deactivated successfully.
Oct  3 10:17:14 compute-0 podman[438180]: 2025-10-03 10:17:14.973967101 +0000 UTC m=+1.162196290 container remove 9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 10:17:14 compute-0 systemd[1]: libpod-conmon-9ade26582ff943a9e0a3b29289155835dc1d4f7a4235e7dabdefdfaa3d73724e.scope: Deactivated successfully.
Oct  3 10:17:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:15 compute-0 podman[438354]: 2025-10-03 10:17:15.849049736 +0000 UTC m=+0.061019772 container create 7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:17:15 compute-0 systemd[1]: Started libpod-conmon-7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a.scope.
Oct  3 10:17:15 compute-0 podman[438354]: 2025-10-03 10:17:15.821290854 +0000 UTC m=+0.033260910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:15 compute-0 podman[438354]: 2025-10-03 10:17:15.967594594 +0000 UTC m=+0.179564630 container init 7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:17:15 compute-0 podman[438354]: 2025-10-03 10:17:15.975830779 +0000 UTC m=+0.187800785 container start 7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:17:15 compute-0 podman[438354]: 2025-10-03 10:17:15.981412148 +0000 UTC m=+0.193382164 container attach 7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:17:15 compute-0 hungry_einstein[438370]: 167 167
Oct  3 10:17:15 compute-0 podman[438354]: 2025-10-03 10:17:15.989809918 +0000 UTC m=+0.201779954 container died 7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:17:15 compute-0 systemd[1]: libpod-7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a.scope: Deactivated successfully.
Oct  3 10:17:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b67d64826fae7f778374f56c840e8f4a7c510c506af47a8c0781e747a852e4d8-merged.mount: Deactivated successfully.
Oct  3 10:17:16 compute-0 podman[438354]: 2025-10-03 10:17:16.041425707 +0000 UTC m=+0.253395713 container remove 7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:17:16 compute-0 systemd[1]: libpod-conmon-7ffc2d89fde64ffba182198582c6c2221430cc7ca139ee7118f386972153784a.scope: Deactivated successfully.
Oct  3 10:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:17:16 compute-0 podman[438393]: 2025-10-03 10:17:16.270581559 +0000 UTC m=+0.060754043 container create eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:17:16 compute-0 podman[438393]: 2025-10-03 10:17:16.250332048 +0000 UTC m=+0.040504592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:16 compute-0 systemd[1]: Started libpod-conmon-eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e.scope.
Oct  3 10:17:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ebaa82623d2aff43ea96f078efa590e2e7766728d2e99f7bef995af7d60f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ebaa82623d2aff43ea96f078efa590e2e7766728d2e99f7bef995af7d60f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ebaa82623d2aff43ea96f078efa590e2e7766728d2e99f7bef995af7d60f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000ebaa82623d2aff43ea96f078efa590e2e7766728d2e99f7bef995af7d60f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:16 compute-0 podman[438393]: 2025-10-03 10:17:16.456191562 +0000 UTC m=+0.246364146 container init eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:17:16 compute-0 podman[438393]: 2025-10-03 10:17:16.47012481 +0000 UTC m=+0.260297294 container start eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:17:16 compute-0 podman[438393]: 2025-10-03 10:17:16.549950954 +0000 UTC m=+0.340123478 container attach eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:17:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:16 compute-0 nova_compute[351685]: 2025-10-03 10:17:16.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:17 compute-0 nova_compute[351685]: 2025-10-03 10:17:17.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]: {
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "osd_id": 1,
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "type": "bluestore"
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:    },
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "osd_id": 2,
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "type": "bluestore"
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:    },
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "osd_id": 0,
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:        "type": "bluestore"
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]:    }
Oct  3 10:17:17 compute-0 jolly_ganguly[438409]: }
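[annotation] jolly_ganguly printed the same inventory keyed by OSD uuid rather than OSD id, each BlueStore OSD resolved to its device-mapper path (this is the shape of `ceph-volume raw list` output). A sketch cross-checking it against the id-keyed listing above; all values are copied from the two JSON blocks in the log, and the consistency check itself is only illustrative:

```python
#!/usr/bin/env python3
"""Sketch: confirm the uuid-keyed listing agrees with the id-keyed one."""

# From jolly_ganguly (keyed by osd_uuid).
by_uuid = {
    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0":
        {"osd_id": 0, "device": "/dev/mapper/ceph_vg0-ceph_lv0"},
    "16cef594-0067-4499-9298-5d83edf70190":
        {"osd_id": 1, "device": "/dev/mapper/ceph_vg1-ceph_lv1"},
    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0":
        {"osd_id": 2, "device": "/dev/mapper/ceph_vg2-ceph_lv2"},
}

# From dazzling_dewdney (keyed by osd_id): the ceph.osd_fsid LVM tag.
fsid_by_id = {
    0: "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
    1: "16cef594-0067-4499-9298-5d83edf70190",
    2: "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
}

for uuid, osd in by_uuid.items():
    assert fsid_by_id[osd["osd_id"]] == uuid, (uuid, osd)
    print(f"osd.{osd['osd_id']} ({osd['device']}) consistent: fsid {uuid}")
```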
Oct  3 10:17:17 compute-0 systemd[1]: libpod-eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e.scope: Deactivated successfully.
Oct  3 10:17:17 compute-0 podman[438393]: 2025-10-03 10:17:17.517601653 +0000 UTC m=+1.307774147 container died eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:17:17 compute-0 systemd[1]: libpod-eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e.scope: Consumed 1.035s CPU time.
Oct  3 10:17:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-000ebaa82623d2aff43ea96f078efa590e2e7766728d2e99f7bef995af7d60f9-merged.mount: Deactivated successfully.
Oct  3 10:17:18 compute-0 podman[438393]: 2025-10-03 10:17:18.202703594 +0000 UTC m=+1.992876078 container remove eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:17:18 compute-0 systemd[1]: libpod-conmon-eb9123aa1054c197ddd4ff17ae478c86c72bd9127e0392c71d1b051f026da37e.scope: Deactivated successfully.
Oct  3 10:17:18 compute-0 podman[438453]: 2025-10-03 10:17:18.444984228 +0000 UTC m=+0.094926091 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 10:17:18 compute-0 podman[438454]: 2025-10-03 10:17:18.454145452 +0000 UTC m=+0.104175758 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct  3 10:17:18 compute-0 podman[438455]: 2025-10-03 10:17:18.471737308 +0000 UTC m=+0.119790630 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
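[annotation] The three health_status events above are podman's periodic healthchecks for the long-running EDPM containers; each event's embedded config_data shows the healthcheck test script and mount it uses. The same check can be triggered on demand with `podman healthcheck run`; a sketch assuming only the container names taken from the log:

```python
#!/usr/bin/env python3
"""Sketch: run the same healthchecks on demand. `podman healthcheck run`
exits 0 when the configured test passes."""
import subprocess

for name in ("openstack_network_exporter",
             "ceilometer_agent_compute",
             "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    status = "healthy" if rc == 0 else f"unhealthy (rc={rc})"
    print(f"{name}: {status}")
```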
Oct  3 10:17:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:17:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 63497d15-58d9-4c14-9325-db05ddf61583 does not exist
Oct  3 10:17:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c8bd5316-6f14-483b-a354-183cc3ad970d does not exist
Oct  3 10:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:17:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:17:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:17:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5ca11f15-4bca-4d2f-ab1e-ad29588ac986 does not exist
Oct  3 10:17:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0028e1d8-bbab-47c4-83a2-376e77e79839 does not exist
Oct  3 10:17:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 93ee3e27-430f-4ebb-851f-ada3524a509d does not exist
Oct  3 10:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:17:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:17:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:17:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
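[annotation] The `config-key set` commands above (keys `mgr/cephadm/host.compute-0.devices.0` and `mgr/cephadm/host.compute-0`) show the cephadm mgr persisting its host and device scan results in the mon key-value store. The stored blob can be read back with `ceph config-key get`; a sketch assuming the value is JSON, which is how cephadm serializes this state:

```python
#!/usr/bin/env python3
"""Sketch: read back the device inventory cephadm just persisted.
Assumes the stored value is JSON (cephadm's serialization)."""
import json
import subprocess

key = "mgr/cephadm/host.compute-0.devices.0"
raw = subprocess.run(["ceph", "config-key", "get", key],
                     capture_output=True, text=True, check=True).stdout
# Pretty-print only the start of the blob; it can be large.
print(json.dumps(json.loads(raw), indent=2)[:400])
```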
Oct  3 10:17:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:17:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:17:19 compute-0 podman[438705]: 2025-10-03 10:17:19.675616386 +0000 UTC m=+0.030575213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:19 compute-0 podman[438705]: 2025-10-03 10:17:19.773184061 +0000 UTC m=+0.128142808 container create b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:17:19 compute-0 systemd[1]: Started libpod-conmon-b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7.scope.
Oct  3 10:17:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:20 compute-0 podman[438705]: 2025-10-03 10:17:20.043832766 +0000 UTC m=+0.398791523 container init b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:17:20 compute-0 podman[438705]: 2025-10-03 10:17:20.052473984 +0000 UTC m=+0.407432731 container start b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:17:20 compute-0 elastic_faraday[438722]: 167 167
Oct  3 10:17:20 compute-0 systemd[1]: libpod-b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7.scope: Deactivated successfully.
Oct  3 10:17:20 compute-0 podman[438705]: 2025-10-03 10:17:20.087446158 +0000 UTC m=+0.442404905 container attach b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:17:20 compute-0 podman[438705]: 2025-10-03 10:17:20.088136519 +0000 UTC m=+0.443095266 container died b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct  3 10:17:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-761fadb69ebd30b51e46c558ab0be7c7f8845542df930cf2189291423f867394-merged.mount: Deactivated successfully.
Oct  3 10:17:20 compute-0 podman[438705]: 2025-10-03 10:17:20.5378894 +0000 UTC m=+0.892848177 container remove b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:17:20 compute-0 systemd[1]: libpod-conmon-b99712b879cfb8748fe323a31eeab91717b05c81f935ad619839de05b4ecf2e7.scope: Deactivated successfully.
Oct  3 10:17:20 compute-0 podman[438746]: 2025-10-03 10:17:20.806069666 +0000 UTC m=+0.046994861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:20 compute-0 podman[438746]: 2025-10-03 10:17:20.979485317 +0000 UTC m=+0.220410462 container create c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_agnesi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:17:21 compute-0 systemd[1]: Started libpod-conmon-c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8.scope.
Oct  3 10:17:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe95e18a0358035c7ccb9cf52dd183ab91a8bb0afb924a5448c776853ba8d8dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe95e18a0358035c7ccb9cf52dd183ab91a8bb0afb924a5448c776853ba8d8dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe95e18a0358035c7ccb9cf52dd183ab91a8bb0afb924a5448c776853ba8d8dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe95e18a0358035c7ccb9cf52dd183ab91a8bb0afb924a5448c776853ba8d8dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe95e18a0358035c7ccb9cf52dd183ab91a8bb0afb924a5448c776853ba8d8dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:21 compute-0 podman[438746]: 2025-10-03 10:17:21.22422431 +0000 UTC m=+0.465149445 container init c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:17:21 compute-0 podman[438746]: 2025-10-03 10:17:21.232503196 +0000 UTC m=+0.473428311 container start c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_agnesi, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 10:17:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:21 compute-0 podman[438746]: 2025-10-03 10:17:21.285121636 +0000 UTC m=+0.526046791 container attach c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_agnesi, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:17:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:21 compute-0 nova_compute[351685]: 2025-10-03 10:17:21.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:22 compute-0 nova_compute[351685]: 2025-10-03 10:17:22.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:22 compute-0 stupefied_agnesi[438762]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:17:22 compute-0 stupefied_agnesi[438762]: --> relative data size: 1.0
Oct  3 10:17:22 compute-0 stupefied_agnesi[438762]: --> All data devices are unavailable
Oct  3 10:17:22 compute-0 systemd[1]: libpod-c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8.scope: Deactivated successfully.
Oct  3 10:17:22 compute-0 systemd[1]: libpod-c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8.scope: Consumed 1.136s CPU time.
Oct  3 10:17:22 compute-0 podman[438746]: 2025-10-03 10:17:22.440882338 +0000 UTC m=+1.681807453 container died c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe95e18a0358035c7ccb9cf52dd183ab91a8bb0afb924a5448c776853ba8d8dd-merged.mount: Deactivated successfully.
Oct  3 10:17:22 compute-0 podman[438746]: 2025-10-03 10:17:22.865698666 +0000 UTC m=+2.106623811 container remove c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:17:22 compute-0 systemd[1]: libpod-conmon-c7ddb60881c792f40b1f037cf6fe11e77367fbb5260cc3795d9dfc1f15b742e8.scope: Deactivated successfully.
Oct  3 10:17:23 compute-0 podman[438803]: 2025-10-03 10:17:23.033000171 +0000 UTC m=+0.106226854 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 10:17:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:23 compute-0 podman[438957]: 2025-10-03 10:17:23.810037886 +0000 UTC m=+0.108908860 container create 3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:17:23 compute-0 podman[438957]: 2025-10-03 10:17:23.747890959 +0000 UTC m=+0.046761953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:23 compute-0 systemd[1]: Started libpod-conmon-3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be.scope.
Oct  3 10:17:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:23 compute-0 podman[438957]: 2025-10-03 10:17:23.98477445 +0000 UTC m=+0.283645454 container init 3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_turing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:17:23 compute-0 podman[438957]: 2025-10-03 10:17:23.995145243 +0000 UTC m=+0.294016227 container start 3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:17:24 compute-0 podman[438957]: 2025-10-03 10:17:24.001280811 +0000 UTC m=+0.300151785 container attach 3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_turing, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:17:24 compute-0 vigorous_turing[438973]: 167 167
Oct  3 10:17:24 compute-0 systemd[1]: libpod-3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be.scope: Deactivated successfully.
Oct  3 10:17:24 compute-0 podman[438957]: 2025-10-03 10:17:24.006290242 +0000 UTC m=+0.305161206 container died 3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1134e9e297d75d707d3fae075818f3d3d9d4a074d5bad5d837ca199eb1b41d02-merged.mount: Deactivated successfully.
Oct  3 10:17:24 compute-0 podman[438957]: 2025-10-03 10:17:24.057713553 +0000 UTC m=+0.356584517 container remove 3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:17:24 compute-0 systemd[1]: libpod-conmon-3c2adbab26db762b2f86317261dd2cb68f30e41387362d80d0a35be312a547be.scope: Deactivated successfully.
Oct  3 10:17:24 compute-0 podman[438996]: 2025-10-03 10:17:24.259438374 +0000 UTC m=+0.063676356 container create 08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:17:24 compute-0 podman[438996]: 2025-10-03 10:17:24.236779796 +0000 UTC m=+0.041017788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:24 compute-0 systemd[1]: Started libpod-conmon-08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1.scope.
Oct  3 10:17:24 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3fd3f3e29bca6766121699a5e41c8f20ebde9e2aee0e6dcaf6ddcc468f7f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3fd3f3e29bca6766121699a5e41c8f20ebde9e2aee0e6dcaf6ddcc468f7f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3fd3f3e29bca6766121699a5e41c8f20ebde9e2aee0e6dcaf6ddcc468f7f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3fd3f3e29bca6766121699a5e41c8f20ebde9e2aee0e6dcaf6ddcc468f7f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:24 compute-0 podman[438996]: 2025-10-03 10:17:24.395732524 +0000 UTC m=+0.199970556 container init 08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 10:17:24 compute-0 podman[438996]: 2025-10-03 10:17:24.423140084 +0000 UTC m=+0.227378056 container start 08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:17:24 compute-0 podman[438996]: 2025-10-03 10:17:24.427209534 +0000 UTC m=+0.231447506 container attach 08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:17:25 compute-0 sad_dewdney[439013]: {
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:    "0": [
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:        {
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "devices": [
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "/dev/loop3"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            ],
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_name": "ceph_lv0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_size": "21470642176",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "name": "ceph_lv0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "tags": {
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cluster_name": "ceph",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.crush_device_class": "",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.encrypted": "0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osd_id": "0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.type": "block",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.vdo": "0"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            },
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "type": "block",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "vg_name": "ceph_vg0"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:        }
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:    ],
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:    "1": [
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:        {
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "devices": [
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "/dev/loop4"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            ],
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_name": "ceph_lv1",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_size": "21470642176",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "name": "ceph_lv1",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "tags": {
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cluster_name": "ceph",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.crush_device_class": "",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.encrypted": "0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osd_id": "1",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.type": "block",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.vdo": "0"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            },
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "type": "block",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "vg_name": "ceph_vg1"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:        }
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:    ],
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:    "2": [
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:        {
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "devices": [
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "/dev/loop5"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            ],
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_name": "ceph_lv2",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_size": "21470642176",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "name": "ceph_lv2",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "tags": {
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.cluster_name": "ceph",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.crush_device_class": "",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.encrypted": "0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osd_id": "2",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.type": "block",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:                "ceph.vdo": "0"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            },
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "type": "block",
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:            "vg_name": "ceph_vg2"
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:        }
Oct  3 10:17:25 compute-0 sad_dewdney[439013]:    ]
Oct  3 10:17:25 compute-0 sad_dewdney[439013]: }
Oct  3 10:17:25 compute-0 systemd[1]: libpod-08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1.scope: Deactivated successfully.
Oct  3 10:17:25 compute-0 podman[438996]: 2025-10-03 10:17:25.251919551 +0000 UTC m=+1.056157553 container died 08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct  3 10:17:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-04e3fd3f3e29bca6766121699a5e41c8f20ebde9e2aee0e6dcaf6ddcc468f7f5-merged.mount: Deactivated successfully.
Oct  3 10:17:25 compute-0 podman[438996]: 2025-10-03 10:17:25.590631583 +0000 UTC m=+1.394869555 container remove 08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dewdney, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:17:25 compute-0 systemd[1]: libpod-conmon-08e1057446b4be4ff10ae51bc59675bb140a41d4d0203716be00682ca41163b1.scope: Deactivated successfully.
Oct  3 10:17:26 compute-0 podman[439172]: 2025-10-03 10:17:26.544654144 +0000 UTC m=+0.059934637 container create b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_grothendieck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:17:26 compute-0 systemd[1]: Started libpod-conmon-b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929.scope.
Oct  3 10:17:26 compute-0 podman[439172]: 2025-10-03 10:17:26.525615072 +0000 UTC m=+0.040895595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:26 compute-0 podman[439172]: 2025-10-03 10:17:26.66812106 +0000 UTC m=+0.183401573 container init b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_grothendieck, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:17:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:26 compute-0 podman[439172]: 2025-10-03 10:17:26.676356645 +0000 UTC m=+0.191637148 container start b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:17:26 compute-0 nova_compute[351685]: 2025-10-03 10:17:26.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:26 compute-0 podman[439172]: 2025-10-03 10:17:26.681731967 +0000 UTC m=+0.197012460 container attach b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_grothendieck, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:17:26 compute-0 confident_grothendieck[439188]: 167 167
Oct  3 10:17:26 compute-0 systemd[1]: libpod-b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929.scope: Deactivated successfully.
Oct  3 10:17:26 compute-0 podman[439172]: 2025-10-03 10:17:26.685906462 +0000 UTC m=+0.201186955 container died b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_grothendieck, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:17:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ea64c3d4f868db5ffaf5e9e8034151da420a495823c9791d4f18d374bd00b6-merged.mount: Deactivated successfully.
Oct  3 10:17:26 compute-0 podman[439172]: 2025-10-03 10:17:26.732997205 +0000 UTC m=+0.248277698 container remove b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:17:26 compute-0 systemd[1]: libpod-conmon-b71520edb03b432106a142bc7faeb508982bdcd5396549f57a4fd53f6ab83929.scope: Deactivated successfully.
Oct  3 10:17:26 compute-0 podman[439211]: 2025-10-03 10:17:26.987005065 +0000 UTC m=+0.068659196 container create cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_euler, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:17:27 compute-0 podman[439211]: 2025-10-03 10:17:26.954311605 +0000 UTC m=+0.035965816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:17:27 compute-0 systemd[1]: Started libpod-conmon-cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c.scope.
Oct  3 10:17:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:17:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ae14cb59db2dd7671cc62709ed23acafeedee747d791e8da3427ea3e676d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ae14cb59db2dd7671cc62709ed23acafeedee747d791e8da3427ea3e676d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ae14cb59db2dd7671cc62709ed23acafeedee747d791e8da3427ea3e676d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e05ae14cb59db2dd7671cc62709ed23acafeedee747d791e8da3427ea3e676d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:17:27 compute-0 podman[439211]: 2025-10-03 10:17:27.152685578 +0000 UTC m=+0.234339749 container init cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_euler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:17:27 compute-0 podman[439211]: 2025-10-03 10:17:27.183510079 +0000 UTC m=+0.265164210 container start cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_euler, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:17:27 compute-0 podman[439211]: 2025-10-03 10:17:27.189370217 +0000 UTC m=+0.271024458 container attach cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:17:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:27 compute-0 nova_compute[351685]: 2025-10-03 10:17:27.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:28 compute-0 nifty_euler[439227]: {
Oct  3 10:17:28 compute-0 nifty_euler[439227]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "osd_id": 1,
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "type": "bluestore"
Oct  3 10:17:28 compute-0 nifty_euler[439227]:    },
Oct  3 10:17:28 compute-0 nifty_euler[439227]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "osd_id": 2,
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "type": "bluestore"
Oct  3 10:17:28 compute-0 nifty_euler[439227]:    },
Oct  3 10:17:28 compute-0 nifty_euler[439227]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "osd_id": 0,
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:17:28 compute-0 nifty_euler[439227]:        "type": "bluestore"
Oct  3 10:17:28 compute-0 nifty_euler[439227]:    }
Oct  3 10:17:28 compute-0 nifty_euler[439227]: }
Oct  3 10:17:28 compute-0 systemd[1]: libpod-cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c.scope: Deactivated successfully.
Oct  3 10:17:28 compute-0 systemd[1]: libpod-cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c.scope: Consumed 1.214s CPU time.
Oct  3 10:17:28 compute-0 podman[439211]: 2025-10-03 10:17:28.404860218 +0000 UTC m=+1.486514379 container died cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_euler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:17:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05ae14cb59db2dd7671cc62709ed23acafeedee747d791e8da3427ea3e676d1-merged.mount: Deactivated successfully.
Oct  3 10:17:28 compute-0 podman[439211]: 2025-10-03 10:17:28.56429459 +0000 UTC m=+1.645948721 container remove cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 10:17:28 compute-0 podman[439261]: 2025-10-03 10:17:28.575547291 +0000 UTC m=+0.139172282 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:17:28 compute-0 podman[439267]: 2025-10-03 10:17:28.577006318 +0000 UTC m=+0.133517350 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:17:28 compute-0 systemd[1]: libpod-conmon-cbf2afa0c882b58f59d3a531e52743e56f10f0e78d5c0688a4f2b6fafcaa954c.scope: Deactivated successfully.
Oct  3 10:17:28 compute-0 podman[439268]: 2025-10-03 10:17:28.602959502 +0000 UTC m=+0.154993071 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
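These health_status events all share one shape; a small parsing sketch that pulls out the fields worth alerting on (the regex is tuned to these journal lines and is not a stable podman format guarantee):

    import re

    line = ("Oct  3 10:17:28 compute-0 podman[439268]: ... container health_status ed858ff0... "
            "(image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, "
            "name=iscsid, health_status=healthy, health_failing_streak=0, ...)")

    m = re.search(r"name=(?P<name>[^,]+), health_status=(?P<status>[^,]+), "
                  r"health_failing_streak=(?P<streak>\d+)", line)
    if m:
        print(m.group("name"), m.group("status"), "failing streak:", m.group("streak"))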
Oct  3 10:17:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:17:28 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:17:28 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:28 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ccc4227a-412b-4808-bc0e-d018dc939370 does not exist
Oct  3 10:17:28 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 319876d9-a142-4ae7-b162-7a62e52d2c5e does not exist
Oct  3 10:17:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:29 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:29 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:17:29 compute-0 podman[157165]: time="2025-10-03T10:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:17:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:17:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9059 "" "Go-http-client/1.1"
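The two GET lines above are the libpod REST API being queried over the podman socket. A self-contained sketch of the same listing call using only the Python standard library; the socket path matches the one mounted for podman_exporter further below, adjust if yours differs:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c["Names"][0], c["State"])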
Oct  3 10:17:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:31 compute-0 openstack_network_exporter[367524]: ERROR   10:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:17:31 compute-0 openstack_network_exporter[367524]: ERROR   10:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:17:31 compute-0 openstack_network_exporter[367524]: ERROR   10:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:17:31 compute-0 openstack_network_exporter[367524]: ERROR   10:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:17:31 compute-0 openstack_network_exporter[367524]: ERROR   10:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
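The exporter errors above come from appctl-style calls that first need to find a control socket. A sketch of the lookup it is evidently failing, assuming the default OVS rundir (which this log does not confirm):

    import glob
    import subprocess

    # ovsdb-server normally creates /var/run/openvswitch/ovsdb-server.<pid>.ctl
    sockets = glob.glob("/var/run/openvswitch/ovsdb-server.*.ctl")
    if not sockets:
        print("no control socket files found for the ovs db server")  # the ERROR above
    else:
        out = subprocess.run(["ovs-appctl", "-t", sockets[0], "version"],
                             capture_output=True, text=True, check=True)
        print(out.stdout.strip())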
Oct  3 10:17:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
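For scale, the cache figures in the mon line above converted out of raw bytes:

    sizes = {"cache_size": 1020054731, "inc_alloc": 348127232,
             "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, value in sizes.items():
        print(f"{name}: {value / 2**20:.0f} MiB")   # ~973, 332, 332, 304 MiB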
Oct  3 10:17:31 compute-0 nova_compute[351685]: 2025-10-03 10:17:31.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:32 compute-0 nova_compute[351685]: 2025-10-03 10:17:32.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
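The recurring ovsdbapp [POLLIN] debug lines are the IDL event loop waking when the OVSDB connection's file descriptor becomes readable. A stdlib sketch of that wakeup pattern, illustrative rather than ovsdbapp's actual code:

    import select
    import socket

    a, b = socket.socketpair()
    poller = select.poll()
    poller.register(a.fileno(), select.POLLIN)

    b.send(b"update")                      # peer writes; fd becomes readable
    for fd, events in poller.poll(1000):   # timeout in milliseconds
        if events & select.POLLIN:         # the "[POLLIN] on fd N" wakeup
            print("POLLIN on fd", fd, a.recv(16))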
Oct  3 10:17:32 compute-0 podman[439379]: 2025-10-03 10:17:32.866493949 +0000 UTC m=+0.104574692 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:17:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:36 compute-0 nova_compute[351685]: 2025-10-03 10:17:36.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:37 compute-0 nova_compute[351685]: 2025-10-03 10:17:37.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:37 compute-0 podman[439397]: 2025-10-03 10:17:37.837780232 +0000 UTC m=+0.090450796 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:17:37 compute-0 podman[439398]: 2025-10-03 10:17:37.863304803 +0000 UTC m=+0.111386580 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Oct  3 10:17:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.886 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.887 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
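The two manager lines above describe a queue: more pollsters than worker threads. A minimal sketch of that situation with a single-worker executor (pollster names taken from this log; the polling body is a stand-in):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["network.outgoing.packets.drop",
                 "network.outgoing.packets.error",
                 "disk.device.capacity"]          # more tasks than workers

    def poll(name):
        return f"polled {name}"

    with ThreadPoolExecutor(max_workers=1) as executor:   # "[1] threads"
        for result in executor.map(poll, pollsters):      # tasks run serially
            print(result)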
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.897 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
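The discovery line above logs the full instance payload; a trimmed sketch of reading the fields the pollsters below key their samples on (resource id, flavor, VM state):

    instance = {
        "id": "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
        "name": "test_0",
        "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1},
        "OS-EXT-STS:vm_state": "running",
    }
    print(instance["id"], instance["flavor"]["name"],
          instance["OS-EXT-STS:vm_state"])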
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.897 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.898 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:17:40.898541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.906 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.908 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.909 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:17:40.908885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.910 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:17:40.911691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.934 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.935 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.935 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
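Converted, the three capacity samples above are two 1 GiB virtual disks plus one small device of 485376 bytes (~474 KiB):

    for volume in (1073741824, 1073741824, 485376):
        print(f"{volume} B = {volume / 2**10:.0f} KiB = {volume / 2**30:.4f} GiB")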
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.936 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:17:40.936802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.981 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.981 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.982 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:17:40.981480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
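The read.latency volumes above are cumulative per-device block-I/O times; libvirt reports these counters in nanoseconds, so the totals convert as:

    for ns in (1351272306, 240576853, 113683071):
        print(f"{ns} ns = {ns / 1e6:.1f} ms cumulative")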
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.982 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:17:40.982962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.984 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.984 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.984 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:17:40.984625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.986 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.986 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:17:40.986390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.987 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.987 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.989 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:17:40.987960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.989 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:40.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:17:40.989714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
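The power.state volume of 1 above corresponds to libvirt's virDomainState enum, where 1 is a running domain; the mapping in brief:

    # virDomainState values (libvirt); the pollster reports the raw integer.
    LIBVIRT_POWER_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_POWER_STATE[1])   # -> running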
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:17:41.012398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:17:41.014334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.016 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:17:41.015964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
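The rate pollster is skipped because its discovery returned nothing that the task had not already claimed this cycle; the guard is roughly this (an illustrative sketch, not the real _internal_pollster_run logic):

    # Hypothetical version of the "no new resources found this cycle" guard;
    # handled_this_cycle is a set shared across pollsters in one task.
    def resources_to_poll(discovered, handled_this_cycle):
        new = [r for r in discovered if r not in handled_this_cycle]
        if not new:
            return None   # -> "Skip pollster <name>, no new resources ..."
        handled_this_cycle.update(new)
        return new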
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:17:41.017352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.018 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:17:41.018359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:17:41.019559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:17:41.020667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 45560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
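The cpu meter is cumulative guest CPU time in nanoseconds, so the single value above (45560000000 ns, about 45.56 s) only becomes a utilization figure once two consecutive samples are differenced. For example, assuming a hypothetical earlier sample and a 300 s polling interval:

    prev_ns = 45_000_000_000          # assumed previous sample
    cur_ns = 45_560_000_000           # value logged above
    interval_s = 300.0                # assumed polling interval
    util = (cur_ns - prev_ns) / (interval_s * 1e9)
    print(f"{util:.2%} of one vCPU")  # ~0.19%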
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:17:41.022219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:17:41.023605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:17:41.024937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.026 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:17:41.026113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.027 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
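The fractional memory.usage volume is just a KiB counter expressed in the meter's MB unit; assuming the hypervisor reported 50020 KiB:

    rss_kib = 50020          # assumed libvirt value, in KiB
    print(rss_kib / 1024)    # 48.84765625 -> the volume logged above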
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:17:41.027507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.028 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:17:41.029207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:17:41.030337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.033 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:17:41.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:17:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:17:41.606 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:17:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:17:41.606 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:17:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:17:41.607 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
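That acquire/acquired/released triple is the standard oslo.concurrency trace around one synchronized call; in source it is typically just a decorator, e.g. (a sketch, not the neutron code itself):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Body runs between the "acquired" and "released" lines above;
        # "held 0.000s" means the check returned almost immediately.
        pass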
Oct  3 10:17:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:41 compute-0 nova_compute[351685]: 2025-10-03 10:17:41.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:42 compute-0 nova_compute[351685]: 2025-10-03 10:17:42.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:17:46
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', 'images', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'vms']
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:17:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:17:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:46 compute-0 nova_compute[351685]: 2025-10-03 10:17:46.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:46 compute-0 nova_compute[351685]: 2025-10-03 10:17:46.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:17:46 compute-0 nova_compute[351685]: 2025-10-03 10:17:46.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
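_reclaim_queued_deletes does nothing here because reclaim_instance_interval is at its default of 0, which disables soft delete. To keep SOFT_DELETED instances around for later restore, an operator would raise it in nova.conf (the value below is illustrative):

    [DEFAULT]
    # retain soft-deleted instances for 7 days before reclaiming them;
    # 0 (the default) disables soft delete entirely
    reclaim_instance_interval = 604800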
Oct  3 10:17:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:47 compute-0 nova_compute[351685]: 2025-10-03 10:17:47.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:17:47 compute-0 nova_compute[351685]: 2025-10-03 10:17:47.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:17:47 compute-0 nova_compute[351685]: 2025-10-03 10:17:47.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:17:47 compute-0 nova_compute[351685]: 2025-10-03 10:17:47.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:17:48 compute-0 podman[439441]: 2025-10-03 10:17:48.864710072 +0000 UTC m=+0.108776985 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 10:17:48 compute-0 podman[439440]: 2025-10-03 10:17:48.868853776 +0000 UTC m=+0.111236244 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Oct  3 10:17:48 compute-0 nova_compute[351685]: 2025-10-03 10:17:48.872 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:17:48 compute-0 nova_compute[351685]: 2025-10-03 10:17:48.873 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:17:48 compute-0 nova_compute[351685]: 2025-10-03 10:17:48.873 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:17:48 compute-0 nova_compute[351685]: 2025-10-03 10:17:48.873 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:17:48 compute-0 podman[439442]: 2025-10-03 10:17:48.912051544 +0000 UTC m=+0.154952949 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:17:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:51 compute-0 nova_compute[351685]: 2025-10-03 10:17:51.205 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:17:51 compute-0 nova_compute[351685]: 2025-10-03 10:17:51.221 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:17:51 compute-0 nova_compute[351685]: 2025-10-03 10:17:51.222 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
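The Acquiring/Acquired/Releasing trio around "refresh_cache-b43db93c-..." above is oslo.concurrency's standard lock protocol guarding the periodic _heal_instance_info_cache pass. A minimal sketch of the same lockutils API; the refresh body is a hypothetical stand-in for the Neutron query Nova actually performs:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"

    def refresh_network_info(uuid):
        # lockutils.lock() logs the same "Acquiring"/"Acquired"/"Releasing"
        # DEBUG lines seen in the journal when debug logging is enabled.
        with lockutils.lock(f"refresh_cache-{uuid}"):
            # ... query Neutron and rewrite the instance info_cache here ...
            pass

    refresh_network_info(INSTANCE_UUID)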
Oct  3 10:17:51 compute-0 nova_compute[351685]: 2025-10-03 10:17:51.222 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:17:51 compute-0 nova_compute[351685]: 2025-10-03 10:17:51.223 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:17:51 compute-0 nova_compute[351685]: 2025-10-03 10:17:51.223 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:17:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:51 compute-0 nova_compute[351685]: 2025-10-03 10:17:51.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:17:52 compute-0 nova_compute[351685]: 2025-10-03 10:17:52.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:17:52 compute-0 nova_compute[351685]: 2025-10-03 10:17:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:17:52 compute-0 nova_compute[351685]: 2025-10-03 10:17:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:17:52 compute-0 nova_compute[351685]: 2025-10-03 10:17:52.854 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:17:52 compute-0 nova_compute[351685]: 2025-10-03 10:17:52.854 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:17:52 compute-0 nova_compute[351685]: 2025-10-03 10:17:52.854 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:17:52 compute-0 nova_compute[351685]: 2025-10-03 10:17:52.854 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:17:52 compute-0 nova_compute[351685]: 2025-10-03 10:17:52.855 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:17:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425960953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.339 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
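The update_available_resource audit shells out to the Ceph CLI through oslo.concurrency's processutils, as the "Running cmd" and "returned: 0 in 0.484s" lines above show. A sketch of the same call, assuming the client id and conf path from the log:

    import json
    from oslo_concurrency import processutils

    # Runs the command the resource tracker logs above; returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])  # raw cluster capacity from `ceph df`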
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.436 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.437 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.438 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.806 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.808 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3841MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.808 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.808 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:17:53 compute-0 podman[439524]: 2025-10-03 10:17:53.820953305 +0000 UTC m=+0.073005207 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.908 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.908 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.908 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.939 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 10:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078509217' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2078509217' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.959 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.960 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.978 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 10:17:53 compute-0 nova_compute[351685]: 2025-10-03 10:17:53.997 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 10:17:54 compute-0 nova_compute[351685]: 2025-10-03 10:17:54.031 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:17:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:17:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208191197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:17:54 compute-0 nova_compute[351685]: 2025-10-03 10:17:54.535 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:17:54 compute-0 nova_compute[351685]: 2025-10-03 10:17:54.544 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:17:54 compute-0 nova_compute[351685]: 2025-10-03 10:17:54.561 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:17:54 compute-0 nova_compute[351685]: 2025-10-03 10:17:54.562 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:17:54 compute-0 nova_compute[351685]: 2025-10-03 10:17:54.562 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
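The inventory payload logged above is what bounds scheduling in Placement: an allocation fits while used + requested <= (total - reserved) * allocation_ratio. A quick check against the logged numbers:

    # Effective capacity per resource class, using the inventory values above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2 -- so the single instance holding
    # {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1} above fits comfortably.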
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:17:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
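The pg_autoscaler lines above all follow the same arithmetic: raw pg target = capacity ratio x bias x a cluster-wide PG budget, then quantization to a power of two subject to per-pool floors. In the sketch below the budget of 300 is inferred from these specific log values (0.000551649... x 300 = 0.165494...), not a Ceph constant, and quantize() is a simplification: the real autoscaler also honors pg_num_min (the 16 for cephfs.cephfs.meta matches the metadata pool's default floor) and change hysteresis.

    import math

    TOTAL_TARGET_PGS = 300  # inferred from this log, not a Ceph constant

    def raw_pg_target(capacity_ratio, bias=1.0):
        return capacity_ratio * bias * TOTAL_TARGET_PGS

    def quantize(raw, floor=1):
        # Round up to a power of two, never below the pool's floor (assumed).
        return max(floor, 2 ** math.ceil(math.log2(raw)) if raw >= 1 else 1)

    print(raw_pg_target(0.000551649390343166))          # ~0.16549, as logged for 'vms'
    print(raw_pg_target(5.087256625643029e-07, 4.0))    # ~0.00061, 'cephfs.cephfs.meta'
    print(quantize(raw_pg_target(7.185749983720779e-06)))            # 1, '.mgr'
    print(quantize(raw_pg_target(0.000551649390343166), floor=32))   # 32, 'vms' (floor assumed)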
Oct  3 10:17:55 compute-0 nova_compute[351685]: 2025-10-03 10:17:55.562 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:17:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:17:56 compute-0 nova_compute[351685]: 2025-10-03 10:17:56.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:17:56 compute-0 nova_compute[351685]: 2025-10-03 10:17:56.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:17:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:57 compute-0 nova_compute[351685]: 2025-10-03 10:17:57.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:17:58 compute-0 podman[439564]: 2025-10-03 10:17:58.815662894 +0000 UTC m=+0.076522040 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:17:58 compute-0 podman[439565]: 2025-10-03 10:17:58.833961251 +0000 UTC m=+0.091278703 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 10:17:58 compute-0 podman[439566]: 2025-10-03 10:17:58.838332232 +0000 UTC m=+0.089122734 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  3 10:17:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:17:59 compute-0 podman[157165]: time="2025-10-03T10:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:17:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:17:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9053 "" "Go-http-client/1.1"
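The two access-log lines above are the libpod REST API being queried over the podman socket (this is how podman_exporter, whose config_data mounts /run/podman/podman.sock, collects container stats). A stdlib-only sketch of the same containers/json call, assuming the default rootful socket path:

    import http.client
    import json
    import socket

    SOCKET = "/run/podman/podman.sock"  # requires root, as in the exporter config

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection(SOCKET)
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])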
Oct  3 10:18:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:01 compute-0 openstack_network_exporter[367524]: ERROR   10:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:18:01 compute-0 openstack_network_exporter[367524]: ERROR   10:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:18:01 compute-0 openstack_network_exporter[367524]: ERROR   10:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:18:01 compute-0 openstack_network_exporter[367524]: ERROR   10:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:18:01 compute-0 openstack_network_exporter[367524]: ERROR   10:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
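The exporter errors above are path-resolution failures rather than daemon crashes: appctl-style clients locate a daemon through its control socket, <rundir>/<daemon>.<pid>.ctl, and ovn-northd does not run on a compute node at all, so that lookup is expected to fail here. A quick probe of the run directories this exporter has mounted (paths taken from its config_data earlier in this capture):

    import glob
    import os

    # Look for the control sockets the exporter needs; empty results reproduce
    # the "no control socket files found" errors logged above.
    for rundir, daemon in (("/run/openvswitch", "ovs-vswitchd"),
                           ("/run/ovn", "ovn-northd")):
        hits = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
        print(daemon, "->", hits or "no control socket found")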
Oct  3 10:18:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:01 compute-0 nova_compute[351685]: 2025-10-03 10:18:01.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:02 compute-0 nova_compute[351685]: 2025-10-03 10:18:02.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:03 compute-0 podman[439626]: 2025-10-03 10:18:03.865755673 +0000 UTC m=+0.116718960 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:18:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:06 compute-0 nova_compute[351685]: 2025-10-03 10:18:06.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:07 compute-0 nova_compute[351685]: 2025-10-03 10:18:07.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:08 compute-0 podman[439647]: 2025-10-03 10:18:08.886028475 +0000 UTC m=+0.121935019 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 10:18:08 compute-0 podman[439646]: 2025-10-03 10:18:08.890936772 +0000 UTC m=+0.139274135 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:18:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:11 compute-0 nova_compute[351685]: 2025-10-03 10:18:11.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:12 compute-0 nova_compute[351685]: 2025-10-03 10:18:12.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:18:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:16 compute-0 nova_compute[351685]: 2025-10-03 10:18:16.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:17 compute-0 nova_compute[351685]: 2025-10-03 10:18:17.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:19 compute-0 podman[439687]: 2025-10-03 10:18:19.879612446 +0000 UTC m=+0.126968681 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, config_id=edpm, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6)
Oct  3 10:18:19 compute-0 podman[439688]: 2025-10-03 10:18:19.890434794 +0000 UTC m=+0.132799567 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct  3 10:18:19 compute-0 podman[439689]: 2025-10-03 10:18:19.908646329 +0000 UTC m=+0.149522655 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:18:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:21 compute-0 nova_compute[351685]: 2025-10-03 10:18:21.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:22 compute-0 nova_compute[351685]: 2025-10-03 10:18:22.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:24 compute-0 podman[439745]: 2025-10-03 10:18:24.850456349 +0000 UTC m=+0.108300710 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:18:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:26 compute-0 nova_compute[351685]: 2025-10-03 10:18:26.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:27 compute-0 nova_compute[351685]: 2025-10-03 10:18:27.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:29 compute-0 podman[439790]: 2025-10-03 10:18:29.062019488 +0000 UTC m=+0.097065499 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:18:29 compute-0 podman[439788]: 2025-10-03 10:18:29.072332299 +0000 UTC m=+0.103089293 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:18:29 compute-0 podman[439789]: 2025-10-03 10:18:29.109749631 +0000 UTC m=+0.152364526 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:18:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:29 compute-0 podman[157165]: time="2025-10-03T10:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:18:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:18:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9059 "" "Go-http-client/1.1"
Oct  3 10:18:29 compute-0 podman[439985]: 2025-10-03 10:18:29.929657083 +0000 UTC m=+0.085997994 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:18:30 compute-0 podman[439985]: 2025-10-03 10:18:30.033157398 +0000 UTC m=+0.189498309 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:18:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:18:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:18:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:31 compute-0 openstack_network_exporter[367524]: ERROR   10:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:18:31 compute-0 openstack_network_exporter[367524]: ERROR   10:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:18:31 compute-0 openstack_network_exporter[367524]: ERROR   10:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:18:31 compute-0 openstack_network_exporter[367524]: ERROR   10:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:18:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:31 compute-0 nova_compute[351685]: 2025-10-03 10:18:31.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:18:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:18:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:18:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:18:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:18:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:18:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a3610003-7dfe-4534-bfed-cb456224ff44 does not exist
Oct  3 10:18:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8198dc95-d1e3-434e-a308-07c52ac0e6cc does not exist
Oct  3 10:18:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 95e08a81-e032-4525-b1e2-8952d62c6cc9 does not exist
Oct  3 10:18:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:18:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:18:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:18:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:18:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:18:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:18:32 compute-0 nova_compute[351685]: 2025-10-03 10:18:32.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:18:32 compute-0 podman[440404]: 2025-10-03 10:18:32.837253867 +0000 UTC m=+0.070670062 container create 42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:18:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:18:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:18:32 compute-0 systemd[1]: Started libpod-conmon-42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3.scope.
Oct  3 10:18:32 compute-0 podman[440404]: 2025-10-03 10:18:32.813486043 +0000 UTC m=+0.046902268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:18:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:18:32 compute-0 podman[440404]: 2025-10-03 10:18:32.946143325 +0000 UTC m=+0.179559550 container init 42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:18:32 compute-0 podman[440404]: 2025-10-03 10:18:32.956519488 +0000 UTC m=+0.189935683 container start 42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 10:18:32 compute-0 tender_fermi[440420]: 167 167
Oct  3 10:18:32 compute-0 podman[440404]: 2025-10-03 10:18:32.962503901 +0000 UTC m=+0.195920086 container attach 42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:18:32 compute-0 systemd[1]: libpod-42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3.scope: Deactivated successfully.
Oct  3 10:18:32 compute-0 podman[440404]: 2025-10-03 10:18:32.963948447 +0000 UTC m=+0.197364632 container died 42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-67c14c59c0309aad6d9d56563781e902531f51b60e331bfce3acbe6fe763f00a-merged.mount: Deactivated successfully.
Oct  3 10:18:33 compute-0 podman[440404]: 2025-10-03 10:18:33.03468195 +0000 UTC m=+0.268098135 container remove 42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:18:33 compute-0 systemd[1]: libpod-conmon-42d08f1679b5ebdce2767a4a7fb017eca2056dac7073453ee63a49886fef45e3.scope: Deactivated successfully.
Oct  3 10:18:33 compute-0 podman[440443]: 2025-10-03 10:18:33.258437858 +0000 UTC m=+0.068047807 container create 218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:18:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:33 compute-0 systemd[1]: Started libpod-conmon-218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5.scope.
Oct  3 10:18:33 compute-0 podman[440443]: 2025-10-03 10:18:33.237537717 +0000 UTC m=+0.047147696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:18:33 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696d0c69c23711404608771def2ec34fdb4b791051dbdc2ebe1e28d1317c091b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696d0c69c23711404608771def2ec34fdb4b791051dbdc2ebe1e28d1317c091b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696d0c69c23711404608771def2ec34fdb4b791051dbdc2ebe1e28d1317c091b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696d0c69c23711404608771def2ec34fdb4b791051dbdc2ebe1e28d1317c091b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696d0c69c23711404608771def2ec34fdb4b791051dbdc2ebe1e28d1317c091b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:33 compute-0 podman[440443]: 2025-10-03 10:18:33.370834919 +0000 UTC m=+0.180444888 container init 218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:18:33 compute-0 podman[440443]: 2025-10-03 10:18:33.386783291 +0000 UTC m=+0.196393280 container start 218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:18:33 compute-0 podman[440443]: 2025-10-03 10:18:33.392716732 +0000 UTC m=+0.202326711 container attach 218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 10:18:34 compute-0 zealous_hodgkin[440458]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:18:34 compute-0 zealous_hodgkin[440458]: --> relative data size: 1.0
Oct  3 10:18:34 compute-0 zealous_hodgkin[440458]: --> All data devices are unavailable
Oct  3 10:18:34 compute-0 systemd[1]: libpod-218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5.scope: Deactivated successfully.
Oct  3 10:18:34 compute-0 systemd[1]: libpod-218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5.scope: Consumed 1.163s CPU time.
Oct  3 10:18:34 compute-0 podman[440487]: 2025-10-03 10:18:34.703238437 +0000 UTC m=+0.054383309 container died 218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-696d0c69c23711404608771def2ec34fdb4b791051dbdc2ebe1e28d1317c091b-merged.mount: Deactivated successfully.
Oct  3 10:18:35 compute-0 podman[440487]: 2025-10-03 10:18:35.115928957 +0000 UTC m=+0.467073849 container remove 218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:18:35 compute-0 systemd[1]: libpod-conmon-218d46ce9a0d1984646055a9af5c75393ae9cb729af4ceeb2619fe6218dd06e5.scope: Deactivated successfully.
Oct  3 10:18:35 compute-0 podman[440488]: 2025-10-03 10:18:35.218726529 +0000 UTC m=+0.552739970 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:18:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:36 compute-0 podman[440661]: 2025-10-03 10:18:35.98910589 +0000 UTC m=+0.034798420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:18:36 compute-0 podman[440661]: 2025-10-03 10:18:36.10582538 +0000 UTC m=+0.151517920 container create 85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:18:36 compute-0 systemd[1]: Started libpod-conmon-85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e.scope.
Oct  3 10:18:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:18:36 compute-0 podman[440661]: 2025-10-03 10:18:36.239577177 +0000 UTC m=+0.285269697 container init 85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:18:36 compute-0 podman[440661]: 2025-10-03 10:18:36.248842124 +0000 UTC m=+0.294534644 container start 85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct  3 10:18:36 compute-0 podman[440661]: 2025-10-03 10:18:36.254708593 +0000 UTC m=+0.300401123 container attach 85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:18:36 compute-0 competent_bell[440677]: 167 167
Oct  3 10:18:36 compute-0 systemd[1]: libpod-85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e.scope: Deactivated successfully.
Oct  3 10:18:36 compute-0 podman[440661]: 2025-10-03 10:18:36.258408162 +0000 UTC m=+0.304100682 container died 85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-97b9c399b9945231b352a4c66f848f87a72706ce5ecf148440e13cfcd444811e-merged.mount: Deactivated successfully.
Oct  3 10:18:36 compute-0 podman[440661]: 2025-10-03 10:18:36.323851094 +0000 UTC m=+0.369543614 container remove 85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 10:18:36 compute-0 systemd[1]: libpod-conmon-85a005e67a6a0c2949f11d7ede870a54b8cf1f5fef7feeb7d3cb9bce3f2af96e.scope: Deactivated successfully.
Oct  3 10:18:36 compute-0 podman[440701]: 2025-10-03 10:18:36.570758127 +0000 UTC m=+0.059868835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:18:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:36 compute-0 podman[440701]: 2025-10-03 10:18:36.693849752 +0000 UTC m=+0.182960420 container create 8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:18:36 compute-0 nova_compute[351685]: 2025-10-03 10:18:36.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:18:36 compute-0 systemd[1]: Started libpod-conmon-8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b.scope.
Oct  3 10:18:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2edd023223f8377e6986814d532944d52d9bf3b4838bca1bf106701435899/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2edd023223f8377e6986814d532944d52d9bf3b4838bca1bf106701435899/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2edd023223f8377e6986814d532944d52d9bf3b4838bca1bf106701435899/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e2edd023223f8377e6986814d532944d52d9bf3b4838bca1bf106701435899/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:36 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 10:18:36 compute-0 podman[440701]: 2025-10-03 10:18:36.923631474 +0000 UTC m=+0.412742152 container init 8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 10:18:36 compute-0 podman[440701]: 2025-10-03 10:18:36.954980672 +0000 UTC m=+0.444091330 container start 8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:18:36 compute-0 podman[440701]: 2025-10-03 10:18:36.965169879 +0000 UTC m=+0.454280527 container attach 8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 10:18:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:37 compute-0 nova_compute[351685]: 2025-10-03 10:18:37.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]: {
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:    "0": [
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:        {
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "devices": [
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "/dev/loop3"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            ],
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_name": "ceph_lv0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_size": "21470642176",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "name": "ceph_lv0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "tags": {
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cluster_name": "ceph",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.crush_device_class": "",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.encrypted": "0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osd_id": "0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.type": "block",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.vdo": "0"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            },
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "type": "block",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "vg_name": "ceph_vg0"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:        }
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:    ],
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:    "1": [
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:        {
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "devices": [
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "/dev/loop4"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            ],
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_name": "ceph_lv1",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_size": "21470642176",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "name": "ceph_lv1",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "tags": {
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cluster_name": "ceph",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.crush_device_class": "",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.encrypted": "0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osd_id": "1",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.type": "block",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.vdo": "0"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            },
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "type": "block",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "vg_name": "ceph_vg1"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:        }
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:    ],
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:    "2": [
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:        {
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "devices": [
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "/dev/loop5"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            ],
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_name": "ceph_lv2",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_size": "21470642176",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "name": "ceph_lv2",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "tags": {
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.cluster_name": "ceph",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.crush_device_class": "",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.encrypted": "0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osd_id": "2",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.type": "block",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:                "ceph.vdo": "0"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            },
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "type": "block",
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:            "vg_name": "ceph_vg2"
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:        }
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]:    ]
Oct  3 10:18:37 compute-0 compassionate_ishizaka[440717]: }
Oct  3 10:18:37 compute-0 systemd[1]: libpod-8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b.scope: Deactivated successfully.
Oct  3 10:18:37 compute-0 podman[440701]: 2025-10-03 10:18:37.806886561 +0000 UTC m=+1.295997229 container died 8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:18:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-23e2edd023223f8377e6986814d532944d52d9bf3b4838bca1bf106701435899-merged.mount: Deactivated successfully.
Oct  3 10:18:38 compute-0 podman[440701]: 2025-10-03 10:18:38.385610715 +0000 UTC m=+1.874721383 container remove 8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:18:38 compute-0 systemd[1]: libpod-conmon-8a72b73483c689c3f877f735636bcfaf1f9f75f9fe0be6e652a94220756dd54b.scope: Deactivated successfully.
Oct  3 10:18:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:39 compute-0 podman[440875]: 2025-10-03 10:18:39.331484264 +0000 UTC m=+0.056621831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:18:39 compute-0 podman[440875]: 2025-10-03 10:18:39.485655057 +0000 UTC m=+0.210792564 container create 10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 10:18:39 compute-0 systemd[1]: Started libpod-conmon-10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03.scope.
Oct  3 10:18:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:18:39 compute-0 podman[440875]: 2025-10-03 10:18:39.993819052 +0000 UTC m=+0.718956569 container init 10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:18:40 compute-0 podman[440875]: 2025-10-03 10:18:40.003060389 +0000 UTC m=+0.728197856 container start 10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:18:40 compute-0 flamboyant_archimedes[440913]: 167 167
Oct  3 10:18:40 compute-0 systemd[1]: libpod-10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03.scope: Deactivated successfully.
Oct  3 10:18:40 compute-0 podman[440875]: 2025-10-03 10:18:40.058078137 +0000 UTC m=+0.783215634 container attach 10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 10:18:40 compute-0 podman[440875]: 2025-10-03 10:18:40.058885043 +0000 UTC m=+0.784022510 container died 10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-71c059ebb5c712d419e84d1bc899e311380963dcccb043e001b8fcec98643e8a-merged.mount: Deactivated successfully.
Oct  3 10:18:40 compute-0 podman[440875]: 2025-10-03 10:18:40.714151205 +0000 UTC m=+1.439288672 container remove 10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:18:40 compute-0 podman[440888]: 2025-10-03 10:18:40.717996599 +0000 UTC m=+1.168792082 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:18:40 compute-0 podman[440889]: 2025-10-03 10:18:40.721695077 +0000 UTC m=+1.172489950 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container)
Oct  3 10:18:40 compute-0 systemd[1]: libpod-conmon-10c585fb96268417a30dc3695965b2b1115df4e40aaebca530439a4ff2ce5f03.scope: Deactivated successfully.
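
The block above is one short-lived cephadm exec: podman resolves the quay.io/ceph/ceph image by digest, then container flamboyant_archimedes goes create -> init -> start -> attach -> died -> remove in about 1.5 s, with transient systemd scopes (libpod-... and libpod-conmon-...) opened and closed around it. The "167 167" it prints is consistent with cephadm probing the ceph uid/gid inside the image, though the exact command is not logged. A minimal sketch for watching such one-shot containers, assuming root podman and that `podman events --format json` emits one JSON object per line (field names hedged via .get):

    import json
    import subprocess

    # Stream podman lifecycle events for ceph images; Ctrl-C to stop.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])
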
Oct  3 10:18:41 compute-0 podman[440953]: 2025-10-03 10:18:40.932464369 +0000 UTC m=+0.037924089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:18:41 compute-0 podman[440953]: 2025-10-03 10:18:41.161534609 +0000 UTC m=+0.266994319 container create 4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euler, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:18:41 compute-0 systemd[1]: Started libpod-conmon-4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304.scope.
Oct  3 10:18:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eac8614d085b06bc13c630665cdc72b26cd6d0605112eee77b5bca7e922634f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eac8614d085b06bc13c630665cdc72b26cd6d0605112eee77b5bca7e922634f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eac8614d085b06bc13c630665cdc72b26cd6d0605112eee77b5bca7e922634f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eac8614d085b06bc13c630665cdc72b26cd6d0605112eee77b5bca7e922634f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
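
The four kernel lines above are xfs noting that these overlay-backed bind mounts sit on a filesystem created without bigtime support, so inode timestamps are 32-bit and cap at 0x7fffffff seconds. A one-liner to confirm what that cap means:

    import datetime

    # 0x7fffffff seconds after the Unix epoch -- the limit named in the log.
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
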
Oct  3 10:18:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:41 compute-0 podman[440953]: 2025-10-03 10:18:41.380169423 +0000 UTC m=+0.485629153 container init 4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euler, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:18:41 compute-0 podman[440953]: 2025-10-03 10:18:41.392392676 +0000 UTC m=+0.497852386 container start 4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:18:41 compute-0 podman[440953]: 2025-10-03 10:18:41.4547986 +0000 UTC m=+0.560258330 container attach 4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:18:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:18:41.607 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:18:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:18:41.609 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:18:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:18:41.610 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
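
These three ovn_metadata_agent lines are oslo.concurrency's standard lock trace around ProcessMonitor._check_child_processes: acquiring, acquired (waited 0.002s), released (held 0.001s). A minimal sketch of the same pattern, assuming oslo.concurrency is installed; check_child_processes here is a stand-in for the monitor's real work:

    from oslo_concurrency import lockutils

    def check_child_processes():
        """Stand-in for the real work done under the lock."""

    # Emits the same acquiring/acquired/released DEBUG trace when
    # oslo logging is configured at DEBUG level.
    with lockutils.lock("_check_child_processes"):
        check_child_processes()
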
Oct  3 10:18:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:41 compute-0 nova_compute[351685]: 2025-10-03 10:18:41.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:42 compute-0 nova_compute[351685]: 2025-10-03 10:18:42.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:42 compute-0 beautiful_euler[440969]: {
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "osd_id": 1,
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "type": "bluestore"
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:    },
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "osd_id": 2,
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "type": "bluestore"
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:    },
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "osd_id": 0,
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:        "type": "bluestore"
Oct  3 10:18:42 compute-0 beautiful_euler[440969]:    }
Oct  3 10:18:42 compute-0 beautiful_euler[440969]: }
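
The JSON printed by beautiful_euler is an OSD inventory for this host: three bluestore OSDs (osd.0, osd.1, osd.2) on /dev/mapper/ceph_vg{0,1,2}-ceph_lv{0,1,2}, all in cluster fsid 9b4e8c9a-5555-5510-a631-4742a1182561, keyed by osd_uuid. The shape matches what `ceph-volume lvm list --format json` reports, which cephadm collects via exactly this kind of one-shot container (an inference from the output shape, not stated in the log). A sketch that parses one such entry:

    import json

    # One entry trimmed from the block above; real ceph-volume output
    # may carry additional keys per OSD.
    raw = """
    {
      "16cef594-0067-4499-9298-5d83edf70190": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
        "type": "bluestore"
      }
    }
    """
    for meta in sorted(json.loads(raw).values(), key=lambda m: m["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")
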
Oct  3 10:18:42 compute-0 systemd[1]: libpod-4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304.scope: Deactivated successfully.
Oct  3 10:18:42 compute-0 systemd[1]: libpod-4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304.scope: Consumed 1.067s CPU time.
Oct  3 10:18:42 compute-0 podman[441002]: 2025-10-03 10:18:42.550075229 +0000 UTC m=+0.049750889 container died 4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euler, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eac8614d085b06bc13c630665cdc72b26cd6d0605112eee77b5bca7e922634f-merged.mount: Deactivated successfully.
Oct  3 10:18:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:43 compute-0 podman[441002]: 2025-10-03 10:18:43.722752014 +0000 UTC m=+1.222427634 container remove 4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euler, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:18:43 compute-0 systemd[1]: libpod-conmon-4d719d056bc11104cff6449b77864beb6074fe481553f112475916780bcd4304.scope: Deactivated successfully.
Oct  3 10:18:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:18:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:18:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 71d77020-549b-44aa-aa85-8d7f8f56d6ba does not exist
Oct  3 10:18:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 87361766-3521-453c-9a2a-9723f62eda15 does not exist
Oct  3 10:18:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:18:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:18:46
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'images']
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
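
The balancer lines record one automatic upmap pass: plan auto_2025-10-03_10:18:46 over the eleven listed pools with a 5% max-misplaced budget, ending with 0 of up to 10 candidate changes prepared, i.e. PGs are already evenly mapped. A sketch for querying the same state from a client with an admin keyring; the command is real, the JSON keys are hedged:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    # Keys such as 'mode' and 'active' are expected on reef; others may vary.
    print(status.get("mode"), "active:", status.get("active"))
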
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:18:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:18:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:46 compute-0 nova_compute[351685]: 2025-10-03 10:18:46.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:47 compute-0 nova_compute[351685]: 2025-10-03 10:18:47.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:48 compute-0 nova_compute[351685]: 2025-10-03 10:18:48.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:48 compute-0 nova_compute[351685]: 2025-10-03 10:18:48.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:18:48 compute-0 nova_compute[351685]: 2025-10-03 10:18:48.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:18:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:49 compute-0 nova_compute[351685]: 2025-10-03 10:18:49.883 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:18:49 compute-0 nova_compute[351685]: 2025-10-03 10:18:49.883 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:18:49 compute-0 nova_compute[351685]: 2025-10-03 10:18:49.884 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:18:49 compute-0 nova_compute[351685]: 2025-10-03 10:18:49.884 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:18:50 compute-0 podman[441069]: 2025-10-03 10:18:50.844546432 +0000 UTC m=+0.097780373 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:18:50 compute-0 podman[441068]: 2025-10-03 10:18:50.851149474 +0000 UTC m=+0.108521137 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350)
Oct  3 10:18:50 compute-0 podman[441070]: 2025-10-03 10:18:50.884501895 +0000 UTC m=+0.131093312 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
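
Each health_status event above (ceilometer_agent_compute, openstack_network_exporter, ovn_controller, and podman_exporter/kepler at 10:18:40) reports health_status=healthy with health_failing_streak=0; the embedded config_data shows the check each container runs (its healthcheck 'test') and the host directory it expects mounted at /openstack. To read the same state on demand, a sketch assuming the containers exist locally and expose podman's usual .State.Health fields (Status, FailingStreak, Log):

    import json
    import subprocess

    for name in ("ceilometer_agent_compute",
                 "openstack_network_exporter", "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            check=True, capture_output=True, text=True).stdout
        health = json.loads(out)
        print(name, health["Status"], "failing streak:", health["FailingStreak"])
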
Oct  3 10:18:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:51 compute-0 nova_compute[351685]: 2025-10-03 10:18:51.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:52 compute-0 nova_compute[351685]: 2025-10-03 10:18:52.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.274 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
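
The network_info written back above describes one OVN-bound OVS port for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a: MAC fa:16:3e:a9:40:5c, fixed IP 192.168.0.158 with floating IP 192.168.122.250, on the tunneled "private" network (MTU 1442, bridge br-int, devname tapa8897fbc-9f). A sketch that walks such a VIF dict, trimmed to just the fields it reads:

    # VIF dict trimmed from the network_info logged above.
    vif = {
        "address": "fa:16:3e:a9:40:5c",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.158",
            "floating_ips": [{"address": "192.168.122.250"}],
        }]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["address"], ip["address"], "->", ", ".join(floats))
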
Oct  3 10:18:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.423 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.423 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.424 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.425 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.425 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.426 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.426 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.792 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.849 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.850 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.850 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.850 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:18:53 compute-0 nova_compute[351685]: 2025-10-03 10:18:53.851 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:18:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:18:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341540631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:18:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:18:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341540631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:18:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:18:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985419069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:18:54 compute-0 nova_compute[351685]: 2025-10-03 10:18:54.285 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
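
The resource audit shells out to `ceph df --format=json` under the openstack client id (0.434 s here); the matching mon_command dispatches appear in the ceph-mon audit channel just above. A minimal sketch of the same call, pulling cluster free space from the result, assuming the logged conf/id paths and the usual top-level 'stats' keys (total_bytes, total_avail_bytes):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", round(stats["total_avail_bytes"] / 2**30, 1))
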
Oct  3 10:18:54 compute-0 nova_compute[351685]: 2025-10-03 10:18:54.534 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:18:54 compute-0 nova_compute[351685]: 2025-10-03 10:18:54.535 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:18:54 compute-0 nova_compute[351685]: 2025-10-03 10:18:54.536 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:18:54 compute-0 nova_compute[351685]: 2025-10-03 10:18:54.905 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:18:54 compute-0 nova_compute[351685]: 2025-10-03 10:18:54.906 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3853MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:18:54 compute-0 nova_compute[351685]: 2025-10-03 10:18:54.906 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:18:54 compute-0 nova_compute[351685]: 2025-10-03 10:18:54.907 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.169 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.169 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.170 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.223 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:18:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
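
The pg_autoscaler's raw targets above are exactly used_ratio x bias x (target PGs per OSD x OSD count), assuming the default mon_target_pg_per_osd=100 and the three OSDs inventoried earlier: 0.000551649390343166 x 300 ~ 0.1654948 for 'vms', and 5.087256625643029e-07 x 4.0 x 300 ~ 0.0006104708 for 'cephfs.cephfs.meta' (bias 4.0). The 64411926528 in each effective_target_ratio line is the raw capacity in bytes (~60 GiB, matching the pgmap avail). The "quantized to" step then applies power-of-two rounding and per-pool floors (e.g. pg_num_min), which this sketch leaves out:

    # Reproduce the autoscaler's raw pg targets from the logged inputs.
    # Assumes default mon_target_pg_per_osd=100 and the 3 OSDs above.
    def raw_pg_target(used_ratio, bias, osds=3, target_per_osd=100):
        return used_ratio * bias * osds * target_per_osd

    print(raw_pg_target(0.000551649390343166, 1.0))   # 'vms'         ~0.1654948
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.meta' ~0.0006104708
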
Oct  3 10:18:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:18:55 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761846630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.732 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.742 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.769 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
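
The inventory nova reports to placement converts to schedulable capacity as (total - reserved) x allocation_ratio: 32 VCPU (8 x 4.0), 7167 MB RAM (7679 - 512), and ~52.2 DISK_GB ((59 - 1) x 0.9). A check against the dict logged above:

    # Placement capacity = (total - reserved) * allocation_ratio,
    # using the inventory data from the log line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
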
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.771 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:18:55 compute-0 nova_compute[351685]: 2025-10-03 10:18:55.771 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:18:55 compute-0 podman[441170]: 2025-10-03 10:18:55.820202428 +0000 UTC m=+0.076847689 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  3 10:18:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:18:56 compute-0 nova_compute[351685]: 2025-10-03 10:18:56.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:57 compute-0 nova_compute[351685]: 2025-10-03 10:18:57.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:18:57 compute-0 nova_compute[351685]: 2025-10-03 10:18:57.709 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:57 compute-0 nova_compute[351685]: 2025-10-03 10:18:57.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:18:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:18:59 compute-0 podman[157165]: time="2025-10-03T10:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:18:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:18:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9052 "" "Go-http-client/1.1"
Oct  3 10:18:59 compute-0 podman[441188]: 2025-10-03 10:18:59.822876774 +0000 UTC m=+0.080913091 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:18:59 compute-0 podman[441189]: 2025-10-03 10:18:59.834684253 +0000 UTC m=+0.075918320 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible)
Oct  3 10:18:59 compute-0 podman[441190]: 2025-10-03 10:18:59.850103619 +0000 UTC m=+0.093779124 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct  3 10:19:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:01 compute-0 openstack_network_exporter[367524]: ERROR   10:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:19:01 compute-0 openstack_network_exporter[367524]: ERROR   10:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:19:01 compute-0 openstack_network_exporter[367524]: ERROR   10:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:19:01 compute-0 openstack_network_exporter[367524]: ERROR   10:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:19:01 compute-0 openstack_network_exporter[367524]: ERROR   10:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:19:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:01 compute-0 nova_compute[351685]: 2025-10-03 10:19:01.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:02 compute-0 nova_compute[351685]: 2025-10-03 10:19:02.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:05 compute-0 podman[441250]: 2025-10-03 10:19:05.87658571 +0000 UTC m=+0.125632667 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 10:19:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:06 compute-0 nova_compute[351685]: 2025-10-03 10:19:06.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:07 compute-0 nova_compute[351685]: 2025-10-03 10:19:07.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:11 compute-0 nova_compute[351685]: 2025-10-03 10:19:11.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:11 compute-0 podman[441268]: 2025-10-03 10:19:11.819945186 +0000 UTC m=+0.083864765 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:19:11 compute-0 podman[441269]: 2025-10-03 10:19:11.834731882 +0000 UTC m=+0.092632017 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, name=ubi9, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, distribution-scope=public, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:19:12 compute-0 nova_compute[351685]: 2025-10-03 10:19:12.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:19:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:16 compute-0 nova_compute[351685]: 2025-10-03 10:19:16.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:17 compute-0 nova_compute[351685]: 2025-10-03 10:19:17.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:21 compute-0 nova_compute[351685]: 2025-10-03 10:19:21.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:21 compute-0 podman[441310]: 2025-10-03 10:19:21.863806949 +0000 UTC m=+0.109842380 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, name=ubi9-minimal)
Oct  3 10:19:21 compute-0 podman[441311]: 2025-10-03 10:19:21.864013235 +0000 UTC m=+0.109820188 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct  3 10:19:21 compute-0 podman[441312]: 2025-10-03 10:19:21.897285974 +0000 UTC m=+0.144158442 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller)
Oct  3 10:19:22 compute-0 nova_compute[351685]: 2025-10-03 10:19:22.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:26 compute-0 nova_compute[351685]: 2025-10-03 10:19:26.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:26 compute-0 podman[441374]: 2025-10-03 10:19:26.841123902 +0000 UTC m=+0.091470721 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:19:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:27 compute-0 nova_compute[351685]: 2025-10-03 10:19:27.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:29 compute-0 podman[157165]: time="2025-10-03T10:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:19:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:19:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9052 "" "Go-http-client/1.1"
Oct  3 10:19:30 compute-0 podman[441393]: 2025-10-03 10:19:30.81674522 +0000 UTC m=+0.080137496 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:19:30 compute-0 podman[441394]: 2025-10-03 10:19:30.828560349 +0000 UTC m=+0.084130304 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 10:19:30 compute-0 podman[441392]: 2025-10-03 10:19:30.831215755 +0000 UTC m=+0.091566023 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:19:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:31 compute-0 openstack_network_exporter[367524]: ERROR   10:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:19:31 compute-0 openstack_network_exporter[367524]: ERROR   10:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:19:31 compute-0 openstack_network_exporter[367524]: ERROR   10:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:19:31 compute-0 openstack_network_exporter[367524]: ERROR   10:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:19:31 compute-0 openstack_network_exporter[367524]: ERROR   10:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:19:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:31 compute-0 nova_compute[351685]: 2025-10-03 10:19:31.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:32 compute-0 nova_compute[351685]: 2025-10-03 10:19:32.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:36 compute-0 nova_compute[351685]: 2025-10-03 10:19:36.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:36 compute-0 podman[441450]: 2025-10-03 10:19:36.837526159 +0000 UTC m=+0.098446164 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 10:19:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:37 compute-0 nova_compute[351685]: 2025-10-03 10:19:37.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.886 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.887 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.894 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:19:40.894861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.899 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:19:40.900851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.901 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.902 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.902 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:19:40.902466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.902 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.922 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.922 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.923 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.923 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
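
The three disk.device.capacity samples are per-device byte counts: 1073741824 is exactly 1 GiB, matching the flavor's 1 GB root disk and 1 GB ephemeral disk, and the 485376-byte device is presumably the config drive (an assumption; these log lines do not name the devices). A quick check:

    GiB = 1024 ** 3
    assert 1073741824 == GiB       # root disk, flavor 'disk': 1
    assert 1073741824 == GiB       # ephemeral disk, flavor 'ephemeral': 1
    print(485376 / 1024, "KiB")    # 474.0 KiB -- the small third device
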
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.923 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.923 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:19:40.924125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.924 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.969 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.970 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.971 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.971 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
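
disk.device.read.bytes is a cumulative counter, so a throughput figure only falls out of the difference between two polling cycles. Sketch of that derivation (this cycle's first sample is real; the previous-cycle reading and timestamps are invented for illustration):

    now_bytes, now_t = 23308800, 40.969      # sample above, seconds on some clock
    prev_bytes, prev_t = 23000000, 10.969    # assumed earlier cycle
    rate = (now_bytes - prev_bytes) / (now_t - prev_t)
    print(f"{rate:.0f} B/s")                 # ~10293 B/s under these assumptions
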
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.972 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.973 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.973 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.974 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:19:40.973136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.974 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.975 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.976 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.976 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.976 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.976 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.977 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:19:40.976412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
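
disk.device.read.latency above is cumulative time in nanoseconds (libvirt's block-stats convention), so dividing each device's latency sample by its matching disk.device.read.requests sample gives a mean per-request read latency:

    latency_ns = [1351272306, 240576853, 113683071]  # latency samples above
    requests   = [840, 173, 109]                     # request samples above
    for ns, n in zip(latency_ns, requests):
        print(f"{ns / n / 1e6:.2f} ms/request")      # ~1.61, ~1.39, ~1.04 ms
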
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.979 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.979 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:19:40.979730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.983 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:19:40.983312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.986 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.986 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.987 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.988 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:19:40.987216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.990 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.990 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:40.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:19:40.990618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
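
The power.state volume of 1 is libvirt's domain state code for a running domain, consistent with the discovery payload's 'OS-EXT-STS:vm_state': 'running'. The virDomainState mapping:

    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])   # running
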
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.016 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.016 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:19:41.013402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
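
Two different outcomes sit next to each other here: network.incoming.bytes.delta produced a sample of 0 (a .delta meter reports the change in the cumulative counter since the previous cycle), while network.incoming.bytes.rate was skipped entirely because its discovery returned no new resources this cycle. The delta arithmetic, sketched (the previous-cycle reading is an assumption):

    cumulative_now  = 2856   # network.incoming.bytes, sampled later in this cycle
    cumulative_prev = 2856   # assumed unchanged since the last poll
    print(cumulative_now - cumulative_prev)   # 0, matching the .delta sample
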
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:19:41.015044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:19:41.016551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:19:41.017867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.018 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
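
Note that disk.root.size, like disk.ephemeral.size just above, finishes without a _stats_to_sample line: these meters come from flavor metadata rather than libvirt I/O stats, so their values would simply echo the discovery payload (an inference from the log shape, not a statement about ceilometer internals):

    flavor = {'disk': 1, 'ephemeral': 1}   # from the instance data at the top of the cycle
    print(flavor['disk'], "GB root,", flavor['ephemeral'], "GB ephemeral")
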
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:19:41.018869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:19:41.019967) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 47180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:19:41.020855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:19:41.021664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
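
The cpu sample of 47180000000 is cumulative CPU time in nanoseconds, i.e. this single-vCPU instance has consumed about 47.18 s of CPU so far; a utilization percentage needs two cycles. Sketch (the previous reading and interval are assumptions):

    cpu_ns, vcpus = 47_180_000_000, 1        # sample above; 'vcpus': 1 from the flavor
    print(cpu_ns / 1e9, "s of CPU time")
    prev_ns, interval_s = 46_000_000_000, 300
    util = (cpu_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util:.1f}%")                    # ~0.4% under these assumed numbers
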
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:19:41.022973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:19:41.024784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:19:41.027296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:19:41.029667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
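The meters above all follow the same trace: a discovery pass for the pollster's source (local_instances), a coordination check (the hashrings stay [None] because no coordination group is configured, so this agent polls everything itself), a heartbeat update, conversion of the hypervisor stats into a sample, and a completion line. A minimal sketch of that cycle, assuming hypothetical names (run_pollster_cycle, build_sample and owned_by_me are illustrative, not the actual ceilometer.polling.manager API):

    import datetime

    def owned_by_me(resource, hashrings):
        # Placeholder partition check; a coordinated agent hashes the
        # resource id into its hashring. Illustrative only.
        return True

    def run_pollster_cycle(pollster, discover, hashrings=None):
        resources = discover()          # discovery via local_instances
        if not resources:
            return []                   # "Skip pollster ..., no new resources found this cycle"
        if hashrings is not None:       # coordination check; None in this log
            resources = [r for r in resources if owned_by_me(r, hashrings)]
        # heartbeat update, recorded per meter ("Updated heartbeat for ...")
        pollster.last_heartbeat = datetime.datetime.now(datetime.timezone.utc)
        # _stats_to_sample: one sample per resource, e.g. memory.usage volume 48.84765625
        return [pollster.build_sample(r) for r in resources]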
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.031 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.032 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.032 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:19:41.032498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.034 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.034 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.035 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:19:41.035422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.037 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:19:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:19:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:19:41.608 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:19:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:19:41.609 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:19:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:19:41.610 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
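The three lockutils lines above (acquiring, acquired after waiting 0.001s, released after holding 0.001s) are the standard oslo.concurrency pattern around the child-process check. A minimal sketch that produces the same triplet, assuming oslo.concurrency is installed; the class body here is illustrative, not neutron's actual implementation:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # Runs under the named lock; the "waited"/"held" DEBUG lines
            # above are emitted by the lockutils wrapper itself.
            pass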
Oct  3 10:19:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:41 compute-0 nova_compute[351685]: 2025-10-03 10:19:41.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:42 compute-0 nova_compute[351685]: 2025-10-03 10:19:42.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:42 compute-0 podman[441470]: 2025-10-03 10:19:42.849534623 +0000 UTC m=+0.092677469 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:19:42 compute-0 podman[441471]: 2025-10-03 10:19:42.858937905 +0000 UTC m=+0.102629438 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
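Both health_status=healthy events above come from podman's healthcheck timer running each container's configured test (/openstack/healthcheck ...). A minimal sketch of triggering the same check by hand, assuming the podman CLI is on PATH; the container names are taken from the log:

    import subprocess

    def is_healthy(container: str) -> bool:
        # `podman healthcheck run` exits 0 when the container's test passes.
        result = subprocess.run(["podman", "healthcheck", "run", container],
                                capture_output=True, text=True)
        return result.returncode == 0

    for name in ("podman_exporter", "kepler"):
        print(name, "healthy" if is_healthy(name) else "unhealthy")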
Oct  3 10:19:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:19:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:19:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:19:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:19:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:19:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:19:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 228805f1-5472-4399-9696-a3b422cbf883 does not exist
Oct  3 10:19:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 432b7e9a-6fc4-4e65-8d64-e3a9ef1dc38b does not exist
Oct  3 10:19:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cfc76d0e-ec36-4297-841c-08542a7bfb81 does not exist
Oct  3 10:19:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:19:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:19:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:19:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:19:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:19:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:19:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:19:46
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'backups', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
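This is one complete balancer pass: an upmap-mode plan over the eleven pools with a 5% misplaced-PG ceiling, ending in "prepared 0/10 changes", i.e. no upmap entries were needed out of a maximum of ten per pass. A minimal sketch for inspecting the same state from a script, assuming the ceph CLI and an admin keyring are available on the host (--format is the generic ceph CLI option, not balancer-specific):

    import json
    import subprocess

    def balancer_status() -> dict:
        out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    status = balancer_status()
    print(status.get("mode"), "active:", status.get("active"))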
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:19:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:19:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:19:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:19:46 compute-0 podman[441783]: 2025-10-03 10:19:46.728472943 +0000 UTC m=+0.102612748 container create fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatterjee, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:19:46 compute-0 podman[441783]: 2025-10-03 10:19:46.655677975 +0000 UTC m=+0.029817800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:19:46 compute-0 nova_compute[351685]: 2025-10-03 10:19:46.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:46 compute-0 systemd[1]: Started libpod-conmon-fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da.scope.
Oct  3 10:19:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:19:46 compute-0 podman[441783]: 2025-10-03 10:19:46.902009939 +0000 UTC m=+0.276149764 container init fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:19:46 compute-0 podman[441783]: 2025-10-03 10:19:46.91234333 +0000 UTC m=+0.286483135 container start fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatterjee, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 10:19:46 compute-0 quirky_chatterjee[441799]: 167 167
Oct  3 10:19:46 compute-0 systemd[1]: libpod-fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da.scope: Deactivated successfully.
Oct  3 10:19:46 compute-0 podman[441783]: 2025-10-03 10:19:46.944835605 +0000 UTC m=+0.318975420 container attach fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:19:46 compute-0 podman[441783]: 2025-10-03 10:19:46.94593132 +0000 UTC m=+0.320071135 container died fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 10:19:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b9605fa9d03dba66152e5fd91745b5e72ee6a75e2d18d255948deef5a3c889b-merged.mount: Deactivated successfully.
Oct  3 10:19:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:47 compute-0 nova_compute[351685]: 2025-10-03 10:19:47.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:47 compute-0 podman[441783]: 2025-10-03 10:19:47.492398297 +0000 UTC m=+0.866538102 container remove fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_chatterjee, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:19:47 compute-0 systemd[1]: libpod-conmon-fc80f0dcc913f6919df32efb80fb3ed0a75dc14f01d4c09127c3b565a0ce78da.scope: Deactivated successfully.
Oct  3 10:19:47 compute-0 podman[441825]: 2025-10-03 10:19:47.697730804 +0000 UTC m=+0.036381710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:19:47 compute-0 podman[441825]: 2025-10-03 10:19:47.857779695 +0000 UTC m=+0.196430581 container create b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:19:47 compute-0 systemd[1]: Started libpod-conmon-b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3.scope.
Oct  3 10:19:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d59ac92daf9867bb5678dfe21c6235cc2a3e2045bd0f8184d922ea7f1be35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d59ac92daf9867bb5678dfe21c6235cc2a3e2045bd0f8184d922ea7f1be35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d59ac92daf9867bb5678dfe21c6235cc2a3e2045bd0f8184d922ea7f1be35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d59ac92daf9867bb5678dfe21c6235cc2a3e2045bd0f8184d922ea7f1be35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/294d59ac92daf9867bb5678dfe21c6235cc2a3e2045bd0f8184d922ea7f1be35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
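The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings mean these XFS filesystems store inode timestamps as 32-bit signed seconds (the bigtime feature is off), which overflow at 0x7fffffff seconds after the Unix epoch. The arithmetic, as a worked example:

    import datetime

    limit = 0x7fffffff  # 2147483647 seconds after 1970-01-01T00:00:00Z
    print(datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00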
Oct  3 10:19:48 compute-0 podman[441825]: 2025-10-03 10:19:48.008215439 +0000 UTC m=+0.346866345 container init b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:19:48 compute-0 podman[441825]: 2025-10-03 10:19:48.028223272 +0000 UTC m=+0.366874148 container start b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:19:48 compute-0 podman[441825]: 2025-10-03 10:19:48.221639355 +0000 UTC m=+0.560290321 container attach b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:19:48 compute-0 nova_compute[351685]: 2025-10-03 10:19:48.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:19:48 compute-0 nova_compute[351685]: 2025-10-03 10:19:48.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:19:48 compute-0 nova_compute[351685]: 2025-10-03 10:19:48.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:19:48 compute-0 nova_compute[351685]: 2025-10-03 10:19:48.933 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:19:48 compute-0 nova_compute[351685]: 2025-10-03 10:19:48.934 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:19:48 compute-0 nova_compute[351685]: 2025-10-03 10:19:48.934 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:19:48 compute-0 nova_compute[351685]: 2025-10-03 10:19:48.934 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:19:49 compute-0 elegant_hopper[441841]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:19:49 compute-0 elegant_hopper[441841]: --> relative data size: 1.0
Oct  3 10:19:49 compute-0 elegant_hopper[441841]: --> All data devices are unavailable
Oct  3 10:19:49 compute-0 systemd[1]: libpod-b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3.scope: Deactivated successfully.
Oct  3 10:19:49 compute-0 systemd[1]: libpod-b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3.scope: Consumed 1.069s CPU time.
Oct  3 10:19:49 compute-0 podman[441870]: 2025-10-03 10:19:49.206802937 +0000 UTC m=+0.031944058 container died b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:19:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-294d59ac92daf9867bb5678dfe21c6235cc2a3e2045bd0f8184d922ea7f1be35-merged.mount: Deactivated successfully.
Oct  3 10:19:49 compute-0 podman[441870]: 2025-10-03 10:19:49.414309063 +0000 UTC m=+0.239450164 container remove b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hopper, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:19:49 compute-0 systemd[1]: libpod-conmon-b57a5878e769bb98be1d4bf8d2370f35a45a27b6ee08c8c5416edc18c6739ac3.scope: Deactivated successfully.
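The short-lived elegant_hopper container above is cephadm running a device probe; its output ("passed data devices: 0 physical, 3 LVM", "All data devices are unavailable") means every candidate device is already consumed by LVM, so no new OSDs get created and the container is torn down. A minimal sketch of the same probe, assuming ceph-volume is installed on the host rather than run through a container:

    import json
    import subprocess

    def available_devices() -> list:
        # `ceph-volume inventory --format json` reports each block device
        # with an "available" flag and any rejection reasons.
        out = subprocess.run(["ceph-volume", "inventory", "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return [d["path"] for d in json.loads(out) if d.get("available")]

    devices = available_devices()
    print(devices if devices else "All data devices are unavailable")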
Oct  3 10:19:50 compute-0 nova_compute[351685]: 2025-10-03 10:19:50.094 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:19:50 compute-0 nova_compute[351685]: 2025-10-03 10:19:50.196 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:19:50 compute-0 nova_compute[351685]: 2025-10-03 10:19:50.197 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:19:50 compute-0 nova_compute[351685]: 2025-10-03 10:19:50.197 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:19:50 compute-0 nova_compute[351685]: 2025-10-03 10:19:50.197 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:19:50 compute-0 nova_compute[351685]: 2025-10-03 10:19:50.198 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
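The req-f389a046 lines are a single pass of nova-compute's periodic task loop: _heal_instance_info_cache refreshes one instance's network info cache under a lock, then _poll_unconfirmed_resizes and _reclaim_queued_deletes run and return immediately (reclaim_instance_interval <= 0). A minimal sketch of the underlying oslo.service pattern, assuming oslo.service and oslo.config are installed; the task body is illustrative, not nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # One instance per pass gets its info cache rebuilt.
            print("healing info cache for", context)

    # Driven by a timer in the service loop, e.g.:
    # Manager().run_periodic_tasks(context=None)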
Oct  3 10:19:50 compute-0 podman[442027]: 2025-10-03 10:19:50.349650743 +0000 UTC m=+0.035319585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:19:50 compute-0 podman[442027]: 2025-10-03 10:19:50.647437331 +0000 UTC m=+0.333106163 container create 77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:19:50 compute-0 nova_compute[351685]: 2025-10-03 10:19:50.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:19:50 compute-0 systemd[1]: Started libpod-conmon-77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9.scope.
Oct  3 10:19:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:19:51 compute-0 podman[442027]: 2025-10-03 10:19:51.12363165 +0000 UTC m=+0.809300522 container init 77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:19:51 compute-0 podman[442027]: 2025-10-03 10:19:51.143494728 +0000 UTC m=+0.829163600 container start 77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brattain, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:19:51 compute-0 condescending_brattain[442043]: 167 167
Oct  3 10:19:51 compute-0 systemd[1]: libpod-77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9.scope: Deactivated successfully.
Oct  3 10:19:51 compute-0 podman[442027]: 2025-10-03 10:19:51.242423776 +0000 UTC m=+0.928092668 container attach 77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:19:51 compute-0 podman[442027]: 2025-10-03 10:19:51.242994685 +0000 UTC m=+0.928663527 container died 77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:19:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0d7198680420bb2fbed52022b0bb9b736c1712e23e06cea6e5f67cbbf8c4bc2-merged.mount: Deactivated successfully.
Oct  3 10:19:51 compute-0 podman[442027]: 2025-10-03 10:19:51.606038629 +0000 UTC m=+1.291707471 container remove 77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_brattain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:19:51 compute-0 systemd[1]: libpod-conmon-77ed784fe11761126075bda8a6f7c24b5838b5640b703deff92a0a8b2d5254d9.scope: Deactivated successfully.
Oct  3 10:19:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:51 compute-0 nova_compute[351685]: 2025-10-03 10:19:51.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:19:51 compute-0 nova_compute[351685]: 2025-10-03 10:19:51.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:19:51 compute-0 podman[442066]: 2025-10-03 10:19:51.819790916 +0000 UTC m=+0.059062499 container create e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:19:51 compute-0 systemd[1]: Started libpod-conmon-e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7.scope.
Oct  3 10:19:51 compute-0 podman[442066]: 2025-10-03 10:19:51.80093699 +0000 UTC m=+0.040208573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:19:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472aa13e5d09fb8229da7858a3be553eff21f2d863cf4267bfdadbaa7f46764/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472aa13e5d09fb8229da7858a3be553eff21f2d863cf4267bfdadbaa7f46764/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472aa13e5d09fb8229da7858a3be553eff21f2d863cf4267bfdadbaa7f46764/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d472aa13e5d09fb8229da7858a3be553eff21f2d863cf4267bfdadbaa7f46764/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:51 compute-0 podman[442066]: 2025-10-03 10:19:51.926619718 +0000 UTC m=+0.165891321 container init e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:19:51 compute-0 podman[442066]: 2025-10-03 10:19:51.942447636 +0000 UTC m=+0.181719219 container start e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bouman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 10:19:51 compute-0 podman[442066]: 2025-10-03 10:19:51.947187739 +0000 UTC m=+0.186459502 container attach e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:19:52 compute-0 podman[442085]: 2025-10-03 10:19:52.01663663 +0000 UTC m=+0.080899710 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, managed_by=edpm_ansible, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Oct  3 10:19:52 compute-0 podman[442084]: 2025-10-03 10:19:52.021167876 +0000 UTC m=+0.082093148 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm)
Oct  3 10:19:52 compute-0 podman[442087]: 2025-10-03 10:19:52.053094942 +0000 UTC m=+0.110673978 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:19:52 compute-0 nova_compute[351685]: 2025-10-03 10:19:52.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]: {
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:    "0": [
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:        {
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "devices": [
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "/dev/loop3"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            ],
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_name": "ceph_lv0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_size": "21470642176",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "name": "ceph_lv0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "tags": {
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cluster_name": "ceph",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.crush_device_class": "",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.encrypted": "0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osd_id": "0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.type": "block",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.vdo": "0"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            },
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "type": "block",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "vg_name": "ceph_vg0"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:        }
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:    ],
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:    "1": [
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:        {
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "devices": [
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "/dev/loop4"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            ],
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_name": "ceph_lv1",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_size": "21470642176",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "name": "ceph_lv1",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "tags": {
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cluster_name": "ceph",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.crush_device_class": "",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.encrypted": "0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osd_id": "1",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.type": "block",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.vdo": "0"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            },
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "type": "block",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "vg_name": "ceph_vg1"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:        }
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:    ],
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:    "2": [
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:        {
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "devices": [
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "/dev/loop5"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            ],
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_name": "ceph_lv2",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_size": "21470642176",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "name": "ceph_lv2",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "tags": {
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.cluster_name": "ceph",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.crush_device_class": "",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.encrypted": "0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osd_id": "2",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.type": "block",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:                "ceph.vdo": "0"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            },
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "type": "block",
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:            "vg_name": "ceph_vg2"
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:        }
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]:    ]
Oct  3 10:19:52 compute-0 quizzical_bouman[442081]: }
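
The JSON document printed by quizzical_bouman above is keyed by OSD id ("0", "1", "2") and has the shape of ceph-volume lvm list --format json output; short-lived ceph containers like this one are how cephadm refreshes device state. Each entry carries the same metadata twice, flattened in lv_tags and structured under tags. A minimal sketch of pulling the OSD-to-device mapping back out, assuming the block has been saved to lvm_list.json (the file name and the parse_lv_tags helper are illustrative, not from the log):

    import json

    def parse_lv_tags(lv_tags):
        # "ceph.osd_id=0,ceph.type=block,..." -> {"ceph.osd_id": "0", ...};
        # rebuilds the structured "tags" object from the flattened string
        return dict(item.split("=", 1) for item in lv_tags.split(",") if item)

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]  # parse_lv_tags(lv["lv_tags"]) yields the same mapping
            print(osd_id, lv["lv_path"], lv["devices"][0], tags["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 25b10821-47d4-4e0b-9b6d-d16a0463c4d0
    # 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4 16cef594-0067-4499-9298-5d83edf70190
    # 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5 19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0
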
Oct  3 10:19:52 compute-0 systemd[1]: libpod-e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7.scope: Deactivated successfully.
Oct  3 10:19:52 compute-0 conmon[442081]: conmon e04b6bf7337946c81181 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7.scope/container/memory.events
Oct  3 10:19:52 compute-0 podman[442153]: 2025-10-03 10:19:52.860973867 +0000 UTC m=+0.043442787 container died e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d472aa13e5d09fb8229da7858a3be553eff21f2d863cf4267bfdadbaa7f46764-merged.mount: Deactivated successfully.
Oct  3 10:19:52 compute-0 podman[442153]: 2025-10-03 10:19:52.947897889 +0000 UTC m=+0.130366769 container remove e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 10:19:52 compute-0 systemd[1]: libpod-conmon-e04b6bf7337946c81181765f8e823551a696ded9181ad592a2345fe2a86501f7.scope: Deactivated successfully.
Oct  3 10:19:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:53 compute-0 nova_compute[351685]: 2025-10-03 10:19:53.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:19:53 compute-0 podman[442303]: 2025-10-03 10:19:53.854038612 +0000 UTC m=+0.067757988 container create e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:19:53 compute-0 systemd[1]: Started libpod-conmon-e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525.scope.
Oct  3 10:19:53 compute-0 podman[442303]: 2025-10-03 10:19:53.829985289 +0000 UTC m=+0.043704695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:19:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:19:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4059064104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:19:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:19:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4059064104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:19:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:19:53 compute-0 podman[442303]: 2025-10-03 10:19:53.972952352 +0000 UTC m=+0.186671778 container init e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:19:53 compute-0 podman[442303]: 2025-10-03 10:19:53.986967823 +0000 UTC m=+0.200687199 container start e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:19:53 compute-0 podman[442303]: 2025-10-03 10:19:53.994753022 +0000 UTC m=+0.208472418 container attach e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:19:53 compute-0 musing_ardinghelli[442318]: 167 167
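
The bare "167 167" printed by musing_ardinghelli is a uid/gid pair: 167 is the ceph user and group id in Red Hat-family ceph images. This is consistent with cephadm probing ownership inside the image before writing daemon files on the host; a hypothetical reconstruction of such a probe (the stat target and exact invocation are assumptions):

    import subprocess

    # Ask the ceph image who owns its state directory; a "167 167" answer
    # means host files created for daemons should be chowned to uid/gid 167.
    out = subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: 167 167
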
Oct  3 10:19:53 compute-0 systemd[1]: libpod-e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525.scope: Deactivated successfully.
Oct  3 10:19:53 compute-0 conmon[442318]: conmon e2bc8709ee763804a46d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525.scope/container/memory.events
Oct  3 10:19:54 compute-0 podman[442303]: 2025-10-03 10:19:53.99998334 +0000 UTC m=+0.213702766 container died e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a086415c4c3873262b93c0e9049e8d5771675b9bcf5a0edc02d001e9a6166da-merged.mount: Deactivated successfully.
Oct  3 10:19:54 compute-0 podman[442303]: 2025-10-03 10:19:54.048346564 +0000 UTC m=+0.262065940 container remove e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:19:54 compute-0 systemd[1]: libpod-conmon-e2bc8709ee763804a46d95109d6f5cfada0df9a1fbb6cc18b780287617573525.scope: Deactivated successfully.
Oct  3 10:19:54 compute-0 podman[442342]: 2025-10-03 10:19:54.220571488 +0000 UTC m=+0.036152052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:19:54 compute-0 podman[442342]: 2025-10-03 10:19:54.489493248 +0000 UTC m=+0.305073762 container create f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:19:54 compute-0 systemd[1]: Started libpod-conmon-f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29.scope.
Oct  3 10:19:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ff2220cf6a300acb262a89e77a7aa9aa2d67c1e74b955d047d6106d95cbf27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ff2220cf6a300acb262a89e77a7aa9aa2d67c1e74b955d047d6106d95cbf27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ff2220cf6a300acb262a89e77a7aa9aa2d67c1e74b955d047d6106d95cbf27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ff2220cf6a300acb262a89e77a7aa9aa2d67c1e74b955d047d6106d95cbf27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:19:54 compute-0 podman[442342]: 2025-10-03 10:19:54.605892467 +0000 UTC m=+0.421473041 container init f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bohr, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct  3 10:19:54 compute-0 podman[442342]: 2025-10-03 10:19:54.624897498 +0000 UTC m=+0.440478022 container start f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 10:19:54 compute-0 podman[442342]: 2025-10-03 10:19:54.630171867 +0000 UTC m=+0.445752371 container attach f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1578: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
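
Each pg_autoscaler line above is the same proportion evaluated per pool: pg target = usage ratio * bias * PG budget, then quantized to a power of two. With the default mon_target_pg_per_osd of 100 and the three OSDs on this host, the budget is 300, which reproduces the logged targets exactly (this is a reading of the logged arithmetic, not a dump of the mgr module's code):

    # Budget assumption: mon_target_pg_per_osd (default 100) * 3 OSDs.
    budget = 100 * 3

    pools = {  # (usage ratio, bias) copied from the log lines above
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.000551649390343166, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * budget)
    # .mgr               0.0021557249951162337  (logged: quantized to 1)
    # vms                0.1654948171029498     (logged: quantized to 32)
    # cephfs.cephfs.meta 0.0006104707950771635  (logged: quantized to 16)
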
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]: {
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "osd_id": 1,
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "type": "bluestore"
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:    },
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "osd_id": 2,
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "type": "bluestore"
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:    },
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "osd_id": 0,
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:        "type": "bluestore"
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]:    }
Oct  3 10:19:55 compute-0 intelligent_bohr[442358]: }
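
This second JSON document, keyed by osd_fsid, maps each bluestore OSD to its device-mapper node; its osd_id and osd_uuid fields agree with the ceph.osd_id and ceph.osd_fsid tags in the LVM listing logged a moment earlier, and cephadm persists the result via the config-key set calls that follow below. A quick cross-check of the two documents (file names are placeholders for the two blocks above):

    import json

    with open("lvm_list.json") as f:   # keyed by osd_id
        lvm = json.load(f)
    with open("raw_list.json") as f:   # keyed by osd_fsid
        raw = json.load(f)

    for fsid, osd in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        lv = lvm[str(osd["osd_id"])][0]
        assert lv["tags"]["ceph.osd_fsid"] == fsid
        print(osd["osd_id"], osd["device"], "backed by", lv["devices"][0])
    # 0 /dev/mapper/ceph_vg0-ceph_lv0 backed by /dev/loop3
    # 1 /dev/mapper/ceph_vg1-ceph_lv1 backed by /dev/loop4
    # 2 /dev/mapper/ceph_vg2-ceph_lv2 backed by /dev/loop5
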
Oct  3 10:19:55 compute-0 nova_compute[351685]: 2025-10-03 10:19:55.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:19:55 compute-0 systemd[1]: libpod-f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29.scope: Deactivated successfully.
Oct  3 10:19:55 compute-0 podman[442342]: 2025-10-03 10:19:55.764752879 +0000 UTC m=+1.580333393 container died f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:19:55 compute-0 systemd[1]: libpod-f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29.scope: Consumed 1.131s CPU time.
Oct  3 10:19:55 compute-0 nova_compute[351685]: 2025-10-03 10:19:55.772 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:19:55 compute-0 nova_compute[351685]: 2025-10-03 10:19:55.773 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:19:55 compute-0 nova_compute[351685]: 2025-10-03 10:19:55.773 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:19:55 compute-0 nova_compute[351685]: 2025-10-03 10:19:55.773 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:19:55 compute-0 nova_compute[351685]: 2025-10-03 10:19:55.773 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9ff2220cf6a300acb262a89e77a7aa9aa2d67c1e74b955d047d6106d95cbf27-merged.mount: Deactivated successfully.
Oct  3 10:19:55 compute-0 podman[442342]: 2025-10-03 10:19:55.838593181 +0000 UTC m=+1.654173695 container remove f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_bohr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  3 10:19:55 compute-0 systemd[1]: libpod-conmon-f87c74d9990ae0b703e1ea02769714c5d5fb4f079c1090fdf4cd872abbe5eb29.scope: Deactivated successfully.
Oct  3 10:19:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:19:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:19:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:19:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a58a5c4b-030a-425b-8046-af83125beaf5 does not exist
Oct  3 10:19:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 367b9115-070c-42f6-b5cb-917938833858 does not exist
Oct  3 10:19:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:19:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3221871303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.268 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
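
Note that nova's resource tracker shells out to the ceph CLI for pool capacity rather than binding librados: the exact command and its roughly half-second round trip are logged above, and each invocation also surfaces as a client.openstack "df" dispatch in the mon audit log. Reproducing the call by hand, assuming the same /etc/ceph/ceph.conf and openstack keyring are readable:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd))

    stats = df["stats"]  # cluster-wide totals, in bytes
    print(stats["total_bytes"], stats["total_avail_bytes"])
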
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.352 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.352 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.353 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:19:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.723 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.725 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3795MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.725 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.726 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.807 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.807 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.808 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:19:56 compute-0 nova_compute[351685]: 2025-10-03 10:19:56.835 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:19:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:19:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:19:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:19:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3192094461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:19:57 compute-0 nova_compute[351685]: 2025-10-03 10:19:57.305 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:19:57 compute-0 nova_compute[351685]: 2025-10-03 10:19:57.319 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:19:57 compute-0 nova_compute[351685]: 2025-10-03 10:19:57.336 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
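
The inventory dict above is what placement actually budgets against: for each resource class the usable capacity is (total - reserved) * allocation_ratio. Working through the logged numbers (an interpretation of the logged data using placement's standard capacity formula):

    inventory = {  # values copied from the log line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0        (8 physical cores oversubscribed 4x)
    # MEMORY_MB 7167.0
    # DISK_GB 52.2
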
Oct  3 10:19:57 compute-0 nova_compute[351685]: 2025-10-03 10:19:57.338 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:19:57 compute-0 nova_compute[351685]: 2025-10-03 10:19:57.338 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:19:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:57 compute-0 nova_compute[351685]: 2025-10-03 10:19:57.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:19:57 compute-0 podman[442498]: 2025-10-03 10:19:57.847537884 +0000 UTC m=+0.090514389 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  3 10:19:58 compute-0 nova_compute[351685]: 2025-10-03 10:19:58.339 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:19:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:19:59 compute-0 nova_compute[351685]: 2025-10-03 10:19:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:19:59 compute-0 podman[157165]: time="2025-10-03T10:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:19:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:19:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9064 "" "Go-http-client/1.1"
Oct  3 10:20:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:01 compute-0 openstack_network_exporter[367524]: ERROR   10:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:20:01 compute-0 openstack_network_exporter[367524]: ERROR   10:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:20:01 compute-0 openstack_network_exporter[367524]: ERROR   10:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:20:01 compute-0 openstack_network_exporter[367524]: ERROR   10:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:20:01 compute-0 openstack_network_exporter[367524]: ERROR   10:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:20:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:01 compute-0 nova_compute[351685]: 2025-10-03 10:20:01.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:01 compute-0 podman[442517]: 2025-10-03 10:20:01.841060147 +0000 UTC m=+0.097247196 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:20:01 compute-0 podman[442518]: 2025-10-03 10:20:01.85732578 +0000 UTC m=+0.099179168 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:20:01 compute-0 podman[442519]: 2025-10-03 10:20:01.878349615 +0000 UTC m=+0.111263676 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:20:02 compute-0 nova_compute[351685]: 2025-10-03 10:20:02.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:06 compute-0 nova_compute[351685]: 2025-10-03 10:20:06.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:07 compute-0 nova_compute[351685]: 2025-10-03 10:20:07.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:07 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Oct  3 10:20:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:07.834107) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:20:07 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Oct  3 10:20:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486807834205, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2060, "num_deletes": 251, "total_data_size": 3493790, "memory_usage": 3545840, "flush_reason": "Manual Compaction"}
Oct  3 10:20:07 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Oct  3 10:20:07 compute-0 podman[442580]: 2025-10-03 10:20:07.887770875 +0000 UTC m=+0.141175657 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486808396005, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3416458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30283, "largest_seqno": 32342, "table_properties": {"data_size": 3407010, "index_size": 6006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18804, "raw_average_key_size": 20, "raw_value_size": 3388278, "raw_average_value_size": 3623, "num_data_blocks": 267, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759486578, "oldest_key_time": 1759486578, "file_creation_time": 1759486807, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 561955 microseconds, and 8857 cpu microseconds.
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.396073) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3416458 bytes OK
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.396093) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.528594) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.528637) EVENT_LOG_v1 {"time_micros": 1759486808528627, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.528696) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3485143, prev total WAL file size 3485143, number of live WAL files 2.
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.530811) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3336KB)], [68(7346KB)]
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486808530920, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10939723, "oldest_snapshot_seqno": -1}
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5431 keys, 9166833 bytes, temperature: kUnknown
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486808611581, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9166833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9129429, "index_size": 22702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 136261, "raw_average_key_size": 25, "raw_value_size": 9030174, "raw_average_value_size": 1662, "num_data_blocks": 935, "num_entries": 5431, "num_filter_entries": 5431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759486808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.611840) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9166833 bytes
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.614322) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.5 rd, 113.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5945, records dropped: 514 output_compression: NoCompression
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.614346) EVENT_LOG_v1 {"time_micros": 1759486808614335, "job": 38, "event": "compaction_finished", "compaction_time_micros": 80734, "compaction_time_cpu_micros": 31612, "output_level": 6, "num_output_files": 1, "total_output_size": 9166833, "num_input_records": 5945, "num_output_records": 5431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486808615117, "job": 38, "event": "table_file_deletion", "file_number": 70}
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486808616711, "job": 38, "event": "table_file_deletion", "file_number": 68}
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.529869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.616887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.616892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.616894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.616896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:20:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:20:08.616898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:20:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:11 compute-0 nova_compute[351685]: 2025-10-03 10:20:11.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:12 compute-0 nova_compute[351685]: 2025-10-03 10:20:12.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:13 compute-0 podman[442600]: 2025-10-03 10:20:13.806703665 +0000 UTC m=+0.071040914 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:20:13 compute-0 podman[442601]: 2025-10-03 10:20:13.811400926 +0000 UTC m=+0.074778624 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:20:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:20:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:16 compute-0 nova_compute[351685]: 2025-10-03 10:20:16.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:17 compute-0 nova_compute[351685]: 2025-10-03 10:20:17.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:21 compute-0 nova_compute[351685]: 2025-10-03 10:20:21.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:22 compute-0 nova_compute[351685]: 2025-10-03 10:20:22.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:22 compute-0 podman[442643]: 2025-10-03 10:20:22.846802185 +0000 UTC m=+0.094728014 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute)
Oct  3 10:20:22 compute-0 podman[442642]: 2025-10-03 10:20:22.847774437 +0000 UTC m=+0.099350192 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64)
Oct  3 10:20:22 compute-0 podman[442644]: 2025-10-03 10:20:22.91169259 +0000 UTC m=+0.154702080 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  3 10:20:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:26 compute-0 nova_compute[351685]: 2025-10-03 10:20:26.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:27 compute-0 nova_compute[351685]: 2025-10-03 10:20:27.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:28 compute-0 podman[442709]: 2025-10-03 10:20:28.819748622 +0000 UTC m=+0.076975834 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 10:20:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:29 compute-0 podman[157165]: time="2025-10-03T10:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:20:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:20:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9052 "" "Go-http-client/1.1"
Oct  3 10:20:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:31 compute-0 openstack_network_exporter[367524]: ERROR   10:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:20:31 compute-0 openstack_network_exporter[367524]: ERROR   10:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:20:31 compute-0 openstack_network_exporter[367524]: ERROR   10:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:20:31 compute-0 openstack_network_exporter[367524]: ERROR   10:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:20:31 compute-0 openstack_network_exporter[367524]: ERROR   10:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:20:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:31 compute-0 nova_compute[351685]: 2025-10-03 10:20:31.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:32 compute-0 nova_compute[351685]: 2025-10-03 10:20:32.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:32 compute-0 podman[442728]: 2025-10-03 10:20:32.814898487 +0000 UTC m=+0.077757479 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:20:32 compute-0 podman[442729]: 2025-10-03 10:20:32.838123274 +0000 UTC m=+0.101099520 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001)
Oct  3 10:20:32 compute-0 podman[442730]: 2025-10-03 10:20:32.852991031 +0000 UTC m=+0.100487930 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  3 10:20:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:36 compute-0 nova_compute[351685]: 2025-10-03 10:20:36.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:37 compute-0 nova_compute[351685]: 2025-10-03 10:20:37.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:38 compute-0 podman[442789]: 2025-10-03 10:20:38.858032848 +0000 UTC m=+0.110461480 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Oct  3 10:20:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:20:41.610 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:20:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:20:41.610 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:20:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:20:41.611 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:20:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:41 compute-0 nova_compute[351685]: 2025-10-03 10:20:41.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:42 compute-0 nova_compute[351685]: 2025-10-03 10:20:42.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:20:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:44 compute-0 podman[442807]: 2025-10-03 10:20:44.749619961 +0000 UTC m=+0.060239977 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:20:44 compute-0 podman[442808]: 2025-10-03 10:20:44.763832388 +0000 UTC m=+0.075501097 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9)
Oct  3 10:20:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:20:46
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'volumes', '.rgw.root']
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
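The five balancer lines above are one complete optimization pass: upmap mode, a 5% misplaced ceiling, a plan named auto_2025-10-03_10:20:46 across the eleven listed pools, ending with 0 of 10 permitted changes because the PGs are already balanced. The same state can be inspected by hand; a sketch assuming /etc/ceph/ceph.conf and an admin keyring are present on the node:

    import json
    import subprocess

    # "ceph balancer status" reports the active mode and the result of the
    # last optimize pass, matching the auto_* plan names logged above.
    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status["mode"], status.get("optimize_result"))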
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:20:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:20:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:46 compute-0 nova_compute[351685]: 2025-10-03 10:20:46.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:46 compute-0 nova_compute[351685]: 2025-10-03 10:20:46.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  3 10:20:46 compute-0 nova_compute[351685]: 2025-10-03 10:20:46.747 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  3 10:20:46 compute-0 nova_compute[351685]: 2025-10-03 10:20:46.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:20:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:47 compute-0 nova_compute[351685]: 2025-10-03 10:20:47.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:20:48 compute-0 nova_compute[351685]: 2025-10-03 10:20:48.747 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:48 compute-0 nova_compute[351685]: 2025-10-03 10:20:48.748 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
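The skip above is driven by a single nova.conf option: _reclaim_queued_deletes only does work when reclaim_instance_interval is positive, which turns instance deletion into a soft delete that this task later reclaims. A sketch of the option, with 0 matching the effective value the log shows on this host:

    [DEFAULT]
    # > 0: deletions become SOFT_DELETED and are reclaimed by the
    #      _reclaim_queued_deletes periodic task after this many seconds
    # <= 0 (as here): the task logs "skipping..." and does nothing
    reclaim_instance_interval = 0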
Oct  3 10:20:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:49 compute-0 nova_compute[351685]: 2025-10-03 10:20:49.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:50 compute-0 nova_compute[351685]: 2025-10-03 10:20:50.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:50 compute-0 nova_compute[351685]: 2025-10-03 10:20:50.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:20:50 compute-0 nova_compute[351685]: 2025-10-03 10:20:50.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:20:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:51 compute-0 nova_compute[351685]: 2025-10-03 10:20:51.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:20:51 compute-0 nova_compute[351685]: 2025-10-03 10:20:51.936 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:20:51 compute-0 nova_compute[351685]: 2025-10-03 10:20:51.937 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:20:51 compute-0 nova_compute[351685]: 2025-10-03 10:20:51.938 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:20:51 compute-0 nova_compute[351685]: 2025-10-03 10:20:51.939 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:20:52 compute-0 nova_compute[351685]: 2025-10-03 10:20:52.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:20:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:53 compute-0 podman[442852]: 2025-10-03 10:20:53.834124933 +0000 UTC m=+0.090108426 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, release=1755695350, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:20:53 compute-0 podman[442853]: 2025-10-03 10:20:53.872798435 +0000 UTC m=+0.122232477 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute)
Oct  3 10:20:53 compute-0 podman[442854]: 2025-10-03 10:20:53.906914772 +0000 UTC m=+0.151539340 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct  3 10:20:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:20:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/875949643' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:20:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:20:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/875949643' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
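The two audit entries show client.openstack driving the mon over librados with JSON mon commands ("df", then "osd pool get-quota" on the volumes pool) rather than the ceph CLI. A minimal sketch of the same dispatch with the python-rados binding, assuming the client.openstack keyring is readable where this runs:

    import json
    import rados

    # name/conffile mirror the entity and conf visible in the audit lines.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota", "pool": "volumes",
                    "format": "json"}), b"")
    print(json.loads(outbuf))  # quota limits for the 'volumes' pool
    cluster.shutdown()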
Oct  3 10:20:53 compute-0 nova_compute[351685]: 2025-10-03 10:20:53.947 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:20:53 compute-0 nova_compute[351685]: 2025-10-03 10:20:53.962 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:20:53 compute-0 nova_compute[351685]: 2025-10-03 10:20:53.962 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:20:53 compute-0 nova_compute[351685]: 2025-10-03 10:20:53.963 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:53 compute-0 nova_compute[351685]: 2025-10-03 10:20:53.963 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:55 compute-0 nova_compute[351685]: 2025-10-03 10:20:55.080 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:55 compute-0 nova_compute[351685]: 2025-10-03 10:20:55.081 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:55 compute-0 nova_compute[351685]: 2025-10-03 10:20:55.099 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:20:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7250 writes, 32K keys, 7250 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 7250 writes, 7250 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1328 writes, 6028 keys, 1328 commit groups, 1.0 writes per commit group, ingest: 8.69 MB, 0.01 MB/s#012Interval WAL: 1328 writes, 1328 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     34.7      1.11              0.13        19    0.059       0      0       0.0       0.0#012  L6      1/0    8.74 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.4    104.0     84.8      1.56              0.43        18    0.087     87K    10K       0.0       0.0#012 Sum      1/0    8.74 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.4     60.7     63.9      2.68              0.56        37    0.072     87K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4     36.4     37.8      1.08              0.13         8    0.135     23K   2540       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    104.0     84.8      1.56              0.43        18    0.087     87K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     35.0      1.11              0.13        18    0.061       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.038, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 2.7 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 20.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.00016 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1278,19.32 MB,6.35619%) FilterBlock(38,248.92 KB,0.0799631%) IndexBlock(38,446.33 KB,0.143377%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:20:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
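Each pg_autoscaler line pairs a pool's share of raw space with a PG target. The logged numbers are consistent with target = capacity_fraction x bias x 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times the 3 OSDs behind the 64411926528-byte root, before the result is quantized toward a power of two; the 100 x 3 split is an inference, not something the log states. A quick check against two of the lines above:

    # Hypothetical re-derivation of the pg_autoscaler targets logged above.
    SLOTS = 100 * 3  # assumed: mon_target_pg_per_osd (100) x 3 OSDs

    for pool, used_frac, bias, logged in [
        ("vms", 0.000551649390343166, 1.0, 0.1654948171029498),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]:
        print(pool, used_frac * bias * SLOTS, "logged:", logged)

Both products reproduce the logged pg targets exactly.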
Oct  3 10:20:55 compute-0 nova_compute[351685]: 2025-10-03 10:20:55.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:20:56 compute-0 nova_compute[351685]: 2025-10-03 10:20:56.741 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:56 compute-0 nova_compute[351685]: 2025-10-03 10:20:56.741 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:20:56 compute-0 nova_compute[351685]: 2025-10-03 10:20:56.766 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:20:56 compute-0 nova_compute[351685]: 2025-10-03 10:20:56.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:20:56 compute-0 nova_compute[351685]: 2025-10-03 10:20:56.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:20:56 compute-0 nova_compute[351685]: 2025-10-03 10:20:56.769 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:20:56 compute-0 nova_compute[351685]: 2025-10-03 10:20:56.770 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:20:56 compute-0 nova_compute[351685]: 2025-10-03 10:20:56.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:20:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:20:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:20:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:20:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:20:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:20:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:20:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bc3134a3-7189-4e07-82e2-5ef8caa3431d does not exist
Oct  3 10:20:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e7be85b6-5d74-40e6-98c9-45a77a82bd15 does not exist
Oct  3 10:20:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1f02cc41-2ea1-4057-bbf9-95a808be4ec1 does not exist
Oct  3 10:20:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:20:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:20:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:20:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:20:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:20:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:20:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:20:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:20:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:20:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:20:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232085052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.247 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
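The 0.477 s subprocess above is Nova's resource tracker sizing its Ceph-backed storage: consistent with an images_type=rbd setup, the reported free_disk tracks the cluster's 60 GiB avail rather than the local filesystem. A sketch repeating the exact logged command and reading the totals, with the JSON field names taken from current `ceph df` output (an assumption for other releases):

    import json
    import subprocess

    # Command and flags copied verbatim from the log line above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                   check=True).stdout)
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])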
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.332 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.333 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.333 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:20:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.686 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.688 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3856MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.688 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.689 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.786 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.786 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.786 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:20:57 compute-0 nova_compute[351685]: 2025-10-03 10:20:57.822 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:20:57 compute-0 podman[443206]: 2025-10-03 10:20:57.94699847 +0000 UTC m=+0.053201001 container create d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:20:58 compute-0 systemd[1]: Started libpod-conmon-d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f.scope.
Oct  3 10:20:58 compute-0 podman[443206]: 2025-10-03 10:20:57.925823549 +0000 UTC m=+0.032026100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:20:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:20:58 compute-0 podman[443206]: 2025-10-03 10:20:58.061101705 +0000 UTC m=+0.167304286 container init d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 10:20:58 compute-0 podman[443206]: 2025-10-03 10:20:58.070441736 +0000 UTC m=+0.176644267 container start d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:20:58 compute-0 podman[443206]: 2025-10-03 10:20:58.074394242 +0000 UTC m=+0.180596773 container attach d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 10:20:58 compute-0 crazy_kapitsa[443241]: 167 167
Oct  3 10:20:58 compute-0 systemd[1]: libpod-d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f.scope: Deactivated successfully.
Oct  3 10:20:58 compute-0 conmon[443241]: conmon d05f18605e86b3d77b82 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f.scope/container/memory.events
Oct  3 10:20:58 compute-0 podman[443206]: 2025-10-03 10:20:58.079198507 +0000 UTC m=+0.185401038 container died d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 10:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-31c52cf27768fd8b65a7595dfb0e1c449e5eeb93142861c3769ca8c6e78dc6ca-merged.mount: Deactivated successfully.
Oct  3 10:20:58 compute-0 podman[443206]: 2025-10-03 10:20:58.127034973 +0000 UTC m=+0.233237504 container remove d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:20:58 compute-0 systemd[1]: libpod-conmon-d05f18605e86b3d77b823d550b695e4d746a055a87b1d5f3cb8580d1c2010c1f.scope: Deactivated successfully.
Oct  3 10:20:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:20:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1063404895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:20:58 compute-0 nova_compute[351685]: 2025-10-03 10:20:58.290 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:20:58 compute-0 nova_compute[351685]: 2025-10-03 10:20:58.300 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:20:58 compute-0 nova_compute[351685]: 2025-10-03 10:20:58.315 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
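The inventory dict above is what placement actually schedules against: usable capacity per resource class is (total - reserved) x allocation_ratio. Worked through for this host: VCPU (8 - 0) x 4.0 = 32, MEMORY_MB (7679 - 512) x 1.0 = 7167, DISK_GB (59 - 1) x 0.9 = 52.2. As a check:

    # Placement's capacity model applied to the inventory logged above.
    inventory = {"VCPU": (8, 0, 4.0),
                 "MEMORY_MB": (7679, 512, 1.0),
                 "DISK_GB": (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)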
Oct  3 10:20:58 compute-0 nova_compute[351685]: 2025-10-03 10:20:58.317 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:20:58 compute-0 nova_compute[351685]: 2025-10-03 10:20:58.317 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:20:58 compute-0 podman[443263]: 2025-10-03 10:20:58.321431689 +0000 UTC m=+0.054473291 container create e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:20:58 compute-0 systemd[1]: Started libpod-conmon-e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b.scope.
Oct  3 10:20:58 compute-0 podman[443263]: 2025-10-03 10:20:58.294526825 +0000 UTC m=+0.027568407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:20:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b086412fa03f06d8f155725379a5e64e840ddcd37d1dba9f35229de253ba31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b086412fa03f06d8f155725379a5e64e840ddcd37d1dba9f35229de253ba31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b086412fa03f06d8f155725379a5e64e840ddcd37d1dba9f35229de253ba31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b086412fa03f06d8f155725379a5e64e840ddcd37d1dba9f35229de253ba31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:20:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13b086412fa03f06d8f155725379a5e64e840ddcd37d1dba9f35229de253ba31/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:20:58 compute-0 podman[443263]: 2025-10-03 10:20:58.458806293 +0000 UTC m=+0.191847915 container init e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:20:58 compute-0 podman[443263]: 2025-10-03 10:20:58.467868944 +0000 UTC m=+0.200910496 container start e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:20:58 compute-0 podman[443263]: 2025-10-03 10:20:58.472660038 +0000 UTC m=+0.205701650 container attach e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:20:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:20:59 compute-0 youthful_einstein[443282]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:20:59 compute-0 youthful_einstein[443282]: --> relative data size: 1.0
Oct  3 10:20:59 compute-0 youthful_einstein[443282]: --> All data devices are unavailable
Oct  3 10:20:59 compute-0 systemd[1]: libpod-e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b.scope: Deactivated successfully.
Oct  3 10:20:59 compute-0 podman[443263]: 2025-10-03 10:20:59.624925047 +0000 UTC m=+1.357966649 container died e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:20:59 compute-0 systemd[1]: libpod-e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b.scope: Consumed 1.076s CPU time.
Oct  3 10:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-13b086412fa03f06d8f155725379a5e64e840ddcd37d1dba9f35229de253ba31-merged.mount: Deactivated successfully.
Oct  3 10:20:59 compute-0 podman[443263]: 2025-10-03 10:20:59.737206804 +0000 UTC m=+1.470248376 container remove e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:20:59 compute-0 podman[157165]: time="2025-10-03T10:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:20:59 compute-0 systemd[1]: libpod-conmon-e3e24fb55cb2f8deed3495c482e7b6cc4c43d868f6f39e88f7149fed24dc283b.scope: Deactivated successfully.
Oct  3 10:20:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:20:59 compute-0 podman[443312]: 2025-10-03 10:20:59.770477173 +0000 UTC m=+0.106782171 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:20:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9061 "" "Go-http-client/1.1"
Oct  3 10:21:00 compute-0 podman[443479]: 2025-10-03 10:21:00.674601431 +0000 UTC m=+0.084019481 container create 084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:21:00 compute-0 podman[443479]: 2025-10-03 10:21:00.639896146 +0000 UTC m=+0.049314246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:21:00 compute-0 systemd[1]: Started libpod-conmon-084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4.scope.
Oct  3 10:21:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:21:00 compute-0 podman[443479]: 2025-10-03 10:21:00.816592313 +0000 UTC m=+0.226010333 container init 084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:21:00 compute-0 podman[443479]: 2025-10-03 10:21:00.832886887 +0000 UTC m=+0.242304897 container start 084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:21:00 compute-0 podman[443479]: 2025-10-03 10:21:00.837426912 +0000 UTC m=+0.246844962 container attach 084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:21:00 compute-0 sharp_golick[443492]: 167 167
Oct  3 10:21:00 compute-0 systemd[1]: libpod-084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4.scope: Deactivated successfully.
Oct  3 10:21:00 compute-0 podman[443479]: 2025-10-03 10:21:00.844845041 +0000 UTC m=+0.254263081 container died 084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac64bd35b244ac0e16f9bba494f376a75a28d8d3e805ec9199929198e22052cf-merged.mount: Deactivated successfully.
Oct  3 10:21:00 compute-0 podman[443479]: 2025-10-03 10:21:00.921510853 +0000 UTC m=+0.330928863 container remove 084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:21:00 compute-0 systemd[1]: libpod-conmon-084f5a95e792ca35e2af142f896dff57c376ad74e829a80c8f27693c6a47a0a4.scope: Deactivated successfully.
Oct  3 10:21:01 compute-0 podman[443517]: 2025-10-03 10:21:01.135192029 +0000 UTC m=+0.058909354 container create bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:21:01 compute-0 systemd[1]: Started libpod-conmon-bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2.scope.
Oct  3 10:21:01 compute-0 podman[443517]: 2025-10-03 10:21:01.110103963 +0000 UTC m=+0.033821288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:21:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134472318e6a4c9e67663359a96639805c00fa18829c5c6b0b45a2f1aef8e9f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134472318e6a4c9e67663359a96639805c00fa18829c5c6b0b45a2f1aef8e9f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134472318e6a4c9e67663359a96639805c00fa18829c5c6b0b45a2f1aef8e9f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134472318e6a4c9e67663359a96639805c00fa18829c5c6b0b45a2f1aef8e9f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:21:01 compute-0 podman[443517]: 2025-10-03 10:21:01.32449635 +0000 UTC m=+0.248213705 container init bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:21:01 compute-0 podman[443517]: 2025-10-03 10:21:01.337085435 +0000 UTC m=+0.260802750 container start bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:21:01 compute-0 podman[443517]: 2025-10-03 10:21:01.344129521 +0000 UTC m=+0.267846886 container attach bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 10:21:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:01 compute-0 openstack_network_exporter[367524]: ERROR   10:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:21:01 compute-0 openstack_network_exporter[367524]: ERROR   10:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:21:01 compute-0 openstack_network_exporter[367524]: ERROR   10:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:21:01 compute-0 openstack_network_exporter[367524]: ERROR   10:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:21:01 compute-0 openstack_network_exporter[367524]: ERROR   10:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:21:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:01 compute-0 nova_compute[351685]: 2025-10-03 10:21:01.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:02 compute-0 wonderful_saha[443533]: {
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:    "0": [
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:        {
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "devices": [
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "/dev/loop3"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            ],
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_name": "ceph_lv0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_size": "21470642176",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "name": "ceph_lv0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "tags": {
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cluster_name": "ceph",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.crush_device_class": "",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.encrypted": "0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osd_id": "0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.type": "block",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.vdo": "0"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            },
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "type": "block",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "vg_name": "ceph_vg0"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:        }
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:    ],
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:    "1": [
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:        {
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "devices": [
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "/dev/loop4"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            ],
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_name": "ceph_lv1",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_size": "21470642176",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "name": "ceph_lv1",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "tags": {
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cluster_name": "ceph",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.crush_device_class": "",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.encrypted": "0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osd_id": "1",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.type": "block",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.vdo": "0"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            },
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "type": "block",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "vg_name": "ceph_vg1"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:        }
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:    ],
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:    "2": [
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:        {
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "devices": [
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "/dev/loop5"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            ],
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_name": "ceph_lv2",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_size": "21470642176",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "name": "ceph_lv2",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "tags": {
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.cluster_name": "ceph",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.crush_device_class": "",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.encrypted": "0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osd_id": "2",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.type": "block",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:                "ceph.vdo": "0"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            },
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "type": "block",
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:            "vg_name": "ceph_vg2"
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:        }
Oct  3 10:21:02 compute-0 wonderful_saha[443533]:    ]
Oct  3 10:21:02 compute-0 wonderful_saha[443533]: }
Oct  3 10:21:02 compute-0 systemd[1]: libpod-bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2.scope: Deactivated successfully.
Oct  3 10:21:02 compute-0 podman[443517]: 2025-10-03 10:21:02.267783686 +0000 UTC m=+1.191500971 container died bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-134472318e6a4c9e67663359a96639805c00fa18829c5c6b0b45a2f1aef8e9f1-merged.mount: Deactivated successfully.
Oct  3 10:21:02 compute-0 podman[443517]: 2025-10-03 10:21:02.356737015 +0000 UTC m=+1.280454290 container remove bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:21:02 compute-0 systemd[1]: libpod-conmon-bb60d5a947387d149d359dfab7e5c80a0c677a8a1d132a1564de4ed076049fb2.scope: Deactivated successfully.
Oct  3 10:21:02 compute-0 nova_compute[351685]: 2025-10-03 10:21:02.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:03 compute-0 podman[443695]: 2025-10-03 10:21:03.177622487 +0000 UTC m=+0.051521716 container create ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:21:03 compute-0 systemd[1]: Started libpod-conmon-ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa.scope.
Oct  3 10:21:03 compute-0 podman[443695]: 2025-10-03 10:21:03.156430087 +0000 UTC m=+0.030329326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:21:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:21:03 compute-0 podman[443695]: 2025-10-03 10:21:03.282684583 +0000 UTC m=+0.156583822 container init ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:21:03 compute-0 podman[443695]: 2025-10-03 10:21:03.293709967 +0000 UTC m=+0.167609186 container start ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:21:03 compute-0 podman[443695]: 2025-10-03 10:21:03.297633294 +0000 UTC m=+0.171532543 container attach ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:21:03 compute-0 sad_noether[443714]: 167 167
Oct  3 10:21:03 compute-0 systemd[1]: libpod-ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa.scope: Deactivated successfully.
Oct  3 10:21:03 compute-0 podman[443695]: 2025-10-03 10:21:03.30345873 +0000 UTC m=+0.177357969 container died ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:21:03 compute-0 nova_compute[351685]: 2025-10-03 10:21:03.306 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:03 compute-0 podman[443712]: 2025-10-03 10:21:03.333991272 +0000 UTC m=+0.101933037 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:21:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-46a965697e0aea4e6f22ca351f87f005f8a6d9669cd01dbc2a73190ae03da3da-merged.mount: Deactivated successfully.
Oct  3 10:21:03 compute-0 podman[443709]: 2025-10-03 10:21:03.348224668 +0000 UTC m=+0.112743692 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:21:03 compute-0 podman[443695]: 2025-10-03 10:21:03.356181225 +0000 UTC m=+0.230080444 container remove ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_noether, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:21:03 compute-0 podman[443713]: 2025-10-03 10:21:03.357554908 +0000 UTC m=+0.123799759 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  3 10:21:03 compute-0 systemd[1]: libpod-conmon-ce6d640145296eaed98e6403c4f09ce7c112e9956f69ec41f0a0db80d9cd37aa.scope: Deactivated successfully.
Oct  3 10:21:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:03 compute-0 podman[443793]: 2025-10-03 10:21:03.60411418 +0000 UTC m=+0.071991384 container create 369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:21:03 compute-0 podman[443793]: 2025-10-03 10:21:03.572737951 +0000 UTC m=+0.040615165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:21:03 compute-0 systemd[1]: Started libpod-conmon-369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be.scope.
Oct  3 10:21:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24673b7df723f6703a9912d2a1c1fac3d52c46fba6eb253c87ae70501e098760/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24673b7df723f6703a9912d2a1c1fac3d52c46fba6eb253c87ae70501e098760/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24673b7df723f6703a9912d2a1c1fac3d52c46fba6eb253c87ae70501e098760/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:21:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24673b7df723f6703a9912d2a1c1fac3d52c46fba6eb253c87ae70501e098760/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:21:03 compute-0 podman[443793]: 2025-10-03 10:21:03.730588603 +0000 UTC m=+0.198465787 container init 369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:21:03 compute-0 podman[443793]: 2025-10-03 10:21:03.754498431 +0000 UTC m=+0.222375595 container start 369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:21:03 compute-0 podman[443793]: 2025-10-03 10:21:03.757962642 +0000 UTC m=+0.225839836 container attach 369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:21:04 compute-0 eager_banach[443809]: {
Oct  3 10:21:04 compute-0 eager_banach[443809]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "osd_id": 1,
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "type": "bluestore"
Oct  3 10:21:04 compute-0 eager_banach[443809]:    },
Oct  3 10:21:04 compute-0 eager_banach[443809]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "osd_id": 2,
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "type": "bluestore"
Oct  3 10:21:04 compute-0 eager_banach[443809]:    },
Oct  3 10:21:04 compute-0 eager_banach[443809]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "osd_id": 0,
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:21:04 compute-0 eager_banach[443809]:        "type": "bluestore"
Oct  3 10:21:04 compute-0 eager_banach[443809]:    }
Oct  3 10:21:04 compute-0 eager_banach[443809]: }
Oct  3 10:21:04 compute-0 systemd[1]: libpod-369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be.scope: Deactivated successfully.
Oct  3 10:21:04 compute-0 podman[443793]: 2025-10-03 10:21:04.830349535 +0000 UTC m=+1.298226699 container died 369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:21:04 compute-0 systemd[1]: libpod-369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be.scope: Consumed 1.082s CPU time.
Oct  3 10:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-24673b7df723f6703a9912d2a1c1fac3d52c46fba6eb253c87ae70501e098760-merged.mount: Deactivated successfully.
Oct  3 10:21:04 compute-0 podman[443793]: 2025-10-03 10:21:04.92947028 +0000 UTC m=+1.397347464 container remove 369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:21:04 compute-0 systemd[1]: libpod-conmon-369ab12da3235771b478e58881c65b349c137cc7dd10c3bba28429e380d4b2be.scope: Deactivated successfully.
Oct  3 10:21:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:21:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:21:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:21:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:21:05 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bf35f208-5223-4a22-988d-041f4a9c03f7 does not exist
Oct  3 10:21:05 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3fc235c6-6eb7-4e54-b1e5-f7827cb281a1 does not exist
Oct  3 10:21:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:21:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:21:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:06 compute-0 nova_compute[351685]: 2025-10-03 10:21:06.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:07 compute-0 nova_compute[351685]: 2025-10-03 10:21:07.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:09 compute-0 podman[443906]: 2025-10-03 10:21:09.872720016 +0000 UTC m=+0.117974132 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 10:21:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:11 compute-0 nova_compute[351685]: 2025-10-03 10:21:11.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:12 compute-0 nova_compute[351685]: 2025-10-03 10:21:12.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:14 compute-0 nova_compute[351685]: 2025-10-03 10:21:14.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:14 compute-0 nova_compute[351685]: 2025-10-03 10:21:14.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 10:21:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:15 compute-0 podman[443924]: 2025-10-03 10:21:15.813021695 +0000 UTC m=+0.076333343 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:21:15 compute-0 podman[443925]: 2025-10-03 10:21:15.854071784 +0000 UTC m=+0.105262843 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 10:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:21:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:16 compute-0 nova_compute[351685]: 2025-10-03 10:21:16.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:17 compute-0 nova_compute[351685]: 2025-10-03 10:21:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:21 compute-0 nova_compute[351685]: 2025-10-03 10:21:21.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:22 compute-0 nova_compute[351685]: 2025-10-03 10:21:22.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:24 compute-0 podman[443967]: 2025-10-03 10:21:24.86920861 +0000 UTC m=+0.111986499 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct  3 10:21:24 compute-0 podman[443966]: 2025-10-03 10:21:24.873310672 +0000 UTC m=+0.122226028 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 10:21:24 compute-0 podman[443968]: 2025-10-03 10:21:24.904988849 +0000 UTC m=+0.140501794 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  3 10:21:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:26 compute-0 nova_compute[351685]: 2025-10-03 10:21:26.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:27 compute-0 nova_compute[351685]: 2025-10-03 10:21:27.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:29 compute-0 podman[157165]: time="2025-10-03T10:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:21:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:21:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9067 "" "Go-http-client/1.1"
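The two GET requests above are the exporter polling podman's libpod REST API over the unix socket configured earlier (CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of the same query; UnixHTTPConnection is a local helper written for this sketch, not part of any podman client library:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection over an AF_UNIX socket; the 'localhost' host is a dummy
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    body = conn.getresponse().read()
    print(len(json.loads(body)), 'containers')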
Oct  3 10:21:30 compute-0 podman[444025]: 2025-10-03 10:21:30.834008363 +0000 UTC m=+0.082992228 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  3 10:21:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:31 compute-0 openstack_network_exporter[367524]: ERROR   10:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:21:31 compute-0 openstack_network_exporter[367524]: ERROR   10:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:21:31 compute-0 openstack_network_exporter[367524]: ERROR   10:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:21:31 compute-0 openstack_network_exporter[367524]: ERROR   10:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:21:31 compute-0 openstack_network_exporter[367524]: ERROR   10:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
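The exporter errors above come from ovs-appctl-style calls that need a daemon control socket; ovn-northd does not run on a compute node, so none exists for it. A sketch of the underlying check; the socket paths are conventional defaults, assumed rather than taken from this log:

    import glob

    for pattern in ('/var/run/openvswitch/ovs-vswitchd.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        matches = glob.glob(pattern)
        print(pattern, '->', matches if matches else 'no control socket files found')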
Oct  3 10:21:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:31 compute-0 nova_compute[351685]: 2025-10-03 10:21:31.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:32 compute-0 nova_compute[351685]: 2025-10-03 10:21:32.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:33 compute-0 podman[444043]: 2025-10-03 10:21:33.83965869 +0000 UTC m=+0.097915707 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:21:33 compute-0 podman[444042]: 2025-10-03 10:21:33.840348922 +0000 UTC m=+0.100261532 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
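node_exporter above narrows its systemd collector with --collector.systemd.unit-include. A quick sanity check of which unit names that pattern admits; node_exporter anchors its Go regex, which re.fullmatch approximates here:

    import re

    pattern = r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service'
    for unit in ('edpm_nova.service', 'ovsdb-server.service',
                 'virtqemud.service', 'sshd.service'):
        print(unit, '->', bool(re.fullmatch(pattern, unit)))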
Oct  3 10:21:33 compute-0 podman[444044]: 2025-10-03 10:21:33.905698031 +0000 UTC m=+0.157327615 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3)
Oct  3 10:21:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:36 compute-0 nova_compute[351685]: 2025-10-03 10:21:36.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:37 compute-0 nova_compute[351685]: 2025-10-03 10:21:37.112 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:37 compute-0 nova_compute[351685]: 2025-10-03 10:21:37.133 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 10:21:37 compute-0 nova_compute[351685]: 2025-10-03 10:21:37.134 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:21:37 compute-0 nova_compute[351685]: 2025-10-03 10:21:37.135 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:21:37 compute-0 nova_compute[351685]: 2025-10-03 10:21:37.167 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
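The Acquiring/acquired/released triple above is oslo.concurrency serializing the per-instance power-state sync. A minimal sketch of the same primitive; the lock body is illustrative:

    from oslo_concurrency import lockutils

    instance_uuid = 'b43db93c-a4fe-46e9-8418-eedf4f5c135a'
    with lockutils.lock(instance_uuid):
        # query the driver power state and sync it with the DB record;
        # concurrent syncs for the same instance block here
        pass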
Oct  3 10:21:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:37 compute-0 nova_compute[351685]: 2025-10-03 10:21:37.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:40 compute-0 podman[444102]: 2025-10-03 10:21:40.868229235 +0000 UTC m=+0.122834668 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm)
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.887 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.887 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.887 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
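The run of "Registering pollster" lines above submits one callable per pollster to a shared ThreadPoolExecutor, sized to a single worker here per the warning at 10:21:40.887. A minimal sketch of that fan-out pattern; poll() and the metric names are placeholders:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # stand-in for one pollster's discovery plus sampling pass
        return f'{name}: polled'

    pollsters = ['network.outgoing.packets.drop',
                 'network.outgoing.packets.error',
                 'disk.device.capacity']
    with ThreadPoolExecutor(max_workers=1) as executor:
        for future in [executor.submit(poll, p) for p in pollsters]:
            print(future.result())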
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.894 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.895 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:21:40.895330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.900 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.901 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.901 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.901 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:21:40.901705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.902 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.903 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:21:40.903128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.926 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.927 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.927 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.928 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.928 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.928 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:21:40.928812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.972 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.973 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.973 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.974 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.974 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.974 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.975 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:21:40.975000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.975 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.976 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.976 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.977 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.977 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.977 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.978 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.978 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:21:40.978020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.980 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.980 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:21:40.980516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.983 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:21:40.983031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.984 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
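The disk.device.usage and disk.device.allocation samples above report raw bytes per device: two devices at exactly 1073741824 bytes (1 GiB) and a third at 485376 bytes. A quick conversion of the logged volumes, purely illustrative:

    def to_gib(n_bytes: int) -> float:
        # 1 GiB = 2**30 bytes
        return n_bytes / 2**30

    for vol in (1073741824, 1073741824, 485376):
        print(f"{vol} B = {to_gib(vol):.6f} GiB")
    # 1073741824 B = 1.000000 GiB (twice); 485376 B = 0.000452 GiB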
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.985 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.985 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:21:40.985421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.987 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.987 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:40.988 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:21:40.987798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
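The power.state sample above has volume 1, which under the libvirt driver lines up with the virDomainState enum, where 1 is VIR_DOMAIN_RUNNING. The table below simply restates libvirt's enum values for reference; writing it out as a dict is an illustration, not ceilometer code.

    # libvirt virDomainState values; power.state volume 1 above = running.
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])  # -> running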
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:21:41.011325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:21:41.013885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:21:41.015700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.016 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
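The skip above is expected for a .rate meter: a rate is the change in a cumulative counter divided by the time between two readings of the same resource, so with no new resources cached this cycle there is nothing to differentiate. A generic sketch of that calculation, with invented numbers (this is not ceilometer's cache code):

    def rate(prev, curr, elapsed_s):
        # change in a cumulative counter divided by elapsed wall time
        return (curr - prev) / elapsed_s if elapsed_s > 0 else 0.0

    # e.g. two cumulative network.incoming.bytes readings taken 300 s apart:
    print(rate(2856, 5712, 300.0))  # -> 9.52 bytes/s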
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.016 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:21:41.017182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.018 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:21:41.018344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:21:41.019653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:21:41.020684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 48830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
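The cpu sample above is cumulative guest CPU time in nanoseconds (48830000000 ns is roughly 48.83 s since boot), so utilisation has to be derived downstream from two readings. A sketch with an invented second reading:

    def cpu_util_percent(ns_prev, ns_curr, wall_s, vcpus=1):
        used_s = (ns_curr - ns_prev) / 1e9   # CPU-seconds consumed
        return 100.0 * used_s / (wall_s * vcpus)

    # 0.5 s of CPU time over a 300 s polling interval on 1 vCPU:
    print(cpu_util_percent(48_830_000_000, 49_330_000_000, 300))  # ~0.167 %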
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:21:41.021947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:21:41.023145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:21:41.024483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.025 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:21:41.025853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
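The memory.usage volume above is in MiB, and its fractional value betrays a KiB-granular source: 48.84765625 MiB is exactly 50020 KiB. Usage is commonly derived as available minus unused from the hypervisor's memory stats; the stat values below are invented to reproduce the logged volume:

    stats_kib = {"available": 100_000, "unused": 49_980}  # invented values

    usage_mib = (stats_kib["available"] - stats_kib["unused"]) / 1024
    print(usage_mib)  # -> 48.84765625, matching the sample above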
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:21:41.026997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.027 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:21:41.028493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:21:41.029620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:21:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:21:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:21:41.611 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:21:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:21:41.612 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:21:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:21:41.612 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:21:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:41 compute-0 nova_compute[351685]: 2025-10-03 10:21:41.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:42 compute-0 nova_compute[351685]: 2025-10-03 10:21:42.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:21:46
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', 'default.rgw.log']
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:21:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:21:46 compute-0 podman[444122]: 2025-10-03 10:21:46.816382677 +0000 UTC m=+0.075978892 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:21:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:46 compute-0 podman[444123]: 2025-10-03 10:21:46.870617279 +0000 UTC m=+0.114045175 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, container_name=kepler)
Oct  3 10:21:46 compute-0 nova_compute[351685]: 2025-10-03 10:21:46.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:47 compute-0 nova_compute[351685]: 2025-10-03 10:21:47.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:48 compute-0 nova_compute[351685]: 2025-10-03 10:21:48.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:48 compute-0 nova_compute[351685]: 2025-10-03 10:21:48.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:21:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:50 compute-0 nova_compute[351685]: 2025-10-03 10:21:50.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:51 compute-0 nova_compute[351685]: 2025-10-03 10:21:51.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:51 compute-0 nova_compute[351685]: 2025-10-03 10:21:51.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:21:51 compute-0 nova_compute[351685]: 2025-10-03 10:21:51.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:21:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:51 compute-0 nova_compute[351685]: 2025-10-03 10:21:51.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:51 compute-0 nova_compute[351685]: 2025-10-03 10:21:51.957 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:21:51 compute-0 nova_compute[351685]: 2025-10-03 10:21:51.957 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:21:51 compute-0 nova_compute[351685]: 2025-10-03 10:21:51.957 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:21:51 compute-0 nova_compute[351685]: 2025-10-03 10:21:51.958 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:21:52 compute-0 nova_compute[351685]: 2025-10-03 10:21:52.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:52 compute-0 nova_compute[351685]: 2025-10-03 10:21:52.965 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:21:52 compute-0 nova_compute[351685]: 2025-10-03 10:21:52.979 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:21:52 compute-0 nova_compute[351685]: 2025-10-03 10:21:52.979 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:21:52 compute-0 nova_compute[351685]: 2025-10-03 10:21:52.979 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:21:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142210638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:21:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:21:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142210638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:21:54 compute-0 nova_compute[351685]: 2025-10-03 10:21:54.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:54 compute-0 nova_compute[351685]: 2025-10-03 10:21:54.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:21:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:21:55 compute-0 podman[444168]: 2025-10-03 10:21:55.823757292 +0000 UTC m=+0.079662540 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 10:21:55 compute-0 podman[444167]: 2025-10-03 10:21:55.849170368 +0000 UTC m=+0.109657964 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Oct  3 10:21:55 compute-0 podman[444169]: 2025-10-03 10:21:55.886470497 +0000 UTC m=+0.139411430 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:21:56 compute-0 nova_compute[351685]: 2025-10-03 10:21:56.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:21:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:21:56 compute-0 nova_compute[351685]: 2025-10-03 10:21:56.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:56 compute-0 nova_compute[351685]: 2025-10-03 10:21:56.945 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:21:56 compute-0 nova_compute[351685]: 2025-10-03 10:21:56.946 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:21:56 compute-0 nova_compute[351685]: 2025-10-03 10:21:56.946 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:21:56 compute-0 nova_compute[351685]: 2025-10-03 10:21:56.946 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:21:56 compute-0 nova_compute[351685]: 2025-10-03 10:21:56.946 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:21:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:21:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40895805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.389 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:21:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.640 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.640 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.641 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.984 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.986 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3875MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.986 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:21:57 compute-0 nova_compute[351685]: 2025-10-03 10:21:57.986 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.120 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.120 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.120 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.252 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:21:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:21:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1482896288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.726 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.735 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.855 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.857 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:21:58 compute-0 nova_compute[351685]: 2025-10-03 10:21:58.857 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:21:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:21:59 compute-0 podman[157165]: time="2025-10-03T10:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:21:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:21:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9067 "" "Go-http-client/1.1"
Oct  3 10:21:59 compute-0 nova_compute[351685]: 2025-10-03 10:21:59.857 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:22:01 compute-0 openstack_network_exporter[367524]: ERROR   10:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:22:01 compute-0 openstack_network_exporter[367524]: ERROR   10:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:22:01 compute-0 openstack_network_exporter[367524]: ERROR   10:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:22:01 compute-0 openstack_network_exporter[367524]: ERROR   10:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:22:01 compute-0 openstack_network_exporter[367524]: ERROR   10:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:22:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:01 compute-0 podman[444276]: 2025-10-03 10:22:01.844804682 +0000 UTC m=+0.096771451 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  3 10:22:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:01 compute-0 nova_compute[351685]: 2025-10-03 10:22:01.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:02 compute-0 nova_compute[351685]: 2025-10-03 10:22:02.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:02 compute-0 nova_compute[351685]: 2025-10-03 10:22:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:22:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:04 compute-0 podman[444295]: 2025-10-03 10:22:04.821435627 +0000 UTC m=+0.072228531 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:22:04 compute-0 podman[444296]: 2025-10-03 10:22:04.821763228 +0000 UTC m=+0.068345927 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  3 10:22:04 compute-0 podman[444294]: 2025-10-03 10:22:04.843187527 +0000 UTC m=+0.097675480 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:22:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:22:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:22:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:22:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:22:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:22:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:22:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f0e5bedc-5092-41ac-bae3-0d846f399d72 does not exist
Oct  3 10:22:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5ef3d457-540f-4b16-80f9-0bcf91245d91 does not exist
Oct  3 10:22:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a0d4c9d8-9750-40e2-9c89-062591a6ccd1 does not exist
Oct  3 10:22:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:22:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:22:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:22:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:22:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:22:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:22:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:22:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:22:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:22:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:06 compute-0 nova_compute[351685]: 2025-10-03 10:22:06.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:07 compute-0 podman[444619]: 2025-10-03 10:22:07.127876231 +0000 UTC m=+0.051052961 container create 45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goodall, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:22:07 compute-0 systemd[1]: Started libpod-conmon-45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d.scope.
Oct  3 10:22:07 compute-0 podman[444619]: 2025-10-03 10:22:07.107672882 +0000 UTC m=+0.030849632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:22:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:22:07 compute-0 podman[444619]: 2025-10-03 10:22:07.242203204 +0000 UTC m=+0.165379934 container init 45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goodall, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:22:07 compute-0 podman[444619]: 2025-10-03 10:22:07.252746813 +0000 UTC m=+0.175923543 container start 45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 10:22:07 compute-0 podman[444619]: 2025-10-03 10:22:07.256633208 +0000 UTC m=+0.179809968 container attach 45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:22:07 compute-0 relaxed_goodall[444635]: 167 167
Oct  3 10:22:07 compute-0 systemd[1]: libpod-45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d.scope: Deactivated successfully.
Oct  3 10:22:07 compute-0 conmon[444635]: conmon 45d7ae91c25713b80b3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d.scope/container/memory.events
Oct  3 10:22:07 compute-0 podman[444640]: 2025-10-03 10:22:07.311880553 +0000 UTC m=+0.033884880 container died 45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 10:22:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4574bef337d9473405b84960beeda6e65ed5578ceef0deb234f2960b8883949-merged.mount: Deactivated successfully.
Oct  3 10:22:07 compute-0 podman[444640]: 2025-10-03 10:22:07.369654029 +0000 UTC m=+0.091658326 container remove 45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goodall, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:22:07 compute-0 systemd[1]: libpod-conmon-45d7ae91c25713b80b3b04281c9f55007e020da71cef7662aec23fb80393595d.scope: Deactivated successfully.
Oct  3 10:22:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:07 compute-0 nova_compute[351685]: 2025-10-03 10:22:07.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:07 compute-0 podman[444661]: 2025-10-03 10:22:07.573774617 +0000 UTC m=+0.046901108 container create 9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_johnson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:22:07 compute-0 systemd[1]: Started libpod-conmon-9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b.scope.
Oct  3 10:22:07 compute-0 podman[444661]: 2025-10-03 10:22:07.556041478 +0000 UTC m=+0.029167999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:22:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2fd84bd80be6555fce4bcbc1ae726c7335de86c1a535c99d73ab1c9f1fe6da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2fd84bd80be6555fce4bcbc1ae726c7335de86c1a535c99d73ab1c9f1fe6da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2fd84bd80be6555fce4bcbc1ae726c7335de86c1a535c99d73ab1c9f1fe6da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2fd84bd80be6555fce4bcbc1ae726c7335de86c1a535c99d73ab1c9f1fe6da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e2fd84bd80be6555fce4bcbc1ae726c7335de86c1a535c99d73ab1c9f1fe6da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:07 compute-0 podman[444661]: 2025-10-03 10:22:07.701859252 +0000 UTC m=+0.174985753 container init 9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_johnson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  3 10:22:07 compute-0 podman[444661]: 2025-10-03 10:22:07.714748537 +0000 UTC m=+0.187875038 container start 9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_johnson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:22:07 compute-0 podman[444661]: 2025-10-03 10:22:07.720435389 +0000 UTC m=+0.193561900 container attach 9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_johnson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:22:08 compute-0 loving_johnson[444677]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:22:08 compute-0 loving_johnson[444677]: --> relative data size: 1.0
Oct  3 10:22:08 compute-0 loving_johnson[444677]: --> All data devices are unavailable
Oct  3 10:22:08 compute-0 systemd[1]: libpod-9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b.scope: Deactivated successfully.
Oct  3 10:22:08 compute-0 systemd[1]: libpod-9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b.scope: Consumed 1.049s CPU time.
Oct  3 10:22:08 compute-0 podman[444706]: 2025-10-03 10:22:08.871297056 +0000 UTC m=+0.037120804 container died 9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e2fd84bd80be6555fce4bcbc1ae726c7335de86c1a535c99d73ab1c9f1fe6da-merged.mount: Deactivated successfully.
Oct  3 10:22:08 compute-0 podman[444706]: 2025-10-03 10:22:08.948874758 +0000 UTC m=+0.114698486 container remove 9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_johnson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 10:22:08 compute-0 systemd[1]: libpod-conmon-9f48dd272d22fa85284b1d9eb6499e5a1606a09cd91b53593a096ea814a24d0b.scope: Deactivated successfully.
Oct  3 10:22:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:09 compute-0 podman[444857]: 2025-10-03 10:22:09.757575611 +0000 UTC m=+0.044860242 container create fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ishizaka, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:22:09 compute-0 systemd[1]: Started libpod-conmon-fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e.scope.
Oct  3 10:22:09 compute-0 podman[444857]: 2025-10-03 10:22:09.740660578 +0000 UTC m=+0.027945229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:22:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:22:09 compute-0 podman[444857]: 2025-10-03 10:22:09.861582943 +0000 UTC m=+0.148867594 container init fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ishizaka, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:22:09 compute-0 podman[444857]: 2025-10-03 10:22:09.870136648 +0000 UTC m=+0.157421279 container start fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:22:09 compute-0 podman[444857]: 2025-10-03 10:22:09.87488273 +0000 UTC m=+0.162167361 container attach fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ishizaka, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:22:09 compute-0 upbeat_ishizaka[444873]: 167 167
Oct  3 10:22:09 compute-0 systemd[1]: libpod-fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e.scope: Deactivated successfully.
Oct  3 10:22:09 compute-0 conmon[444873]: conmon fadd704b67d6bd3a8a06 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e.scope/container/memory.events
Oct  3 10:22:09 compute-0 podman[444857]: 2025-10-03 10:22:09.880154819 +0000 UTC m=+0.167439450 container died fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ishizaka, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 10:22:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9416599520a52c1b9dabdfef90ee609b4edd28fcbc08f53d7aa8b3c172d87c4a-merged.mount: Deactivated successfully.
Oct  3 10:22:09 compute-0 podman[444857]: 2025-10-03 10:22:09.93060206 +0000 UTC m=+0.217886691 container remove fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:22:09 compute-0 systemd[1]: libpod-conmon-fadd704b67d6bd3a8a06f082eb8effdb12afe9fb42d8518c35d158fb4f1cf12e.scope: Deactivated successfully.
Oct  3 10:22:10 compute-0 podman[444898]: 2025-10-03 10:22:10.14565433 +0000 UTC m=+0.068586545 container create 8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_blackwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:22:10 compute-0 podman[444898]: 2025-10-03 10:22:10.10989319 +0000 UTC m=+0.032825455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:22:10 compute-0 systemd[1]: Started libpod-conmon-8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80.scope.
Oct  3 10:22:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304eeb65ced14f3b6274d58761e1575d1ff6828231cbbf339d467c995a58bfaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304eeb65ced14f3b6274d58761e1575d1ff6828231cbbf339d467c995a58bfaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304eeb65ced14f3b6274d58761e1575d1ff6828231cbbf339d467c995a58bfaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304eeb65ced14f3b6274d58761e1575d1ff6828231cbbf339d467c995a58bfaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:10 compute-0 podman[444898]: 2025-10-03 10:22:10.258816015 +0000 UTC m=+0.181748220 container init 8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:22:10 compute-0 podman[444898]: 2025-10-03 10:22:10.275996468 +0000 UTC m=+0.198928643 container start 8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_blackwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:22:10 compute-0 podman[444898]: 2025-10-03 10:22:10.282381593 +0000 UTC m=+0.205313788 container attach 8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]: {
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:    "0": [
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:        {
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "devices": [
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "/dev/loop3"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            ],
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_name": "ceph_lv0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_size": "21470642176",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "name": "ceph_lv0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "tags": {
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cluster_name": "ceph",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.crush_device_class": "",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.encrypted": "0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osd_id": "0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.type": "block",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.vdo": "0"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            },
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "type": "block",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "vg_name": "ceph_vg0"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:        }
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:    ],
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:    "1": [
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:        {
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "devices": [
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "/dev/loop4"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            ],
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_name": "ceph_lv1",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_size": "21470642176",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "name": "ceph_lv1",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "tags": {
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cluster_name": "ceph",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.crush_device_class": "",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.encrypted": "0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osd_id": "1",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.type": "block",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.vdo": "0"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            },
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "type": "block",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "vg_name": "ceph_vg1"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:        }
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:    ],
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:    "2": [
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:        {
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "devices": [
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "/dev/loop5"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            ],
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_name": "ceph_lv2",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_size": "21470642176",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "name": "ceph_lv2",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "tags": {
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.cluster_name": "ceph",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.crush_device_class": "",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.encrypted": "0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osd_id": "2",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.type": "block",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:                "ceph.vdo": "0"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            },
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "type": "block",
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:            "vg_name": "ceph_vg2"
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:        }
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]:    ]
Oct  3 10:22:11 compute-0 nifty_blackwell[444914]: }
Oct  3 10:22:11 compute-0 systemd[1]: libpod-8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80.scope: Deactivated successfully.
Oct  3 10:22:11 compute-0 podman[444923]: 2025-10-03 10:22:11.126615457 +0000 UTC m=+0.041048859 container died 8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_blackwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:22:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-304eeb65ced14f3b6274d58761e1575d1ff6828231cbbf339d467c995a58bfaf-merged.mount: Deactivated successfully.
Oct  3 10:22:11 compute-0 podman[444923]: 2025-10-03 10:22:11.37349236 +0000 UTC m=+0.287925742 container remove 8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_blackwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:22:11 compute-0 systemd[1]: libpod-conmon-8a3a2ddd14342c390cc5a86f26449003ebc9534df78dad021bd7289784322a80.scope: Deactivated successfully.
Oct  3 10:22:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:11 compute-0 podman[444924]: 2025-10-03 10:22:11.537607742 +0000 UTC m=+0.438581882 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:22:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:11 compute-0 nova_compute[351685]: 2025-10-03 10:22:11.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:12 compute-0 podman[445094]: 2025-10-03 10:22:12.179053921 +0000 UTC m=+0.046505444 container create 5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  3 10:22:12 compute-0 systemd[1]: Started libpod-conmon-5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222.scope.
Oct  3 10:22:12 compute-0 podman[445094]: 2025-10-03 10:22:12.161208659 +0000 UTC m=+0.028660202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:22:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:22:12 compute-0 podman[445094]: 2025-10-03 10:22:12.287725343 +0000 UTC m=+0.155176886 container init 5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:22:12 compute-0 podman[445094]: 2025-10-03 10:22:12.29818966 +0000 UTC m=+0.165641183 container start 5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:22:12 compute-0 podman[445094]: 2025-10-03 10:22:12.302439706 +0000 UTC m=+0.169891249 container attach 5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:22:12 compute-0 affectionate_mcnulty[445110]: 167 167
Oct  3 10:22:12 compute-0 systemd[1]: libpod-5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222.scope: Deactivated successfully.
Oct  3 10:22:12 compute-0 podman[445094]: 2025-10-03 10:22:12.304700128 +0000 UTC m=+0.172151661 container died 5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:22:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffb4f132163ad7718a1386f556f53e234b505e0b2d6b369d038a858ee103f210-merged.mount: Deactivated successfully.
Oct  3 10:22:12 compute-0 podman[445094]: 2025-10-03 10:22:12.3548798 +0000 UTC m=+0.222331323 container remove 5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 10:22:12 compute-0 systemd[1]: libpod-conmon-5bdc58f2ed22f7a293759f5926c1c5ec63849f3fbe7e09b00552731d76999222.scope: Deactivated successfully.
Oct  3 10:22:12 compute-0 nova_compute[351685]: 2025-10-03 10:22:12.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:12 compute-0 podman[445132]: 2025-10-03 10:22:12.573098052 +0000 UTC m=+0.061226198 container create 885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:22:12 compute-0 systemd[1]: Started libpod-conmon-885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02.scope.
Oct  3 10:22:12 compute-0 podman[445132]: 2025-10-03 10:22:12.554784604 +0000 UTC m=+0.042912770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:22:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783361136f1e1a35f9ac2c0f9480c82ab536545c98f38317e2d5a45a63f77918/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783361136f1e1a35f9ac2c0f9480c82ab536545c98f38317e2d5a45a63f77918/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783361136f1e1a35f9ac2c0f9480c82ab536545c98f38317e2d5a45a63f77918/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783361136f1e1a35f9ac2c0f9480c82ab536545c98f38317e2d5a45a63f77918/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:22:12 compute-0 podman[445132]: 2025-10-03 10:22:12.702570722 +0000 UTC m=+0.190698898 container init 885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:22:12 compute-0 podman[445132]: 2025-10-03 10:22:12.715401374 +0000 UTC m=+0.203529520 container start 885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:22:12 compute-0 podman[445132]: 2025-10-03 10:22:12.724139175 +0000 UTC m=+0.212267321 container attach 885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  3 10:22:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]: {
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "osd_id": 1,
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "type": "bluestore"
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:    },
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "osd_id": 2,
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "type": "bluestore"
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:    },
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "osd_id": 0,
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:        "type": "bluestore"
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]:    }
Oct  3 10:22:13 compute-0 hopeful_torvalds[445148]: }
Oct  3 10:22:13 compute-0 systemd[1]: libpod-885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02.scope: Deactivated successfully.
Oct  3 10:22:13 compute-0 systemd[1]: libpod-885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02.scope: Consumed 1.186s CPU time.
Oct  3 10:22:13 compute-0 podman[445132]: 2025-10-03 10:22:13.907751154 +0000 UTC m=+1.395879310 container died 885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 10:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-783361136f1e1a35f9ac2c0f9480c82ab536545c98f38317e2d5a45a63f77918-merged.mount: Deactivated successfully.
Oct  3 10:22:13 compute-0 podman[445132]: 2025-10-03 10:22:13.989548932 +0000 UTC m=+1.477677068 container remove 885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:22:14 compute-0 systemd[1]: libpod-conmon-885d056c410c885f50e9836617ded8d04e19ee749ab3502adb79df88635e1e02.scope: Deactivated successfully.
Oct  3 10:22:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:22:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:22:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:22:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:22:14 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8e847acd-6135-4f6c-bf24-44b08a6fd60e does not exist
Oct  3 10:22:14 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 949e2702-6744-446c-a6af-f418fc4dc563 does not exist
Oct  3 10:22:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:22:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:22:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:22:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:16 compute-0 nova_compute[351685]: 2025-10-03 10:22:16.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:17 compute-0 nova_compute[351685]: 2025-10-03 10:22:17.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:17 compute-0 podman[445243]: 2025-10-03 10:22:17.83391437 +0000 UTC m=+0.078813083 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, architecture=x86_64, distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9)
Oct  3 10:22:17 compute-0 podman[445242]: 2025-10-03 10:22:17.839094777 +0000 UTC m=+0.096354737 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:22:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:21 compute-0 nova_compute[351685]: 2025-10-03 10:22:21.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:21 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Oct  3 10:22:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:21.944900) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:22:21 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Oct  3 10:22:21 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486941944983, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1309, "num_deletes": 256, "total_data_size": 1990913, "memory_usage": 2025656, "flush_reason": "Manual Compaction"}
Oct  3 10:22:21 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486942083511, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1960714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32343, "largest_seqno": 33651, "table_properties": {"data_size": 1954550, "index_size": 3431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12726, "raw_average_key_size": 19, "raw_value_size": 1942161, "raw_average_value_size": 2965, "num_data_blocks": 155, "num_entries": 655, "num_filter_entries": 655, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759486809, "oldest_key_time": 1759486809, "file_creation_time": 1759486941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 138672 microseconds, and 6613 cpu microseconds.
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.083580) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1960714 bytes OK
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.083600) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.152083) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.152135) EVENT_LOG_v1 {"time_micros": 1759486942152122, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.152163) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1985046, prev total WAL file size 1985046, number of live WAL files 2.
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.153988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303038' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1914KB)], [71(8951KB)]
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486942154089, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11127547, "oldest_snapshot_seqno": -1}
Oct  3 10:22:22 compute-0 nova_compute[351685]: 2025-10-03 10:22:22.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5562 keys, 11018586 bytes, temperature: kUnknown
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486942664999, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11018586, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10977835, "index_size": 25729, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 139843, "raw_average_key_size": 25, "raw_value_size": 10873813, "raw_average_value_size": 1955, "num_data_blocks": 1064, "num_entries": 5562, "num_filter_entries": 5562, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759486942, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.665399) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11018586 bytes
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.785600) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 21.8 rd, 21.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 8.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 6086, records dropped: 524 output_compression: NoCompression
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.785648) EVENT_LOG_v1 {"time_micros": 1759486942785628, "job": 40, "event": "compaction_finished", "compaction_time_micros": 510993, "compaction_time_cpu_micros": 39404, "output_level": 6, "num_output_files": 1, "total_output_size": 11018586, "num_input_records": 6086, "num_output_records": 5562, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486942786611, "job": 40, "event": "table_file_deletion", "file_number": 73}
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486942790011, "job": 40, "event": "table_file_deletion", "file_number": 71}
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.153639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.790301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.790308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.790310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.790311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:22 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:22.790313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:26 compute-0 podman[445287]: 2025-10-03 10:22:26.8338542 +0000 UTC m=+0.081353485 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 10:22:26 compute-0 podman[445286]: 2025-10-03 10:22:26.861704195 +0000 UTC m=+0.106174262 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 10:22:26 compute-0 podman[445288]: 2025-10-03 10:22:26.867982116 +0000 UTC m=+0.108107644 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller)
Oct  3 10:22:26 compute-0 nova_compute[351685]: 2025-10-03 10:22:26.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:26.948388) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486946948421, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 283, "num_deletes": 250, "total_data_size": 69722, "memory_usage": 75920, "flush_reason": "Manual Compaction"}
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486946950797, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 68908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33652, "largest_seqno": 33934, "table_properties": {"data_size": 66980, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5407, "raw_average_key_size": 20, "raw_value_size": 63237, "raw_average_value_size": 235, "num_data_blocks": 7, "num_entries": 268, "num_filter_entries": 268, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759486943, "oldest_key_time": 1759486943, "file_creation_time": 1759486946, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 2427 microseconds, and 788 cpu microseconds.
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:26.950818) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 68908 bytes OK
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:26.950829) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:26.952450) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:26.952461) EVENT_LOG_v1 {"time_micros": 1759486946952458, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:26.952473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 67611, prev total WAL file size 67611, number of live WAL files 2.
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:26.952973) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353035' seq:0, type:0; will stop at (end)
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(67KB)], [74(10MB)]
Oct  3 10:22:26 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486946953020, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11087494, "oldest_snapshot_seqno": -1}
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5323 keys, 7798061 bytes, temperature: kUnknown
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486947018845, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 7798061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7763807, "index_size": 19872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 135065, "raw_average_key_size": 25, "raw_value_size": 7668696, "raw_average_value_size": 1440, "num_data_blocks": 819, "num_entries": 5323, "num_filter_entries": 5323, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759486946, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:27.019142) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 7798061 bytes
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:27.021953) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.2 rd, 118.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(274.1) write-amplify(113.2) OK, records in: 5830, records dropped: 507 output_compression: NoCompression
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:27.021985) EVENT_LOG_v1 {"time_micros": 1759486947021971, "job": 42, "event": "compaction_finished", "compaction_time_micros": 65912, "compaction_time_cpu_micros": 26437, "output_level": 6, "num_output_files": 1, "total_output_size": 7798061, "num_input_records": 5830, "num_output_records": 5323, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486947022187, "job": 42, "event": "table_file_deletion", "file_number": 76}
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759486947026182, "job": 42, "event": "table_file_deletion", "file_number": 74}
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:26.952845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:27.026375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:27.026381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:27.026383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:27.026385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:22:27.026386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:22:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:27 compute-0 nova_compute[351685]: 2025-10-03 10:22:27.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:29 compute-0 podman[157165]: time="2025-10-03T10:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:22:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:22:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9064 "" "Go-http-client/1.1"
Oct  3 10:22:31 compute-0 openstack_network_exporter[367524]: ERROR   10:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:22:31 compute-0 openstack_network_exporter[367524]: ERROR   10:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:22:31 compute-0 openstack_network_exporter[367524]: ERROR   10:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:22:31 compute-0 openstack_network_exporter[367524]: ERROR   10:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:22:31 compute-0 openstack_network_exporter[367524]: ERROR   10:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:22:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:31 compute-0 nova_compute[351685]: 2025-10-03 10:22:31.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:32 compute-0 nova_compute[351685]: 2025-10-03 10:22:32.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:32 compute-0 podman[445349]: 2025-10-03 10:22:32.859604684 +0000 UTC m=+0.106511624 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  3 10:22:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:35 compute-0 podman[445368]: 2025-10-03 10:22:35.856905345 +0000 UTC m=+0.106059360 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:22:35 compute-0 podman[445370]: 2025-10-03 10:22:35.87638287 +0000 UTC m=+0.106695709 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:22:35 compute-0 podman[445369]: 2025-10-03 10:22:35.878086345 +0000 UTC m=+0.118389205 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  3 10:22:36 compute-0 nova_compute[351685]: 2025-10-03 10:22:36.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:37 compute-0 nova_compute[351685]: 2025-10-03 10:22:37.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:22:41.612 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:22:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:22:41.613 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:22:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:22:41.613 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:22:41 compute-0 podman[445431]: 2025-10-03 10:22:41.840523132 +0000 UTC m=+0.090868640 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:22:41 compute-0 nova_compute[351685]: 2025-10-03 10:22:41.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:42 compute-0 nova_compute[351685]: 2025-10-03 10:22:42.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:22:46
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.mgr', 'volumes', 'cephfs.cephfs.data']
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:22:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:22:46 compute-0 nova_compute[351685]: 2025-10-03 10:22:46.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:47 compute-0 nova_compute[351685]: 2025-10-03 10:22:47.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:22:48 compute-0 podman[445450]: 2025-10-03 10:22:48.843029806 +0000 UTC m=+0.092679739 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:22:48 compute-0 podman[445451]: 2025-10-03 10:22:48.858203214 +0000 UTC m=+0.114333456 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, io.buildah.version=1.29.0, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Oct  3 10:22:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:49 compute-0 nova_compute[351685]: 2025-10-03 10:22:49.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:22:49 compute-0 nova_compute[351685]: 2025-10-03 10:22:49.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
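
[annotation] The pair of nova lines above shows a periodic task that starts and immediately bails out because deferred deletion is disabled. A sketch of that guard pattern, not Nova's actual source; the config value of 0 is assumed from the logged message:

    RECLAIM_INSTANCE_INTERVAL = 0  # assumed deployment default

    def reclaim_queued_deletes():
        # interval <= 0 means soft-deleted instances are never reclaimed
        # by this task, so it logs and returns straight away.
        if RECLAIM_INSTANCE_INTERVAL <= 0:
            print('CONF.reclaim_instance_interval <= 0, skipping...')
            return
        # otherwise: look up soft-deleted instances older than the interval
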
Oct  3 10:22:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:22:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.2 total, 600.0 interval
Cumulative writes: 8018 writes, 31K keys, 8018 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 8018 writes, 1882 syncs, 4.26 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 634 writes, 1906 keys, 634 commit groups, 1.0 writes per commit group, ingest: 0.90 MB, 0.00 MB/s
Interval WAL: 634 writes, 286 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
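
[annotation] The "writes per sync" figures in the DB stats above are simply WAL writes divided by syncs; a quick check of the cumulative and interval values:

    print(round(8018 / 1882, 2))  # 4.26 cumulative writes per sync
    print(round(634 / 286, 2))    # 2.22 interval writes per sync
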
Oct  3 10:22:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:51 compute-0 nova_compute[351685]: 2025-10-03 10:22:51.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:22:51 compute-0 nova_compute[351685]: 2025-10-03 10:22:51.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:22:51 compute-0 nova_compute[351685]: 2025-10-03 10:22:51.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:22:51 compute-0 nova_compute[351685]: 2025-10-03 10:22:51.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:22:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:51 compute-0 nova_compute[351685]: 2025-10-03 10:22:51.969 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:22:51 compute-0 nova_compute[351685]: 2025-10-03 10:22:51.970 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:22:51 compute-0 nova_compute[351685]: 2025-10-03 10:22:51.970 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:22:51 compute-0 nova_compute[351685]: 2025-10-03 10:22:51.971 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:22:52 compute-0 nova_compute[351685]: 2025-10-03 10:22:52.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:22:52 compute-0 nova_compute[351685]: 2025-10-03 10:22:52.829 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
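
[annotation] A sketch of walking the cached network_info structure logged above to recover the MAC, fixed IP and floating IP it records; the one-element list below is abbreviated from that entry, keeping only the keys the loop touches:

    network_info = [{
        'address': 'fa:16:3e:a9:40:5c',
        'network': {'subnets': [{'ips': [{
            'address': '192.168.0.158',
            'floating_ips': [{'address': '192.168.122.250'}]}]}]}}]

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['address'], ip['address'], floats)
    # fa:16:3e:a9:40:5c 192.168.0.158 ['192.168.122.250']
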
Oct  3 10:22:52 compute-0 nova_compute[351685]: 2025-10-03 10:22:52.859 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:22:52 compute-0 nova_compute[351685]: 2025-10-03 10:22:52.860 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:22:52 compute-0 nova_compute[351685]: 2025-10-03 10:22:52.860 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
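
[annotation] The Acquiring/Acquired/Releasing lock lines above bracket the forced network-info refresh for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a. A minimal sketch of that lock-guarded pattern with oslo.concurrency; illustrative only, and fetch_network_info/cache are placeholder names:

    from oslo_concurrency import lockutils

    def heal_info_cache(uuid, fetch_network_info, cache):
        # One named lock per instance serializes concurrent refreshes.
        with lockutils.lock('refresh_cache-%s' % uuid):
            cache[uuid] = fetch_network_info(uuid)  # forced refresh
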
Oct  3 10:22:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:53 compute-0 nova_compute[351685]: 2025-10-03 10:22:53.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:22:53 compute-0 nova_compute[351685]: 2025-10-03 10:22:53.762 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:22:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:22:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2309974558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:22:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:22:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2309974558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
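
[annotation] The two audited mon commands above ({"prefix":"df"} and {"prefix":"osd pool get-quota"}) are what the ceph CLI sends for the equivalent queries. A sketch using the same client.openstack identity seen in the audit lines; argument order mirrors the "ceph df --format=json --id openstack" invocation nova logs later in this capture:

    import json, subprocess

    def ceph_json(*args):
        out = subprocess.check_output(
            ['ceph', *args, '--format', 'json',
             '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
        return json.loads(out)

    df = ceph_json('df')
    quota = ceph_json('osd', 'pool', 'get-quota', 'volumes')
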
Oct  3 10:22:54 compute-0 nova_compute[351685]: 2025-10-03 10:22:54.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
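
[annotation] The pg targets printed by the autoscaler above work out to usage_ratio * bias * N with N = 300, which is consistent with this cluster's 3 OSDs at the default mon_target_pg_per_osd of 100. That factorization is an inference from the numbers, not something the log states:

    OSDS, TARGET_PG_PER_OSD = 3, 100  # assumed for this cluster

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * OSDS * TARGET_PG_PER_OSD

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.00215572... ('.mgr')
    print(pg_target(0.000551649390343166, 1.0))   # 0.16549481... ('vms')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.00061047... ('cephfs.cephfs.meta')
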
Oct  3 10:22:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:56 compute-0 nova_compute[351685]: 2025-10-03 10:22:56.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:22:56 compute-0 nova_compute[351685]: 2025-10-03 10:22:56.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:22:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:22:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:22:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 8944 writes, 34K keys, 8944 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 8944 writes, 2168 syncs, 4.13 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 637 writes, 1417 keys, 637 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s
Interval WAL: 637 writes, 291 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:22:57 compute-0 nova_compute[351685]: 2025-10-03 10:22:57.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:22:57 compute-0 podman[445491]: 2025-10-03 10:22:57.823024805 +0000 UTC m=+0.080739815 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, architecture=x86_64)
Oct  3 10:22:57 compute-0 podman[445492]: 2025-10-03 10:22:57.833033097 +0000 UTC m=+0.089054042 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 10:22:57 compute-0 podman[445493]: 2025-10-03 10:22:57.860957515 +0000 UTC m=+0.111781873 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  3 10:22:58 compute-0 nova_compute[351685]: 2025-10-03 10:22:58.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:22:58 compute-0 nova_compute[351685]: 2025-10-03 10:22:58.921 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:22:58 compute-0 nova_compute[351685]: 2025-10-03 10:22:58.922 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:22:58 compute-0 nova_compute[351685]: 2025-10-03 10:22:58.922 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:22:58 compute-0 nova_compute[351685]: 2025-10-03 10:22:58.923 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:22:58 compute-0 nova_compute[351685]: 2025-10-03 10:22:58.923 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:22:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:22:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035681988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:22:59 compute-0 nova_compute[351685]: 2025-10-03 10:22:59.427 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:22:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:22:59 compute-0 nova_compute[351685]: 2025-10-03 10:22:59.738 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:22:59 compute-0 nova_compute[351685]: 2025-10-03 10:22:59.738 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:22:59 compute-0 nova_compute[351685]: 2025-10-03 10:22:59.739 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:22:59 compute-0 podman[157165]: time="2025-10-03T10:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:22:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:22:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9057 "" "Go-http-client/1.1"
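
[annotation] The two GET lines above are the libpod REST API being polled over the podman socket (mounted into the exporter as /run/podman/podman.sock earlier in this log, which is presumably the client here). A self-contained sketch of issuing the same containers/json query over the UNIX socket:

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__('localhost')
            self._socket_path = socket_path
        def connect(self):
            # Swap the TCP connect for a UNIX-domain socket connect.
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._socket_path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(conn.getresponse().status)  # 200, as in the access log above
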
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.125 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.127 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3880MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.127 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.128 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.486 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.487 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.487 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
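
[annotation] Cross-checking the totals in the final resource view: used_ram is the 512 MB reserved host memory from the inventory logged just after this plus the one instance's 512 MB placement allocation (logged a few lines earlier), and used_vcpus/used_disk match that allocation too:

    reserved_host_mb = 512                                   # MEMORY_MB inventory 'reserved'
    instance = {'MEMORY_MB': 512, 'VCPU': 1, 'DISK_GB': 2}   # placement allocation above

    assert reserved_host_mb + instance['MEMORY_MB'] == 1024  # used_ram=1024MB
    assert 8 - instance['VCPU'] == 7                         # total_vcpus=8 -> free_vcpus=7
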
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.503 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.519 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.519 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.534 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.560 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 10:23:00 compute-0 nova_compute[351685]: 2025-10-03 10:23:00.596 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:23:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:23:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573094022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:23:01 compute-0 nova_compute[351685]: 2025-10-03 10:23:01.115 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:23:01 compute-0 nova_compute[351685]: 2025-10-03 10:23:01.124 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:23:01 compute-0 nova_compute[351685]: 2025-10-03 10:23:01.341 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
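
[annotation] For the inventory repeated above, placement's usable capacity per resource class follows the standard (total - reserved) * allocation_ratio rule; a quick evaluation of what the scheduler can actually hand out from this node:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
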
Oct  3 10:23:01 compute-0 nova_compute[351685]: 2025-10-03 10:23:01.342 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:23:01 compute-0 nova_compute[351685]: 2025-10-03 10:23:01.343 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:23:01 compute-0 openstack_network_exporter[367524]: ERROR   10:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:23:01 compute-0 openstack_network_exporter[367524]: ERROR   10:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:23:01 compute-0 openstack_network_exporter[367524]: ERROR   10:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:23:01 compute-0 openstack_network_exporter[367524]: ERROR   10:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:23:01 compute-0 openstack_network_exporter[367524]: ERROR   10:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:23:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:01 compute-0 systemd[1]: Starting dnf makecache...
Oct  3 10:23:01 compute-0 nova_compute[351685]: 2025-10-03 10:23:01.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:23:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:01 compute-0 dnf[445596]: Metadata cache refreshed recently.
Oct  3 10:23:02 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  3 10:23:02 compute-0 systemd[1]: Finished dnf makecache.
Oct  3 10:23:02 compute-0 nova_compute[351685]: 2025-10-03 10:23:02.344 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:23:02 compute-0 nova_compute[351685]: 2025-10-03 10:23:02.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:23:02 compute-0 nova_compute[351685]: 2025-10-03 10:23:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:23:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:23:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 7129 writes, 28K keys, 7129 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7129 writes, 1523 syncs, 4.68 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 342 writes, 885 keys, 342 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
Interval WAL: 342 writes, 144 syncs, 2.38 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:23:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:03 compute-0 podman[445597]: 2025-10-03 10:23:03.906291995 +0000 UTC m=+0.157937195 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:23:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 10:23:06 compute-0 podman[445616]: 2025-10-03 10:23:06.835879621 +0000 UTC m=+0.095937313 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:23:06 compute-0 podman[445618]: 2025-10-03 10:23:06.879009227 +0000 UTC m=+0.130464873 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  3 10:23:06 compute-0 podman[445617]: 2025-10-03 10:23:06.898369589 +0000 UTC m=+0.145944170 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Oct  3 10:23:06 compute-0 nova_compute[351685]: 2025-10-03 10:23:06.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:23:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:07 compute-0 nova_compute[351685]: 2025-10-03 10:23:07.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:23:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:11 compute-0 nova_compute[351685]: 2025-10-03 10:23:11.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:23:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:12 compute-0 nova_compute[351685]: 2025-10-03 10:23:12.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:23:12 compute-0 podman[445676]: 2025-10-03 10:23:12.836417122 +0000 UTC m=+0.100064747 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:23:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:23:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:23:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:23:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:23:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:23:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:23:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8f297bfa-0b73-4552-be56-9a3355c29551 does not exist
Oct  3 10:23:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7d7d615f-9785-4fa6-8b2c-4fde7c89002e does not exist
Oct  3 10:23:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8b8d2481-e166-422e-b540-c007d988f9b5 does not exist
Oct  3 10:23:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:23:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:23:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:23:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:23:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:23:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:23:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:23:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:23:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:23:16 compute-0 podman[445963]: 2025-10-03 10:23:16.214343682 +0000 UTC m=+0.049392867 container create 312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:23:16 compute-0 systemd[1]: Started libpod-conmon-312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0.scope.
Oct  3 10:23:16 compute-0 podman[445963]: 2025-10-03 10:23:16.193416459 +0000 UTC m=+0.028465674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:23:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:23:16 compute-0 podman[445963]: 2025-10-03 10:23:16.314574952 +0000 UTC m=+0.149624157 container init 312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:23:16 compute-0 podman[445963]: 2025-10-03 10:23:16.323302283 +0000 UTC m=+0.158351488 container start 312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:23:16 compute-0 podman[445963]: 2025-10-03 10:23:16.328700026 +0000 UTC m=+0.163749231 container attach 312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:23:16 compute-0 clever_antonelli[445978]: 167 167
Oct  3 10:23:16 compute-0 systemd[1]: libpod-312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0.scope: Deactivated successfully.
Oct  3 10:23:16 compute-0 podman[445963]: 2025-10-03 10:23:16.330738991 +0000 UTC m=+0.165788176 container died 312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_antonelli, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:23:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-05389fe27f6f230a69717f734b6c4e19f8212e87d3bd47002430d3564475de41-merged.mount: Deactivated successfully.
Oct  3 10:23:16 compute-0 podman[445963]: 2025-10-03 10:23:16.384403746 +0000 UTC m=+0.219452931 container remove 312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:23:16 compute-0 systemd[1]: libpod-conmon-312e6caa7cc7df7bdcaa9e6ba52fa824b689d2eb1a6534d20fd3cb87452774e0.scope: Deactivated successfully.
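[editor's note] The "167 167" printed by clever_antonelli matches the uid and gid of the ceph user inside the image; cephadm probes this with a short-lived container like the one whose create/init/start/attach/died/remove lifecycle appears above. A plausible reproduction under that assumption (not necessarily the exact cephadm invocation):

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Print the owner uid and gid of /var/lib/ceph inside the image.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # expected: "167 167", as logged above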
Oct  3 10:23:16 compute-0 podman[446002]: 2025-10-03 10:23:16.587111259 +0000 UTC m=+0.067024564 container create 9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:23:16 compute-0 systemd[1]: Started libpod-conmon-9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef.scope.
Oct  3 10:23:16 compute-0 podman[446002]: 2025-10-03 10:23:16.564144681 +0000 UTC m=+0.044058006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:23:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b486a1898687e23902671c8608439d237f2e4fff8c48635d6e8eac714350455/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b486a1898687e23902671c8608439d237f2e4fff8c48635d6e8eac714350455/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b486a1898687e23902671c8608439d237f2e4fff8c48635d6e8eac714350455/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b486a1898687e23902671c8608439d237f2e4fff8c48635d6e8eac714350455/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b486a1898687e23902671c8608439d237f2e4fff8c48635d6e8eac714350455/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:16 compute-0 podman[446002]: 2025-10-03 10:23:16.693953482 +0000 UTC m=+0.173866807 container init 9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:23:16 compute-0 podman[446002]: 2025-10-03 10:23:16.708868341 +0000 UTC m=+0.188781646 container start 9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:23:16 compute-0 podman[446002]: 2025-10-03 10:23:16.71446624 +0000 UTC m=+0.194379545 container attach 9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:23:16 compute-0 nova_compute[351685]: 2025-10-03 10:23:16.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263

Oct  3 10:23:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:17 compute-0 nova_compute[351685]: 2025-10-03 10:23:17.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:17 compute-0 epic_mayer[446018]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:23:17 compute-0 epic_mayer[446018]: --> relative data size: 1.0
Oct  3 10:23:17 compute-0 epic_mayer[446018]: --> All data devices are unavailable
Oct  3 10:23:17 compute-0 systemd[1]: libpod-9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef.scope: Deactivated successfully.
Oct  3 10:23:17 compute-0 systemd[1]: libpod-9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef.scope: Consumed 1.054s CPU time.
Oct  3 10:23:17 compute-0 podman[446002]: 2025-10-03 10:23:17.833740593 +0000 UTC m=+1.313654018 container died 9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 10:23:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b486a1898687e23902671c8608439d237f2e4fff8c48635d6e8eac714350455-merged.mount: Deactivated successfully.
Oct  3 10:23:17 compute-0 podman[446002]: 2025-10-03 10:23:17.899406412 +0000 UTC m=+1.379319717 container remove 9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:23:17 compute-0 systemd[1]: libpod-conmon-9524053f52e02e0780fc11472263b91e065c7bf41a9ebcebbf5e9ef8e4c0c4ef.scope: Deactivated successfully.
Oct  3 10:23:18 compute-0 podman[446199]: 2025-10-03 10:23:18.614370284 +0000 UTC m=+0.041550716 container create e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:23:18 compute-0 systemd[1]: Started libpod-conmon-e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c.scope.
Oct  3 10:23:18 compute-0 podman[446199]: 2025-10-03 10:23:18.597717409 +0000 UTC m=+0.024897871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:23:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:23:18 compute-0 podman[446199]: 2025-10-03 10:23:18.722775826 +0000 UTC m=+0.149956288 container init e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaum, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:23:18 compute-0 podman[446199]: 2025-10-03 10:23:18.732485179 +0000 UTC m=+0.159665651 container start e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaum, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:23:18 compute-0 great_chaum[446215]: 167 167
Oct  3 10:23:18 compute-0 systemd[1]: libpod-e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c.scope: Deactivated successfully.
Oct  3 10:23:18 compute-0 podman[446199]: 2025-10-03 10:23:18.739610347 +0000 UTC m=+0.166790779 container attach e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 10:23:18 compute-0 podman[446199]: 2025-10-03 10:23:18.740789095 +0000 UTC m=+0.167969587 container died e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:23:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a49c8d533a659792c1c41e8a41f3c0268d769615f34900b036bb760860b585b-merged.mount: Deactivated successfully.
Oct  3 10:23:18 compute-0 podman[446199]: 2025-10-03 10:23:18.787659281 +0000 UTC m=+0.214839713 container remove e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_chaum, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:23:18 compute-0 systemd[1]: libpod-conmon-e0a9797b44033be96e9a81e74d10f70062cb34ded9d14caf1979212526b7be0c.scope: Deactivated successfully.
Oct  3 10:23:18 compute-0 podman[446240]: 2025-10-03 10:23:18.971648772 +0000 UTC m=+0.051344031 container create d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:23:19 compute-0 systemd[1]: Started libpod-conmon-d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f.scope.
Oct  3 10:23:19 compute-0 podman[446240]: 2025-10-03 10:23:18.949764649 +0000 UTC m=+0.029459948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:23:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0543933409267514a2c321152fe5cb39566c7bd2b6e1416cb9d530f5ab7b558/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0543933409267514a2c321152fe5cb39566c7bd2b6e1416cb9d530f5ab7b558/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0543933409267514a2c321152fe5cb39566c7bd2b6e1416cb9d530f5ab7b558/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0543933409267514a2c321152fe5cb39566c7bd2b6e1416cb9d530f5ab7b558/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:19 compute-0 podman[446255]: 2025-10-03 10:23:19.100066958 +0000 UTC m=+0.084137964 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, release=1214.1726694543)
Oct  3 10:23:19 compute-0 podman[446254]: 2025-10-03 10:23:19.111047851 +0000 UTC m=+0.090925472 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:23:19 compute-0 podman[446240]: 2025-10-03 10:23:19.125890128 +0000 UTC m=+0.205585407 container init d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:23:19 compute-0 podman[446240]: 2025-10-03 10:23:19.136726216 +0000 UTC m=+0.216421475 container start d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:23:19 compute-0 podman[446240]: 2025-10-03 10:23:19.141101147 +0000 UTC m=+0.220796426 container attach d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:23:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:19 compute-0 boring_gauss[446275]: {
Oct  3 10:23:19 compute-0 boring_gauss[446275]:    "0": [
Oct  3 10:23:19 compute-0 boring_gauss[446275]:        {
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "devices": [
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "/dev/loop3"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            ],
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_name": "ceph_lv0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_size": "21470642176",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "name": "ceph_lv0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "tags": {
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cluster_name": "ceph",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.crush_device_class": "",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.encrypted": "0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osd_id": "0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.type": "block",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.vdo": "0"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            },
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "type": "block",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "vg_name": "ceph_vg0"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:        }
Oct  3 10:23:19 compute-0 boring_gauss[446275]:    ],
Oct  3 10:23:19 compute-0 boring_gauss[446275]:    "1": [
Oct  3 10:23:19 compute-0 boring_gauss[446275]:        {
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "devices": [
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "/dev/loop4"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            ],
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_name": "ceph_lv1",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_size": "21470642176",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "name": "ceph_lv1",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "tags": {
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cluster_name": "ceph",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.crush_device_class": "",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.encrypted": "0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osd_id": "1",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.type": "block",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.vdo": "0"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            },
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "type": "block",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "vg_name": "ceph_vg1"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:        }
Oct  3 10:23:19 compute-0 boring_gauss[446275]:    ],
Oct  3 10:23:19 compute-0 boring_gauss[446275]:    "2": [
Oct  3 10:23:19 compute-0 boring_gauss[446275]:        {
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "devices": [
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "/dev/loop5"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            ],
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_name": "ceph_lv2",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_size": "21470642176",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "name": "ceph_lv2",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "tags": {
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.cluster_name": "ceph",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.crush_device_class": "",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.encrypted": "0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osd_id": "2",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.type": "block",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:                "ceph.vdo": "0"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            },
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "type": "block",
Oct  3 10:23:19 compute-0 boring_gauss[446275]:            "vg_name": "ceph_vg2"
Oct  3 10:23:19 compute-0 boring_gauss[446275]:        }
Oct  3 10:23:19 compute-0 boring_gauss[446275]:    ]
Oct  3 10:23:19 compute-0 boring_gauss[446275]: }
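[editor's note] The JSON emitted by boring_gauss is the per-OSD LVM inventory (the layout matches `ceph-volume lvm list --format json`: a map of OSD id to its logical volumes and ceph.* tags). A minimal sketch of extracting the OSD-to-device mapping from a saved copy of that output; the filename is an assumption:

import json

# Assumed filename, e.g. saved via `ceph-volume lvm list --format json > lvm_list.json`.
with open("lvm_list.json") as f:
    inventory = json.load(f)

# Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of LVs.
for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
              f"backing={','.join(lv['devices'])} "
              f"osd_fsid={tags['ceph.osd_fsid']} "
              f"cluster_fsid={tags['ceph.cluster_fsid']}")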
Oct  3 10:23:19 compute-0 systemd[1]: libpod-d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f.scope: Deactivated successfully.
Oct  3 10:23:19 compute-0 podman[446240]: 2025-10-03 10:23:19.928027171 +0000 UTC m=+1.007722450 container died d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:23:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0543933409267514a2c321152fe5cb39566c7bd2b6e1416cb9d530f5ab7b558-merged.mount: Deactivated successfully.
Oct  3 10:23:20 compute-0 podman[446240]: 2025-10-03 10:23:20.288058438 +0000 UTC m=+1.367753737 container remove d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_gauss, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:23:20 compute-0 systemd[1]: libpod-conmon-d371369daf8c0a02dc16c8876a3c92aeff4a6d39c4d44ed67c297d8ee01fce7f.scope: Deactivated successfully.
Oct  3 10:23:21 compute-0 podman[446457]: 2025-10-03 10:23:21.208717418 +0000 UTC m=+0.064650548 container create abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 10:23:21 compute-0 systemd[1]: Started libpod-conmon-abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe.scope.
Oct  3 10:23:21 compute-0 podman[446457]: 2025-10-03 10:23:21.179916832 +0000 UTC m=+0.035850032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:23:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:23:21 compute-0 podman[446457]: 2025-10-03 10:23:21.316353987 +0000 UTC m=+0.172287147 container init abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:23:21 compute-0 podman[446457]: 2025-10-03 10:23:21.334776458 +0000 UTC m=+0.190709618 container start abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:23:21 compute-0 podman[446457]: 2025-10-03 10:23:21.341123112 +0000 UTC m=+0.197056272 container attach abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 10:23:21 compute-0 blissful_keller[446471]: 167 167
Oct  3 10:23:21 compute-0 systemd[1]: libpod-abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe.scope: Deactivated successfully.
Oct  3 10:23:21 compute-0 podman[446457]: 2025-10-03 10:23:21.348027964 +0000 UTC m=+0.203961084 container died abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-df17b38024fcf928528a584014727a4dca1ceae1045805149225b6bd2b6008f6-merged.mount: Deactivated successfully.
Oct  3 10:23:21 compute-0 podman[446457]: 2025-10-03 10:23:21.393807985 +0000 UTC m=+0.249741105 container remove abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 10:23:21 compute-0 systemd[1]: libpod-conmon-abffc300d0e50172764e8ec520b02802cdcb3dad7f9f0f84aafa5a17293e3ffe.scope: Deactivated successfully.
Oct  3 10:23:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:21 compute-0 podman[446494]: 2025-10-03 10:23:21.667795918 +0000 UTC m=+0.113443156 container create 89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:23:21 compute-0 podman[446494]: 2025-10-03 10:23:21.582887459 +0000 UTC m=+0.028534707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:23:21 compute-0 systemd[1]: Started libpod-conmon-89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c.scope.
Oct  3 10:23:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb5b2839c2867313fad47c63948ff8cb9b479e4508fa9be364062603b5aa4a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb5b2839c2867313fad47c63948ff8cb9b479e4508fa9be364062603b5aa4a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb5b2839c2867313fad47c63948ff8cb9b479e4508fa9be364062603b5aa4a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebb5b2839c2867313fad47c63948ff8cb9b479e4508fa9be364062603b5aa4a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:23:21 compute-0 podman[446494]: 2025-10-03 10:23:21.850144417 +0000 UTC m=+0.295791655 container init 89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:23:21 compute-0 podman[446494]: 2025-10-03 10:23:21.867146853 +0000 UTC m=+0.312794081 container start 89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:23:21 compute-0 podman[446494]: 2025-10-03 10:23:21.871171112 +0000 UTC m=+0.316818340 container attach 89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:23:21 compute-0 nova_compute[351685]: 2025-10-03 10:23:21.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:22 compute-0 nova_compute[351685]: 2025-10-03 10:23:22.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]: {
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "osd_id": 1,
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "type": "bluestore"
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:    },
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "osd_id": 2,
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "type": "bluestore"
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:    },
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "osd_id": 0,
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:        "type": "bluestore"
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]:    }
Oct  3 10:23:23 compute-0 quizzical_mclaren[446510]: }
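[editor's note] The quizzical_mclaren output is keyed by OSD fsid and maps each bluestore OSD to its device-mapper path; the shape matches `ceph-volume raw list --format json`, though the exact invocation is not shown in the log. A short sketch cross-checking it against the LVM listing above; both filenames are assumptions:

import json

# Assumed filenames for the two JSON dumps captured above.
with open("raw_list.json") as f:
    raw = json.load(f)   # osd_uuid -> {ceph_fsid, device, osd_id, osd_uuid, type}
with open("lvm_list.json") as f:
    lvm = json.load(f)   # osd_id -> [lv entries with ceph.* tags]

for osd_uuid, entry in raw.items():
    osd_id = str(entry["osd_id"])
    lv_fsids = {lv["tags"]["ceph.osd_fsid"] for lv in lvm.get(osd_id, [])}
    status = "ok" if osd_uuid in lv_fsids else "MISMATCH"
    print(f"osd.{osd_id} {entry['device']} type={entry['type']} {status}")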
Oct  3 10:23:23 compute-0 systemd[1]: libpod-89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c.scope: Deactivated successfully.
Oct  3 10:23:23 compute-0 podman[446494]: 2025-10-03 10:23:23.171712038 +0000 UTC m=+1.617359276 container died 89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:23:23 compute-0 systemd[1]: libpod-89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c.scope: Consumed 1.281s CPU time.
Oct  3 10:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebb5b2839c2867313fad47c63948ff8cb9b479e4508fa9be364062603b5aa4a6-merged.mount: Deactivated successfully.
Oct  3 10:23:23 compute-0 podman[446494]: 2025-10-03 10:23:23.280822273 +0000 UTC m=+1.726469511 container remove 89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 10:23:23 compute-0 systemd[1]: libpod-conmon-89d70e08b0b7b2e1e59db3568aefa7db68ad00cbb52ff31de16da322e748df9c.scope: Deactivated successfully.
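Editor's note: the died/remove/scope-deactivated triple above is the normal lifecycle of a one-shot container: podman starts it, it prints its payload, exits, and cleanup removes it. A sketch of the same pattern, assuming a `podman run --rm` style invocation (the actual cephadm command line is not captured in this log):

    import json
    import subprocess

    # Image digest taken from the log; the payload command is an assumption
    # consistent with the JSON the container printed before it died.
    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
    proc = subprocess.run(
        ["podman", "run", "--rm", "--privileged", IMAGE,
         "ceph-volume", "raw", "list", "--format", "json"],  # hypothetical payload command
        capture_output=True, text=True, check=True,
    )
    devices = json.loads(proc.stdout)  # container stdout is the JSON block
    print(len(devices), "OSD entries reported")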
Oct  3 10:23:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:23:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:23:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:23:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:23:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f78659ac-d548-4fdc-b315-e212003ac148 does not exist
Oct  3 10:23:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev af99f13e-89b7-414c-aff5-4a179f4879bd does not exist
Oct  3 10:23:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:23:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
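Editor's note: the two handle_command entries show the cephadm mgr module persisting the freshly gathered device inventory into the mon config-key store under `mgr/cephadm/host.compute-0...`. The same store can be driven from the CLI; a sketch, assuming admin credentials on the host (the stored value here is illustrative, not the real inventory blob):

    import subprocess

    # config-key is a plain key/value store on the mons; cephadm caches
    # per-host inventory in it. Key name taken from the log above.
    key = "mgr/cephadm/host.compute-0.devices.0"
    subprocess.run(["ceph", "config-key", "set", key, "{}"], check=True)  # hypothetical value
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True)
    print(out.stdout)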
Oct  3 10:23:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:26 compute-0 nova_compute[351685]: 2025-10-03 10:23:26.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:27 compute-0 nova_compute[351685]: 2025-10-03 10:23:27.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:28 compute-0 podman[446604]: 2025-10-03 10:23:28.863687427 +0000 UTC m=+0.113166717 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal)
Oct  3 10:23:28 compute-0 podman[446605]: 2025-10-03 10:23:28.905537422 +0000 UTC m=+0.143080538 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:23:28 compute-0 podman[446606]: 2025-10-03 10:23:28.915980477 +0000 UTC m=+0.142156618 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:23:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:29 compute-0 podman[157165]: time="2025-10-03T10:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:23:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:23:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9060 "" "Go-http-client/1.1"
Oct  3 10:23:31 compute-0 openstack_network_exporter[367524]: ERROR   10:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:23:31 compute-0 openstack_network_exporter[367524]: ERROR   10:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:23:31 compute-0 openstack_network_exporter[367524]: ERROR   10:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:23:31 compute-0 openstack_network_exporter[367524]: ERROR   10:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:23:31 compute-0 openstack_network_exporter[367524]: ERROR   10:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
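Editor's note: the exporter errors above mean its ovs-appctl-style calls fail before they start because no daemon control sockets were found; on a compute node ovn-northd legitimately does not run, and the dpif-netdev errors follow from the missing vswitchd datapath. A sketch of the lookup the message implies, assuming the conventional <daemon>.<pid>.ctl naming (the directory is an assumption; ovn-northd's socket normally lives under /var/run/ovn):

    import glob

    # An empty glob here reproduces the "no control socket files found"
    # condition reported by the exporter.
    for daemon in ("ovs-vswitchd", "ovsdb-server", "ovn-northd"):
        hits = glob.glob(f"/var/run/openvswitch/{daemon}.*.ctl")
        print(daemon, "->", hits or "no control socket files found")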
Oct  3 10:23:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:31 compute-0 nova_compute[351685]: 2025-10-03 10:23:31.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:32 compute-0 nova_compute[351685]: 2025-10-03 10:23:32.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:34 compute-0 podman[446670]: 2025-10-03 10:23:34.858024571 +0000 UTC m=+0.108562430 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  3 10:23:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:36 compute-0 nova_compute[351685]: 2025-10-03 10:23:36.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:37 compute-0 nova_compute[351685]: 2025-10-03 10:23:37.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:37 compute-0 podman[446691]: 2025-10-03 10:23:37.836703244 +0000 UTC m=+0.089189238 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  3 10:23:37 compute-0 podman[446689]: 2025-10-03 10:23:37.86148174 +0000 UTC m=+0.123685015 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:23:37 compute-0 podman[446690]: 2025-10-03 10:23:37.868729372 +0000 UTC m=+0.123951563 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  3 10:23:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.887 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than intended. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.888 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
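Editor's note: the two manager lines state a plain scheduling fact: with one worker thread and many pollsters, the batch serializes. A minimal ThreadPoolExecutor sketch of that effect (names and timings are illustrative, not ceilometer's):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)  # stand-in for one polling round-trip
        return name

    names = [f"meter-{i}" for i in range(8)]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # matches "[1] threads" above
        list(pool.map(pollster, names))
    # ~0.8s: eight 0.1s tasks run one after another on the single worker.
    print(f"elapsed {time.monotonic() - start:.2f}s for {len(names)} pollsters")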
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.893 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
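Editor's note: the discovery record above is what each compute pollster receives per local instance; the per-meter lines that follow key their samples on that instance id. A sketch of reading the fields the pollsters use, with a trimmed copy of the logged payload:

    # Trimmed copy of the discovery payload logged above.
    instance = {
        "id": "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
        "name": "test_0",
        "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1},
        "OS-EXT-SRV-ATTR:instance_name": "instance-00000001",
        "OS-EXT-STS:vm_state": "running",
    }

    # Pollsters resolve the libvirt domain from the instance_name attribute
    # and tag each sample with the Nova instance id.
    domain = instance["OS-EXT-SRV-ATTR:instance_name"]
    print(f"{instance['id']} -> libvirt domain {domain} ({instance['flavor']['name']})")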
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:23:40.894527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.899 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.901 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:23:40.900542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.901 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.901 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:23:40.901744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.921 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.921 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.921 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.922 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.922 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.922 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:23:40.922493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.966 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.966 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.966 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.967 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.967 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.967 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.967 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.967 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.968 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.968 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.968 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.969 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:23:40.967537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.969 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:23:40.969029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.969 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.969 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.970 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.970 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.970 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.970 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.970 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.970 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.970 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.971 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.972 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.972 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
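The coordination check logged for every pollster is a no-op in this trace (group name [None], hashrings [None]). When a polling source does define a coordination group, multiple agents split the resource set over a hash ring so each instance is polled by exactly one agent. A rendezvous-hashing sketch of that partitioning idea, illustrative only and not tooz's actual hashring implementation:

import hashlib

def ring_owner(resource_id, members):
    # Each member scores the resource; the highest score owns it. Adding or
    # removing a member only reassigns the resources that member owned.
    def score(member):
        return int(hashlib.sha256(f"{member}:{resource_id}".encode()).hexdigest(), 16)
    return max(members, key=score)

agents = ["agent-0", "agent-1", "agent-2"]   # hypothetical agent names
print(ring_owner("b43db93c-a4fe-46e9-8418-eedf4f5c135a", agents))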
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.972 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.972 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.973 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.973 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.973 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.973 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.974 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:23:40.970527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:23:40.971730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:23:40.972910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:23:40.974217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.993 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
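Note the column after the timestamp: it is the process id inside the container. The polling lines are tagged 14, while the "Updated heartbeat for ..." lines via _update_status are tagged 12 and arrive slightly later, carrying the earlier heartbeat timestamps in parentheses; a second agent process is draining the heartbeat updates the poller records. A stdlib producer/consumer sketch of that shape, with threads standing in for the two processes (illustrative, not ceilometer's implementation):

import datetime
import queue
import threading

hb = queue.Queue()

def poller():  # plays the role of the process logging as 14
    for meter in ("disk.device.usage", "power.state"):
        hb.put((meter, datetime.datetime.now(datetime.timezone.utc)))

def status():  # plays the role of the process logging as 12
    while True:
        meter, ts = hb.get()
        print(f"Updated heartbeat for {meter} ({ts.isoformat()})")
        hb.task_done()

threading.Thread(target=status, daemon=True).start()
poller()
hb.join()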
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.994 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.994 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.996 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:23:40.994496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:23:40.996476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.998 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.998 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.998 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.999 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:23:40.998582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
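The .delta meters report the change in a cumulative interface counter since the previous cycle, so an idle interface yields volume 0 as above; the .rate variants additionally divide by elapsed time, and the "Skip pollster network.incoming.bytes.rate" line shows they are skipped when discovery turns up no resources their cache has not already seen this cycle. The arithmetic, sketched with hypothetical names:

def delta_and_rate(prev, value, ts):
    # prev is (value, timestamp) cached from the previous polling cycle.
    if prev is None:
        return None, None                # nothing to diff on the first cycle
    dv = value - prev[0]
    dt = ts - prev[1]
    return dv, (dv / dt if dt > 0 else None)

print(delta_and_rate((2856, 100.0), 2856, 110.0))   # (0, 0.0): idle interface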
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.999 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:40.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.000 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.000 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.001 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.001 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.002 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.002 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.003 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.003 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.004 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.005 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.005 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 50370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
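The cpu meter is cumulative guest CPU time in nanoseconds, so the 50370000000 sample above means this instance has consumed about 50.4 s of CPU since it started; utilization is derived downstream from two successive samples. A worked example, where the second reading, the interval and the vCPU count are assumed values for illustration:

ns1, ns2 = 50_370_000_000, 50_670_000_000   # ns2 is an assumed later reading
wall_s, vcpus = 300, 1                       # assumed polling interval and vCPU count
util_pct = (ns2 - ns1) / (wall_s * 1e9 * vcpus) * 100
print(util_pct)   # 0.1 -> the guest averaged ~0.1% of one vCPU over the interval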
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:23:41.000139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:23:41.001432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:23:41.002889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:23:41.003977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:23:41.005469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.007 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:23:41.006844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:23:41.008177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.009 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.010 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.84765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
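memory.usage is reported in MiB, and the odd-looking 48.84765625 is exact arithmetic rather than noise: 50020 KiB / 1024 = 48.84765625 MiB, consistent with the hypervisor returning memory statistics in KiB.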
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.011 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:23:41.009469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:23:41.010676) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:23:41.012720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.013 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.015 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:23:41.014184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.016 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:23:41.017 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:23:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:23:41.613 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:23:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:23:41.613 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:23:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:23:41.613 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
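The lockutils trio above is oslo.concurrency's standard trace around a synchronized method: acquire request, acquisition with the time waited, release with the time held (0.000s each way, so the child-process check was uncontended and returned immediately). A stdlib sketch of the same logging pattern, illustrative rather than oslo's implementation:

import threading
import time
from contextlib import contextmanager

_locks = {}

@contextmanager
def logged_lock(name):
    lock = _locks.setdefault(name, threading.Lock())
    print(f'Acquiring lock "{name}"')
    t0 = time.monotonic()
    with lock:
        print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        yield
    print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

with logged_lock("_check_child_processes"):
    pass  # the monitored check would run here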
Oct  3 10:23:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
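The monitor's cache autotuner is splitting its memory budget between incremental OSDMaps (inc_alloc), full maps (full_alloc) and the key/value cache (kv_alloc); the three allocations above sum to 348127232 + 348127232 + 318767104 = 1015021568 bytes, just under the reported cache_size of 1020054731, which is about 95% of 1 GiB.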
Oct  3 10:23:41 compute-0 nova_compute[351685]: 2025-10-03 10:23:41.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:42 compute-0 nova_compute[351685]: 2025-10-03 10:23:42.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
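The recurring ovsdbapp vlog lines are the OVS IDL event loop waking because its OVSDB connection (fd 25) became readable. The same poll/wakeup pattern in stdlib form, with a pipe standing in for the OVSDB socket:

import os
import select

r, w = os.pipe()                 # stand-in for the OVSDB socket
poller = select.poll()
poller.register(r, select.POLLIN)
os.write(w, b"update")           # peer writes; the fd becomes readable
for fd, events in poller.poll(0):
    if events & select.POLLIN:
        print(f"[POLLIN] on fd {fd}")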
Oct  3 10:23:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:43 compute-0 podman[446751]: 2025-10-03 10:23:43.898842976 +0000 UTC m=+0.149062880 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 10:23:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:23:46
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
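
The balancer pass above evaluated all eleven pools in upmap mode and prepared 0 of an allowed 10 changes, i.e. placement was already even. A minimal sketch of polling that same state from a script, assuming a configured ceph CLI and keyring on the host:

    import json
    import subprocess

    # 'ceph balancer status' is a standard mgr command; '-f json' asks the
    # CLI for machine-readable output.
    out = subprocess.check_output(["ceph", "balancer", "status", "-f", "json"])
    status = json.loads(out)
    print(status.get("active"), status.get("mode"))  # expect: True upmap
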
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:23:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
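
The doubled load_schedules lines are two separate rbd_support handlers (MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler), each reloading per-pool schedules. A sketch of the matching CLI queries, assuming a configured rbd client; empty output means no schedules are defined:

    import subprocess

    # Both subcommands exist in the rbd CLI; '-p' selects the pool.
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "-p", pool])
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "-p", pool])
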
Oct  3 10:23:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:46 compute-0 nova_compute[351685]: 2025-10-03 10:23:46.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:47 compute-0 nova_compute[351685]: 2025-10-03 10:23:47.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:49 compute-0 podman[446773]: 2025-10-03 10:23:49.830176785 +0000 UTC m=+0.075779396 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:23:49 compute-0 podman[446774]: 2025-10-03 10:23:49.840133735 +0000 UTC m=+0.082748240 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, version=9.4, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc.)
Oct  3 10:23:50 compute-0 nova_compute[351685]: 2025-10-03 10:23:50.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:23:50 compute-0 nova_compute[351685]: 2025-10-03 10:23:50.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:23:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:51 compute-0 nova_compute[351685]: 2025-10-03 10:23:51.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:23:51 compute-0 nova_compute[351685]: 2025-10-03 10:23:51.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:23:51 compute-0 nova_compute[351685]: 2025-10-03 10:23:51.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:23:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:51 compute-0 nova_compute[351685]: 2025-10-03 10:23:51.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:52 compute-0 nova_compute[351685]: 2025-10-03 10:23:52.117 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:23:52 compute-0 nova_compute[351685]: 2025-10-03 10:23:52.118 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:23:52 compute-0 nova_compute[351685]: 2025-10-03 10:23:52.119 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:23:52 compute-0 nova_compute[351685]: 2025-10-03 10:23:52.119 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:23:52 compute-0 nova_compute[351685]: 2025-10-03 10:23:52.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:53 compute-0 nova_compute[351685]: 2025-10-03 10:23:53.062 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:23:53 compute-0 nova_compute[351685]: 2025-10-03 10:23:53.076 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:23:53 compute-0 nova_compute[351685]: 2025-10-03 10:23:53.076 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
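
The heal pass above is a plain lock/refresh/release cycle around one instance's cached network info. A minimal sketch of that locking pattern with oslo.concurrency, reusing the lock name from the log; refresh_network_info_cache is a hypothetical stand-in, not a nova API:

    from oslo_concurrency import lockutils

    uuid = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"

    # lockutils.lock() is the context manager behind the Acquiring/Acquired/
    # Releasing lines logged above.
    with lockutils.lock(f"refresh_cache-{uuid}"):
        refresh_network_info_cache(uuid)  # hypothetical helper
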
Oct  3 10:23:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:53 compute-0 nova_compute[351685]: 2025-10-03 10:23:53.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:23:54 compute-0 nova_compute[351685]: 2025-10-03 10:23:54.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
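
Each pg_autoscaler line is usage_ratio x bias x an overall PG budget, then quantized to a power of two. The logged numbers imply a budget of 300 (e.g. 0.000551649390343166 x 300 = 0.1654948171029498 for 'vms'), consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster; that factorization is inferred from the data, not stated in the log. A sketch of the arithmetic:

    import math

    PG_BUDGET = 100 * 3  # assumed: mon_target_pg_per_osd=100, 3 OSDs

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * PG_BUDGET

    def quantize(target):
        # Nearest power of two, floored at 1; the real module also honours
        # pg_num_min and only suggests a change past a threshold factor,
        # which is why most pools above stay at their current 32.
        return max(1, 2 ** round(math.log2(target))) if target > 0 else 1

    # Reproduces the 'vms' line exactly.
    assert abs(pg_target(0.000551649390343166, 1.0) - 0.1654948171029498) < 1e-12
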
Oct  3 10:23:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:55 compute-0 nova_compute[351685]: 2025-10-03 10:23:55.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:23:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:23:56 compute-0 nova_compute[351685]: 2025-10-03 10:23:56.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:57 compute-0 nova_compute[351685]: 2025-10-03 10:23:57.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:23:57 compute-0 nova_compute[351685]: 2025-10-03 10:23:57.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.268820) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487038268857, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 946, "num_deletes": 251, "total_data_size": 1348501, "memory_usage": 1373856, "flush_reason": "Manual Compaction"}
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487038529124, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1335982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33935, "largest_seqno": 34880, "table_properties": {"data_size": 1331213, "index_size": 2357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10102, "raw_average_key_size": 19, "raw_value_size": 1321788, "raw_average_value_size": 2561, "num_data_blocks": 106, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759486947, "oldest_key_time": 1759486947, "file_creation_time": 1759487038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 260578 microseconds, and 4912 cpu microseconds.
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.529387) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1335982 bytes OK
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.529420) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.533902) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.533942) EVENT_LOG_v1 {"time_micros": 1759487038533933, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.533967) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1343980, prev total WAL file size 1343980, number of live WAL files 2.
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.534948) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1304KB)], [77(7615KB)]
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487038535046, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 9134043, "oldest_snapshot_seqno": -1}
Oct  3 10:23:58 compute-0 nova_compute[351685]: 2025-10-03 10:23:58.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:23:58 compute-0 nova_compute[351685]: 2025-10-03 10:23:58.862 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:23:58 compute-0 nova_compute[351685]: 2025-10-03 10:23:58.863 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:23:58 compute-0 nova_compute[351685]: 2025-10-03 10:23:58.863 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:23:58 compute-0 nova_compute[351685]: 2025-10-03 10:23:58.864 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:23:58 compute-0 nova_compute[351685]: 2025-10-03 10:23:58.864 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5325 keys, 7392866 bytes, temperature: kUnknown
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487038954500, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 7392866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7358942, "index_size": 19478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 135740, "raw_average_key_size": 25, "raw_value_size": 7264215, "raw_average_value_size": 1364, "num_data_blocks": 796, "num_entries": 5325, "num_filter_entries": 5325, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.954784) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 7392866 bytes
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.997192) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 21.8 rd, 17.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.4 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(12.4) write-amplify(5.5) OK, records in: 5839, records dropped: 514 output_compression: NoCompression
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.997228) EVENT_LOG_v1 {"time_micros": 1759487038997214, "job": 44, "event": "compaction_finished", "compaction_time_micros": 419517, "compaction_time_cpu_micros": 26158, "output_level": 6, "num_output_files": 1, "total_output_size": 7392866, "num_input_records": 5839, "num_output_records": 5325, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:23:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487038997778, "job": 44, "event": "table_file_deletion", "file_number": 79}
Oct  3 10:23:59 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:23:59 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487039000890, "job": 44, "event": "table_file_deletion", "file_number": 77}
Oct  3 10:23:59 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:58.534625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:23:59 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:59.001057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:23:59 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:59.001065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:23:59 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:59.001068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:23:59 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:59.001071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:23:59 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:23:59.001074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
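
The rocksdb EVENT_LOG_v1 records above carry one JSON object per line, which makes the flush/compaction history easy to mine from the journal. A sketch, with "messages" standing in for wherever this log is stored:

    import json
    import re

    pat = re.compile(r"EVENT_LOG_v1 (\{.*\})")
    with open("messages") as logfile:  # path is an assumption
        for line in logfile:
            m = pat.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") in ("flush_finished", "compaction_finished"):
                print(ev["time_micros"], ev["event"], ev.get("lsm_state"))
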
Oct  3 10:23:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:23:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/134468391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.328 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
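
The resource audit sizes its RBD-backed storage by shelling out to the exact command logged above (dispatched by the mon as the audit-channel 'df' lines show). A standalone sketch of the same call; the client id and conf path are taken from the log line:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    # total_bytes / total_avail_bytes are standard keys in `ceph df` JSON.
    print(stats["total_avail_bytes"] / 2**30, "GiB available")
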
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.409 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.410 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.411 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:23:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:23:59 compute-0 podman[157165]: time="2025-10-03T10:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:23:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:23:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9065 "" "Go-http-client/1.1"
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.773 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.774 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3884MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:23:59 compute-0 podman[446836]: 2025-10-03 10:23:59.813054137 +0000 UTC m=+0.068374158 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:23:59 compute-0 podman[446835]: 2025-10-03 10:23:59.824524306 +0000 UTC m=+0.088090442 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_id=edpm, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.837 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.837 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.838 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:23:59 compute-0 podman[446837]: 2025-10-03 10:23:59.854086605 +0000 UTC m=+0.111998749 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  3 10:23:59 compute-0 nova_compute[351685]: 2025-10-03 10:23:59.866 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:24:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:24:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051020678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:24:00 compute-0 nova_compute[351685]: 2025-10-03 10:24:00.357 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:24:00 compute-0 nova_compute[351685]: 2025-10-03 10:24:00.365 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:24:00 compute-0 nova_compute[351685]: 2025-10-03 10:24:00.381 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:24:00 compute-0 nova_compute[351685]: 2025-10-03 10:24:00.382 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:24:00 compute-0 nova_compute[351685]: 2025-10-03 10:24:00.382 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
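
The inventory dict logged a few lines up fixes the capacity placement works with: (total - reserved) x allocation_ratio per resource class. Reproducing that arithmetic from the logged values:

    inventory = {  # values copied from the set_inventory_for_provider line
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
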
Oct  3 10:24:01 compute-0 openstack_network_exporter[367524]: ERROR   10:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:24:01 compute-0 openstack_network_exporter[367524]: ERROR   10:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:24:01 compute-0 openstack_network_exporter[367524]: ERROR   10:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:24:01 compute-0 openstack_network_exporter[367524]: ERROR   10:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:24:01 compute-0 openstack_network_exporter[367524]: ERROR   10:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
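
appctl reaches daemons through per-PID control sockets, and the exporter errors above just mean those sockets are absent on this node (no ovn-northd runs here, and a kernel datapath has no dpif-netdev stats). A sketch of the same precondition check; the rundir patterns are the conventional defaults, not taken from the log:

    import glob

    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket (appctl would fail)")
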
Oct  3 10:24:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:01 compute-0 nova_compute[351685]: 2025-10-03 10:24:01.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:02 compute-0 nova_compute[351685]: 2025-10-03 10:24:02.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:03 compute-0 nova_compute[351685]: 2025-10-03 10:24:03.383 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:04 compute-0 nova_compute[351685]: 2025-10-03 10:24:04.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:05 compute-0 podman[446920]: 2025-10-03 10:24:05.864543256 +0000 UTC m=+0.122628631 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:24:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:06 compute-0 nova_compute[351685]: 2025-10-03 10:24:06.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:07 compute-0 nova_compute[351685]: 2025-10-03 10:24:07.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:08 compute-0 podman[446939]: 2025-10-03 10:24:08.809440403 +0000 UTC m=+0.066284081 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:24:08 compute-0 podman[446940]: 2025-10-03 10:24:08.814669111 +0000 UTC m=+0.073543534 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3)
Oct  3 10:24:08 compute-0 podman[446941]: 2025-10-03 10:24:08.819698653 +0000 UTC m=+0.073036378 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:24:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:11 compute-0 nova_compute[351685]: 2025-10-03 10:24:11.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:12 compute-0 nova_compute[351685]: 2025-10-03 10:24:12.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:14 compute-0 podman[447001]: 2025-10-03 10:24:14.782621628 +0000 UTC m=+0.065246358 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Oct  3 10:24:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:24:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:16 compute-0 nova_compute[351685]: 2025-10-03 10:24:16.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:17 compute-0 nova_compute[351685]: 2025-10-03 10:24:17.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:20 compute-0 podman[447022]: 2025-10-03 10:24:20.835935457 +0000 UTC m=+0.083555555 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, distribution-scope=public, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, config_id=edpm, name=ubi9)
Oct  3 10:24:20 compute-0 podman[447021]: 2025-10-03 10:24:20.858662768 +0000 UTC m=+0.099586381 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:24:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:21 compute-0 nova_compute[351685]: 2025-10-03 10:24:21.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:22 compute-0 nova_compute[351685]: 2025-10-03 10:24:22.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:24:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:24:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:24:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:24:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:24:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:24:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f7ba41f8-3b6b-47a1-8a18-7407e842b6ff does not exist
Oct  3 10:24:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f503d314-601c-4071-b85d-98c47e7392d1 does not exist
Oct  3 10:24:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ec500242-9b26-4e5b-bdc1-4871e53dda93 does not exist
Oct  3 10:24:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:24:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:24:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:24:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:24:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:24:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:24:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:24:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:24:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:24:25 compute-0 podman[447331]: 2025-10-03 10:24:25.474295916 +0000 UTC m=+0.059487462 container create cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tesla, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:24:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:25 compute-0 systemd[1]: Started libpod-conmon-cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e.scope.
Oct  3 10:24:25 compute-0 podman[447331]: 2025-10-03 10:24:25.450604615 +0000 UTC m=+0.035796191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:24:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:24:25 compute-0 podman[447331]: 2025-10-03 10:24:25.595008794 +0000 UTC m=+0.180200390 container init cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tesla, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:24:25 compute-0 podman[447331]: 2025-10-03 10:24:25.607458274 +0000 UTC m=+0.192649820 container start cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:24:25 compute-0 podman[447331]: 2025-10-03 10:24:25.612366772 +0000 UTC m=+0.197558348 container attach cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:24:25 compute-0 magical_tesla[447347]: 167 167
Oct  3 10:24:25 compute-0 systemd[1]: libpod-cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e.scope: Deactivated successfully.
Oct  3 10:24:25 compute-0 conmon[447347]: conmon cdb8840c89754b5c56f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e.scope/container/memory.events
Oct  3 10:24:25 compute-0 podman[447331]: 2025-10-03 10:24:25.61943926 +0000 UTC m=+0.204630806 container died cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:24:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-39d8e131d576140a8a29217b02e3df842dd879b7f4738e3951989019263ac648-merged.mount: Deactivated successfully.
Oct  3 10:24:25 compute-0 podman[447331]: 2025-10-03 10:24:25.687583418 +0000 UTC m=+0.272774964 container remove cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tesla, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:24:25 compute-0 systemd[1]: libpod-conmon-cdb8840c89754b5c56f50439033eaaf1bf6178926da6f358278d782eb91e8b5e.scope: Deactivated successfully.
Oct  3 10:24:25 compute-0 podman[447370]: 2025-10-03 10:24:25.899852049 +0000 UTC m=+0.070438694 container create af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 10:24:25 compute-0 podman[447370]: 2025-10-03 10:24:25.878667348 +0000 UTC m=+0.049253993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:24:25 compute-0 systemd[1]: Started libpod-conmon-af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69.scope.
Oct  3 10:24:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b94117c95bbd4dc6c636dd9168f3d949c29b44cd02942f27c593295447db8b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b94117c95bbd4dc6c636dd9168f3d949c29b44cd02942f27c593295447db8b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b94117c95bbd4dc6c636dd9168f3d949c29b44cd02942f27c593295447db8b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b94117c95bbd4dc6c636dd9168f3d949c29b44cd02942f27c593295447db8b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b94117c95bbd4dc6c636dd9168f3d949c29b44cd02942f27c593295447db8b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:26 compute-0 podman[447370]: 2025-10-03 10:24:26.073815877 +0000 UTC m=+0.244402582 container init af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:24:26 compute-0 podman[447370]: 2025-10-03 10:24:26.092773358 +0000 UTC m=+0.263359973 container start af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:24:26 compute-0 podman[447370]: 2025-10-03 10:24:26.100145374 +0000 UTC m=+0.270731989 container attach af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:24:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:26 compute-0 nova_compute[351685]: 2025-10-03 10:24:26.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:27 compute-0 pedantic_wiles[447386]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:24:27 compute-0 pedantic_wiles[447386]: --> relative data size: 1.0
Oct  3 10:24:27 compute-0 pedantic_wiles[447386]: --> All data devices are unavailable
Oct  3 10:24:27 compute-0 systemd[1]: libpod-af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69.scope: Deactivated successfully.
Oct  3 10:24:27 compute-0 systemd[1]: libpod-af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69.scope: Consumed 1.113s CPU time.
Oct  3 10:24:27 compute-0 podman[447370]: 2025-10-03 10:24:27.26128995 +0000 UTC m=+1.431876575 container died af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 10:24:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b94117c95bbd4dc6c636dd9168f3d949c29b44cd02942f27c593295447db8b4-merged.mount: Deactivated successfully.
Oct  3 10:24:27 compute-0 podman[447370]: 2025-10-03 10:24:27.326854607 +0000 UTC m=+1.497441232 container remove af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 10:24:27 compute-0 systemd[1]: libpod-conmon-af11eb07b5b6a00756748ee7e75a5ea331174f165d598b0a2e5729c36f2f9b69.scope: Deactivated successfully.
Oct  3 10:24:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:27 compute-0 nova_compute[351685]: 2025-10-03 10:24:27.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:28 compute-0 podman[447562]: 2025-10-03 10:24:28.221367067 +0000 UTC m=+0.061819817 container create 58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:24:28 compute-0 systemd[1]: Started libpod-conmon-58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196.scope.
Oct  3 10:24:28 compute-0 podman[447562]: 2025-10-03 10:24:28.202071937 +0000 UTC m=+0.042524707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:24:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:24:28 compute-0 podman[447562]: 2025-10-03 10:24:28.335075291 +0000 UTC m=+0.175528081 container init 58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:24:28 compute-0 podman[447562]: 2025-10-03 10:24:28.350173905 +0000 UTC m=+0.190626635 container start 58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:24:28 compute-0 podman[447562]: 2025-10-03 10:24:28.354676801 +0000 UTC m=+0.195129551 container attach 58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:24:28 compute-0 nice_hodgkin[447578]: 167 167
Oct  3 10:24:28 compute-0 systemd[1]: libpod-58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196.scope: Deactivated successfully.
Oct  3 10:24:28 compute-0 podman[447562]: 2025-10-03 10:24:28.362663067 +0000 UTC m=+0.203115827 container died 58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-68f26b48b6343246944aae4eaadba09d7af339361429a02cad1f392476adb2e4-merged.mount: Deactivated successfully.
Oct  3 10:24:28 compute-0 podman[447562]: 2025-10-03 10:24:28.440131836 +0000 UTC m=+0.280584586 container remove 58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hodgkin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:24:28 compute-0 systemd[1]: libpod-conmon-58634e088f3d43a43d3d77fe7ef148d9aed7fb0cb46c9478b2c4baaa4c8e3196.scope: Deactivated successfully.
Oct  3 10:24:28 compute-0 podman[447600]: 2025-10-03 10:24:28.652408806 +0000 UTC m=+0.060315829 container create e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:24:28 compute-0 podman[447600]: 2025-10-03 10:24:28.628484818 +0000 UTC m=+0.036391881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:24:28 compute-0 systemd[1]: Started libpod-conmon-e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690.scope.
Oct  3 10:24:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d4d8f0f04a83b893bb1065216277bbbfd7472aae5eaa3dd78cc185fe4d800/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d4d8f0f04a83b893bb1065216277bbbfd7472aae5eaa3dd78cc185fe4d800/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d4d8f0f04a83b893bb1065216277bbbfd7472aae5eaa3dd78cc185fe4d800/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248d4d8f0f04a83b893bb1065216277bbbfd7472aae5eaa3dd78cc185fe4d800/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:28 compute-0 podman[447600]: 2025-10-03 10:24:28.830214799 +0000 UTC m=+0.238121852 container init e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:24:28 compute-0 podman[447600]: 2025-10-03 10:24:28.838429393 +0000 UTC m=+0.246336376 container start e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:24:28 compute-0 podman[447600]: 2025-10-03 10:24:28.842994379 +0000 UTC m=+0.250901452 container attach e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  3 10:24:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]: {
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:    "0": [
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:        {
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "devices": [
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "/dev/loop3"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            ],
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_name": "ceph_lv0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_size": "21470642176",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "name": "ceph_lv0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "tags": {
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cluster_name": "ceph",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.crush_device_class": "",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.encrypted": "0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osd_id": "0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.type": "block",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.vdo": "0"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            },
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "type": "block",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "vg_name": "ceph_vg0"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:        }
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:    ],
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:    "1": [
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:        {
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "devices": [
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "/dev/loop4"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            ],
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_name": "ceph_lv1",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_size": "21470642176",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "name": "ceph_lv1",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "tags": {
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cluster_name": "ceph",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.crush_device_class": "",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.encrypted": "0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osd_id": "1",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.type": "block",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.vdo": "0"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            },
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "type": "block",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "vg_name": "ceph_vg1"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:        }
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:    ],
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:    "2": [
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:        {
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "devices": [
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "/dev/loop5"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            ],
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_name": "ceph_lv2",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_size": "21470642176",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "name": "ceph_lv2",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "tags": {
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.cluster_name": "ceph",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.crush_device_class": "",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.encrypted": "0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osd_id": "2",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.type": "block",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:                "ceph.vdo": "0"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            },
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "type": "block",
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:            "vg_name": "ceph_vg2"
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:        }
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]:    ]
Oct  3 10:24:29 compute-0 nifty_bhabha[447616]: }
Oct  3 10:24:29 compute-0 systemd[1]: libpod-e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690.scope: Deactivated successfully.
Oct  3 10:24:29 compute-0 podman[447600]: 2025-10-03 10:24:29.74341052 +0000 UTC m=+1.151317513 container died e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 10:24:29 compute-0 podman[157165]: time="2025-10-03T10:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-248d4d8f0f04a83b893bb1065216277bbbfd7472aae5eaa3dd78cc185fe4d800-merged.mount: Deactivated successfully.
Oct  3 10:24:29 compute-0 podman[447600]: 2025-10-03 10:24:29.828587066 +0000 UTC m=+1.236494049 container remove e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:24:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47825 "" "Go-http-client/1.1"
Oct  3 10:24:29 compute-0 systemd[1]: libpod-conmon-e37a9a0663efb98e9aa20731afab57ea6ec82e00659d26b2307af76d23a79690.scope: Deactivated successfully.
Oct  3 10:24:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9058 "" "Go-http-client/1.1"
Oct  3 10:24:29 compute-0 podman[447639]: 2025-10-03 10:24:29.980027762 +0000 UTC m=+0.088169414 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Oct  3 10:24:29 compute-0 podman[447638]: 2025-10-03 10:24:29.986439668 +0000 UTC m=+0.094023393 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350)
Oct  3 10:24:30 compute-0 podman[447640]: 2025-10-03 10:24:30.073754353 +0000 UTC m=+0.179669164 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
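
The three health_status events above come from podman's per-container healthcheck timers: each run records health_status and health_failing_streak for the named container, plus the container's full config_data labels. A minimal sketch, assuming podman is on PATH and the container names from these events, to read the same state on demand ("State.Health" is the current inspect key; older podman builds used "State.Healthcheck", so both are tried):

    # Query a container's health state as reported in the events above.
    import json
    import subprocess

    def podman_health(name: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "none")

    for name in ("ceilometer_agent_compute", "openstack_network_exporter", "ovn_controller"):
        print(name, podman_health(name))
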
Oct  3 10:24:30 compute-0 podman[447838]: 2025-10-03 10:24:30.698937489 +0000 UTC m=+0.088407511 container create da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:24:30 compute-0 podman[447838]: 2025-10-03 10:24:30.667002654 +0000 UTC m=+0.056472726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:24:30 compute-0 systemd[1]: Started libpod-conmon-da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767.scope.
Oct  3 10:24:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:24:30 compute-0 podman[447838]: 2025-10-03 10:24:30.843483484 +0000 UTC m=+0.232953546 container init da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:24:30 compute-0 podman[447838]: 2025-10-03 10:24:30.863448016 +0000 UTC m=+0.252918088 container start da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:24:30 compute-0 podman[447838]: 2025-10-03 10:24:30.87105249 +0000 UTC m=+0.260522552 container attach da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:24:30 compute-0 wonderful_leavitt[447853]: 167 167
Oct  3 10:24:30 compute-0 systemd[1]: libpod-da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767.scope: Deactivated successfully.
Oct  3 10:24:30 compute-0 podman[447838]: 2025-10-03 10:24:30.877190537 +0000 UTC m=+0.266660559 container died da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:24:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b4ff317f0d2856bf01b5af819d33bb69b2481d153c4f240e6bf1d618d569e24-merged.mount: Deactivated successfully.
Oct  3 10:24:30 compute-0 podman[447838]: 2025-10-03 10:24:30.962367373 +0000 UTC m=+0.351837395 container remove da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:24:30 compute-0 systemd[1]: libpod-conmon-da6772fcc6295befecacefff37b416198fb82e953a702cb4090aeab7c1be5767.scope: Deactivated successfully.
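
The create → init → start → attach → died → remove sequence above, with an auto-generated name (wonderful_leavitt) and conmon scopes that deactivate within a second, is the footprint of a short-lived `podman run --rm` invocation; given the ceph image and the cephadm config-key writes later in this log, this is presumably cephadm probing the host. A sketch, assuming podman on PATH and its JSON event key names, that streams the same lifecycle events:

    # Follow container lifecycle events as podman emits them; the Status
    # values (create/init/start/attach/died/remove) match the journal
    # lines above.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))
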
Oct  3 10:24:31 compute-0 podman[447875]: 2025-10-03 10:24:31.196318881 +0000 UTC m=+0.071751997 container create facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:24:31 compute-0 systemd[1]: Started libpod-conmon-facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec.scope.
Oct  3 10:24:31 compute-0 podman[447875]: 2025-10-03 10:24:31.168413374 +0000 UTC m=+0.043846510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:24:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8474a099a075827b264e2b4bb0e5b9ddea823d5a1e7646d8bae8477a9081092b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8474a099a075827b264e2b4bb0e5b9ddea823d5a1e7646d8bae8477a9081092b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8474a099a075827b264e2b4bb0e5b9ddea823d5a1e7646d8bae8477a9081092b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:24:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8474a099a075827b264e2b4bb0e5b9ddea823d5a1e7646d8bae8477a9081092b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
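
The kernel notes that these overlay bind mounts sit on an xfs filesystem whose inode timestamps are 32-bit, i.e. the y2038 limit; 0x7fffffff is the maximum signed 32-bit time_t. A one-liner confirming the cutoff date:

    # 0x7fffffff seconds after the epoch is the limit the kernel warns about.
    import datetime
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00
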
Oct  3 10:24:31 compute-0 podman[447875]: 2025-10-03 10:24:31.342651502 +0000 UTC m=+0.218084638 container init facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:24:31 compute-0 podman[447875]: 2025-10-03 10:24:31.360800115 +0000 UTC m=+0.236233251 container start facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 10:24:31 compute-0 podman[447875]: 2025-10-03 10:24:31.365860448 +0000 UTC m=+0.241293624 container attach facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:24:31 compute-0 openstack_network_exporter[367524]: ERROR   10:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:24:31 compute-0 openstack_network_exporter[367524]: ERROR   10:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:24:31 compute-0 openstack_network_exporter[367524]: ERROR   10:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:24:31 compute-0 openstack_network_exporter[367524]: ERROR   10:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:24:31 compute-0 openstack_network_exporter[367524]: ERROR   10:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
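
These exporter errors are expected on a compute node: the exporter resolves each daemon by looking for its OVS/OVN control socket, and ovn-northd only runs on controller nodes. A sketch of that discovery logic, with the runtime directories and glob pattern as assumptions based on OVS/OVN defaults:

    # Look for <daemon>.<pid>.ctl control sockets in the usual runtime
    # directories; an empty result reproduces the "no control socket
    # files found" condition logged above.
    import glob

    def find_ctl(daemon, dirs=("/run/openvswitch", "/run/ovn")):
        for d in dirs:
            hits = glob.glob(f"{d}/{daemon}.*.ctl")
            if hits:
                return hits[0]
        return None

    for daemon in ("ovn-northd", "ovsdb-server", "ovs-vswitchd"):
        print(daemon, "->", find_ctl(daemon) or "no control socket found")
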
Oct  3 10:24:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
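
The recurring pgmap lines condense cluster state into one summary: PG count and states, logical data stored, raw space used, and raw space available. A small parser matched to exactly that format:

    # Parse a ceph-mgr pgmap summary line like the one above.
    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: "
            "321 active+clean; 78 MiB data, 260 MiB used, "
            "60 GiB / 60 GiB avail")
    print(PGMAP.search(line).groupdict())
    # {'ver': '1716', 'pgs': '321', 'data': '78 MiB', 'used': '260 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}
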
Oct  3 10:24:31 compute-0 nova_compute[351685]: 2025-10-03 10:24:31.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]: {
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "osd_id": 1,
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "type": "bluestore"
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:    },
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "osd_id": 2,
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "type": "bluestore"
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:    },
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "osd_id": 0,
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:        "type": "bluestore"
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]:    }
Oct  3 10:24:32 compute-0 xenodochial_carver[447892]: }
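
The JSON the transient container printed is a map of OSD UUID to on-disk metadata; the shape matches what ceph-volume emits when cephadm inventories a host's OSD devices (an assumption, since the exact command is not in the log). Consuming it is straightforward:

    # Walk the per-OSD JSON payload printed above (abbreviated to one
    # entry; the field names are the log's own).
    import json

    payload = '''
    {
       "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
           "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
           "device": "/dev/mapper/ceph_vg0-ceph_lv0",
           "osd_id": 0,
           "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
           "type": "bluestore"
       }
    }
    '''
    for uuid, osd in json.loads(payload).items():
        print(f"osd.{osd['osd_id']}: {osd['type']} on {osd['device']} "
              f"(fsid {osd['ceph_fsid']})")
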
Oct  3 10:24:32 compute-0 systemd[1]: libpod-facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec.scope: Deactivated successfully.
Oct  3 10:24:32 compute-0 podman[447875]: 2025-10-03 10:24:32.480034235 +0000 UTC m=+1.355467381 container died facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 10:24:32 compute-0 systemd[1]: libpod-facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec.scope: Consumed 1.120s CPU time.
Oct  3 10:24:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-8474a099a075827b264e2b4bb0e5b9ddea823d5a1e7646d8bae8477a9081092b-merged.mount: Deactivated successfully.
Oct  3 10:24:32 compute-0 podman[447875]: 2025-10-03 10:24:32.550458698 +0000 UTC m=+1.425891824 container remove facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_carver, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:24:32 compute-0 systemd[1]: libpod-conmon-facd4b000c0bbc660039f58889f513d4db0c6f1efc8af733f6cbf3f9a8a5f2ec.scope: Deactivated successfully.
Oct  3 10:24:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:24:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:24:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:24:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:24:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8c309c1d-daad-4c12-a16a-c1b79db8e6e8 does not exist
Oct  3 10:24:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b06d16d6-19d6-4d14-87af-6bb8bd46df2b does not exist
Oct  3 10:24:32 compute-0 nova_compute[351685]: 2025-10-03 10:24:32.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:24:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:24:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  3 10:24:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Oct  3 10:24:36 compute-0 podman[447987]: 2025-10-03 10:24:36.903566942 +0000 UTC m=+0.148215143 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  3 10:24:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:36 compute-0 nova_compute[351685]: 2025-10-03 10:24:36.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Oct  3 10:24:37 compute-0 nova_compute[351685]: 2025-10-03 10:24:37.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct  3 10:24:39 compute-0 podman[448004]: 2025-10-03 10:24:39.85347497 +0000 UTC m=+0.105656315 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:24:39 compute-0 podman[448005]: 2025-10-03 10:24:39.867871333 +0000 UTC m=+0.105706117 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 10:24:39 compute-0 podman[448006]: 2025-10-03 10:24:39.896782712 +0000 UTC m=+0.139014137 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 10:24:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 10:24:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:24:41.614 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:24:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:24:41.616 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:24:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:24:41.617 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
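
The Acquiring/acquired/released triplet above is oslo.concurrency's standard lock logging around neutron's ProcessMonitor._check_child_processes. The same pattern in minimal form, using the real oslo_concurrency API with the lock name taken from the log:

    # The decorator's wrapper emits exactly the DEBUG lines seen above:
    # 'Acquiring lock ...', 'Lock ... acquired ... waited', and
    # 'Lock ... "released" ... held'.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # neutron checks its spawned haproxy/dnsmasq children here

    check_child_processes()
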
Oct  3 10:24:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:41 compute-0 nova_compute[351685]: 2025-10-03 10:24:41.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:42 compute-0 nova_compute[351685]: 2025-10-03 10:24:42.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 10:24:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct  3 10:24:45 compute-0 podman[448065]: 2025-10-03 10:24:45.854090174 +0000 UTC m=+0.118223500 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:24:46
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control']
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:24:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:24:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:46 compute-0 nova_compute[351685]: 2025-10-03 10:24:46.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Oct  3 10:24:47 compute-0 nova_compute[351685]: 2025-10-03 10:24:47.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Oct  3 10:24:50 compute-0 nova_compute[351685]: 2025-10-03 10:24:50.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:50 compute-0 nova_compute[351685]: 2025-10-03 10:24:50.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:24:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct  3 10:24:51 compute-0 podman[448085]: 2025-10-03 10:24:51.815167499 +0000 UTC m=+0.079183426 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:24:51 compute-0 podman[448086]: 2025-10-03 10:24:51.826950157 +0000 UTC m=+0.076541200 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543)
Oct  3 10:24:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:52 compute-0 nova_compute[351685]: 2025-10-03 10:24:52.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:52 compute-0 nova_compute[351685]: 2025-10-03 10:24:52.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:53 compute-0 nova_compute[351685]: 2025-10-03 10:24:53.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:53 compute-0 nova_compute[351685]: 2025-10-03 10:24:53.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:24:53 compute-0 nova_compute[351685]: 2025-10-03 10:24:53.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:24:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:24:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959592373' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:24:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:24:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959592373' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
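
The audit entries show client.openstack (the Cinder/Nova RBD client at 192.168.122.10) issuing "df" and "osd pool get-quota" as JSON mon commands. A librados sketch issuing the same command, assuming /etc/ceph/ceph.conf and a keyring for client.openstack are available:

    # Send the same JSON mon command the audit log records; mon_command
    # returns (retcode, output bytes, status string).
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b'')
        print(ret, json.loads(out)["stats"]["total_bytes"])
    finally:
        cluster.shutdown()
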
Oct  3 10:24:54 compute-0 nova_compute[351685]: 2025-10-03 10:24:54.035 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:24:54 compute-0 nova_compute[351685]: 2025-10-03 10:24:54.035 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:24:54 compute-0 nova_compute[351685]: 2025-10-03 10:24:54.035 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:24:54 compute-0 nova_compute[351685]: 2025-10-03 10:24:54.036 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
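
Each pg_autoscaler line applies the ratio-based sizing rule: pg_target = capacity_ratio × bias × (target PGs per OSD × OSD count), then quantizes toward a power of two and keeps the current pg_num when the change is too small to act on. Assuming the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs (so the factor is 300), the logged targets reproduce exactly:

    # Worked check of the pg_autoscaler numbers above; 300 = assumed
    # mon_target_pg_per_osd (100) * 3 OSDs.
    def pg_target(ratio, bias, pgs_per_osd=100, osds=3):
        return ratio * bias * pgs_per_osd * osds

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr -> 0.0021557249951...
    print(pg_target(0.000551649390343166, 1.0))   # vms  -> 0.1654948171...
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950...
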
Oct  3 10:24:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:56 compute-0 nova_compute[351685]: 2025-10-03 10:24:56.069 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:24:56 compute-0 nova_compute[351685]: 2025-10-03 10:24:56.090 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:24:56 compute-0 nova_compute[351685]: 2025-10-03 10:24:56.091 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
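
The info-cache refresh above logs the port's full network_info structure. A sketch that walks it and pulls out the MAC, fixed IP and floating IPs (payload abbreviated from the logged JSON):

    # Extract addresses from the network_info nova logged above.
    network_info = [{
        "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
        "address": "fa:16:3e:a9:40:5c",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.158", "type": "fixed",
                "floating_ips": [{"address": "192.168.122.250",
                                  "type": "floating"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->",
                      floats or "no floating ip")
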
Oct  3 10:24:56 compute-0 nova_compute[351685]: 2025-10-03 10:24:56.091 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:56 compute-0 nova_compute[351685]: 2025-10-03 10:24:56.091 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:24:57 compute-0 nova_compute[351685]: 2025-10-03 10:24:57.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:57 compute-0 nova_compute[351685]: 2025-10-03 10:24:57.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:24:57 compute-0 nova_compute[351685]: 2025-10-03 10:24:57.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:57 compute-0 nova_compute[351685]: 2025-10-03 10:24:57.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:57 compute-0 nova_compute[351685]: 2025-10-03 10:24:57.752 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:58 compute-0 nova_compute[351685]: 2025-10-03 10:24:58.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:24:58 compute-0 nova_compute[351685]: 2025-10-03 10:24:58.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:24:58 compute-0 nova_compute[351685]: 2025-10-03 10:24:58.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:24:58 compute-0 nova_compute[351685]: 2025-10-03 10:24:58.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:24:58 compute-0 nova_compute[351685]: 2025-10-03 10:24:58.756 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:24:58 compute-0 nova_compute[351685]: 2025-10-03 10:24:58.757 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:24:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357001039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.233 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
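
The "Running cmd"/"returned: 0 in 0.476s" pair above is oslo.concurrency's processutils wrapper, which nova's resource tracker uses here to poll Ceph capacity. The equivalent call, with the command string copied from the log:

    # processutils.execute runs the command and returns (stdout, stderr),
    # raising ProcessExecutionError on a non-zero exit status.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(out[:120])
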
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.304 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.304 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.304 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:24:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.616 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.617 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3891MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.618 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.618 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.690 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.691 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.691 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:24:59 compute-0 nova_compute[351685]: 2025-10-03 10:24:59.726 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:24:59 compute-0 podman[157165]: time="2025-10-03T10:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:24:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:24:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9064 "" "Go-http-client/1.1"
Oct  3 10:25:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:25:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614940669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:25:00 compute-0 nova_compute[351685]: 2025-10-03 10:25:00.231 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:25:00 compute-0 nova_compute[351685]: 2025-10-03 10:25:00.239 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:25:00 compute-0 nova_compute[351685]: 2025-10-03 10:25:00.254 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:25:00 compute-0 nova_compute[351685]: 2025-10-03 10:25:00.255 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:25:00 compute-0 nova_compute[351685]: 2025-10-03 10:25:00.256 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:25:00 compute-0 podman[448173]: 2025-10-03 10:25:00.823422274 +0000 UTC m=+0.074635978 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 10:25:00 compute-0 podman[448172]: 2025-10-03 10:25:00.846823727 +0000 UTC m=+0.103774176 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 10:25:00 compute-0 podman[448174]: 2025-10-03 10:25:00.893714693 +0000 UTC m=+0.134929666 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  3 10:25:01 compute-0 openstack_network_exporter[367524]: ERROR   10:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:25:01 compute-0 openstack_network_exporter[367524]: ERROR   10:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:25:01 compute-0 openstack_network_exporter[367524]: ERROR   10:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:25:01 compute-0 openstack_network_exporter[367524]: ERROR   10:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:25:01 compute-0 openstack_network_exporter[367524]: ERROR   10:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:25:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:02 compute-0 nova_compute[351685]: 2025-10-03 10:25:02.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:02 compute-0 nova_compute[351685]: 2025-10-03 10:25:02.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:05 compute-0 nova_compute[351685]: 2025-10-03 10:25:05.257 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:05 compute-0 nova_compute[351685]: 2025-10-03 10:25:05.259 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:07 compute-0 nova_compute[351685]: 2025-10-03 10:25:07.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:07 compute-0 nova_compute[351685]: 2025-10-03 10:25:07.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:07 compute-0 podman[448233]: 2025-10-03 10:25:07.880663497 +0000 UTC m=+0.128065656 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 10:25:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:10 compute-0 podman[448249]: 2025-10-03 10:25:10.828343872 +0000 UTC m=+0.074687280 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:25:10 compute-0 podman[448251]: 2025-10-03 10:25:10.863884805 +0000 UTC m=+0.104581241 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:25:10 compute-0 podman[448250]: 2025-10-03 10:25:10.876332295 +0000 UTC m=+0.113861570 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:25:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:12 compute-0 nova_compute[351685]: 2025-10-03 10:25:12.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:12 compute-0 nova_compute[351685]: 2025-10-03 10:25:12.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:25:16 compute-0 podman[448310]: 2025-10-03 10:25:16.857853197 +0000 UTC m=+0.110118659 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:25:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:17 compute-0 nova_compute[351685]: 2025-10-03 10:25:17.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:17 compute-0 nova_compute[351685]: 2025-10-03 10:25:17.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:22 compute-0 nova_compute[351685]: 2025-10-03 10:25:22.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:22 compute-0 podman[448330]: 2025-10-03 10:25:22.59198894 +0000 UTC m=+0.098622800 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.4)
Oct  3 10:25:22 compute-0 podman[448329]: 2025-10-03 10:25:22.614476953 +0000 UTC m=+0.113945662 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:25:22 compute-0 nova_compute[351685]: 2025-10-03 10:25:22.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:27 compute-0 nova_compute[351685]: 2025-10-03 10:25:27.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:27 compute-0 nova_compute[351685]: 2025-10-03 10:25:27.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:29 compute-0 podman[157165]: time="2025-10-03T10:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:25:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:25:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9063 "" "Go-http-client/1.1"
Oct  3 10:25:31 compute-0 openstack_network_exporter[367524]: ERROR   10:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:25:31 compute-0 openstack_network_exporter[367524]: ERROR   10:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:25:31 compute-0 openstack_network_exporter[367524]: ERROR   10:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:25:31 compute-0 openstack_network_exporter[367524]: ERROR   10:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:25:31 compute-0 openstack_network_exporter[367524]: ERROR   10:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:25:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:31 compute-0 podman[448371]: 2025-10-03 10:25:31.862747985 +0000 UTC m=+0.108333832 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 10:25:31 compute-0 podman[448372]: 2025-10-03 10:25:31.864672227 +0000 UTC m=+0.103373082 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:25:31 compute-0 podman[448373]: 2025-10-03 10:25:31.880425783 +0000 UTC m=+0.125855405 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct  3 10:25:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:32 compute-0 nova_compute[351685]: 2025-10-03 10:25:32.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:32 compute-0 nova_compute[351685]: 2025-10-03 10:25:32.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:25:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:25:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:25:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:25:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:25:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:25:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:25:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 591026c6-befe-495a-b392-413d89fe5a6c does not exist
Oct  3 10:25:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0f473ed4-7e0f-40b6-8241-bb43e0500117 does not exist
Oct  3 10:25:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8fbd14b7-6568-4ec7-b9c7-7d3bd4f4271f does not exist
Oct  3 10:25:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:25:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:25:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:25:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:25:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:25:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:25:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:25:35 compute-0 podman[448821]: 2025-10-03 10:25:35.500558285 +0000 UTC m=+0.070061192 container create ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:25:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:35 compute-0 systemd[1]: Started libpod-conmon-ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce.scope.
Oct  3 10:25:35 compute-0 podman[448821]: 2025-10-03 10:25:35.474047334 +0000 UTC m=+0.043550251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:25:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:25:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:25:35 compute-0 podman[448821]: 2025-10-03 10:25:35.609898298 +0000 UTC m=+0.179401185 container init ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:25:35 compute-0 podman[448821]: 2025-10-03 10:25:35.620667884 +0000 UTC m=+0.190170761 container start ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:25:35 compute-0 podman[448821]: 2025-10-03 10:25:35.626422819 +0000 UTC m=+0.195925696 container attach ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:25:35 compute-0 eager_wu[448837]: 167 167
Oct  3 10:25:35 compute-0 systemd[1]: libpod-ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce.scope: Deactivated successfully.
Oct  3 10:25:35 compute-0 conmon[448837]: conmon ce79a31ca4d03ce2b6f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce.scope/container/memory.events
Oct  3 10:25:35 compute-0 podman[448821]: 2025-10-03 10:25:35.632433732 +0000 UTC m=+0.201936619 container died ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cf6ceb5a7ac73c880a24e71e12960b73e4a24030895c1b8f47cbc69b286f85a-merged.mount: Deactivated successfully.
Oct  3 10:25:35 compute-0 podman[448821]: 2025-10-03 10:25:35.687678938 +0000 UTC m=+0.257181805 container remove ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:25:35 compute-0 systemd[1]: libpod-conmon-ce79a31ca4d03ce2b6f64831945d60f871b3d38ff7b066b37390a06c57bd54ce.scope: Deactivated successfully.
Oct  3 10:25:35 compute-0 podman[448860]: 2025-10-03 10:25:35.905967641 +0000 UTC m=+0.055496465 container create 9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:25:35 compute-0 systemd[1]: Started libpod-conmon-9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596.scope.
Oct  3 10:25:35 compute-0 podman[448860]: 2025-10-03 10:25:35.885892536 +0000 UTC m=+0.035421370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:25:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95544e15a543f43452489cd71fd3e8faeb7777c17fac647d4427231644a19232/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95544e15a543f43452489cd71fd3e8faeb7777c17fac647d4427231644a19232/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95544e15a543f43452489cd71fd3e8faeb7777c17fac647d4427231644a19232/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95544e15a543f43452489cd71fd3e8faeb7777c17fac647d4427231644a19232/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95544e15a543f43452489cd71fd3e8faeb7777c17fac647d4427231644a19232/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:36 compute-0 podman[448860]: 2025-10-03 10:25:36.038178038 +0000 UTC m=+0.187706862 container init 9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:25:36 compute-0 podman[448860]: 2025-10-03 10:25:36.04694056 +0000 UTC m=+0.196469374 container start 9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:25:36 compute-0 podman[448860]: 2025-10-03 10:25:36.052002592 +0000 UTC m=+0.201531416 container attach 9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:25:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:37 compute-0 nova_compute[351685]: 2025-10-03 10:25:37.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:25:37 compute-0 suspicious_keldysh[448876]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:25:37 compute-0 suspicious_keldysh[448876]: --> relative data size: 1.0
Oct  3 10:25:37 compute-0 suspicious_keldysh[448876]: --> All data devices are unavailable
Oct  3 10:25:37 compute-0 systemd[1]: libpod-9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596.scope: Deactivated successfully.
Oct  3 10:25:37 compute-0 podman[448860]: 2025-10-03 10:25:37.196507086 +0000 UTC m=+1.346035920 container died 9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:25:37 compute-0 systemd[1]: libpod-9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596.scope: Consumed 1.091s CPU time.
Oct  3 10:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-95544e15a543f43452489cd71fd3e8faeb7777c17fac647d4427231644a19232-merged.mount: Deactivated successfully.
Oct  3 10:25:37 compute-0 podman[448860]: 2025-10-03 10:25:37.275167122 +0000 UTC m=+1.424695946 container remove 9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_keldysh, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:25:37 compute-0 systemd[1]: libpod-conmon-9d467fff14fb8d67be71985ec00b587ef317b344ecf2d9927b64d129d6f12596.scope: Deactivated successfully.
Oct  3 10:25:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:37 compute-0 nova_compute[351685]: 2025-10-03 10:25:37.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:25:38 compute-0 podman[449051]: 2025-10-03 10:25:38.210867876 +0000 UTC m=+0.056040222 container create 30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:25:38 compute-0 systemd[1]: Started libpod-conmon-30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433.scope.
Oct  3 10:25:38 compute-0 podman[449051]: 2025-10-03 10:25:38.195609566 +0000 UTC m=+0.040781932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:25:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:25:38 compute-0 podman[449051]: 2025-10-03 10:25:38.313764271 +0000 UTC m=+0.158936637 container init 30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:25:38 compute-0 podman[449051]: 2025-10-03 10:25:38.326646456 +0000 UTC m=+0.171818802 container start 30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:25:38 compute-0 elegant_yalow[449068]: 167 167
Oct  3 10:25:38 compute-0 podman[449051]: 2025-10-03 10:25:38.332535765 +0000 UTC m=+0.177708141 container attach 30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_yalow, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:25:38 compute-0 systemd[1]: libpod-30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433.scope: Deactivated successfully.
Oct  3 10:25:38 compute-0 podman[449051]: 2025-10-03 10:25:38.333550637 +0000 UTC m=+0.178723003 container died 30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:25:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e559d7df1c278a88cd354f38089761ee5f87805d9148c1300df292144136ed90-merged.mount: Deactivated successfully.
Oct  3 10:25:38 compute-0 podman[449065]: 2025-10-03 10:25:38.373291934 +0000 UTC m=+0.105966505 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:25:38 compute-0 podman[449051]: 2025-10-03 10:25:38.378782651 +0000 UTC m=+0.223954997 container remove 30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:25:38 compute-0 systemd[1]: libpod-conmon-30e63c670d9ecfe71da0e6e5e93afa8f0b3c83f1e6189ab1652fbaa8e0e6d433.scope: Deactivated successfully.
Oct  3 10:25:38 compute-0 podman[449107]: 2025-10-03 10:25:38.623539405 +0000 UTC m=+0.088397172 container create 79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:25:38 compute-0 podman[449107]: 2025-10-03 10:25:38.59069737 +0000 UTC m=+0.055555137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:25:38 compute-0 systemd[1]: Started libpod-conmon-79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4.scope.
Oct  3 10:25:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2fd905333751503de99478d4d2a15396d254692e3c5c08f02b60bcff030b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2fd905333751503de99478d4d2a15396d254692e3c5c08f02b60bcff030b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2fd905333751503de99478d4d2a15396d254692e3c5c08f02b60bcff030b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c2fd905333751503de99478d4d2a15396d254692e3c5c08f02b60bcff030b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:38 compute-0 podman[449107]: 2025-10-03 10:25:38.752503518 +0000 UTC m=+0.217361275 container init 79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:25:38 compute-0 podman[449107]: 2025-10-03 10:25:38.775556129 +0000 UTC m=+0.240413906 container start 79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:25:38 compute-0 podman[449107]: 2025-10-03 10:25:38.782596945 +0000 UTC m=+0.247454772 container attach 79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:25:39 compute-0 magical_hopper[449123]: {
Oct  3 10:25:39 compute-0 magical_hopper[449123]:    "0": [
Oct  3 10:25:39 compute-0 magical_hopper[449123]:        {
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "devices": [
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "/dev/loop3"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            ],
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_name": "ceph_lv0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_size": "21470642176",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "name": "ceph_lv0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "tags": {
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cluster_name": "ceph",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.crush_device_class": "",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.encrypted": "0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osd_id": "0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.type": "block",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.vdo": "0"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            },
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "type": "block",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "vg_name": "ceph_vg0"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:        }
Oct  3 10:25:39 compute-0 magical_hopper[449123]:    ],
Oct  3 10:25:39 compute-0 magical_hopper[449123]:    "1": [
Oct  3 10:25:39 compute-0 magical_hopper[449123]:        {
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "devices": [
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "/dev/loop4"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            ],
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_name": "ceph_lv1",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_size": "21470642176",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "name": "ceph_lv1",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "tags": {
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cluster_name": "ceph",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.crush_device_class": "",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.encrypted": "0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osd_id": "1",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.type": "block",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.vdo": "0"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            },
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "type": "block",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "vg_name": "ceph_vg1"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:        }
Oct  3 10:25:39 compute-0 magical_hopper[449123]:    ],
Oct  3 10:25:39 compute-0 magical_hopper[449123]:    "2": [
Oct  3 10:25:39 compute-0 magical_hopper[449123]:        {
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "devices": [
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "/dev/loop5"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            ],
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_name": "ceph_lv2",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_size": "21470642176",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "name": "ceph_lv2",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "tags": {
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.cluster_name": "ceph",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.crush_device_class": "",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.encrypted": "0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osd_id": "2",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.type": "block",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:                "ceph.vdo": "0"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            },
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "type": "block",
Oct  3 10:25:39 compute-0 magical_hopper[449123]:            "vg_name": "ceph_vg2"
Oct  3 10:25:39 compute-0 magical_hopper[449123]:        }
Oct  3 10:25:39 compute-0 magical_hopper[449123]:    ]
Oct  3 10:25:39 compute-0 magical_hopper[449123]: }
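The JSON emitted by the magical_hopper container above appears to be the per-OSD listing that ceph-volume produces: a map keyed by OSD id, each entry carrying the LV name/path, the backing devices, and the ceph.* tags (cluster fsid, osd_fsid, osd_id, and so on). A minimal sketch of turning that listing into an OSD-to-device summary, assuming the JSON body has been extracted from the log and saved to a hypothetical file ceph_volume_list.json:

    # Hypothetical helper, not part of the captured log: summarize a
    # ceph-volume style JSON listing like the one printed above.
    import json

    with open("ceph_volume_list.json") as f:
        osds = json.load(f)  # {"0": [{...}], "1": [{...}], "2": [{...}]}

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv_path={lv['lv_path']} "
                  f"devices={','.join(lv.get('devices', []))} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '')}")

Run against the listing above, this would report osd.0 on /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and osd.2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5), matching the earlier "passed data devices: 0 physical, 3 LVM" report.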
Oct  3 10:25:39 compute-0 systemd[1]: libpod-79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4.scope: Deactivated successfully.
Oct  3 10:25:39 compute-0 podman[449107]: 2025-10-03 10:25:39.536307171 +0000 UTC m=+1.001164958 container died 79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:25:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c2fd905333751503de99478d4d2a15396d254692e3c5c08f02b60bcff030b04-merged.mount: Deactivated successfully.
Oct  3 10:25:39 compute-0 podman[449107]: 2025-10-03 10:25:39.6252984 +0000 UTC m=+1.090156147 container remove 79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hopper, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct  3 10:25:39 compute-0 systemd[1]: libpod-conmon-79e96c431c64780989bd6f2947ac42664cb7e6c4a2a4f5ff895f910764c786b4.scope: Deactivated successfully.
Oct  3 10:25:40 compute-0 podman[449283]: 2025-10-03 10:25:40.540424893 +0000 UTC m=+0.051260879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:25:40 compute-0 podman[449283]: 2025-10-03 10:25:40.668546139 +0000 UTC m=+0.179382035 container create 55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  3 10:25:40 compute-0 systemd[1]: Started libpod-conmon-55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c.scope.
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.888 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.888 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.888 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.898 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.899 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.899 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:25:40.899631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.907 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.908 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.909 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.909 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.910 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:25:40.909010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:25:40.910068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.936 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.937 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.937 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.937 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
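
The disk.device.capacity volumes are raw byte counts, one per block device attached to instance b43db93c-a4fe-46e9-8418-eedf4f5c135a. Two devices are exactly 1 GiB; the third is a much smaller 485376-byte device. A quick conversion check:

    for volume in (1073741824, 1073741824, 485376):
        print(f"{volume} B = {volume / 2**30:.6f} GiB")
    # 1073741824 B = 1.000000 GiB (twice); 485376 B = 0.000452 GiB
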
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.938 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.938 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:25:40.938421) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.980 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.981 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:25:40.981021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.982 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.982 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
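
The read.latency and read.requests samples above are cumulative per-device counters; the latency values look like libvirt's total read time in nanoseconds, though that mapping is an assumption here. Pairing them gives a mean latency per read request for each of the three devices:

    # (cumulative_read_latency_ns, cumulative_read_requests) per device, from the log
    pairs = [(1351272306, 840), (240576853, 173), (113683071, 109)]
    for total_ns, requests in pairs:
        print(f"{total_ns / requests / 1e6:.2f} ms mean latency over {requests} reads")
    # ~1.61 ms, ~1.39 ms, ~1.04 ms
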
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.983 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.984 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:25:40.982800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:25:40.984181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.986 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
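
For each device the agent emits a capacity/usage/allocation triple, and the same three values (1073741824, 1073741824, 485376) repeat across the meters. That matches the shape of libvirt's block-info call, which reports capacity, allocation and physical size per device. A hedged sketch with libvirt-python; the device names vda/vdb/vdc are assumptions, not read from this log:

    import libvirt  # pip install libvirt-python

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    for dev in ("vda", "vdb", "vdc"):  # assumed device names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)
    conn.close()
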
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.986 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:25:40.985682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.987 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.988 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.988 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:25:40.987047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:40.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:25:40.988721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:40 compute-0 podman[449283]: 2025-10-03 10:25:40.99384276 +0000 UTC m=+0.504678656 container init 55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:25:41 compute-0 podman[449283]: 2025-10-03 10:25:41.00688399 +0000 UTC m=+0.517719886 container start 55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:25:41 compute-0 amazing_gagarin[449298]: 167 167
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 systemd[1]: libpod-55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c.scope: Deactivated successfully.
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
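
The power.state volume of 1 corresponds to a running domain, assuming the sample carries the raw libvirt virDomainState value, which is how the instance-stats pollsters obtain it. A lookup table for reading these samples:

    LIBVIRT_DOMAIN_STATES = {
        0: "NOSTATE", 1: "RUNNING", 2: "BLOCKED", 3: "PAUSED",
        4: "SHUTDOWN", 5: "SHUTOFF", 6: "CRASHED", 7: "PMSUSPENDED",
    }
    print(LIBVIRT_DOMAIN_STATES[1])  # RUNNING
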
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:25:41.014662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.016 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:25:41.016837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.016 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:25:41.018981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
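
The .rate meters are skipped with "no new resources found this cycle": their discovery returned nothing to sample this round, and a rate in any case needs two successive cumulative readings of the same counter to diff against each other. For illustration, the arithmetic once two readings exist (the numbers below are made up):

    def bytes_rate(prev_bytes, prev_ts, cur_bytes, cur_ts):
        """B/s between two cumulative byte-counter readings."""
        elapsed = cur_ts - prev_ts
        return (cur_bytes - prev_bytes) / elapsed if elapsed > 0 else 0.0

    # e.g. a counter that grew 356 bytes over a 300 s polling interval
    print(f"{bytes_rate(2500, 0.0, 2856, 300.0):.2f} B/s")  # 1.19 B/s
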
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:25:41.020622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:25:41.022028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:25:41.023461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:25:41 compute-0 podman[449283]: 2025-10-03 10:25:41.02435154 +0000 UTC m=+0.535187456 container attach 55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:25:41.024606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.026 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 52020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 podman[449283]: 2025-10-03 10:25:41.026801539 +0000 UTC m=+0.537637455 container died 55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
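
Interleaved with the polling cycle, podman traces one short-lived container (55037e32...): init at 10:25:40.993, start at .006, attach at .024, died at .026, with the libpod scope deactivating in between. One way to replay that lifecycle from the host, assuming podman still retains the events; the container ID is taken from these lines:

    import subprocess

    cid = "55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c"
    subprocess.run(
        ["podman", "events",
         "--filter", f"container={cid}",
         "--since", "2025-10-03T10:25:40",
         "--until", "2025-10-03T10:25:42"],
        check=True,  # raises CalledProcessError if podman exits non-zero
    )
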
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:25:41.026022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
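
The cpu sample (52020000000) is cumulative guest CPU time in nanoseconds, i.e. about 52.02 CPU-seconds since the instance started. Turning it into a utilization percentage requires the previous poll's value; a sketch with an assumed earlier reading:

    def cpu_util_percent(prev_ns, cur_ns, elapsed_s, vcpus):
        """Percentage of available vCPU time consumed between two polls."""
        return (cur_ns - prev_ns) / (elapsed_s * 1e9 * vcpus) * 100.0

    # assumed: the previous poll, 300 s earlier on a 1-vCPU instance, read 51.42 s
    print(f"{cpu_util_percent(51_420_000_000, 52_020_000_000, 300, 1):.2f}%")  # 0.20%
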
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.027 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:25:41.027670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:25:41.028704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.030 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:25:41.030095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.031 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.031 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:25:41.031200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:25:41.032403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.033 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.033 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.033 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:25:41.033456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.033 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.034 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.035 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:25:41.036 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
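The ceilometer lines above trace one polling pass per meter: discovery of local instances, a coordination check against the (empty) hashring, a heartbeat update, conversion of the hypervisor stats into a sample, and a closing "Finished processing" record. A minimal Python sketch of that flow, with illustrative names only (this is not ceilometer's real API):

    import datetime

    heartbeats = {}

    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_sample(self, resource):
            # Stand-in for reading hypervisor stats and building a sample.
            return (resource, self.name, 0)

    def publish(sample):
        print("sample:", sample)

    def run_polling_task(pollsters, discover):
        for pollster in pollsters:
            # "Executing discovery process for pollsters ..."
            resources = discover()
            if not resources:
                # "Skip pollster <meter>, no new resources found this cycle"
                continue
            # Coordination is skipped here, matching the "[None]" hashring above.
            heartbeats[pollster.name] = datetime.datetime.now(datetime.timezone.utc)
            for resource in resources:
                # "<instance uuid>/<meter> volume: ..."
                publish(pollster.get_sample(resource))

    run_polling_task([Pollster("memory.usage"), Pollster("network.incoming.bytes")],
                     lambda: ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"])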
Oct  3 10:25:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf5acafc5226ef491d6197e7a196dae3a37cbb955d013565fc3fefe4f125a46-merged.mount: Deactivated successfully.
Oct  3 10:25:41 compute-0 podman[449283]: 2025-10-03 10:25:41.290507011 +0000 UTC m=+0.801342907 container remove 55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  3 10:25:41 compute-0 systemd[1]: libpod-conmon-55037e325838e348d55a4b6edce2d60f23f6055afe0221f01aca7cbd3f69751c.scope: Deactivated successfully.
Oct  3 10:25:41 compute-0 podman[449308]: 2025-10-03 10:25:41.445566723 +0000 UTC m=+0.531476885 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 10:25:41 compute-0 podman[449302]: 2025-10-03 10:25:41.448802357 +0000 UTC m=+0.525064630 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 10:25:41 compute-0 podman[449299]: 2025-10-03 10:25:41.459870773 +0000 UTC m=+0.561893564 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
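Each podman health_status line above is podman executing the healthcheck configured in config_data ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/<name>) and recording the result. The same check can be driven by hand; a sketch using the iscsid container named in the log, where exit status 0 means healthy:

    import subprocess

    # Run the container's configured healthcheck once, as the periodic timer does.
    rc = subprocess.call(["podman", "healthcheck", "run", "iscsid"])
    print("healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)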
Oct  3 10:25:41 compute-0 podman[449375]: 2025-10-03 10:25:41.511341206 +0000 UTC m=+0.060827255 container create 8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:25:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:41 compute-0 systemd[1]: Started libpod-conmon-8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e.scope.
Oct  3 10:25:41 compute-0 podman[449375]: 2025-10-03 10:25:41.489624749 +0000 UTC m=+0.039110828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:25:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e71e293eb8a4a8f6b1814dce8d8dc32c13de8a840fe9f98839a2f49f02ec6fb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e71e293eb8a4a8f6b1814dce8d8dc32c13de8a840fe9f98839a2f49f02ec6fb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e71e293eb8a4a8f6b1814dce8d8dc32c13de8a840fe9f98839a2f49f02ec6fb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e71e293eb8a4a8f6b1814dce8d8dc32c13de8a840fe9f98839a2f49f02ec6fb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:25:41 compute-0 podman[449375]: 2025-10-03 10:25:41.614873343 +0000 UTC m=+0.164359402 container init 8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:25:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:25:41.615 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:25:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:25:41.617 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:25:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:25:41.617 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
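The acquiring/waited/held bookkeeping in the three ovn_metadata_agent lines above is oslo.concurrency's lock wrapper at work. A minimal equivalent built on the same public decorator (the body is a placeholder, not neutron's code; requires oslo.concurrency installed):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # neutron's real method verifies the monitored child processes
        # and respawns any that have died.
        pass

    check_child_processes()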
Oct  3 10:25:41 compute-0 podman[449375]: 2025-10-03 10:25:41.637708857 +0000 UTC m=+0.187194906 container start 8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 10:25:41 compute-0 podman[449375]: 2025-10-03 10:25:41.641809269 +0000 UTC m=+0.191295338 container attach 8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 10:25:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:42 compute-0 nova_compute[351685]: 2025-10-03 10:25:42.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:42 compute-0 nova_compute[351685]: 2025-10-03 10:25:42.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]: {
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "osd_id": 1,
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "type": "bluestore"
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:    },
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "osd_id": 2,
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "type": "bluestore"
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:    },
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "osd_id": 0,
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:        "type": "bluestore"
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]:    }
Oct  3 10:25:42 compute-0 ecstatic_chandrasekhar[449398]: }
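The JSON printed above by the short-lived cephadm container keys each OSD by its uuid and records the cluster fsid, backing LV device, OSD id, and objectstore type (the exact ceph-volume subcommand that produced it is not shown in the log). Reassembled, it parses directly; a sketch over the first entry, with the other two elided:

    import json

    output = """{
        "16cef594-0067-4499-9298-5d83edf70190": {
            "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
            "type": "bluestore"
        }
    }"""

    for osd_uuid, info in json.loads(output).items():
        print("osd.%(osd_id)s on %(device)s (%(type)s)" % info)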
Oct  3 10:25:42 compute-0 systemd[1]: libpod-8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e.scope: Deactivated successfully.
Oct  3 10:25:42 compute-0 podman[449375]: 2025-10-03 10:25:42.822446541 +0000 UTC m=+1.371932590 container died 8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:25:42 compute-0 systemd[1]: libpod-8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e.scope: Consumed 1.177s CPU time.
Oct  3 10:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e71e293eb8a4a8f6b1814dce8d8dc32c13de8a840fe9f98839a2f49f02ec6fb4-merged.mount: Deactivated successfully.
Oct  3 10:25:42 compute-0 podman[449375]: 2025-10-03 10:25:42.897467502 +0000 UTC m=+1.446953541 container remove 8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:25:42 compute-0 systemd[1]: libpod-conmon-8dbc480b7545ebaa7b59ba60319a12750703ea03dfaa0c50c00e28d2c9f0e62e.scope: Deactivated successfully.
Oct  3 10:25:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:25:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:25:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 55f6199e-be6e-452c-ae7b-f55e34ee8464 does not exist
Oct  3 10:25:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 099907e2-171d-408d-b7b1-7c7a6b7b0f24 does not exist
Oct  3 10:25:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:25:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:25:46
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images']
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:25:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
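The rbd_support handlers above reload the per-pool trash-purge and mirror-snapshot schedules. Assuming an rbd client with access to this cluster, the loaded schedules can be listed per pool; a sketch:

    import subprocess

    for pool in ["vms", "volumes", "backups", "images"]:
        # Prints any trash-purge schedules configured for the pool.
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--pool", pool])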
Oct  3 10:25:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:47 compute-0 nova_compute[351685]: 2025-10-03 10:25:47.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:47 compute-0 nova_compute[351685]: 2025-10-03 10:25:47.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:47 compute-0 podman[449494]: 2025-10-03 10:25:47.906582188 +0000 UTC m=+0.143960487 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Oct  3 10:25:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:51 compute-0 nova_compute[351685]: 2025-10-03 10:25:51.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:51 compute-0 nova_compute[351685]: 2025-10-03 10:25:51.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:25:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:52 compute-0 nova_compute[351685]: 2025-10-03 10:25:52.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:52 compute-0 nova_compute[351685]: 2025-10-03 10:25:52.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:52 compute-0 podman[449514]: 2025-10-03 10:25:52.824444405 +0000 UTC m=+0.078753752 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:25:52 compute-0 podman[449515]: 2025-10-03 10:25:52.863120827 +0000 UTC m=+0.118633512 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container)
Oct  3 10:25:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:25:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2805708439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:25:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:25:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2805708439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
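The pg_autoscaler figures above are reproducible: the raw pg target is usage_ratio x bias x (OSD count x mon_target_pg_per_osd), quantized afterwards to a power of two. With this cluster's three OSDs and the default mon_target_pg_per_osd of 100 (an assumption; the log does not print the option), the multiplier is 300:

    def pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        # Raw target before power-of-two quantization and change-thresholding.
        return usage_ratio * bias * osds * target_pg_per_osd

    print(pg_target(0.000551649390343166, 1.0))   # 0.1654948171029498   -> pool 'vms'
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 -> 'cephfs.cephfs.meta'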
Oct  3 10:25:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:55 compute-0 nova_compute[351685]: 2025-10-03 10:25:55.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:55 compute-0 nova_compute[351685]: 2025-10-03 10:25:55.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:25:55 compute-0 nova_compute[351685]: 2025-10-03 10:25:55.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.045 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.045 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.045 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.046 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.946 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.964 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.965 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
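The info_cache payload logged above is a list of VIF dicts; the fixed and floating addresses live under network.subnets[].ips[]. A sketch over a trimmed copy of that structure:

    vif = {"network": {"subnets": [{"ips": [
        {"address": "192.168.0.158",
         "floating_ips": [{"address": "192.168.122.250"}]}]}]}}

    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", floats)  # 192.168.0.158 -> ['192.168.122.250']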
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.966 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:56 compute-0 nova_compute[351685]: 2025-10-03 10:25:56.967 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:25:57 compute-0 nova_compute[351685]: 2025-10-03 10:25:57.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:57 compute-0 nova_compute[351685]: 2025-10-03 10:25:57.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:25:57 compute-0 nova_compute[351685]: 2025-10-03 10:25:57.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:57 compute-0 nova_compute[351685]: 2025-10-03 10:25:57.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 10:25:57 compute-0 nova_compute[351685]: 2025-10-03 10:25:57.748 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 10:25:58 compute-0 nova_compute[351685]: 2025-10-03 10:25:58.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:58 compute-0 nova_compute[351685]: 2025-10-03 10:25:58.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:25:58 compute-0 nova_compute[351685]: 2025-10-03 10:25:58.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
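The ComputeManager tasks above (_reclaim_queued_deletes, _heal_instance_info_cache, _run_pending_deletes, and so on) are driven by oslo.service's periodic task machinery. A minimal sketch of the same pattern, not Nova's code (requires oslo.service and oslo.config):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            print("refresh one instance's network info cache per pass")

    # The service normally invokes this from a timer loop.
    Manager().run_periodic_tasks(context=None)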
Oct  3 10:25:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:25:59 compute-0 podman[157165]: time="2025-10-03T10:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:25:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:25:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9062 "" "Go-http-client/1.1"
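
These two podman entries are the API service answering libpod REST calls from a local collector. The same endpoint can be probed by hand over the API socket; a sketch speaking raw HTTP/1.0 over the Unix socket (the socket path is assumed from the podman_exporter CONTAINER_HOST setting that appears later in this log):

# Sketch: call the libpod endpoint from the access log above over the
# podman API socket. Path assumed from CONTAINER_HOST later in this
# log; HTTP/1.0 is used so the response is not chunk-encoded.
import json
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/podman/podman.sock")
s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
          b"Host: d\r\n\r\n")
raw = b""
while chunk := s.recv(65536):
    raw += chunk
body = raw.split(b"\r\n\r\n", 1)[1]        # crude header/body split
print(len(json.loads(body)), "containers")
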
Oct  3 10:26:00 compute-0 nova_compute[351685]: 2025-10-03 10:26:00.744 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:26:00 compute-0 nova_compute[351685]: 2025-10-03 10:26:00.843 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:26:00 compute-0 nova_compute[351685]: 2025-10-03 10:26:00.844 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:26:00 compute-0 nova_compute[351685]: 2025-10-03 10:26:00.845 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
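
The acquire/release pair around "compute_resources" is oslo.concurrency's named-lock helper: every resource-tracker mutation serializes on the same lock name, and the waited/held durations are logged on acquire and release. A minimal sketch of the pattern (function names illustrative):

# Minimal sketch of the named-lock pattern behind the
# "compute_resources" lines; function names are illustrative.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def clean_cache():
    pass        # serialized with all other holders of this lock name

# equivalent context-manager form
with lockutils.lock('compute_resources'):
    pass
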
Oct  3 10:26:00 compute-0 nova_compute[351685]: 2025-10-03 10:26:00.845 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:26:00 compute-0 nova_compute[351685]: 2025-10-03 10:26:00.846 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:26:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:26:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964845987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.298 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
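
Nova's resource audit obtains RBD pool capacity by shelling out to ceph df, as the Running cmd/CMD returned pair shows (a ~0.45 s round trip). A sketch of the same call through oslo.concurrency, reading the totals from the JSON stats block that ceph df --format=json emits:

# Sketch of the "ceph df --format=json" call logged above, via the
# same oslo.concurrency helper; key names follow ceph's df JSON.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)['stats']
print(stats['total_bytes'], stats['total_avail_bytes'])
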
Oct  3 10:26:01 compute-0 openstack_network_exporter[367524]: ERROR   10:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:26:01 compute-0 openstack_network_exporter[367524]: ERROR   10:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:26:01 compute-0 openstack_network_exporter[367524]: ERROR   10:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:26:01 compute-0 openstack_network_exporter[367524]: ERROR   10:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:26:01 compute-0 openstack_network_exporter[367524]: ERROR   10:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
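
openstack_network_exporter reaches ovs-vswitchd, ovsdb-server and ovn-northd through their appctl control sockets under the run directory; the errors above mean the ovn-northd socket does not exist on this node (expected on a compute host) and no userspace (netdev) datapath is configured for the PMD queries. A manual equivalent of the failing probes, assuming ovs-appctl can resolve the vswitchd pidfile as usual:

# Manual equivalent of the exporter's failing appctl probes. With no
# netdev datapath configured the PMD commands fail exactly as logged.
import subprocess

for cmd in ('dpif-netdev/pmd-perf-show', 'dpif-netdev/pmd-rxq-show'):
    subprocess.run(['ovs-appctl', cmd], check=False)
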
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.436 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.437 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.437 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:26:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.759 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.760 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3899MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.909 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.911 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.912 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:26:01 compute-0 nova_compute[351685]: 2025-10-03 10:26:01.953 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:26:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:02 compute-0 nova_compute[351685]: 2025-10-03 10:26:02.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:26:02 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509320294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:26:02 compute-0 nova_compute[351685]: 2025-10-03 10:26:02.422 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:26:02 compute-0 nova_compute[351685]: 2025-10-03 10:26:02.431 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:26:02 compute-0 nova_compute[351685]: 2025-10-03 10:26:02.463 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
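
The capacity placement derives from this inventory is (total - reserved) * allocation_ratio per resource class, so the record above yields 32 schedulable VCPUs, 7167 MB of RAM and 52.2 GB of disk; the arithmetic, spelled out:

# Effective capacity implied by the inventory record above:
# (total - reserved) * allocation_ratio per resource class.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
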
Oct  3 10:26:02 compute-0 nova_compute[351685]: 2025-10-03 10:26:02.466 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:26:02 compute-0 nova_compute[351685]: 2025-10-03 10:26:02.467 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:26:02 compute-0 nova_compute[351685]: 2025-10-03 10:26:02.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:02 compute-0 podman[449599]: 2025-10-03 10:26:02.838033835 +0000 UTC m=+0.098634519 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:26:02 compute-0 podman[449598]: 2025-10-03 10:26:02.870192109 +0000 UTC m=+0.122409804 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Oct  3 10:26:02 compute-0 podman[449600]: 2025-10-03 10:26:02.876828723 +0000 UTC m=+0.133070037 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
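
Each of the three podman entries above is a health_status event for one container, with the container name and result embedded in the parenthesized metadata. A hedged sketch of scraping those two fields from such a line (the regex is fitted only to the format shown in this log, where the container's name= field precedes health_status=):

# Sketch: pull container name and health from the podman
# health_status lines above; regex fitted to this log's format only.
import re

line = ("podman[449599]: ... container health_status d1f8d4381794... "
        "(image=..., name=ceilometer_agent_compute, "
        "health_status=healthy, health_failing_streak=0, ...)")
m = re.search(r"name=([\w-]+).*?health_status=(\w+)", line)
if m:
    print(m.group(1), m.group(2))  # ceilometer_agent_compute healthy
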
Oct  3 10:26:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:05 compute-0 nova_compute[351685]: 2025-10-03 10:26:05.453 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:26:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:06 compute-0 nova_compute[351685]: 2025-10-03 10:26:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:26:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:07 compute-0 nova_compute[351685]: 2025-10-03 10:26:07.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:07 compute-0 nova_compute[351685]: 2025-10-03 10:26:07.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:08 compute-0 podman[449659]: 2025-10-03 10:26:08.831389968 +0000 UTC m=+0.090637024 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct  3 10:26:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:11 compute-0 podman[449679]: 2025-10-03 10:26:11.832587715 +0000 UTC m=+0.085131896 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:26:11 compute-0 podman[449678]: 2025-10-03 10:26:11.83619073 +0000 UTC m=+0.094221818 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
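
The node_exporter above is published on host port 9100 with a TLS web config (node_exporter.yaml) and a trimmed collector set. A sketch of scraping it, assuming HTTPS is enforced by that web config; if the config serves plain HTTP instead, drop the TLS context:

# Sketch: scrape the node_exporter published on :9100 above. The
# mounted web config suggests TLS; verification is disabled here for
# illustration only -- trust the real CA in practice.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with urllib.request.urlopen("https://localhost:9100/metrics",
                            context=ctx, timeout=5) as resp:
    for raw in resp:
        if raw.startswith(b"node_load1"):
            print(raw.decode().strip())
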
Oct  3 10:26:11 compute-0 podman[449680]: 2025-10-03 10:26:11.842817273 +0000 UTC m=+0.087867894 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 10:26:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:12 compute-0 nova_compute[351685]: 2025-10-03 10:26:12.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:12 compute-0 nova_compute[351685]: 2025-10-03 10:26:12.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:26:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:17 compute-0 nova_compute[351685]: 2025-10-03 10:26:17.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:17 compute-0 nova_compute[351685]: 2025-10-03 10:26:17.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:17 compute-0 nova_compute[351685]: 2025-10-03 10:26:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:26:17 compute-0 nova_compute[351685]: 2025-10-03 10:26:17.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 10:26:18 compute-0 podman[449740]: 2025-10-03 10:26:18.827523056 +0000 UTC m=+0.092313797 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 10:26:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:22 compute-0 nova_compute[351685]: 2025-10-03 10:26:22.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:22 compute-0 nova_compute[351685]: 2025-10-03 10:26:22.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:23 compute-0 podman[449760]: 2025-10-03 10:26:23.816945362 +0000 UTC m=+0.063501051 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30)
Oct  3 10:26:23 compute-0 podman[449759]: 2025-10-03 10:26:23.838132352 +0000 UTC m=+0.096833771 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:26:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:27 compute-0 nova_compute[351685]: 2025-10-03 10:26:27.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:27 compute-0 nova_compute[351685]: 2025-10-03 10:26:27.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:29 compute-0 podman[157165]: time="2025-10-03T10:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:26:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:26:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9053 "" "Go-http-client/1.1"
Oct  3 10:26:31 compute-0 openstack_network_exporter[367524]: ERROR   10:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:26:31 compute-0 openstack_network_exporter[367524]: ERROR   10:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:26:31 compute-0 openstack_network_exporter[367524]: ERROR   10:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:26:31 compute-0 openstack_network_exporter[367524]: ERROR   10:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:26:31 compute-0 openstack_network_exporter[367524]: ERROR   10:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:26:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:32 compute-0 nova_compute[351685]: 2025-10-03 10:26:32.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:32 compute-0 nova_compute[351685]: 2025-10-03 10:26:32.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:33 compute-0 podman[449800]: 2025-10-03 10:26:33.823807036 +0000 UTC m=+0.083249705 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc.)
Oct  3 10:26:33 compute-0 podman[449801]: 2025-10-03 10:26:33.84756084 +0000 UTC m=+0.096475691 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 10:26:33 compute-0 podman[449802]: 2025-10-03 10:26:33.913752216 +0000 UTC m=+0.161049775 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:26:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:37 compute-0 nova_compute[351685]: 2025-10-03 10:26:37.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:37 compute-0 nova_compute[351685]: 2025-10-03 10:26:37.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:39 compute-0 podman[449863]: 2025-10-03 10:26:39.820497636 +0000 UTC m=+0.079454053 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  3 10:26:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:26:41.617 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:26:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:26:41.617 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:26:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:26:41.618 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:26:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:42 compute-0 nova_compute[351685]: 2025-10-03 10:26:42.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:42 compute-0 nova_compute[351685]: 2025-10-03 10:26:42.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:26:42 compute-0 podman[449884]: 2025-10-03 10:26:42.85774082 +0000 UTC m=+0.106249495 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:26:42 compute-0 podman[449885]: 2025-10-03 10:26:42.857962426 +0000 UTC m=+0.103115033 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:26:42 compute-0 podman[449883]: 2025-10-03 10:26:42.859115984 +0000 UTC m=+0.113596821 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:26:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 10:26:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:26:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:26:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:26:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:26:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:26:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:26:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:26:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 41e1d88c-f030-4d15-b980-b8e1ff4945db does not exist
Oct  3 10:26:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0853f511-4316-4272-b35b-666a19286995 does not exist
Oct  3 10:26:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 96912f7e-5198-4580-a254-7260b897440d does not exist
Oct  3 10:26:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:26:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:26:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:26:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:26:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:26:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:26:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:26:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:26:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:26:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:26:45 compute-0 podman[450212]: 2025-10-03 10:26:45.29574508 +0000 UTC m=+0.066452417 container create 119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_engelbart, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:26:45 compute-0 podman[450212]: 2025-10-03 10:26:45.273193875 +0000 UTC m=+0.043901212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:26:45 compute-0 systemd[1]: Started libpod-conmon-119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f.scope.
Oct  3 10:26:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:26:45 compute-0 podman[450212]: 2025-10-03 10:26:45.461983801 +0000 UTC m=+0.232691198 container init 119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_engelbart, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:26:45 compute-0 podman[450212]: 2025-10-03 10:26:45.472736786 +0000 UTC m=+0.243444133 container start 119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_engelbart, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:26:45 compute-0 podman[450212]: 2025-10-03 10:26:45.479849095 +0000 UTC m=+0.250556472 container attach 119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:26:45 compute-0 zen_engelbart[450228]: 167 167
Oct  3 10:26:45 compute-0 systemd[1]: libpod-119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f.scope: Deactivated successfully.
Oct  3 10:26:45 compute-0 podman[450212]: 2025-10-03 10:26:45.483528143 +0000 UTC m=+0.254235520 container died 119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_engelbart, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:26:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f91874faab415588a4f0f86db70d105dcf63873301e6d57b61918eba69633f5-merged.mount: Deactivated successfully.
Oct  3 10:26:45 compute-0 podman[450212]: 2025-10-03 10:26:45.55223097 +0000 UTC m=+0.322938287 container remove 119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:26:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:45 compute-0 systemd[1]: libpod-conmon-119b3ce722cd73318a086a9d39b27986fb7b0fd9eca5b033182b2ef234da871f.scope: Deactivated successfully.
Oct  3 10:26:45 compute-0 podman[450254]: 2025-10-03 10:26:45.777360714 +0000 UTC m=+0.063874324 container create 5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:26:45 compute-0 podman[450254]: 2025-10-03 10:26:45.759553552 +0000 UTC m=+0.046067292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:26:45 compute-0 systemd[1]: Started libpod-conmon-5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566.scope.
Oct  3 10:26:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d58ed5aaab0feca87d9cf7361240d3462f6ffb9f0d464d15711e18e1363b341/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d58ed5aaab0feca87d9cf7361240d3462f6ffb9f0d464d15711e18e1363b341/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d58ed5aaab0feca87d9cf7361240d3462f6ffb9f0d464d15711e18e1363b341/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d58ed5aaab0feca87d9cf7361240d3462f6ffb9f0d464d15711e18e1363b341/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d58ed5aaab0feca87d9cf7361240d3462f6ffb9f0d464d15711e18e1363b341/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:45 compute-0 podman[450254]: 2025-10-03 10:26:45.945074013 +0000 UTC m=+0.231587663 container init 5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:26:45 compute-0 podman[450254]: 2025-10-03 10:26:45.963352459 +0000 UTC m=+0.249866079 container start 5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:26:45 compute-0 podman[450254]: 2025-10-03 10:26:45.969968362 +0000 UTC m=+0.256481992 container attach 5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:26:46
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms']
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:26:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:26:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:47 compute-0 nova_compute[351685]: 2025-10-03 10:26:47.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:26:47 compute-0 trusting_varahamihira[450269]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:26:47 compute-0 trusting_varahamihira[450269]: --> relative data size: 1.0
Oct  3 10:26:47 compute-0 trusting_varahamihira[450269]: --> All data devices are unavailable
Oct  3 10:26:47 compute-0 systemd[1]: libpod-5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566.scope: Deactivated successfully.
Oct  3 10:26:47 compute-0 podman[450254]: 2025-10-03 10:26:47.201155779 +0000 UTC m=+1.487669439 container died 5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:26:47 compute-0 systemd[1]: libpod-5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566.scope: Consumed 1.165s CPU time.
Oct  3 10:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d58ed5aaab0feca87d9cf7361240d3462f6ffb9f0d464d15711e18e1363b341-merged.mount: Deactivated successfully.
Oct  3 10:26:47 compute-0 podman[450254]: 2025-10-03 10:26:47.314032966 +0000 UTC m=+1.600546576 container remove 5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:26:47 compute-0 systemd[1]: libpod-conmon-5bfc72d3e6b5b2a6c9d69a98ee3f566c5c6dc7eb81f8f6aa0c959e1b382a0566.scope: Deactivated successfully.
Oct  3 10:26:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:47 compute-0 nova_compute[351685]: 2025-10-03 10:26:47.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:26:48 compute-0 podman[450451]: 2025-10-03 10:26:48.22981392 +0000 UTC m=+0.057440347 container create b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:26:48 compute-0 systemd[1]: Started libpod-conmon-b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc.scope.
Oct  3 10:26:48 compute-0 podman[450451]: 2025-10-03 10:26:48.209856698 +0000 UTC m=+0.037483155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:26:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:26:48 compute-0 podman[450451]: 2025-10-03 10:26:48.348971878 +0000 UTC m=+0.176598335 container init b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:26:48 compute-0 podman[450451]: 2025-10-03 10:26:48.357900785 +0000 UTC m=+0.185527202 container start b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:26:48 compute-0 podman[450451]: 2025-10-03 10:26:48.363300728 +0000 UTC m=+0.190927165 container attach b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:26:48 compute-0 goofy_heyrovsky[450468]: 167 167
Oct  3 10:26:48 compute-0 systemd[1]: libpod-b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc.scope: Deactivated successfully.
Oct  3 10:26:48 compute-0 podman[450451]: 2025-10-03 10:26:48.36553775 +0000 UTC m=+0.193164167 container died b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 10:26:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9007de7cd22578732869149fd2b5f04b2877d9eb99f35869a911bb3ddf626634-merged.mount: Deactivated successfully.
Oct  3 10:26:48 compute-0 podman[450451]: 2025-10-03 10:26:48.410117662 +0000 UTC m=+0.237744079 container remove b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:26:48 compute-0 systemd[1]: libpod-conmon-b6bb933057a133d65e9510260f2d0bd6311078c9ba6906b128778cf866fbdafc.scope: Deactivated successfully.
Oct  3 10:26:48 compute-0 podman[450491]: 2025-10-03 10:26:48.593411552 +0000 UTC m=+0.047111915 container create 8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 10:26:48 compute-0 systemd[1]: Started libpod-conmon-8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80.scope.
Oct  3 10:26:48 compute-0 podman[450491]: 2025-10-03 10:26:48.574132192 +0000 UTC m=+0.027832575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:26:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef32be1e5fe025160eba1c1c793ab8c52308b690ae00ad734db060d7b7dc416c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef32be1e5fe025160eba1c1c793ab8c52308b690ae00ad734db060d7b7dc416c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef32be1e5fe025160eba1c1c793ab8c52308b690ae00ad734db060d7b7dc416c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef32be1e5fe025160eba1c1c793ab8c52308b690ae00ad734db060d7b7dc416c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:48 compute-0 podman[450491]: 2025-10-03 10:26:48.71070728 +0000 UTC m=+0.164407653 container init 8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:26:48 compute-0 podman[450491]: 2025-10-03 10:26:48.726315291 +0000 UTC m=+0.180015654 container start 8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:26:48 compute-0 podman[450491]: 2025-10-03 10:26:48.730806046 +0000 UTC m=+0.184506429 container attach 8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]: {
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:    "0": [
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:        {
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "devices": [
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "/dev/loop3"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            ],
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_name": "ceph_lv0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_size": "21470642176",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "name": "ceph_lv0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "tags": {
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cluster_name": "ceph",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.crush_device_class": "",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.encrypted": "0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osd_id": "0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.type": "block",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.vdo": "0"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            },
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "type": "block",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "vg_name": "ceph_vg0"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:        }
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:    ],
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:    "1": [
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:        {
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "devices": [
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "/dev/loop4"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            ],
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_name": "ceph_lv1",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_size": "21470642176",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "name": "ceph_lv1",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "tags": {
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cluster_name": "ceph",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.crush_device_class": "",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.encrypted": "0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osd_id": "1",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.type": "block",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.vdo": "0"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            },
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "type": "block",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "vg_name": "ceph_vg1"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:        }
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:    ],
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:    "2": [
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:        {
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "devices": [
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "/dev/loop5"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            ],
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_name": "ceph_lv2",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_size": "21470642176",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "name": "ceph_lv2",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "tags": {
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.cluster_name": "ceph",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.crush_device_class": "",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.encrypted": "0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osd_id": "2",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.type": "block",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:                "ceph.vdo": "0"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            },
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "type": "block",
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:            "vg_name": "ceph_vg2"
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:        }
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]:    ]
Oct  3 10:26:49 compute-0 epic_dijkstra[450506]: }
Oct  3 10:26:49 compute-0 systemd[1]: libpod-8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80.scope: Deactivated successfully.
Oct  3 10:26:49 compute-0 podman[450491]: 2025-10-03 10:26:49.526855982 +0000 UTC m=+0.980556355 container died 8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:26:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef32be1e5fe025160eba1c1c793ab8c52308b690ae00ad734db060d7b7dc416c-merged.mount: Deactivated successfully.
Oct  3 10:26:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:49 compute-0 podman[450491]: 2025-10-03 10:26:49.610976825 +0000 UTC m=+1.064677188 container remove 8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:26:49 compute-0 systemd[1]: libpod-conmon-8e42775d63507e1bd0e3d51cf9fc89917a68e55eb1133d45a26614ce4cd9be80.scope: Deactivated successfully.
Oct  3 10:26:49 compute-0 podman[450517]: 2025-10-03 10:26:49.690147439 +0000 UTC m=+0.121713371 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:26:50 compute-0 podman[450686]: 2025-10-03 10:26:50.614473277 +0000 UTC m=+0.089071812 container create 6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 10:26:50 compute-0 podman[450686]: 2025-10-03 10:26:50.578028976 +0000 UTC m=+0.052627581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:26:50 compute-0 systemd[1]: Started libpod-conmon-6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d.scope.
Oct  3 10:26:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:26:50 compute-0 podman[450686]: 2025-10-03 10:26:50.746199 +0000 UTC m=+0.220797545 container init 6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 10:26:50 compute-0 podman[450686]: 2025-10-03 10:26:50.758762513 +0000 UTC m=+0.233361048 container start 6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 10:26:50 compute-0 podman[450686]: 2025-10-03 10:26:50.763995191 +0000 UTC m=+0.238593766 container attach 6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:26:50 compute-0 gracious_faraday[450701]: 167 167
Oct  3 10:26:50 compute-0 systemd[1]: libpod-6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d.scope: Deactivated successfully.
Oct  3 10:26:50 compute-0 podman[450686]: 2025-10-03 10:26:50.767443312 +0000 UTC m=+0.242041847 container died 6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_faraday, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8b46bcf76a0cf3a8028ac85445508e0c1c001c264736129c3f6d984b8f6222e-merged.mount: Deactivated successfully.
Oct  3 10:26:50 compute-0 podman[450686]: 2025-10-03 10:26:50.82714445 +0000 UTC m=+0.301742975 container remove 6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:26:50 compute-0 systemd[1]: libpod-conmon-6d3daa0cc78bdabb3b39a32b79e77a64f5b7bc422f766b73bc40b4334fbd3c5d.scope: Deactivated successfully.
Oct  3 10:26:51 compute-0 podman[450724]: 2025-10-03 10:26:51.106618869 +0000 UTC m=+0.075491626 container create df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:26:51 compute-0 podman[450724]: 2025-10-03 10:26:51.07022421 +0000 UTC m=+0.039097037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:26:51 compute-0 systemd[1]: Started libpod-conmon-df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e.scope.
Oct  3 10:26:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3931844ae9c30a7785d9602f0c86204275547e9321dce5033b72d38a701ab04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3931844ae9c30a7785d9602f0c86204275547e9321dce5033b72d38a701ab04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3931844ae9c30a7785d9602f0c86204275547e9321dce5033b72d38a701ab04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3931844ae9c30a7785d9602f0c86204275547e9321dce5033b72d38a701ab04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:26:51 compute-0 podman[450724]: 2025-10-03 10:26:51.276471246 +0000 UTC m=+0.245343973 container init df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:26:51 compute-0 podman[450724]: 2025-10-03 10:26:51.317899638 +0000 UTC m=+0.286772375 container start df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 10:26:51 compute-0 podman[450724]: 2025-10-03 10:26:51.323271951 +0000 UTC m=+0.292144688 container attach df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:26:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:52 compute-0 nova_compute[351685]: 2025-10-03 10:26:52.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:26:52 compute-0 crazy_banach[450740]: {
Oct  3 10:26:52 compute-0 crazy_banach[450740]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "osd_id": 1,
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "type": "bluestore"
Oct  3 10:26:52 compute-0 crazy_banach[450740]:    },
Oct  3 10:26:52 compute-0 crazy_banach[450740]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "osd_id": 2,
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "type": "bluestore"
Oct  3 10:26:52 compute-0 crazy_banach[450740]:    },
Oct  3 10:26:52 compute-0 crazy_banach[450740]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "osd_id": 0,
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:26:52 compute-0 crazy_banach[450740]:        "type": "bluestore"
Oct  3 10:26:52 compute-0 crazy_banach[450740]:    }
Oct  3 10:26:52 compute-0 crazy_banach[450740]: }
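[editor's note] The JSON block printed by the one-shot crazy_banach container is a ceph-volume-style device scan: three bluestore OSDs (osd_id 0-2) on LVM mapper devices, all in cluster fsid 9b4e8c9a-5555-5510-a631-4742a1182561. A minimal Python sketch for turning that shape of output into an osd_id -> device map; the JSON literal is trimmed to one entry from the log, and the exact cephadm command that produced it is not shown here:

    import json

    # One entry trimmed from the container output above (osd_uuid -> metadata).
    scan = json.loads("""{
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
        "type": "bluestore"
      }
    }""")

    # Map osd_id -> device, keeping only bluestore entries.
    devices = {m["osd_id"]: m["device"]
               for m in scan.values() if m["type"] == "bluestore"}
    print(devices)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}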
Oct  3 10:26:52 compute-0 systemd[1]: libpod-df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e.scope: Deactivated successfully.
Oct  3 10:26:52 compute-0 podman[450724]: 2025-10-03 10:26:52.433375417 +0000 UTC m=+1.402248144 container died df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:26:52 compute-0 systemd[1]: libpod-df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e.scope: Consumed 1.103s CPU time.
Oct  3 10:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3931844ae9c30a7785d9602f0c86204275547e9321dce5033b72d38a701ab04-merged.mount: Deactivated successfully.
Oct  3 10:26:52 compute-0 podman[450724]: 2025-10-03 10:26:52.504870495 +0000 UTC m=+1.473743252 container remove df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:26:52 compute-0 systemd[1]: libpod-conmon-df480009046509570a814605f04e0d0276afae7646edc4fd4b946eec9138085e.scope: Deactivated successfully.
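[editor's note] The sequence above (create -> init -> start -> attach -> died -> remove, with systemd tearing down the matching libpod/conmon scopes) is podman's normal lifecycle for a short-lived exec container. The same events journald records here can be streamed live; a sketch using `podman events` (JSON field names taken from podman 4.x output and may differ across versions):

    import json
    import subprocess

    # Stream container lifecycle events as JSON lines, mirroring the
    # create/init/start/attach/died/remove sequence logged above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])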
Oct  3 10:26:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:26:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:26:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:26:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:26:52 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e947f731-02b0-41f2-8735-d5271f4c7eac does not exist
Oct  3 10:26:52 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 123353d6-1438-4402-9501-5aa0f97b566a does not exist
Oct  3 10:26:52 compute-0 nova_compute[351685]: 2025-10-03 10:26:52.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:26:52 compute-0 nova_compute[351685]: 2025-10-03 10:26:52.753 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:26:52 compute-0 nova_compute[351685]: 2025-10-03 10:26:52.753 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:26:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:26:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:26:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:26:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3064440401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:26:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:26:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3064440401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
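[editor's note] These audit lines show client.openstack dispatching `df` and `osd pool get-quota` as JSON mon commands. A client can issue the same call through the python3-rados binding; a sketch with the client name and conf path taken from the log (assumes python3-rados is installed and the keyring is readable):

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    # Same payload the monitor audits above: {"prefix":"df","format":"json"}
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(json.loads(out)["stats"]["total_avail_bytes"])
    cluster.shutdown()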
Oct  3 10:26:54 compute-0 podman[450835]: 2025-10-03 10:26:54.863924969 +0000 UTC m=+0.108920780 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:26:54 compute-0 podman[450836]: 2025-10-03 10:26:54.912360866 +0000 UTC m=+0.154034750 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, distribution-scope=public, release=1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9)
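[editor's note] The health_status=healthy events are podman's healthcheck timers running the 'test' command declared in each container's config_data (e.g. /openstack/healthcheck podman_exporter). The same check can be driven by hand; a minimal sketch, relying on `podman healthcheck run` exiting 0 when the configured test passes:

    import subprocess

    # `podman healthcheck run NAME` executes the container's configured test
    # command and exits 0 on success, mirroring the timer-driven events above.
    def is_healthy(name: str) -> bool:
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    print(is_healthy("podman_exporter"))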
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
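[editor's note] Each pg_autoscaler pass above computes, per pool, the fraction of raw capacity used, scales it by the pool's bias, and quantizes the resulting PG target to a power of two, leaving pg_num alone when the change is too small to matter. A simplified sketch of that quantization step; the floor of 32 and the threefold change threshold are assumptions about the defaults, and the real mgr module layers per-pool minimums and per-OSD limits on top (which is why the log shows floors like 16 and 32):

    import math

    def quantize_pg_target(raw: float, current: int, floor: int = 32,
                           threshold: float = 3.0) -> int:
        # Clamp the fractional target to the pool floor, round to the nearest
        # power of two, and keep the current pg_num unless the result differs
        # by more than `threshold`x (simplified pg_autoscaler-style logic).
        target = max(raw, float(floor))
        power = 2 ** round(math.log2(target))
        hi, lo = max(power, current), min(power, current)
        return current if lo and hi / lo < threshold else power

    # 'vms' line above: pg target 0.1654... with pg_num 32 -> stays at 32
    print(quantize_pg_target(0.1654948171029498, current=32))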
Oct  3 10:26:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:55 compute-0 nova_compute[351685]: 2025-10-03 10:26:55.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:26:55 compute-0 nova_compute[351685]: 2025-10-03 10:26:55.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:26:55 compute-0 nova_compute[351685]: 2025-10-03 10:26:55.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:26:56 compute-0 nova_compute[351685]: 2025-10-03 10:26:56.045 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:26:56 compute-0 nova_compute[351685]: 2025-10-03 10:26:56.045 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:26:56 compute-0 nova_compute[351685]: 2025-10-03 10:26:56.045 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:26:56 compute-0 nova_compute[351685]: 2025-10-03 10:26:56.045 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:26:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:26:57 compute-0 nova_compute[351685]: 2025-10-03 10:26:57.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:26:57 compute-0 nova_compute[351685]: 2025-10-03 10:26:57.181 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:26:57 compute-0 nova_compute[351685]: 2025-10-03 10:26:57.203 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:26:57 compute-0 nova_compute[351685]: 2025-10-03 10:26:57.203 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
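[editor's note] The "Updating instance_info_cache" line above carries Neutron's full network_info for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a: one OVN-bound OVS port on br-int with fixed IP 192.168.0.158 and floating IP 192.168.122.250. A small sketch that walks that structure (field names as they appear in the logged JSON) to list fixed/floating address pairs:

    import json

    def addresses(network_info: list) -> list:
        """Flatten nova's network_info cache into (fixed, [floating]) pairs,
        walking the subnets/ips layout visible in the log line above."""
        out = []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    out.append((ip["address"], floats))
        return out

    # Trimmed to the fields the function reads:
    cache = json.loads('[{"network": {"subnets": [{"ips": [{"address": '
                       '"192.168.0.158", "floating_ips": [{"address": '
                       '"192.168.122.250"}]}]}]}}]')
    print(addresses(cache))  # [('192.168.0.158', ['192.168.122.250'])]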
Oct  3 10:26:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:57 compute-0 nova_compute[351685]: 2025-10-03 10:26:57.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:26:57 compute-0 nova_compute[351685]: 2025-10-03 10:26:57.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:26:58 compute-0 nova_compute[351685]: 2025-10-03 10:26:58.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:26:58 compute-0 nova_compute[351685]: 2025-10-03 10:26:58.747 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:26:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:26:59 compute-0 podman[157165]: time="2025-10-03T10:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:26:59 compute-0 nova_compute[351685]: 2025-10-03 10:26:59.747 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:26:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:26:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9057 "" "Go-http-client/1.1"
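[editor's note] The podman[157165] lines are the podman API service answering libpod REST calls over its UNIX socket, queried here by the prometheus-podman-exporter (whose CONTAINER_HOST above points at /run/podman/podman.sock). A stdlib-only sketch issuing the same containers/json request; the socket path is taken from that setting:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that speaks over a UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c["Id"][:12], c.get("Names"), c.get("State"))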
Oct  3 10:27:00 compute-0 nova_compute[351685]: 2025-10-03 10:27:00.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:27:00 compute-0 nova_compute[351685]: 2025-10-03 10:27:00.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:27:00 compute-0 nova_compute[351685]: 2025-10-03 10:27:00.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:27:00 compute-0 nova_compute[351685]: 2025-10-03 10:27:00.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:27:00 compute-0 nova_compute[351685]: 2025-10-03 10:27:00.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:27:00 compute-0 nova_compute[351685]: 2025-10-03 10:27:00.757 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:27:00 compute-0 nova_compute[351685]: 2025-10-03 10:27:00.758 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:27:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:27:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151155408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.234 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
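[editor's note] Nova's resource tracker shells out to `ceph df --format=json` through oslo_concurrency.processutils to size the RBD pool backing ephemeral disks; the ceph-mon audit lines above are the monitor receiving that command. The equivalent stdlib call, with the flags copied from the logged command (assumes the openstack keyring is readable):

    import json
    import subprocess

    # Same invocation nova logs above; --id/--conf select the client.openstack key.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print(f'{stats["total_avail_bytes"] / 2**30:.1f} GiB available')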
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.323 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.325 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.325 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:27:01 compute-0 openstack_network_exporter[367524]: ERROR   10:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:27:01 compute-0 openstack_network_exporter[367524]: ERROR   10:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:27:01 compute-0 openstack_network_exporter[367524]: ERROR   10:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:27:01 compute-0 openstack_network_exporter[367524]: ERROR   10:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:27:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:27:01 compute-0 openstack_network_exporter[367524]: ERROR   10:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:27:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:27:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.732 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.733 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3884MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.734 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.734 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.922 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.923 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:27:01 compute-0 nova_compute[351685]: 2025-10-03 10:27:01.924 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:27:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:02 compute-0 nova_compute[351685]: 2025-10-03 10:27:02.075 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:27:02 compute-0 nova_compute[351685]: 2025-10-03 10:27:02.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:27:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:27:02 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670597109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:27:02 compute-0 nova_compute[351685]: 2025-10-03 10:27:02.558 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:27:02 compute-0 nova_compute[351685]: 2025-10-03 10:27:02.567 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:27:02 compute-0 nova_compute[351685]: 2025-10-03 10:27:02.584 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:27:02 compute-0 nova_compute[351685]: 2025-10-03 10:27:02.586 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:27:02 compute-0 nova_compute[351685]: 2025-10-03 10:27:02.586 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
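[editor's note] The "Inventory has not changed" line above spells out what this host reports to placement. Usable capacity per resource class follows placement-style math, (total - reserved) * allocation_ratio, so the logged inventory advertises 32 VCPU, 7167 MB of RAM and 52.2 GB of disk. The arithmetic, using the logged values:

    # Capacity placement derives from the inventory logged above:
    # (total - reserved) * allocation_ratio per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2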
Oct  3 10:27:02 compute-0 nova_compute[351685]: 2025-10-03 10:27:02.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:27:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:04 compute-0 podman[450918]: 2025-10-03 10:27:04.819248009 +0000 UTC m=+0.080148085 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, config_id=edpm)
Oct  3 10:27:04 compute-0 podman[450919]: 2025-10-03 10:27:04.850098481 +0000 UTC m=+0.108563759 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:27:04 compute-0 podman[450920]: 2025-10-03 10:27:04.882984087 +0000 UTC m=+0.134835963 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  3 10:27:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:06 compute-0 nova_compute[351685]: 2025-10-03 10:27:06.587 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:27:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:07 compute-0 nova_compute[351685]: 2025-10-03 10:27:07.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:27:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:07 compute-0 nova_compute[351685]: 2025-10-03 10:27:07.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:27:07 compute-0 nova_compute[351685]: 2025-10-03 10:27:07.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:27:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:10 compute-0 podman[450978]: 2025-10-03 10:27:10.854011018 +0000 UTC m=+0.102676990 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 10:27:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:12 compute-0 nova_compute[351685]: 2025-10-03 10:27:12.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:27:12 compute-0 nova_compute[351685]: 2025-10-03 10:27:12.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:27:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:13 compute-0 podman[450996]: 2025-10-03 10:27:13.836520994 +0000 UTC m=+0.096807651 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:27:13 compute-0 podman[450997]: 2025-10-03 10:27:13.864655648 +0000 UTC m=+0.102714331 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:27:13 compute-0 podman[450998]: 2025-10-03 10:27:13.865897998 +0000 UTC m=+0.106076800 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:27:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:27:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:17 compute-0 nova_compute[351685]: 2025-10-03 10:27:17.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:17 compute-0 nova_compute[351685]: 2025-10-03 10:27:17.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:19 compute-0 podman[451053]: 2025-10-03 10:27:19.867112172 +0000 UTC m=+0.125015397 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 10:27:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:22 compute-0 nova_compute[351685]: 2025-10-03 10:27:22.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:22 compute-0 nova_compute[351685]: 2025-10-03 10:27:22.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:25 compute-0 podman[451073]: 2025-10-03 10:27:25.882182052 +0000 UTC m=+0.112025511 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, name=ubi9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:27:25 compute-0 podman[451072]: 2025-10-03 10:27:25.891209072 +0000 UTC m=+0.128721247 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:27:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:27 compute-0 nova_compute[351685]: 2025-10-03 10:27:27.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:27 compute-0 nova_compute[351685]: 2025-10-03 10:27:27.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:29 compute-0 podman[157165]: time="2025-10-03T10:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:27:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:27:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9061 "" "Go-http-client/1.1"
Oct  3 10:27:31 compute-0 openstack_network_exporter[367524]: ERROR   10:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:27:31 compute-0 openstack_network_exporter[367524]: ERROR   10:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:27:31 compute-0 openstack_network_exporter[367524]: ERROR   10:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:27:31 compute-0 openstack_network_exporter[367524]: ERROR   10:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:27:31 compute-0 openstack_network_exporter[367524]: ERROR   10:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:27:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:32 compute-0 nova_compute[351685]: 2025-10-03 10:27:32.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:32 compute-0 nova_compute[351685]: 2025-10-03 10:27:32.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:35 compute-0 podman[451113]: 2025-10-03 10:27:35.843064076 +0000 UTC m=+0.093286677 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 10:27:35 compute-0 podman[451114]: 2025-10-03 10:27:35.886198493 +0000 UTC m=+0.124059968 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Oct  3 10:27:35 compute-0 podman[451115]: 2025-10-03 10:27:35.903603342 +0000 UTC m=+0.129047297 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Oct  3 10:27:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:37 compute-0 nova_compute[351685]: 2025-10-03 10:27:37.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:37 compute-0 nova_compute[351685]: 2025-10-03 10:27:37.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.888 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.889 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.889 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.898 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.898 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.899 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:27:40.899322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.907 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.908 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.908 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.909 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:27:40.908695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:27:40.909921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.938 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.939 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.939 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.941 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:40.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:27:40.941242) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:27:41.008735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.009 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:27:41.010065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:27:41.011485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:27:41.012780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
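The disk.device.allocation run that just finished shows the fixed per-pollster cycle the polling manager repeats for every meter: run resource discovery (local_instances), check whether the pollster's source requires coordination (hashring partitioning of resources across agents; none is configured here), record a pollster heartbeat, then turn the discovered instances' libvirt stats into one sample per device. A minimal sketch of that control flow; the helper names (run_discovery, needs_coordination, record_heartbeat, filter_to_own_hashring) are illustrative stand-ins for the manager's internals, and only get_samples(manager, cache, resources) mirrors the real ceilometer pollster interface:

    # Sketch of one manager iteration for a single pollster. Helper names
    # are illustrative; only get_samples(manager, cache, resources) mirrors
    # the real ceilometer pollster interface.
    def poll_one(agent, pollster, discovery):
        resources = agent.run_discovery(discovery)         # e.g. local_instances
        if agent.needs_coordination(pollster):             # hashring check (None here)
            resources = agent.filter_to_own_hashring(resources)
        agent.record_heartbeat(pollster.name)              # "heartbeat update" lines
        for sample in pollster.obj.get_samples(agent, {}, resources):
            yield sample                                   # "_stats_to_sample" lines

Note the two process columns in the log: the pollster work runs in process 14, while the interleaved "Updated heartbeat" lines come from process 12, a separate worker persisting those heartbeats.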
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.015 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:27:41.014032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:27:41.015415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.046 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
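power.state volume: 1 reports the instance's power state as a raw integer; for the libvirt-backed inspector this corresponds to libvirt's virDomainState enum, where 1 means the domain is running. For reference (enum values are from libvirt; the meter carries only the integer):

    # virDomainState values from libvirt; the meter reports the raw integer.
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    assert VIR_DOMAIN_STATE[1] == "running"   # the sample above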
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:27:41.047789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
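disk.device.write.latency is cumulative, not instantaneous: libvirt's block stats count total nanoseconds spent in write operations since the device was attached, so the 12067482402 above is roughly 12.07 s of accumulated write time on the first device. A per-interval figure needs two consecutive samples, e.g.:

    # Per-interval average write latency (ms) from two cumulative samples:
    # t*_ns = nanoseconds spent writing, req* = cumulative write requests.
    def avg_write_latency_ms(t0_ns, t1_ns, req0, req1):
        dreq = req1 - req0
        return ((t1_ns - t0_ns) / dreq) / 1e6 if dreq else 0.0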
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:27:41.050705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:27:41.053550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
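The skip line above is the manager's per-cycle discovery cache at work: discovery output is cached within a polling cycle, and a pollster whose discovery returns nothing it has not already seen this cycle is skipped rather than re-executed. Roughly, under an assumed cache shape (the real bookkeeping lives inside the polling task):

    from collections import defaultdict

    # Hypothetical per-cycle cache of resource ids already seen per pollster.
    seen = defaultdict(set)

    def should_poll(pollster_name, discovered_ids):
        new = [r for r in discovered_ids if r not in seen[pollster_name]]
        seen[pollster_name].update(new)
        return bool(new)   # False -> "Skip pollster ..., no new resources found"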
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:27:41.055796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:27:41.057572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:27:41.059669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:27:41.060601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 53650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
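The cpu meter is cumulative guest CPU time in nanoseconds, so the 53650000000 above is about 53.65 s of CPU consumed since the instance started; a utilization percentage has to be derived from two consecutive samples:

    # Utilization (%) from two cumulative cpu samples taken interval_s apart.
    def cpu_util_pct(cpu0_ns, cpu1_ns, interval_s, vcpus):
        used_s = (cpu1_ns - cpu0_ns) / 1e9
        return 100.0 * used_s / (interval_s * vcpus)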
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:27:41.061647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:27:41.062702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:27:41.063705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:27:41.064745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
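memory.usage is reported in MB, so the 48.81640625 above is the guest's current resident memory, about 48.8 MB (49,988 KiB):

    # memory.usage samples are in MB; the value above converted:
    mb = 48.81640625
    print(int(mb * 1024 * 1024))   # 51187712 bytes
    print(int(mb * 1024))          # 49988 KiB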
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:27:41.065743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.066 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:27:41.066861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:27:41.067907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:27:41.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
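Each pollster named in these "Finished processing" lines is a stevedore extension (hence the <stevedore.extension.Extension object ...> reprs in the coordination checks earlier), loaded from a setuptools entry-point namespace when the agent starts. A minimal sketch of enumerating that plugin set, assuming the compute pollsters live under the 'ceilometer.poll.compute' namespace:

    from stevedore import extension

    # Enumerate the installed compute pollsters (assumed namespace).
    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute',
                                     invoke_on_load=False)
    names = sorted(ext.name for ext in mgr)
    print(names)   # e.g. ['cpu', 'disk.device.write.bytes', 'memory.usage', ...]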
Oct  3 10:27:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:27:41.618 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:27:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:27:41.620 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:27:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:27:41.621 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
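These three ovn_metadata_agent lines trace one guarded call through oslo.concurrency's lockutils: acquire (after a 0.002 s wait), run, release (held 0.001 s). Application code gets the same behavior, including the acquire/release debug logging, from the synchronized decorator; a minimal sketch:

    from oslo_concurrency import lockutils

    # Equivalent guard: acquires the named lock, logs acquire/release with
    # wait and hold times, runs the body, releases on exit.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # liveness sweep over monitored child processes would go here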
Oct  3 10:27:41 compute-0 podman[451175]: 2025-10-03 10:27:41.832582436 +0000 UTC m=+0.083496534 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
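The podman health_status record comes from the container's configured healthcheck ('test': '/openstack/healthcheck', mounted into the container), run on podman's healthcheck timer; health_failing_streak counts consecutive failures before a container is flagged unhealthy. The same check can be invoked on demand with podman healthcheck run, sketched here via subprocess:

    import subprocess

    # Run the configured healthcheck on demand; exit code 0 corresponds to
    # health_status=healthy in the log record above.
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else "unhealthy")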
Oct  3 10:27:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:42 compute-0 nova_compute[351685]: 2025-10-03 10:27:42.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:42 compute-0 nova_compute[351685]: 2025-10-03 10:27:42.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
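The recurring "[POLLIN] on fd 25 __log_wakeup" lines are ovsdbapp's OVSDB IDL loop waking because the database connection's socket became readable. The primitive underneath is the ovs Python poller; a minimal sketch of the same wait-then-wake pattern (assuming a connected socket sock):

    import ovs.poller

    # Block until the socket is readable (or 5 s pass), the same POLLIN
    # wait that produces the __log_wakeup lines above.
    def wait_readable(sock):
        poller = ovs.poller.Poller()
        poller.fd_wait(sock.fileno(), ovs.poller.POLLIN)
        poller.timer_wait(5000)   # milliseconds
        poller.block()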
Oct  3 10:27:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:44 compute-0 podman[451193]: 2025-10-03 10:27:44.753431232 +0000 UTC m=+0.072683976 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:27:44 compute-0 podman[451194]: 2025-10-03 10:27:44.772815904 +0000 UTC m=+0.085876310 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  3 10:27:44 compute-0 podman[451195]: 2025-10-03 10:27:44.809600506 +0000 UTC m=+0.115625376 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:27:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:27:46
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:27:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:27:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:47 compute-0 nova_compute[351685]: 2025-10-03 10:27:47.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:47 compute-0 nova_compute[351685]: 2025-10-03 10:27:47.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.069843) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487269069877, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2043, "num_deletes": 251, "total_data_size": 3434892, "memory_usage": 3487136, "flush_reason": "Manual Compaction"}
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487269100515, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3368780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34881, "largest_seqno": 36923, "table_properties": {"data_size": 3359413, "index_size": 5925, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18504, "raw_average_key_size": 20, "raw_value_size": 3340917, "raw_average_value_size": 3619, "num_data_blocks": 263, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487039, "oldest_key_time": 1759487039, "file_creation_time": 1759487269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 30732 microseconds, and 10016 cpu microseconds.
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.100571) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3368780 bytes OK
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.100596) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.102515) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.102533) EVENT_LOG_v1 {"time_micros": 1759487269102526, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.102554) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3426364, prev total WAL file size 3426364, number of live WAL files 2.
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.103944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3289KB)], [80(7219KB)]
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487269104005, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10761646, "oldest_snapshot_seqno": -1}
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5734 keys, 8995301 bytes, temperature: kUnknown
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487269180220, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8995301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8957159, "index_size": 22736, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14341, "raw_key_size": 144817, "raw_average_key_size": 25, "raw_value_size": 8853610, "raw_average_value_size": 1544, "num_data_blocks": 931, "num_entries": 5734, "num_filter_entries": 5734, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.180702) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8995301 bytes
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.183390) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.8 rd, 117.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6248, records dropped: 514 output_compression: NoCompression
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.183414) EVENT_LOG_v1 {"time_micros": 1759487269183401, "job": 46, "event": "compaction_finished", "compaction_time_micros": 76427, "compaction_time_cpu_micros": 24532, "output_level": 6, "num_output_files": 1, "total_output_size": 8995301, "num_input_records": 6248, "num_output_records": 5734, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487269184407, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487269186890, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.103733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.187069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.187077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.187079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.187081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:27:49 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:27:49.187083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:27:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:50 compute-0 podman[451250]: 2025-10-03 10:27:50.891992514 +0000 UTC m=+0.134029967 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:27:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:52 compute-0 nova_compute[351685]: 2025-10-03 10:27:52.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:52 compute-0 nova_compute[351685]: 2025-10-03 10:27:52.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:27:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4234007531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:27:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:27:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4234007531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:27:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:27:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:27:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:27:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:27:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:27:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:27:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6da10770-5d18-499d-9e53-ede8d1f11ef6 does not exist
Oct  3 10:27:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 06335aaa-1fae-42a9-8547-1c8f6018f598 does not exist
Oct  3 10:27:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6ffb3192-c4f9-4e8e-9e11-e47b1841b525 does not exist
Oct  3 10:27:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:27:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:27:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:27:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:27:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:27:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:27:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:27:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:27:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:27:54 compute-0 nova_compute[351685]: 2025-10-03 10:27:54.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:27:54 compute-0 nova_compute[351685]: 2025-10-03 10:27:54.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:27:54 compute-0 podman[451538]: 2025-10-03 10:27:54.973433158 +0000 UTC m=+0.081303234 container create ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:27:55 compute-0 podman[451538]: 2025-10-03 10:27:54.934730514 +0000 UTC m=+0.042600600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:27:55 compute-0 systemd[1]: Started libpod-conmon-ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f.scope.
Oct  3 10:27:55 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:27:55 compute-0 podman[451538]: 2025-10-03 10:27:55.095397636 +0000 UTC m=+0.203267722 container init ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:27:55 compute-0 podman[451538]: 2025-10-03 10:27:55.109055796 +0000 UTC m=+0.216925852 container start ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:27:55 compute-0 podman[451538]: 2025-10-03 10:27:55.113932732 +0000 UTC m=+0.221802818 container attach ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:27:55 compute-0 dreamy_lovelace[451554]: 167 167
Oct  3 10:27:55 compute-0 systemd[1]: libpod-ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f.scope: Deactivated successfully.
Oct  3 10:27:55 compute-0 podman[451538]: 2025-10-03 10:27:55.120009147 +0000 UTC m=+0.227879223 container died ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd43cb0239209f902f002d0f18f29091dde3e822cbdc828fbaffcf60871d710a-merged.mount: Deactivated successfully.
Oct  3 10:27:55 compute-0 podman[451538]: 2025-10-03 10:27:55.175974786 +0000 UTC m=+0.283844862 container remove ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lovelace, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:27:55 compute-0 systemd[1]: libpod-conmon-ae310cf9adea375dac80cf34d79ab37ec36e2f7d2647295b845443ad85e4b10f.scope: Deactivated successfully.
Oct  3 10:27:55 compute-0 podman[451576]: 2025-10-03 10:27:55.418155376 +0000 UTC m=+0.076570540 container create efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_germain, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:27:55 compute-0 podman[451576]: 2025-10-03 10:27:55.380176356 +0000 UTC m=+0.038591490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:27:55 compute-0 systemd[1]: Started libpod-conmon-efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc.scope.
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:27:55 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554f343388b7e990536519da4356906f88a93fbde62c3e19b179878fd60bcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554f343388b7e990536519da4356906f88a93fbde62c3e19b179878fd60bcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554f343388b7e990536519da4356906f88a93fbde62c3e19b179878fd60bcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554f343388b7e990536519da4356906f88a93fbde62c3e19b179878fd60bcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554f343388b7e990536519da4356906f88a93fbde62c3e19b179878fd60bcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:55 compute-0 podman[451576]: 2025-10-03 10:27:55.565656895 +0000 UTC m=+0.224072049 container init efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_germain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:27:55 compute-0 podman[451576]: 2025-10-03 10:27:55.583395275 +0000 UTC m=+0.241810409 container start efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_germain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:27:55 compute-0 podman[451576]: 2025-10-03 10:27:55.587652333 +0000 UTC m=+0.246067657 container attach efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:27:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:56 compute-0 fervent_germain[451592]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:27:56 compute-0 fervent_germain[451592]: --> relative data size: 1.0
Oct  3 10:27:56 compute-0 fervent_germain[451592]: --> All data devices are unavailable
Oct  3 10:27:56 compute-0 systemd[1]: libpod-efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc.scope: Deactivated successfully.
Oct  3 10:27:56 compute-0 systemd[1]: libpod-efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc.scope: Consumed 1.072s CPU time.
Oct  3 10:27:56 compute-0 conmon[451592]: conmon efca1073d13b7bde6045 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc.scope/container/memory.events
Oct  3 10:27:56 compute-0 podman[451576]: 2025-10-03 10:27:56.732172645 +0000 UTC m=+1.390587789 container died efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd554f343388b7e990536519da4356906f88a93fbde62c3e19b179878fd60bcf-merged.mount: Deactivated successfully.
Oct  3 10:27:56 compute-0 podman[451576]: 2025-10-03 10:27:56.812374572 +0000 UTC m=+1.470789726 container remove efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 10:27:56 compute-0 systemd[1]: libpod-conmon-efca1073d13b7bde604572a692378c4d99689e90502c4a86cc51831f776acadc.scope: Deactivated successfully.
Oct  3 10:27:56 compute-0 podman[451621]: 2025-10-03 10:27:56.836565379 +0000 UTC m=+0.097353199 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:27:56 compute-0 podman[451622]: 2025-10-03 10:27:56.836019092 +0000 UTC m=+0.093588469 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 10:27:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:57 compute-0 podman[451815]: 2025-10-03 10:27:57.671390631 +0000 UTC m=+0.067420867 container create 82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_franklin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:27:57 compute-0 systemd[1]: Started libpod-conmon-82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405.scope.
Oct  3 10:27:57 compute-0 podman[451815]: 2025-10-03 10:27:57.651015147 +0000 UTC m=+0.047045453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:27:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:27:57 compute-0 podman[451815]: 2025-10-03 10:27:57.792663178 +0000 UTC m=+0.188693434 container init 82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_franklin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 10:27:57 compute-0 podman[451815]: 2025-10-03 10:27:57.806091209 +0000 UTC m=+0.202121455 container start 82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_franklin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:27:57 compute-0 podman[451815]: 2025-10-03 10:27:57.812732513 +0000 UTC m=+0.208762749 container attach 82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:27:57 compute-0 musing_franklin[451831]: 167 167
Oct  3 10:27:57 compute-0 systemd[1]: libpod-82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405.scope: Deactivated successfully.
Oct  3 10:27:57 compute-0 podman[451815]: 2025-10-03 10:27:57.81918697 +0000 UTC m=+0.215217216 container died 82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_franklin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac3098ac806dc2e999870298839e27180375254d5db0897a72fb0296029c6487-merged.mount: Deactivated successfully.
Oct  3 10:27:57 compute-0 podman[451815]: 2025-10-03 10:27:57.869177966 +0000 UTC m=+0.265208202 container remove 82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_franklin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:27:57 compute-0 systemd[1]: libpod-conmon-82a7dd27e91006082dbc4ed283d5fa46de07da9c6971a9ab6fecacdd2ceea405.scope: Deactivated successfully.
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.886 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.887 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.887 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:27:57 compute-0 nova_compute[351685]: 2025-10-03 10:27:57.887 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:27:58 compute-0 podman[451853]: 2025-10-03 10:27:58.123979283 +0000 UTC m=+0.107451674 container create 01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:27:58 compute-0 podman[451853]: 2025-10-03 10:27:58.047765354 +0000 UTC m=+0.031237765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:27:58 compute-0 systemd[1]: Started libpod-conmon-01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0.scope.
Oct  3 10:27:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4187c97400b5c31c0973bf2648bc336e7c245662e52506ce284b2b8e9f90d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4187c97400b5c31c0973bf2648bc336e7c245662e52506ce284b2b8e9f90d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4187c97400b5c31c0973bf2648bc336e7c245662e52506ce284b2b8e9f90d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4187c97400b5c31c0973bf2648bc336e7c245662e52506ce284b2b8e9f90d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:27:58 compute-0 podman[451853]: 2025-10-03 10:27:58.290835143 +0000 UTC m=+0.274307624 container init 01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:27:58 compute-0 podman[451853]: 2025-10-03 10:27:58.303929814 +0000 UTC m=+0.287402195 container start 01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:27:58 compute-0 podman[451853]: 2025-10-03 10:27:58.308985807 +0000 UTC m=+0.292458288 container attach 01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:27:59 compute-0 nova_compute[351685]: 2025-10-03 10:27:59.104 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
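The Updating instance_info_cache record above embeds the full Neutron network_info payload for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a as a JSON list. A minimal sketch of digging the fixed and floating addresses out of such a payload (variable names are illustrative, not Nova code, and the JSON is abbreviated to the fields used):

    import json

    # Abbreviated copy of the network_info payload logged above.
    network_info = json.loads("""
    [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
      "devname": "tapa8897fbc-9f",
      "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0",
                  "bridge": "br-int",
                  "subnets": [{"cidr": "192.168.0.0/24",
                               "ips": [{"address": "192.168.0.158",
                                        "type": "fixed",
                                        "floating_ips": [{"address": "192.168.122.250",
                                                          "type": "floating"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["devname"], ip["address"], "->", floating or "no floating IP")

Run against the payload above, this prints tapa8897fbc-9f 192.168.0.158 -> ['192.168.122.250'], matching the OVN-bound port on br-int.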
Oct  3 10:27:59 compute-0 thirsty_cray[451870]: {
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:    "0": [
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:        {
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "devices": [
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "/dev/loop3"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            ],
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_name": "ceph_lv0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_size": "21470642176",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "name": "ceph_lv0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "tags": {
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cluster_name": "ceph",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.crush_device_class": "",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.encrypted": "0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osd_id": "0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.type": "block",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.vdo": "0"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            },
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "type": "block",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "vg_name": "ceph_vg0"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:        }
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:    ],
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:    "1": [
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:        {
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "devices": [
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "/dev/loop4"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            ],
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_name": "ceph_lv1",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_size": "21470642176",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "name": "ceph_lv1",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "tags": {
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cluster_name": "ceph",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.crush_device_class": "",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.encrypted": "0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osd_id": "1",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.type": "block",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.vdo": "0"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            },
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "type": "block",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "vg_name": "ceph_vg1"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:        }
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:    ],
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:    "2": [
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:        {
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "devices": [
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "/dev/loop5"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            ],
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_name": "ceph_lv2",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_size": "21470642176",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "name": "ceph_lv2",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "tags": {
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.cluster_name": "ceph",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.crush_device_class": "",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.encrypted": "0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osd_id": "2",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.type": "block",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:                "ceph.vdo": "0"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            },
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "type": "block",
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:            "vg_name": "ceph_vg2"
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:        }
Oct  3 10:27:59 compute-0 thirsty_cray[451870]:    ]
Oct  3 10:27:59 compute-0 thirsty_cray[451870]: }
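The JSON block thirsty_cray just printed is keyed by OSD id and carries per-LV tags; the shape is consistent with ceph-volume lvm list --format json output (an inference from the fields, since the command line itself is not logged). A short sketch that reduces such a listing to one line per OSD:

    import json
    import sys

    # Feed the JSON block above in on stdin, after trimming the syslog prefixes.
    listing = json.load(sys.stdin)

    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"fsid={tags.get('ceph.osd_fsid')} devices={lv['devices']}")

For the listing above this yields three OSDs (0, 1, 2) on /dev/ceph_vg0..2, each backed by a loop device.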
Oct  3 10:27:59 compute-0 nova_compute[351685]: 2025-10-03 10:27:59.121 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:27:59 compute-0 nova_compute[351685]: 2025-10-03 10:27:59.122 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:27:59 compute-0 nova_compute[351685]: 2025-10-03 10:27:59.122 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:27:59 compute-0 nova_compute[351685]: 2025-10-03 10:27:59.123 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:27:59 compute-0 systemd[1]: libpod-01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0.scope: Deactivated successfully.
Oct  3 10:27:59 compute-0 podman[451853]: 2025-10-03 10:27:59.158089167 +0000 UTC m=+1.141561598 container died 01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 10:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4187c97400b5c31c0973bf2648bc336e7c245662e52506ce284b2b8e9f90d8-merged.mount: Deactivated successfully.
Oct  3 10:27:59 compute-0 podman[451853]: 2025-10-03 10:27:59.244494644 +0000 UTC m=+1.227967035 container remove 01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:27:59 compute-0 systemd[1]: libpod-conmon-01a88c9e8cb3d46fdc9eecc7dadff72a5f58399028ef4f787a8fe32ee3b438c0.scope: Deactivated successfully.
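The create/init/start/attach/died/remove records above trace one short-lived cephadm helper container (thirsty_cray) from creation at 10:27:58.123 to removal at 10:27:59.244. A sketch for reconstructing such lifecycles from journal text; the regular expression is fitted to the podman event lines shown here and is an assumption, not a podman-defined format:

    import re
    import sys
    from datetime import datetime

    EVENT_RE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ "
        r"container (?P<event>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64})")

    def lifecycles(lines):
        """Group podman 'container <event> <id>' records and print create->remove spans."""
        events = {}
        for line in lines:
            m = EVENT_RE.search(line)
            if not m:
                continue
            # Timestamps carry nanoseconds; strptime's %f takes at most six digits.
            ts = datetime.strptime(m["ts"][:26], "%Y-%m-%d %H:%M:%S.%f")
            events.setdefault(m["cid"], {})[m["event"]] = ts
        for cid, ev in sorted(events.items()):
            if "create" in ev and "remove" in ev:
                span = (ev["remove"] - ev["create"]).total_seconds()
                print(f"{cid[:12]} lived {span:.3f}s")

    if __name__ == "__main__":
        lifecycles(sys.stdin)

Piped this stretch of the journal, it reports roughly 1.12 s for 01a88c9e8cb3, i.e. the container existed just long enough to emit the inventory JSON below.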
Oct  3 10:27:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:27:59 compute-0 podman[157165]: time="2025-10-03T10:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:27:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:27:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9070 "" "Go-http-client/1.1"
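The two GET lines show a local client polling podman's libpod REST API (version path /v4.9.3) over its control socket. A stdlib-only sketch of the same containers/json query; the socket path /run/podman/podman.sock is the usual rootful default and is an assumption here:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; podman exposes no TCP listener by default."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed rootful socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Id", "")[:12], c.get("State"), c.get("Names"))

The 46267-byte response logged above is exactly this kind of all=true listing, large because every container's label set rides along.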
Oct  3 10:28:00 compute-0 podman[452028]: 2025-10-03 10:28:00.244352118 +0000 UTC m=+0.069998720 container create 4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:28:00 compute-0 podman[452028]: 2025-10-03 10:28:00.228461858 +0000 UTC m=+0.054108470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:28:00 compute-0 systemd[1]: Started libpod-conmon-4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb.scope.
Oct  3 10:28:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:28:00 compute-0 podman[452028]: 2025-10-03 10:28:00.39349023 +0000 UTC m=+0.219136902 container init 4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:28:00 compute-0 podman[452028]: 2025-10-03 10:28:00.402734618 +0000 UTC m=+0.228381220 container start 4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 10:28:00 compute-0 podman[452028]: 2025-10-03 10:28:00.408081919 +0000 UTC m=+0.233728561 container attach 4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 10:28:00 compute-0 systemd[1]: libpod-4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb.scope: Deactivated successfully.
Oct  3 10:28:00 compute-0 tender_ganguly[452043]: 167 167
Oct  3 10:28:00 compute-0 conmon[452043]: conmon 4d5f374161bbfcc10b18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb.scope/container/memory.events
Oct  3 10:28:00 compute-0 podman[452028]: 2025-10-03 10:28:00.414732652 +0000 UTC m=+0.240379294 container died 4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe6f18a25c3f37c19f62eace2904ee6c6386e5c9119e3e09cc75049129c8e60a-merged.mount: Deactivated successfully.
Oct  3 10:28:00 compute-0 podman[452028]: 2025-10-03 10:28:00.50614602 +0000 UTC m=+0.331792622 container remove 4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:28:00 compute-0 systemd[1]: libpod-conmon-4d5f374161bbfcc10b18dafb4449c36311fa4dd7643fd0ae9a960deabfce2afb.scope: Deactivated successfully.
Oct  3 10:28:00 compute-0 podman[452070]: 2025-10-03 10:28:00.749004133 +0000 UTC m=+0.066524568 container create e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:28:00 compute-0 podman[452070]: 2025-10-03 10:28:00.725158747 +0000 UTC m=+0.042679212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:28:00 compute-0 systemd[1]: Started libpod-conmon-e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b.scope.
Oct  3 10:28:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc1da49e81026d378daab68c1f83e90e2be8fb766b01c906eb4fa38c532481/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc1da49e81026d378daab68c1f83e90e2be8fb766b01c906eb4fa38c532481/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc1da49e81026d378daab68c1f83e90e2be8fb766b01c906eb4fa38c532481/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccc1da49e81026d378daab68c1f83e90e2be8fb766b01c906eb4fa38c532481/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:28:00 compute-0 podman[452070]: 2025-10-03 10:28:00.893071972 +0000 UTC m=+0.210592427 container init e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:28:00 compute-0 podman[452070]: 2025-10-03 10:28:00.908991363 +0000 UTC m=+0.226511838 container start e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:28:00 compute-0 podman[452070]: 2025-10-03 10:28:00.914925334 +0000 UTC m=+0.232445769 container attach e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:28:01 compute-0 openstack_network_exporter[367524]: ERROR   10:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:28:01 compute-0 openstack_network_exporter[367524]: ERROR   10:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:28:01 compute-0 openstack_network_exporter[367524]: ERROR   10:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:28:01 compute-0 openstack_network_exporter[367524]: ERROR   10:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Oct  3 10:28:01 compute-0 openstack_network_exporter[367524]: ERROR   10:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:28:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:01 compute-0 nova_compute[351685]: 2025-10-03 10:28:01.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:28:01 compute-0 nova_compute[351685]: 2025-10-03 10:28:01.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:28:02 compute-0 recursing_mclean[452086]: {
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "osd_id": 1,
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "type": "bluestore"
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:    },
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "osd_id": 2,
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "type": "bluestore"
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:    },
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "osd_id": 0,
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:        "type": "bluestore"
Oct  3 10:28:02 compute-0 recursing_mclean[452086]:    }
Oct  3 10:28:02 compute-0 recursing_mclean[452086]: }
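recursing_mclean prints a second inventory, this time keyed by OSD fsid with the backing device-mapper path and store type; the shape resembles ceph-volume raw list output (again an inference). The two listings describe the same three bluestore OSDs, which a few lines of Python can verify; the file names are hypothetical captures, one per JSON block above:

    import json

    # Hypothetical captures of the two JSON blocks printed above.
    lvm_listing = json.load(open("lvm_list.json"))   # keyed by osd_id
    raw_listing = json.load(open("raw_list.json"))   # keyed by osd fsid

    for osd_id, lvs in lvm_listing.items():
        for lv in lvs:
            fsid = lv["tags"]["ceph.osd_fsid"]
            raw = raw_listing.get(fsid)
            assert raw is not None, f"osd.{osd_id}: fsid {fsid} missing from raw listing"
            assert str(raw["osd_id"]) == osd_id, f"fsid {fsid}: osd_id mismatch"
    print("listings agree on all OSD id/fsid pairs")

Here all three pairs line up (osd.0/25b10821..., osd.1/16cef594..., osd.2/19fdbf19...), and the config-key set calls from the mgr a few lines below persist this device inventory for cephadm.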
Oct  3 10:28:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:02 compute-0 systemd[1]: libpod-e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b.scope: Deactivated successfully.
Oct  3 10:28:02 compute-0 systemd[1]: libpod-e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b.scope: Consumed 1.125s CPU time.
Oct  3 10:28:02 compute-0 podman[452070]: 2025-10-03 10:28:02.043037449 +0000 UTC m=+1.360557944 container died e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 10:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ccc1da49e81026d378daab68c1f83e90e2be8fb766b01c906eb4fa38c532481-merged.mount: Deactivated successfully.
Oct  3 10:28:02 compute-0 nova_compute[351685]: 2025-10-03 10:28:02.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:28:02 compute-0 podman[452070]: 2025-10-03 10:28:02.144153108 +0000 UTC m=+1.461673553 container remove e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mclean, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:28:02 compute-0 systemd[1]: libpod-conmon-e1727302d9d8d1e329c729368b1e4a870ad85ac59e71d2b0ebc981999e4adc5b.scope: Deactivated successfully.
Oct  3 10:28:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:28:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:28:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:28:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:28:02 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c56fda0a-2e66-490b-9c53-5f7061be6409 does not exist
Oct  3 10:28:02 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 30da7f89-62d1-4a5c-aa42-e08bc55182ca does not exist
Oct  3 10:28:02 compute-0 nova_compute[351685]: 2025-10-03 10:28:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:28:02 compute-0 nova_compute[351685]: 2025-10-03 10:28:02.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:28:02 compute-0 nova_compute[351685]: 2025-10-03 10:28:02.765 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:28:02 compute-0 nova_compute[351685]: 2025-10-03 10:28:02.766 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:28:02 compute-0 nova_compute[351685]: 2025-10-03 10:28:02.766 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:28:02 compute-0 nova_compute[351685]: 2025-10-03 10:28:02.767 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:28:02 compute-0 nova_compute[351685]: 2025-10-03 10:28:02.767 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:28:03 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:28:03 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:28:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:28:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574443689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.258 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
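Each resource-audit pass shells out to ceph df --format=json --id openstack to size the shared RBD backend, and the round trips above complete in about half a second. A sketch of the same probe, assuming the usual top-level "stats" object in ceph df's JSON and a reachable cluster with the openstack keyring:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]          # assumed layout of ceph df JSON
    GiB = 1024 ** 3
    print(f"cluster: {stats['total_avail_bytes'] / GiB:.1f} GiB free "
          f"of {stats['total_bytes'] / GiB:.1f} GiB")

On this cluster the result should agree with the pgmap lines interleaved throughout: 60 GiB / 60 GiB avail.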
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.358 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.358 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.358 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:28:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.803 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.805 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3853MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.805 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.806 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.879 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.880 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.880 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.897 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.912 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.912 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.923 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.941 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 10:28:03 compute-0 nova_compute[351685]: 2025-10-03 10:28:03.974 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:28:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:28:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069508823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:28:04 compute-0 nova_compute[351685]: 2025-10-03 10:28:04.428 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:28:04 compute-0 nova_compute[351685]: 2025-10-03 10:28:04.438 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:28:04 compute-0 nova_compute[351685]: 2025-10-03 10:28:04.457 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:28:04 compute-0 nova_compute[351685]: 2025-10-03 10:28:04.459 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:28:04 compute-0 nova_compute[351685]: 2025-10-03 10:28:04.459 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
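The inventory the tracker just confirmed unchanged determines what Placement will schedule onto this node: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check with the values logged above:

    # Inventory exactly as reported for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2

So with the 4.0 CPU overcommit this host can place up to 32 vCPUs even though "Total usable vcpus: 8" above, and the single running instance (1 VCPU, 512 MB, 2 GB) barely dents any of the three classes.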
Oct  3 10:28:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:06 compute-0 podman[452228]: 2025-10-03 10:28:06.853518137 +0000 UTC m=+0.097345648 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, managed_by=edpm_ansible)
Oct  3 10:28:06 compute-0 podman[452229]: 2025-10-03 10:28:06.866704731 +0000 UTC m=+0.110650136 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:28:06 compute-0 podman[452230]: 2025-10-03 10:28:06.919027602 +0000 UTC m=+0.147267622 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
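
The three health_status lines above are podman's scheduled health checks reporting healthy with a failing streak of 0; the 'healthcheck' entry in each config_data is the command podman runs inside the container. A minimal sketch for reproducing one check by hand, assuming the podman CLI on the host and a container name taken from the log:

    # Run the configured health check once and report the result.
    # "podman healthcheck run" exits 0 when the check passes.
    import subprocess

    name = "ovn_controller"  # any container_name from the log lines above
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
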
Oct  3 10:28:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:07 compute-0 nova_compute[351685]: 2025-10-03 10:28:07.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:07 compute-0 nova_compute[351685]: 2025-10-03 10:28:07.459 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:28:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:07 compute-0 nova_compute[351685]: 2025-10-03 10:28:07.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
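
The recurring "[POLLIN] on fd 25 __log_wakeup" lines are the OVS Python IDL poll loop inside nova-compute waking up whenever its OVSDB connection becomes readable. A minimal sketch of that loop, assuming the ovs.poller API from the file path in the log (the fd number and timeout are illustrative):

    # Sketch of the wakeup loop behind the __log_wakeup DEBUG lines.
    import select
    from ovs import poller

    p = poller.Poller()
    fd = 25                       # the OVSDB connection's fd (illustrative)
    p.fd_wait(fd, select.POLLIN)  # wake when the fd is readable
    p.timer_wait(5000)            # or after 5 s at the latest (ms)
    p.block()                     # sleeps here; the wakeup is what gets logged
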
Oct  3 10:28:08 compute-0 nova_compute[351685]: 2025-10-03 10:28:08.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
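
The "Running periodic task ComputeManager._poll_*" lines come from oslo.service dispatching nova-compute's registered periodic tasks. A hedged sketch of how such a task is declared (the decorator is the oslo.service API named in the logged path; the class and spacing are illustrative):

    # How nova-style periodic tasks are wired up with oslo.service.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # spacing is illustrative
        def _poll_rebooting_instances(self, context):
            # run_periodic_tasks() logs "Running periodic task ..." then calls this
            pass
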
Oct  3 10:28:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:12 compute-0 nova_compute[351685]: 2025-10-03 10:28:12.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:12 compute-0 nova_compute[351685]: 2025-10-03 10:28:12.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:12 compute-0 podman[452293]: 2025-10-03 10:28:12.85280377 +0000 UTC m=+0.096221033 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 10:28:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:15 compute-0 podman[452312]: 2025-10-03 10:28:15.822076262 +0000 UTC m=+0.071856250 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:28:15 compute-0 podman[452311]: 2025-10-03 10:28:15.837022202 +0000 UTC m=+0.093692121 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:28:15 compute-0 podman[452313]: 2025-10-03 10:28:15.846212357 +0000 UTC m=+0.094815787 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
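
Among the config_data above, node_exporter's systemd collector is restricted by the --collector.systemd.unit-include regex. A small worked example of which units that pattern keeps (pure stdlib; the unit names tested are illustrative):

    # Check sample unit names against the unit-include pattern from the log.
    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["ovs-vswitchd.service", "virtqemud.service",
                 "rsyslog.service", "sshd.service"]:
        print(unit, "->", bool(unit_include.fullmatch(unit)))
    # ovs-vswitchd, virtqemud and rsyslog match; sshd does not
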
Oct  3 10:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:28:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:17 compute-0 nova_compute[351685]: 2025-10-03 10:28:17.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:17 compute-0 nova_compute[351685]: 2025-10-03 10:28:17.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:21 compute-0 podman[452369]: 2025-10-03 10:28:21.818641018 +0000 UTC m=+0.080001841 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 10:28:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:22 compute-0 nova_compute[351685]: 2025-10-03 10:28:22.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:22 compute-0 nova_compute[351685]: 2025-10-03 10:28:22.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:27 compute-0 nova_compute[351685]: 2025-10-03 10:28:27.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:27 compute-0 nova_compute[351685]: 2025-10-03 10:28:27.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:27 compute-0 podman[452389]: 2025-10-03 10:28:27.80604174 +0000 UTC m=+0.071661703 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 10:28:27 compute-0 podman[452388]: 2025-10-03 10:28:27.830719962 +0000 UTC m=+0.093825575 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:28:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:29 compute-0 podman[157165]: time="2025-10-03T10:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:28:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:28:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9070 "" "Go-http-client/1.1"
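
These GET requests are a client walking the libpod REST API; the podman_exporter config above points it at unix:///run/podman/podman.sock. A self-contained sketch of the same container-list call over the unix socket, using only the standard library (socket path and API version are the ones visible in the log):

    # List containers via the libpod API over podman's unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])
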
Oct  3 10:28:31 compute-0 openstack_network_exporter[367524]: ERROR   10:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:28:31 compute-0 openstack_network_exporter[367524]: ERROR   10:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:28:31 compute-0 openstack_network_exporter[367524]: ERROR   10:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:28:31 compute-0 openstack_network_exporter[367524]: ERROR   10:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:28:31 compute-0 openstack_network_exporter[367524]: ERROR   10:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
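
These exporter errors recur on each scrape and are consistent with this being a compute node: ovn-northd and a standalone ovsdb-server do not run here, so the exporter's appctl helper finds no control sockets to talk to. A quick sketch of the same probe, assuming the conventional control-socket locations under /run:

    # Look for the daemon control sockets the exporter's appctl calls need.
    import glob

    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control socket on this node")
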
Oct  3 10:28:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:32 compute-0 nova_compute[351685]: 2025-10-03 10:28:32.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:32 compute-0 nova_compute[351685]: 2025-10-03 10:28:32.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:37 compute-0 nova_compute[351685]: 2025-10-03 10:28:37.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:37 compute-0 nova_compute[351685]: 2025-10-03 10:28:37.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:37 compute-0 podman[452432]: 2025-10-03 10:28:37.855461816 +0000 UTC m=+0.095758238 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter)
Oct  3 10:28:37 compute-0 podman[452433]: 2025-10-03 10:28:37.857286995 +0000 UTC m=+0.104147438 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 10:28:37 compute-0 podman[452434]: 2025-10-03 10:28:37.885031746 +0000 UTC m=+0.126173885 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:28:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:28:41.620 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:28:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:28:41.621 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:28:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:28:41.621 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
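
The acquire/release pair above is neutron's ProcessMonitor sweeping its child processes under an oslo.concurrency lock (held for only 0.001 s here). A minimal sketch of the pattern, with the lock name copied from the log and an illustrative function body:

    # The oslo.concurrency idiom behind the lockutils DEBUG lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # liveness sweep over monitored child processes goes here
        pass
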
Oct  3 10:28:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:42 compute-0 nova_compute[351685]: 2025-10-03 10:28:42.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:42 compute-0 nova_compute[351685]: 2025-10-03 10:28:42.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:43 compute-0 podman[452494]: 2025-10-03 10:28:43.851413622 +0000 UTC m=+0.100633224 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 10:28:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:28:46
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log']
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:28:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:28:46 compute-0 podman[452513]: 2025-10-03 10:28:46.831398178 +0000 UTC m=+0.097725732 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:28:46 compute-0 podman[452514]: 2025-10-03 10:28:46.851174632 +0000 UTC m=+0.099332241 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:28:46 compute-0 podman[452515]: 2025-10-03 10:28:46.862986192 +0000 UTC m=+0.120889695 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 10:28:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:47 compute-0 nova_compute[351685]: 2025-10-03 10:28:47.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:47 compute-0 nova_compute[351685]: 2025-10-03 10:28:47.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:52 compute-0 nova_compute[351685]: 2025-10-03 10:28:52.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:52 compute-0 nova_compute[351685]: 2025-10-03 10:28:52.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:52 compute-0 podman[452571]: 2025-10-03 10:28:52.806167072 +0000 UTC m=+0.073354417 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  3 10:28:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:28:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725526317' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:28:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:28:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725526317' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
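
The audit lines show the client.openstack identity dispatching "df" and "osd pool get-quota" mon commands, the usual way OpenStack services poll Ceph capacity and pool quotas. A hedged sketch of issuing the same command through librados' Python binding (the conffile path is an assumption):

    # Send the same "df" mon command that appears in the audit log.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(json.loads(outbuf)["stats"]["total_bytes"])
    cluster.shutdown()
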
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
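
The pg targets above are consistent with used_ratio x bias x (target PGs per OSD x OSD count), assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs this 60 GiB cluster appears to have (a factor of 300), before quantizing to a power of two. A worked check against two of the logged pools:

    # Reproduce two "pg target" values from the pg_autoscaler lines above,
    # assuming a total target of 100 PGs/OSD x 3 OSDs = 300.
    for pool, used_ratio, bias in [
        ("vms", 0.000551649390343166, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, used_ratio * bias * 300)
    # -> ~0.1654948171029498 and ~0.0006104707950771635, matching the log
    #    to float precision
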
Oct  3 10:28:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:56 compute-0 nova_compute[351685]: 2025-10-03 10:28:56.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:28:56 compute-0 nova_compute[351685]: 2025-10-03 10:28:56.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:28:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:28:57 compute-0 nova_compute[351685]: 2025-10-03 10:28:57.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:57 compute-0 nova_compute[351685]: 2025-10-03 10:28:57.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:28:58 compute-0 podman[452589]: 2025-10-03 10:28:58.819855727 +0000 UTC m=+0.083294417 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:28:58 compute-0 podman[452590]: 2025-10-03 10:28:58.835972505 +0000 UTC m=+0.089046362 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, vcs-type=git)
Oct  3 10:28:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:28:59 compute-0 nova_compute[351685]: 2025-10-03 10:28:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:28:59 compute-0 nova_compute[351685]: 2025-10-03 10:28:59.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:28:59 compute-0 nova_compute[351685]: 2025-10-03 10:28:59.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:28:59 compute-0 podman[157165]: time="2025-10-03T10:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:28:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:28:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9065 "" "Go-http-client/1.1"
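The two HTTP access lines are prometheus-podman-exporter calling the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the exporter config above). A standard-library-only sketch of the same GET; the socket path and /v4.9.3 API prefix come from the log, while the helper class is ad hoc:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a UNIX socket, not TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")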
Oct  3 10:29:00 compute-0 nova_compute[351685]: 2025-10-03 10:29:00.101 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:29:00 compute-0 nova_compute[351685]: 2025-10-03 10:29:00.101 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:29:00 compute-0 nova_compute[351685]: 2025-10-03 10:29:00.102 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:29:00 compute-0 nova_compute[351685]: 2025-10-03 10:29:00.102 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:29:01 compute-0 nova_compute[351685]: 2025-10-03 10:29:01.258 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:29:01 compute-0 nova_compute[351685]: 2025-10-03 10:29:01.281 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:29:01 compute-0 nova_compute[351685]: 2025-10-03 10:29:01.282 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
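The "Running periodic task ComputeManager._heal_instance_info_cache" lines above come from oslo.service's periodic task machinery, which registers manager methods via a decorator and invokes them from run_periodic_tasks(). A minimal standalone sketch of that registration pattern, not nova's actual code (the 60s spacing is illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            # Refresh one instance's network info cache per pass,
            # mirroring the heal cycle the log shows nova running.
            print("healing instance info cache")

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(context=None)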
Oct  3 10:29:01 compute-0 nova_compute[351685]: 2025-10-03 10:29:01.283 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:29:01 compute-0 nova_compute[351685]: 2025-10-03 10:29:01.283 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:29:01 compute-0 openstack_network_exporter[367524]: ERROR   10:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:29:01 compute-0 openstack_network_exporter[367524]: ERROR   10:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:29:01 compute-0 openstack_network_exporter[367524]: ERROR   10:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:29:01 compute-0 openstack_network_exporter[367524]: ERROR   10:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:29:01 compute-0 openstack_network_exporter[367524]: ERROR   10:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
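The exporter errors above simply mean the expected control sockets are absent on this host: a compute node runs ovn-controller, not ovn-northd, and the kernel (non-DPDK) datapath has no PMD threads for the dpif-netdev appctl calls to report on. A sketch of the kind of socket lookup that fails, assuming the conventional runtime directories:

    import glob

    # appctl-style tools locate a daemon through its <name>.<pid>.ctl socket.
    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")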
Oct  3 10:29:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.753 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.776 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.777 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.777 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
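The Acquiring / acquired / released triplet around "compute_resources" (waited 0.001s, held 0.000s) is the logging produced by oslo.concurrency's synchronized decorator; note the "inner ... lockutils.py" frames in each line. A minimal sketch of that primitive, with only the lock name taken from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body runs with the named in-process lock held; entry and exit
        # emit the Acquiring / acquired / released DEBUG lines seen above.
        pass

    clean_compute_node_cache()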
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.778 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:29:02 compute-0 nova_compute[351685]: 2025-10-03 10:29:02.778 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:29:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:29:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627566470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.252 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
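The "Running cmd (subprocess)" / "returned: 0 in 0.474s" pair is oslo.concurrency's processutils.execute() shelling out to the ceph CLI so the resource tracker can size its storage. A sketch of the same call; it assumes a reachable cluster and the client.openstack keyring referenced in the log:

    import json
    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"])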
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.380 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.380 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.381 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:29:03 compute-0 podman[452821]: 2025-10-03 10:29:03.53217248 +0000 UTC m=+0.198549790 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:29:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:03 compute-0 podman[452840]: 2025-10-03 10:29:03.719005953 +0000 UTC m=+0.071577021 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.753 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.754 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3877MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:29:03 compute-0 podman[452821]: 2025-10-03 10:29:03.835324991 +0000 UTC m=+0.501702201 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.886 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.886 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.886 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:29:03 compute-0 nova_compute[351685]: 2025-10-03 10:29:03.922 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:29:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:29:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2629759875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:29:04 compute-0 nova_compute[351685]: 2025-10-03 10:29:04.485 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:29:04 compute-0 nova_compute[351685]: 2025-10-03 10:29:04.496 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:29:04 compute-0 nova_compute[351685]: 2025-10-03 10:29:04.553 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
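The inventory record above is what placement uses to admit new allocations: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    # Inventory as reported for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2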
Oct  3 10:29:04 compute-0 nova_compute[351685]: 2025-10-03 10:29:04.555 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:29:04 compute-0 nova_compute[351685]: 2025-10-03 10:29:04.555 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:29:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:29:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:29:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:05 compute-0 nova_compute[351685]: 2025-10-03 10:29:05.531 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:29:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:05 compute-0 nova_compute[351685]: 2025-10-03 10:29:05.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:29:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:29:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:29:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:29:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:29:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:29:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5426c0a2-3842-4ce8-ba51-ff616f9096a9 does not exist
Oct  3 10:29:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b5c352bc-b890-4383-87da-c9e4507de941 does not exist
Oct  3 10:29:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 99cda956-df13-4879-8dc4-904cf65d3399 does not exist
Oct  3 10:29:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:29:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:29:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:29:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:29:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:29:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
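Each handle_command / dispatch pair above is the monitor servicing a JSON mon_command such as {"prefix": "df", "format": "json"}. The librados Python binding issues the same structure directly; a sketch using the client.openstack identity from the audit lines (assumes local access to the cluster config and keyring):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    cmd = json.dumps({"prefix": "df", "format": "json"})
    # mon_command returns (retcode, output buffer, status string).
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf)["stats"]["total_bytes"])
    cluster.shutdown()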
Oct  3 10:29:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:07 compute-0 podman[453259]: 2025-10-03 10:29:07.053966803 +0000 UTC m=+0.107613959 container create 004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 10:29:07 compute-0 podman[453259]: 2025-10-03 10:29:06.972067921 +0000 UTC m=+0.025715107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:29:07 compute-0 systemd[1]: Started libpod-conmon-004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd.scope.
Oct  3 10:29:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:29:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:29:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:29:07 compute-0 podman[453259]: 2025-10-03 10:29:07.170061874 +0000 UTC m=+0.223709110 container init 004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 10:29:07 compute-0 podman[453259]: 2025-10-03 10:29:07.182498122 +0000 UTC m=+0.236145308 container start 004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:29:07 compute-0 podman[453259]: 2025-10-03 10:29:07.188557697 +0000 UTC m=+0.242204883 container attach 004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:29:07 compute-0 jolly_khorana[453275]: 167 167
Oct  3 10:29:07 compute-0 systemd[1]: libpod-004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd.scope: Deactivated successfully.
Oct  3 10:29:07 compute-0 podman[453259]: 2025-10-03 10:29:07.191211763 +0000 UTC m=+0.244858919 container died 004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:29:07 compute-0 nova_compute[351685]: 2025-10-03 10:29:07.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-44e9a3e562d6eb6cdfab9a4a188146bd378973154a72b53fcb1184e02898de53-merged.mount: Deactivated successfully.
Oct  3 10:29:07 compute-0 podman[453259]: 2025-10-03 10:29:07.246165798 +0000 UTC m=+0.299812954 container remove 004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct  3 10:29:07 compute-0 systemd[1]: libpod-conmon-004121d2e6e2750db58be411ba42b1d4fa68b9eddc47c3a4e5c4b583b4504fbd.scope: Deactivated successfully.
Oct  3 10:29:07 compute-0 podman[453297]: 2025-10-03 10:29:07.456852267 +0000 UTC m=+0.061305270 container create c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:29:07 compute-0 systemd[1]: Started libpod-conmon-c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e.scope.
Oct  3 10:29:07 compute-0 podman[453297]: 2025-10-03 10:29:07.436303587 +0000 UTC m=+0.040756610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:29:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c698d07f28679daa07278cbf7a54045d374e9e1e7c471174d0704ff384c9acc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c698d07f28679daa07278cbf7a54045d374e9e1e7c471174d0704ff384c9acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c698d07f28679daa07278cbf7a54045d374e9e1e7c471174d0704ff384c9acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c698d07f28679daa07278cbf7a54045d374e9e1e7c471174d0704ff384c9acc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c698d07f28679daa07278cbf7a54045d374e9e1e7c471174d0704ff384c9acc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
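The kernel's "supports timestamps until 2038 (0x7fffffff)" notices flag xfs filesystems without the bigtime feature, whose on-disk timestamps are 32-bit signed seconds since the epoch. The logged constant decodes to the familiar cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t value.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00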
Oct  3 10:29:07 compute-0 podman[453297]: 2025-10-03 10:29:07.567966197 +0000 UTC m=+0.172419230 container init c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:29:07 compute-0 podman[453297]: 2025-10-03 10:29:07.583419654 +0000 UTC m=+0.187872657 container start c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:29:07 compute-0 podman[453297]: 2025-10-03 10:29:07.589553691 +0000 UTC m=+0.194006724 container attach c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:29:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:07 compute-0 nova_compute[351685]: 2025-10-03 10:29:07.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:29:08 compute-0 zealous_wozniak[453313]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:29:08 compute-0 zealous_wozniak[453313]: --> relative data size: 1.0
Oct  3 10:29:08 compute-0 zealous_wozniak[453313]: --> All data devices are unavailable
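jolly_khorana and zealous_wozniak are throwaway ceph containers that cephadm spawns to probe the host; "All data devices are unavailable" means every candidate device was filtered out, typically because it already carries OSD LVs, so there is nothing new to deploy. ceph-volume's inventory report gives the same availability verdict per device; a sketch that assumes ceph-volume on the host PATH (cephadm itself runs it containerized, as above):

    import json
    import subprocess

    # Inspect-only: lists local disks and whether ceph-volume considers
    # them available; devices already holding OSD data report false.
    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True)
    for dev in json.loads(out.stdout):
        print(dev["path"], "available" if dev["available"] else "unavailable")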
Oct  3 10:29:08 compute-0 systemd[1]: libpod-c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e.scope: Deactivated successfully.
Oct  3 10:29:08 compute-0 podman[453297]: 2025-10-03 10:29:08.788984457 +0000 UTC m=+1.393437480 container died c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:29:08 compute-0 systemd[1]: libpod-c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e.scope: Consumed 1.138s CPU time.
Oct  3 10:29:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c698d07f28679daa07278cbf7a54045d374e9e1e7c471174d0704ff384c9acc-merged.mount: Deactivated successfully.
Oct  3 10:29:08 compute-0 podman[453297]: 2025-10-03 10:29:08.864819884 +0000 UTC m=+1.469272887 container remove c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wozniak, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:29:08 compute-0 podman[453342]: 2025-10-03 10:29:08.866421456 +0000 UTC m=+0.116620118 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 10:29:08 compute-0 systemd[1]: libpod-conmon-c212f3f4ac05bf188fa0b0fc3f455b26576a522cc0f93fe4399e157feeb1a23e.scope: Deactivated successfully.
Oct  3 10:29:08 compute-0 podman[453343]: 2025-10-03 10:29:08.881318434 +0000 UTC m=+0.130002867 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 10:29:08 compute-0 podman[453344]: 2025-10-03 10:29:08.920797013 +0000 UTC m=+0.163458403 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:29:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:09 compute-0 podman[453555]: 2025-10-03 10:29:09.686146203 +0000 UTC m=+0.052141717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:29:09 compute-0 podman[453555]: 2025-10-03 10:29:09.80368679 +0000 UTC m=+0.169682264 container create c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:29:09 compute-0 systemd[1]: Started libpod-conmon-c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068.scope.
Oct  3 10:29:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:29:10 compute-0 podman[453555]: 2025-10-03 10:29:10.175137154 +0000 UTC m=+0.541132718 container init c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:29:10 compute-0 podman[453555]: 2025-10-03 10:29:10.196511712 +0000 UTC m=+0.562507176 container start c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:29:10 compute-0 flamboyant_galileo[453571]: 167 167
Oct  3 10:29:10 compute-0 systemd[1]: libpod-c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068.scope: Deactivated successfully.
Oct  3 10:29:10 compute-0 podman[453555]: 2025-10-03 10:29:10.216050009 +0000 UTC m=+0.582045573 container attach c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:29:10 compute-0 podman[453555]: 2025-10-03 10:29:10.216811323 +0000 UTC m=+0.582806847 container died c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e90122c2844c380d5fa9431a057307150e9ad37b6469a4ed5c5dbe5a2561e9d-merged.mount: Deactivated successfully.
Oct  3 10:29:10 compute-0 podman[453555]: 2025-10-03 10:29:10.288215288 +0000 UTC m=+0.654210792 container remove c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:29:10 compute-0 systemd[1]: libpod-conmon-c419e6851098e8afa61b346d204647b5d1d393c5e03a4dc9c82f489e6f383068.scope: Deactivated successfully.
Oct  3 10:29:10 compute-0 podman[453596]: 2025-10-03 10:29:10.541148934 +0000 UTC m=+0.085429736 container create ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mendel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:29:10 compute-0 podman[453596]: 2025-10-03 10:29:10.509145336 +0000 UTC m=+0.053426218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:29:10 compute-0 systemd[1]: Started libpod-conmon-ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313.scope.
Oct  3 10:29:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d0ca37950f8a48b8f188efa63bd5d807f48c9b1aa9407187c0b010b08faee1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d0ca37950f8a48b8f188efa63bd5d807f48c9b1aa9407187c0b010b08faee1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d0ca37950f8a48b8f188efa63bd5d807f48c9b1aa9407187c0b010b08faee1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d0ca37950f8a48b8f188efa63bd5d807f48c9b1aa9407187c0b010b08faee1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:10 compute-0 podman[453596]: 2025-10-03 10:29:10.686847025 +0000 UTC m=+0.231127867 container init ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mendel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:29:10 compute-0 podman[453596]: 2025-10-03 10:29:10.715224047 +0000 UTC m=+0.259504849 container start ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:29:10 compute-0 podman[453596]: 2025-10-03 10:29:10.720062763 +0000 UTC m=+0.264343565 container attach ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:29:10 compute-0 nova_compute[351685]: 2025-10-03 10:29:10.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]: {
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:    "0": [
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:        {
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "devices": [
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "/dev/loop3"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            ],
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_name": "ceph_lv0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_size": "21470642176",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "name": "ceph_lv0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "tags": {
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cluster_name": "ceph",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.crush_device_class": "",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.encrypted": "0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osd_id": "0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.type": "block",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.vdo": "0"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            },
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "type": "block",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "vg_name": "ceph_vg0"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:        }
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:    ],
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:    "1": [
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:        {
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "devices": [
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "/dev/loop4"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            ],
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_name": "ceph_lv1",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_size": "21470642176",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "name": "ceph_lv1",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "tags": {
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cluster_name": "ceph",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.crush_device_class": "",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.encrypted": "0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osd_id": "1",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.type": "block",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.vdo": "0"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            },
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "type": "block",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "vg_name": "ceph_vg1"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:        }
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:    ],
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:    "2": [
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:        {
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "devices": [
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "/dev/loop5"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            ],
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_name": "ceph_lv2",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_size": "21470642176",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "name": "ceph_lv2",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "tags": {
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.cluster_name": "ceph",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.crush_device_class": "",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.encrypted": "0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osd_id": "2",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.type": "block",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:                "ceph.vdo": "0"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            },
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "type": "block",
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:            "vg_name": "ceph_vg2"
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:        }
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]:    ]
Oct  3 10:29:11 compute-0 vigilant_mendel[453612]: }
Oct  3 10:29:11 compute-0 systemd[1]: libpod-ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313.scope: Deactivated successfully.
Oct  3 10:29:11 compute-0 podman[453596]: 2025-10-03 10:29:11.594483897 +0000 UTC m=+1.138764689 container died ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mendel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6d0ca37950f8a48b8f188efa63bd5d807f48c9b1aa9407187c0b010b08faee1-merged.mount: Deactivated successfully.
Oct  3 10:29:11 compute-0 podman[453596]: 2025-10-03 10:29:11.651495979 +0000 UTC m=+1.195776761 container remove ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 10:29:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:11 compute-0 systemd[1]: libpod-conmon-ffb53fab478e8daa50bca6dd9dee74169599b132360d0e1c69da3c532a2a2313.scope: Deactivated successfully.
Oct  3 10:29:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:12 compute-0 nova_compute[351685]: 2025-10-03 10:29:12.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:12 compute-0 podman[453770]: 2025-10-03 10:29:12.633745028 +0000 UTC m=+0.080245039 container create 38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:29:12 compute-0 systemd[1]: Started libpod-conmon-38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac.scope.
Oct  3 10:29:12 compute-0 podman[453770]: 2025-10-03 10:29:12.605914834 +0000 UTC m=+0.052414865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:29:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:29:12 compute-0 podman[453770]: 2025-10-03 10:29:12.761586555 +0000 UTC m=+0.208086626 container init 38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:29:12 compute-0 nova_compute[351685]: 2025-10-03 10:29:12.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:12 compute-0 podman[453770]: 2025-10-03 10:29:12.780037998 +0000 UTC m=+0.226538009 container start 38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:29:12 compute-0 lucid_taussig[453786]: 167 167
Oct  3 10:29:12 compute-0 systemd[1]: libpod-38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac.scope: Deactivated successfully.
Oct  3 10:29:12 compute-0 podman[453770]: 2025-10-03 10:29:12.803964616 +0000 UTC m=+0.250464628 container attach 38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:29:12 compute-0 podman[453770]: 2025-10-03 10:29:12.804460453 +0000 UTC m=+0.250960434 container died 38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-733ef1675754d65071aa2a70e778242c145c80443ff5b9490438b034f552c3f6-merged.mount: Deactivated successfully.
Oct  3 10:29:12 compute-0 podman[453770]: 2025-10-03 10:29:12.872082875 +0000 UTC m=+0.318582897 container remove 38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_taussig, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 10:29:12 compute-0 systemd[1]: libpod-conmon-38bb9f2e66abeb434de147cf3c7ef7d939c960d2897604d81affd068b19f04ac.scope: Deactivated successfully.
Oct  3 10:29:13 compute-0 podman[453808]: 2025-10-03 10:29:13.127603165 +0000 UTC m=+0.087275765 container create ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  3 10:29:13 compute-0 podman[453808]: 2025-10-03 10:29:13.104621687 +0000 UTC m=+0.064294307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:29:13 compute-0 systemd[1]: Started libpod-conmon-ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163.scope.
Oct  3 10:29:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d3f69d039049df47cf9f61230ed5a86b7c130e750a388b19f355a2260ce987/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d3f69d039049df47cf9f61230ed5a86b7c130e750a388b19f355a2260ce987/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d3f69d039049df47cf9f61230ed5a86b7c130e750a388b19f355a2260ce987/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d3f69d039049df47cf9f61230ed5a86b7c130e750a388b19f355a2260ce987/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:29:13 compute-0 podman[453808]: 2025-10-03 10:29:13.273679068 +0000 UTC m=+0.233351678 container init ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:29:13 compute-0 podman[453808]: 2025-10-03 10:29:13.286012915 +0000 UTC m=+0.245685515 container start ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 10:29:13 compute-0 podman[453808]: 2025-10-03 10:29:13.290296032 +0000 UTC m=+0.249968652 container attach ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:29:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]: {
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "osd_id": 1,
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "type": "bluestore"
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:    },
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "osd_id": 2,
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "type": "bluestore"
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:    },
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "osd_id": 0,
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:        "type": "bluestore"
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]:    }
Oct  3 10:29:14 compute-0 laughing_elbakyan[453824]: }
Oct  3 10:29:14 compute-0 systemd[1]: libpod-ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163.scope: Deactivated successfully.
Oct  3 10:29:14 compute-0 podman[453808]: 2025-10-03 10:29:14.330486533 +0000 UTC m=+1.290159133 container died ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:29:14 compute-0 systemd[1]: libpod-ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163.scope: Consumed 1.048s CPU time.
Oct  3 10:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-54d3f69d039049df47cf9f61230ed5a86b7c130e750a388b19f355a2260ce987-merged.mount: Deactivated successfully.
Oct  3 10:29:14 compute-0 podman[453808]: 2025-10-03 10:29:14.400507723 +0000 UTC m=+1.360180323 container remove ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:29:14 compute-0 systemd[1]: libpod-conmon-ba412cee200ea5eb6d1c23bf8f3410f1ac9b070da3e0ed20991454e74f6b1163.scope: Deactivated successfully.
Oct  3 10:29:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:29:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:29:14 compute-0 podman[453858]: 2025-10-03 10:29:14.452672218 +0000 UTC m=+0.081533681 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  3 10:29:14 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:14 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev fe036da6-41cd-43a4-9659-ce8e82ad3af6 does not exist
Oct  3 10:29:14 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 73d66bd1-37dd-4bf9-9a4c-2c6c802ce8fe does not exist
Oct  3 10:29:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:29:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:29:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:17 compute-0 nova_compute[351685]: 2025-10-03 10:29:17.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:17 compute-0 nova_compute[351685]: 2025-10-03 10:29:17.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:17 compute-0 podman[453938]: 2025-10-03 10:29:17.826978941 +0000 UTC m=+0.080168767 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:29:17 compute-0 podman[453940]: 2025-10-03 10:29:17.831715353 +0000 UTC m=+0.083674760 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct  3 10:29:17 compute-0 podman[453939]: 2025-10-03 10:29:17.834144921 +0000 UTC m=+0.086786840 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:29:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:22 compute-0 nova_compute[351685]: 2025-10-03 10:29:22.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:22 compute-0 nova_compute[351685]: 2025-10-03 10:29:22.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:23 compute-0 podman[453998]: 2025-10-03 10:29:23.876859601 +0000 UTC m=+0.128818159 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 10:29:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:27 compute-0 nova_compute[351685]: 2025-10-03 10:29:27.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:27 compute-0 nova_compute[351685]: 2025-10-03 10:29:27.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  3 10:29:29 compute-0 podman[157165]: time="2025-10-03T10:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:29:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:29:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9068 "" "Go-http-client/1.1"
Oct  3 10:29:29 compute-0 podman[454019]: 2025-10-03 10:29:29.839811516 +0000 UTC m=+0.081628513 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:29:29 compute-0 podman[454020]: 2025-10-03 10:29:29.847858995 +0000 UTC m=+0.101235914 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release=1214.1726694543, distribution-scope=public, version=9.4, release-0.7.12=, com.redhat.component=ubi9-container)
Oct  3 10:29:31 compute-0 openstack_network_exporter[367524]: ERROR   10:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:29:31 compute-0 openstack_network_exporter[367524]: ERROR   10:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:29:31 compute-0 openstack_network_exporter[367524]: ERROR   10:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:29:31 compute-0 openstack_network_exporter[367524]: ERROR   10:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:29:31 compute-0 openstack_network_exporter[367524]: ERROR   10:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:29:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  3 10:29:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:32 compute-0 nova_compute[351685]: 2025-10-03 10:29:32.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:32 compute-0 nova_compute[351685]: 2025-10-03 10:29:32.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  3 10:29:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  3 10:29:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:37 compute-0 nova_compute[351685]: 2025-10-03 10:29:37.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  3 10:29:37 compute-0 nova_compute[351685]: 2025-10-03 10:29:37.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  3 10:29:39 compute-0 podman[454060]: 2025-10-03 10:29:39.849714182 +0000 UTC m=+0.101663538 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350)
Oct  3 10:29:39 compute-0 podman[454061]: 2025-10-03 10:29:39.864808366 +0000 UTC m=+0.115259624 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  3 10:29:39 compute-0 podman[454062]: 2025-10-03 10:29:39.941308514 +0000 UTC m=+0.183710854 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
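Each podman health_status record above embeds the container's config_data as a Python-style dict literal. Assuming that representation (as it appears in these lines), the dict can be recovered with ast.literal_eval once the balanced {...} span is sliced out of the message; a minimal sketch:

    import ast

    def extract_config_data(event):
        """Slice the config_data={...} literal out of a podman health_status
        line and parse it. Assumes the literal is well-formed and that no
        'config_data=' text occurs earlier in the line."""
        start = event.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(event[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(event[start:i + 1])
        raise ValueError("unterminated config_data literal")

For the ovn_controller record, for example, extract_config_data(line)["volumes"] would yield the bind-mount list shown inline, which is often the fastest way to audit what a managed container actually mounts.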
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.889 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.890 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.890 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
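The burst of "Registering pollster" lines above is the manager walking every stevedore extension loaded for the [pollsters] source and queueing it on the shared executor, which the 10:29:40.890 message says has a single worker thread. A rough sketch of that shape; the function names paraphrase the logged call sites and are not the real ceilometer API:

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(extension, cache, history, discovery_cache):
        # Placeholder for the per-pollster cycle traced below:
        # discovery -> coordination check -> heartbeat -> samples.
        return extension

    def register_pollster_execution(executor, extension, futures):
        # Mirrors the logged step: every pollster is queued on the shared
        # single-worker executor with its own (initially empty) caches.
        futures.append(executor.submit(run_pollster, extension, {}, {}, {}))

    futures = []
    with ThreadPoolExecutor(max_workers=1) as executor:  # "[1] threads" per the log
        for name in ("network.outgoing.packets.drop", "disk.device.capacity"):
            register_pollster_execution(executor, name, futures)
    print([f.result() for f in futures])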
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.899 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
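The discovery record at 10:29:40.899 is a plain dict, and the fields the pollsters rely on can be read straight off it. A short illustration using values from this log; note that the libvirt domain used for stats lookups is the Nova internal name, not the display name:

    instance = {
        "id": "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
        "name": "test_0",
        "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1},
        "OS-EXT-SRV-ATTR:instance_name": "instance-00000001",
        "status": "active",
    }

    domain = instance["OS-EXT-SRV-ATTR:instance_name"]
    print(f"{instance['name']} ({instance['id'][:8]}) -> libvirt domain {domain}")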
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.899 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.899 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:29:40.899819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.906 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
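Every meter that follows repeats the cadence just logged for network.outgoing.packets.drop: run discovery, check whether the source needs coordination (the hashrings are [None] throughout this log, so it never does), record a heartbeat, emit one sample per resource, and log completion. Condensed into illustrative Python; the callables stand in for ceilometer internals:

    def needs_coordination(pollster_name):
        # The log reports coordination group name [None] and hashrings
        # [None] for every pollster here, so this is always False.
        return False

    def poll_one(pollster_name, discover, get_samples, heartbeat, publish):
        """Paraphrase of the cycle logged for each meter above and below."""
        resources = discover("local_instances")
        if needs_coordination(pollster_name):
            return
        heartbeat(pollster_name)                 # "Pollster heartbeat update"
        for resource in resources:
            for sample in get_samples(resource):
                publish(sample)                  # volume logged by _stats_to_sample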
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.908 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:29:40.909373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.910 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:29:40.912042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.940 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.941 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.941 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
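disk.device.capacity emits one sample per attached block device, in bytes. The two 1073741824-byte samples line up with the flavor's 1 GiB root and 1 GiB ephemeral disks from the discovery record, while the 485376-byte device is presumably the config drive. The conversion is plain arithmetic; device names below are assumed for illustration:

    GIB = 2 ** 30

    for dev, vol in (("vda", 1073741824), ("vdb", 1073741824), ("vdc", 485376)):
        print(f"{dev}: {vol} B = {vol / GIB:.6f} GiB")
    # vda: 1073741824 B = 1.000000 GiB  (matches the 1 GiB flavor disk)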
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.943 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:29:40.943549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.989 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.989 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.989 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:29:40.989283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.990 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
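The disk.device.read.latency volumes are cumulative nanosecond counters from the hypervisor's block stats, so a single sample says little on its own; deltas between successive polls are what matter. A minimal sketch of turning two polls into an average per-request latency, with the second-poll values assumed for illustration:

    def avg_latency_ms(prev_ns, curr_ns, prev_reqs, curr_reqs):
        """Average per-request read latency between two polls, in ms.
        Inputs are the cumulative counters sampled by ceilometer."""
        d_reqs = curr_reqs - prev_reqs
        if d_reqs <= 0:
            return 0.0
        return (curr_ns - prev_ns) / d_reqs / 1e6

    # Using this cycle's first-device counters (1351272306 ns, 840 requests)
    # as the baseline against a hypothetical next poll:
    print(avg_latency_ms(1_351_272_306, 1_360_000_000, 840, 848))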
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.990 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.990 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.991 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.991 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:29:40.990874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.991 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.992 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.992 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.992 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.992 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:29:40.992472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.993 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.993 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.994 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:29:40.994181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.995 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.995 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:29:40.995527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.996 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.997 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:40.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:29:40.996980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
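The power.state sample carries the numeric libvirt domain state, so volume 1 above means the instance is running. The standard virDomainState mapping, for reference when reading these samples:

    # virDomainState codes (libvirt); power.state samples carry these values.
    LIBVIRT_POWER_STATE = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }
    print(LIBVIRT_POWER_STATE[1])  # the volume logged above -> "running"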
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:29:41.021367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:29:41.023559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.025 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.026 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
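Taken together, the ceilometer lines above trace one complete per-pollster cycle: run discovery for the pollster, skip it when discovery yields no new resources, check whether the pollster belongs to a coordinated source (hashrings), update its heartbeat, then emit one sample per resource. A minimal Python sketch of that control flow, with hypothetical names (run_pollster, discover, publish) standing in for the manager internals rather than quoting ceilometer's actual code:

    # Illustrative sketch only; names and structure are assumptions,
    # not ceilometer.polling.manager's real implementation.
    def run_pollster(pollster, discover, hashrings, heartbeat, publish):
        resources = discover(pollster)        # "Executing discovery process for pollsters [...]"
        if not resources:
            return                            # "Skip pollster <name>, no new resources found this cycle"
        group = pollster.coordination_group   # None for every pollster in this log
        if group not in hashrings:
            pass                              # "not configured in a source for polling that requires coordination"
        heartbeat(pollster.name)              # "Pollster heartbeat update: <name>"
        for resource in resources:
            publish(pollster.get_samples(resource))   # "<instance-uuid>/<meter> volume: <n>"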
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:29:41.025711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:29:41.027333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:29:41.029081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.030 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.030 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:29:41.030483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.031 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:29:41.031865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.033 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.033 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.033 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 55250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
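The cpu meter is cumulative guest CPU time in nanoseconds (the standard ceilometer meter unit), so the volume logged above converts to roughly 55 seconds of CPU consumed by instance b43db93c since it started:

    cpu_ns = 55_250_000_000   # cpu volume logged for b43db93c-a4fe-46e9-8418-eedf4f5c135a
    print(cpu_ns / 1e9)       # 55.25 seconds of cumulative CPU time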
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.034 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.034 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.034 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.034 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:29:41.033490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.035 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:29:41.034953) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.036 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:29:41.036347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.037 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.038 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:29:41.037737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
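memory.usage is reported in MiB; the odd-looking fraction is consistent with an underlying figure in KiB divided by 1024 (that origin is an assumption, but the arithmetic below is exact):

    print(48.81640625 * 1024)   # 49988.0 -> the logged MiB value is exactly 49988 KiB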
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.040 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:29:41.039202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.040 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.041 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.041 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.042 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:29:41.041090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.043 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:29:41.042571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.044 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.045 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.045 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.045 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.045 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.045 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.045 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.045 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:29:41.045 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:29:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:29:41.622 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:29:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:29:41.622 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:29:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:29:41.622 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
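That acquire/acquired/released trio is the standard DEBUG trace that oslo.concurrency's synchronized decorator emits around a guarded call; the decorator and its log lines are real oslo.concurrency behavior, and the lock name below is taken from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Runs with the named in-process lock held; lockutils logs
        # 'Acquiring lock', 'acquired :: waited N.NNNs' and
        # 'released :: held N.NNNs' around this body at DEBUG level.
        pass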
Oct  3 10:29:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:42 compute-0 nova_compute[351685]: 2025-10-03 10:29:42.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:42 compute-0 nova_compute[351685]: 2025-10-03 10:29:42.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:44 compute-0 podman[454122]: 2025-10-03 10:29:44.799356329 +0000 UTC m=+0.103257879 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:29:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:29:46
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.data', 'volumes']
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:29:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:29:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:47 compute-0 nova_compute[351685]: 2025-10-03 10:29:47.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:47 compute-0 nova_compute[351685]: 2025-10-03 10:29:47.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:48 compute-0 podman[454141]: 2025-10-03 10:29:48.844213168 +0000 UTC m=+0.102315309 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:29:48 compute-0 podman[454142]: 2025-10-03 10:29:48.864875551 +0000 UTC m=+0.101490331 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:29:48 compute-0 podman[454143]: 2025-10-03 10:29:48.897653804 +0000 UTC m=+0.130886366 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:29:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:52 compute-0 nova_compute[351685]: 2025-10-03 10:29:52.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:52 compute-0 nova_compute[351685]: 2025-10-03 10:29:52.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:29:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3947705849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:29:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:29:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3947705849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
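These audit entries show a client connecting as entity client.openstack (typically the Cinder or Glance Ceph driver) issuing "df" and "osd pool get-quota" monitor commands. The same dispatch can be reproduced with python-rados, whose mon_command call takes exactly this JSON command format; the conffile path below is an assumption for illustration:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',  # assumed path
                          name='client.openstack')         # entity from the audit log
    cluster.connect()
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b'')
    print(json.loads(out)["stats"]["total_bytes"])          # cluster-wide capacity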
Oct  3 10:29:54 compute-0 podman[454205]: 2025-10-03 10:29:54.814438855 +0000 UTC m=+0.082264774 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
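Every pg_autoscaler line applies the same arithmetic: raw pg target = capacity fraction x bias x (target PGs per OSD x OSD count), before quantization and per-pool minimums are applied. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs implied by the 60 GiB cluster (both assumptions), the logged numbers reproduce exactly:

    # Reproduce the 'vms' pool line above (assumed: 100 target PGs/OSD, 3 OSDs).
    used_fraction, bias = 0.000551649390343166, 1.0
    raw_pg_target = used_fraction * bias * 100 * 3
    print(raw_pg_target)   # matches the logged pg target 0.1654948171029498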
Oct  3 10:29:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:29:57 compute-0 nova_compute[351685]: 2025-10-03 10:29:57.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:57 compute-0 nova_compute[351685]: 2025-10-03 10:29:57.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:29:57 compute-0 nova_compute[351685]: 2025-10-03 10:29:57.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
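_reclaim_queued_deletes returns immediately because reclaim_instance_interval is not set to a positive value, which also means soft delete is effectively disabled on this host (instance deletes happen immediately rather than being queued for later reclaim). A paraphrase of the guard behind that log line, not a verbatim excerpt from nova:

    # Sketch of the periodic-task guard (paraphrased, not nova's literal code).
    def _reclaim_queued_deletes(conf):
        if conf.reclaim_instance_interval <= 0:
            return   # "CONF.reclaim_instance_interval <= 0, skipping..."
        # ...otherwise reclaim instances soft-deleted longer than the interval.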
Oct  3 10:29:57 compute-0 nova_compute[351685]: 2025-10-03 10:29:57.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:29:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:29:59 compute-0 nova_compute[351685]: 2025-10-03 10:29:59.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:29:59 compute-0 nova_compute[351685]: 2025-10-03 10:29:59.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:29:59 compute-0 nova_compute[351685]: 2025-10-03 10:29:59.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:29:59 compute-0 podman[157165]: time="2025-10-03T10:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:29:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:29:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9070 "" "Go-http-client/1.1"
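
The two GET lines are the libpod REST API being polled over the podman socket (the podman_exporter configured later in this log uses CONTAINER_HOST=unix:///run/podman/podman.sock). A small sketch of the same containers/json query from Python, assuming that socket path; http.client has no native unix-socket support, so the connection method is overridden:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
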
Oct  3 10:30:00 compute-0 nova_compute[351685]: 2025-10-03 10:30:00.140 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:30:00 compute-0 nova_compute[351685]: 2025-10-03 10:30:00.142 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:30:00 compute-0 nova_compute[351685]: 2025-10-03 10:30:00.142 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:30:00 compute-0 nova_compute[351685]: 2025-10-03 10:30:00.143 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:30:00 compute-0 podman[454224]: 2025-10-03 10:30:00.833984927 +0000 UTC m=+0.079417973 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, version=9.4, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 10:30:00 compute-0 podman[454223]: 2025-10-03 10:30:00.848809273 +0000 UTC m=+0.093903908 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:30:01 compute-0 openstack_network_exporter[367524]: ERROR   10:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:30:01 compute-0 openstack_network_exporter[367524]: ERROR   10:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:30:01 compute-0 openstack_network_exporter[367524]: ERROR   10:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:30:01 compute-0 openstack_network_exporter[367524]: ERROR   10:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:30:01 compute-0 openstack_network_exporter[367524]: ERROR   10:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:30:01 compute-0 nova_compute[351685]: 2025-10-03 10:30:01.454 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:30:01 compute-0 nova_compute[351685]: 2025-10-03 10:30:01.468 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:30:01 compute-0 nova_compute[351685]: 2025-10-03 10:30:01.468 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
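
The acquire/refresh/release sequence above is Nova's standard oslo.concurrency lock pattern around the per-instance network-info cache. A loose illustration of the same pattern, assuming oslo.concurrency is installed; the lock name mirrors the "refresh_cache-<uuid>" seen in the DEBUG lines:

    from oslo_concurrency import lockutils

    instance_uuid = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"

    # With debug logging enabled, lockutils emits the same
    # Acquiring/Acquired/Releasing trio seen in the log above.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # Inside the critical section Nova re-queries Neutron and writes
        # the fresh network_info back to the instance's info_cache.
        pass
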
Oct  3 10:30:01 compute-0 nova_compute[351685]: 2025-10-03 10:30:01.469 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:01 compute-0 nova_compute[351685]: 2025-10-03 10:30:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:02 compute-0 nova_compute[351685]: 2025-10-03 10:30:02.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:02 compute-0 nova_compute[351685]: 2025-10-03 10:30:02.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:03 compute-0 nova_compute[351685]: 2025-10-03 10:30:03.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:04 compute-0 nova_compute[351685]: 2025-10-03 10:30:04.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:04 compute-0 nova_compute[351685]: 2025-10-03 10:30:04.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:04 compute-0 nova_compute[351685]: 2025-10-03 10:30:04.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:30:04 compute-0 nova_compute[351685]: 2025-10-03 10:30:04.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:30:04 compute-0 nova_compute[351685]: 2025-10-03 10:30:04.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:30:04 compute-0 nova_compute[351685]: 2025-10-03 10:30:04.762 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:30:04 compute-0 nova_compute[351685]: 2025-10-03 10:30:04.762 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:30:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:30:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041772825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.269 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
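
The resource audit shells out to the ceph CLI through oslo.concurrency's processutils, exactly as logged (command line, return code, and wall time). A sketch of the same call, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable; the key names follow ceph df's JSON schema:

    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])        # cluster capacity in bytes
    print(stats['stats']['total_avail_bytes'])  # free bytes Nova reports
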
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.350 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.351 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.351 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:30:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.718 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.719 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3865MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.720 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.720 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.801 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.801 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.801 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:30:05 compute-0 nova_compute[351685]: 2025-10-03 10:30:05.837 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:30:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:30:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696943285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:30:06 compute-0 nova_compute[351685]: 2025-10-03 10:30:06.305 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:30:06 compute-0 nova_compute[351685]: 2025-10-03 10:30:06.314 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:30:06 compute-0 nova_compute[351685]: 2025-10-03 10:30:06.347 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:30:06 compute-0 nova_compute[351685]: 2025-10-03 10:30:06.350 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:30:06 compute-0 nova_compute[351685]: 2025-10-03 10:30:06.351 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
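
The inventory reported to Placement above turns into schedulable capacity as (total - reserved) * allocation_ratio, which is how 8 physical vCPUs can back more than 8 guest vCPUs. Worked directly from the logged numbers:

    # usable = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
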
Oct  3 10:30:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:07 compute-0 nova_compute[351685]: 2025-10-03 10:30:07.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:07 compute-0 nova_compute[351685]: 2025-10-03 10:30:07.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:08 compute-0 nova_compute[351685]: 2025-10-03 10:30:08.351 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:10 compute-0 nova_compute[351685]: 2025-10-03 10:30:10.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:10 compute-0 podman[454311]: 2025-10-03 10:30:10.826456118 +0000 UTC m=+0.083725021 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:30:10 compute-0 podman[454312]: 2025-10-03 10:30:10.873862291 +0000 UTC m=+0.113939212 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Oct  3 10:30:10 compute-0 podman[454318]: 2025-10-03 10:30:10.903035108 +0000 UTC m=+0.147566792 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:30:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.071391) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487412071432, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1374, "num_deletes": 255, "total_data_size": 2155643, "memory_usage": 2190064, "flush_reason": "Manual Compaction"}
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487412087043, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 2113356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36924, "largest_seqno": 38297, "table_properties": {"data_size": 2106863, "index_size": 3693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13295, "raw_average_key_size": 19, "raw_value_size": 2093849, "raw_average_value_size": 3074, "num_data_blocks": 166, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487270, "oldest_key_time": 1759487270, "file_creation_time": 1759487412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 15717 microseconds, and 6354 cpu microseconds.
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.087110) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 2113356 bytes OK
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.087132) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.089029) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.089046) EVENT_LOG_v1 {"time_micros": 1759487412089041, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.089066) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2149514, prev total WAL file size 2149514, number of live WAL files 2.
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.090086) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323539' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(2063KB)], [83(8784KB)]
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487412090139, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11108657, "oldest_snapshot_seqno": -1}
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 5893 keys, 11002234 bytes, temperature: kUnknown
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487412162143, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 11002234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10960320, "index_size": 26060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 149002, "raw_average_key_size": 25, "raw_value_size": 10851285, "raw_average_value_size": 1841, "num_data_blocks": 1073, "num_entries": 5893, "num_filter_entries": 5893, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.162515) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 11002234 bytes
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.164232) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.9 rd, 152.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.6 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(10.5) write-amplify(5.2) OK, records in: 6415, records dropped: 522 output_compression: NoCompression
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.164291) EVENT_LOG_v1 {"time_micros": 1759487412164279, "job": 48, "event": "compaction_finished", "compaction_time_micros": 72173, "compaction_time_cpu_micros": 24745, "output_level": 6, "num_output_files": 1, "total_output_size": 11002234, "num_input_records": 6415, "num_output_records": 5893, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
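
The amplification figures in the compaction summary follow directly from the logged byte counts: job 48 read 2.0 MB from L0 plus 8.6 MB from L6 and wrote 10.5 MB back to L6. Reconstructed:

    # write-amplify  = bytes written / new bytes entering the LSM (L0 input)
    # read-write-amp = (all bytes read + written) / L0 input
    l0_in, l6_in, out = 2.0, 8.6, 10.5
    print(out / l0_in)                    # 5.25  -> "write-amplify(5.2)"
    print((l0_in + l6_in + out) / l0_in)  # 10.55 -> "read-write-amplify(10.5)"
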
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487412164816, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487412166697, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.089918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.166851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.166856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.166858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.166859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:12.166861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:12 compute-0 nova_compute[351685]: 2025-10-03 10:30:12.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:12 compute-0 nova_compute[351685]: 2025-10-03 10:30:12.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:14 compute-0 podman[454423]: 2025-10-03 10:30:14.948759775 +0000 UTC m=+0.066732664 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  3 10:30:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:30:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:30:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:30:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:30:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:30:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:30:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0c79018d-ba0c-4088-a57e-2374c7844bfd does not exist
Oct  3 10:30:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f9023c11-c5a4-4800-a80f-a1d90d330874 does not exist
Oct  3 10:30:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f2ab2cc1-f8f7-4d73-bd1f-65c4ffca0b4b does not exist
Oct  3 10:30:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:30:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:30:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:30:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:30:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:30:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
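
The mon_command payloads the mgr dispatches here ("config generate-minimal-conf", "auth get") are ordinary monitor commands and can be issued from Python through the librados bindings. A sketch, assuming python3-rados and an admin keyring are available on the host:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b'')
    print(outbuf.decode())   # a minimal ceph.conf suitable for clients
    cluster.shutdown()
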
Oct  3 10:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:30:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:30:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:30:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:30:16 compute-0 podman[454662]: 2025-10-03 10:30:16.507528737 +0000 UTC m=+0.048778827 container create 950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:30:16 compute-0 systemd[1]: Started libpod-conmon-950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b.scope.
Oct  3 10:30:16 compute-0 podman[454662]: 2025-10-03 10:30:16.487508055 +0000 UTC m=+0.028758175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:30:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:30:16 compute-0 podman[454662]: 2025-10-03 10:30:16.65418684 +0000 UTC m=+0.195437010 container init 950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:30:16 compute-0 podman[454662]: 2025-10-03 10:30:16.674808263 +0000 UTC m=+0.216058363 container start 950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:30:16 compute-0 podman[454662]: 2025-10-03 10:30:16.682077976 +0000 UTC m=+0.223328166 container attach 950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 10:30:16 compute-0 happy_panini[454679]: 167 167
Oct  3 10:30:16 compute-0 systemd[1]: libpod-950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b.scope: Deactivated successfully.
Oct  3 10:30:16 compute-0 conmon[454679]: conmon 950d7186b5f22f3db9c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b.scope/container/memory.events
Oct  3 10:30:16 compute-0 podman[454662]: 2025-10-03 10:30:16.686490678 +0000 UTC m=+0.227740808 container died 950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ba5f3326cff6c3a23aefd662b8c23e7102c23c75158f08bc9c70d32f77c498c-merged.mount: Deactivated successfully.
Oct  3 10:30:16 compute-0 podman[454662]: 2025-10-03 10:30:16.783807925 +0000 UTC m=+0.325058025 container remove 950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_panini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:30:16 compute-0 systemd[1]: libpod-conmon-950d7186b5f22f3db9c7c0191b907499444356e47ba9cd85c22c9f40aa6a532b.scope: Deactivated successfully.
Oct  3 10:30:17 compute-0 podman[454703]: 2025-10-03 10:30:17.031500122 +0000 UTC m=+0.059872344 container create 1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lovelace, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:30:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:17 compute-0 systemd[1]: Started libpod-conmon-1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816.scope.
Oct  3 10:30:17 compute-0 podman[454703]: 2025-10-03 10:30:17.00902864 +0000 UTC m=+0.037400862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:30:17 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e88383589486bb630f838ff976f398b9ad6c4316eaa06d8419fa8af63de30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e88383589486bb630f838ff976f398b9ad6c4316eaa06d8419fa8af63de30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e88383589486bb630f838ff976f398b9ad6c4316eaa06d8419fa8af63de30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e88383589486bb630f838ff976f398b9ad6c4316eaa06d8419fa8af63de30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf5e88383589486bb630f838ff976f398b9ad6c4316eaa06d8419fa8af63de30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
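[Note] The five xfs warnings above are informational, not errors: these overlay mounts still use the legacy 32-bit XFS inode timestamp format, which runs out at 0x7fffffff seconds after the Unix epoch. A quick standard-library check of what that limit means in calendar terms (nothing here is specific to this host):

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, as quoted in the warning.
    XFS_LEGACY_TIME_MAX = 0x7FFFFFFF
    print(datetime.fromtimestamp(XFS_LEGACY_TIME_MAX, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, the classic Y2038 cutoff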
Oct  3 10:30:17 compute-0 podman[454703]: 2025-10-03 10:30:17.164705133 +0000 UTC m=+0.193077385 container init 1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lovelace, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:30:17 compute-0 podman[454703]: 2025-10-03 10:30:17.176556633 +0000 UTC m=+0.204928835 container start 1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:30:17 compute-0 podman[454703]: 2025-10-03 10:30:17.181631497 +0000 UTC m=+0.210003719 container attach 1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 10:30:17 compute-0 nova_compute[351685]: 2025-10-03 10:30:17.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:17 compute-0 nova_compute[351685]: 2025-10-03 10:30:17.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:18 compute-0 vigilant_lovelace[454720]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:30:18 compute-0 vigilant_lovelace[454720]: --> relative data size: 1.0
Oct  3 10:30:18 compute-0 vigilant_lovelace[454720]: --> All data devices are unavailable
Oct  3 10:30:18 compute-0 systemd[1]: libpod-1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816.scope: Deactivated successfully.
Oct  3 10:30:18 compute-0 podman[454703]: 2025-10-03 10:30:18.472763478 +0000 UTC m=+1.501135690 container died 1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:30:18 compute-0 systemd[1]: libpod-1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816.scope: Consumed 1.212s CPU time.
Oct  3 10:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf5e88383589486bb630f838ff976f398b9ad6c4316eaa06d8419fa8af63de30-merged.mount: Deactivated successfully.
Oct  3 10:30:18 compute-0 podman[454703]: 2025-10-03 10:30:18.547774269 +0000 UTC m=+1.576146491 container remove 1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:30:18 compute-0 systemd[1]: libpod-conmon-1b05d66c3f3ad56d37c17253f14b1e0633c660c913ef8b2435186fa22c3e8816.scope: Deactivated successfully.
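[Note] vigilant_lovelace above is one of cephadm's short-lived helper containers doing a ceph-volume dry run: three LVM data devices were passed in, and "All data devices are unavailable" means all three are already consumed by existing OSDs (consistent with the lvm listing printed further down), so the batch run creates nothing. If you need to scrape those "--> " report lines, a tolerant sketch (illustrative only, not cephadm's own parser):

    def ceph_volume_report(lines):
        """Collect the '--> ' report lines that ceph-volume prints."""
        return [line[4:].strip() for line in lines if line.startswith("--> ")]

    print(ceph_volume_report([
        "--> passed data devices: 0 physical, 3 LVM",
        "--> relative data size: 1.0",
        "--> All data devices are unavailable",
    ]))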
Oct  3 10:30:19 compute-0 podman[454839]: 2025-10-03 10:30:19.042664969 +0000 UTC m=+0.093698811 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  3 10:30:19 compute-0 podman[454840]: 2025-10-03 10:30:19.055229553 +0000 UTC m=+0.110209172 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:30:19 compute-0 podman[454838]: 2025-10-03 10:30:19.060019087 +0000 UTC m=+0.120189342 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
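[Note] The three health_status events above are podman's healthcheck timers firing for the edpm_ansible-managed containers; each config_data blob mounts a check script and declares 'test': '/openstack/healthcheck', and podman records the outcome (health_status=healthy, health_failing_streak=0). The same check can be run by hand with `podman healthcheck run`, which exits 0 on success; a minimal sketch (container names taken from the log):

    import subprocess

    def is_healthy(container: str) -> bool:
        # `podman healthcheck run` executes the container's configured test.
        return subprocess.run(["podman", "healthcheck", "run", container]).returncode == 0

    for name in ("multipathd", "iscsid", "node_exporter"):
        print(name, is_healthy(name))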
Oct  3 10:30:19 compute-0 podman[454960]: 2025-10-03 10:30:19.4700159 +0000 UTC m=+0.059498153 container create 3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:30:19 compute-0 podman[454960]: 2025-10-03 10:30:19.445345917 +0000 UTC m=+0.034828190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:30:19 compute-0 systemd[1]: Started libpod-conmon-3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14.scope.
Oct  3 10:30:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:30:19 compute-0 podman[454960]: 2025-10-03 10:30:19.593658242 +0000 UTC m=+0.183140525 container init 3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:30:19 compute-0 podman[454960]: 2025-10-03 10:30:19.603973303 +0000 UTC m=+0.193455556 container start 3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:30:19 compute-0 podman[454960]: 2025-10-03 10:30:19.609674896 +0000 UTC m=+0.199157159 container attach 3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 10:30:19 compute-0 funny_wright[454977]: 167 167
Oct  3 10:30:19 compute-0 systemd[1]: libpod-3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14.scope: Deactivated successfully.
Oct  3 10:30:19 compute-0 podman[454960]: 2025-10-03 10:30:19.613711487 +0000 UTC m=+0.203193740 container died 3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 10:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-88bde1c21cc2f1c864b435386c109929fe86c2a05483965cbeab934cf2156557-merged.mount: Deactivated successfully.
Oct  3 10:30:19 compute-0 podman[454960]: 2025-10-03 10:30:19.656913464 +0000 UTC m=+0.246395717 container remove 3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wright, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:30:19 compute-0 systemd[1]: libpod-conmon-3c643fd4192714ff786f45e9aba41cde66988ae7a509131eb9849eec7bb6bd14.scope: Deactivated successfully.
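[Note] funny_wright's only output, "167 167", is the uid/gid pair of the ceph user and group baked into Red Hat-family Ceph images; cephadm discovers the pair by stat-ing a ceph-owned path inside a throwaway container (roughly `stat -c '%u %g' /var/lib/ceph`). The equivalent probe from Python, as a sketch (the path is an assumption):

    import os

    def owner_ids(path: str = "/var/lib/ceph") -> tuple[int, int]:
        st = os.stat(path)
        return st.st_uid, st.st_gid  # expect (167, 167) inside the ceph image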
Oct  3 10:30:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:19 compute-0 podman[455002]: 2025-10-03 10:30:19.921894348 +0000 UTC m=+0.080700633 container create 68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:30:19 compute-0 podman[455002]: 2025-10-03 10:30:19.886353057 +0000 UTC m=+0.045159392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:30:19 compute-0 systemd[1]: Started libpod-conmon-68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a.scope.
Oct  3 10:30:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b782c72b24382ea51bf47542aee99d7daa9670d27c9563e7f4befe37c87d248a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b782c72b24382ea51bf47542aee99d7daa9670d27c9563e7f4befe37c87d248a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b782c72b24382ea51bf47542aee99d7daa9670d27c9563e7f4befe37c87d248a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b782c72b24382ea51bf47542aee99d7daa9670d27c9563e7f4befe37c87d248a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:20 compute-0 podman[455002]: 2025-10-03 10:30:20.034928799 +0000 UTC m=+0.193735084 container init 68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 10:30:20 compute-0 podman[455002]: 2025-10-03 10:30:20.046322475 +0000 UTC m=+0.205128720 container start 68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:30:20 compute-0 podman[455002]: 2025-10-03 10:30:20.056878205 +0000 UTC m=+0.215684470 container attach 68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:30:20 compute-0 laughing_jemison[455018]: {
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:    "0": [
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:        {
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "devices": [
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "/dev/loop3"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            ],
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_name": "ceph_lv0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_size": "21470642176",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "name": "ceph_lv0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "tags": {
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cluster_name": "ceph",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.crush_device_class": "",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.encrypted": "0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osd_id": "0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.type": "block",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.vdo": "0"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            },
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "type": "block",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "vg_name": "ceph_vg0"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:        }
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:    ],
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:    "1": [
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:        {
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "devices": [
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "/dev/loop4"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            ],
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_name": "ceph_lv1",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_size": "21470642176",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "name": "ceph_lv1",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "tags": {
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cluster_name": "ceph",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.crush_device_class": "",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.encrypted": "0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osd_id": "1",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.type": "block",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.vdo": "0"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            },
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "type": "block",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "vg_name": "ceph_vg1"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:        }
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:    ],
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:    "2": [
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:        {
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "devices": [
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "/dev/loop5"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            ],
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_name": "ceph_lv2",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_size": "21470642176",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "name": "ceph_lv2",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "tags": {
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.cluster_name": "ceph",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.crush_device_class": "",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.encrypted": "0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osd_id": "2",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.type": "block",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:                "ceph.vdo": "0"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            },
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "type": "block",
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:            "vg_name": "ceph_vg2"
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:        }
Oct  3 10:30:20 compute-0 laughing_jemison[455018]:    ]
Oct  3 10:30:20 compute-0 laughing_jemison[455018]: }
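[Note] The JSON that laughing_jemison prints has the shape of `ceph-volume lvm list --format json`: top-level keys are OSD ids, each mapping to the logical volumes (with their ceph.* tags) that back the OSD. A short sketch reducing it to the fields that matter most, with field names taken from the output above:

    import json

    def osd_summary(lvm_list_json: str) -> dict:
        """Map osd_id -> {lv_path, osd_fsid, devices} from lvm-list-style JSON."""
        summary = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                summary[osd_id] = {
                    "lv_path": lv["lv_path"],
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "devices": lv.get("devices", []),
                }
        return summary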
Oct  3 10:30:20 compute-0 systemd[1]: libpod-68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a.scope: Deactivated successfully.
Oct  3 10:30:20 compute-0 podman[455002]: 2025-10-03 10:30:20.965431316 +0000 UTC m=+1.124237571 container died 68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b782c72b24382ea51bf47542aee99d7daa9670d27c9563e7f4befe37c87d248a-merged.mount: Deactivated successfully.
Oct  3 10:30:21 compute-0 podman[455002]: 2025-10-03 10:30:21.040670883 +0000 UTC m=+1.199477128 container remove 68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:30:21 compute-0 systemd[1]: libpod-conmon-68ae06439575b549031b08d107dad8d7fb6c77c83afe3b46518767e40d8a3d2a.scope: Deactivated successfully.
Oct  3 10:30:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:21 compute-0 podman[455179]: 2025-10-03 10:30:21.985638364 +0000 UTC m=+0.070416533 container create fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:30:22 compute-0 systemd[1]: Started libpod-conmon-fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914.scope.
Oct  3 10:30:22 compute-0 podman[455179]: 2025-10-03 10:30:21.960092313 +0000 UTC m=+0.044870502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:30:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:30:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:22 compute-0 podman[455179]: 2025-10-03 10:30:22.087422694 +0000 UTC m=+0.172200893 container init fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 10:30:22 compute-0 podman[455179]: 2025-10-03 10:30:22.102125856 +0000 UTC m=+0.186904065 container start fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:30:22 compute-0 podman[455179]: 2025-10-03 10:30:22.108478641 +0000 UTC m=+0.193256830 container attach fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:30:22 compute-0 mystifying_tesla[455194]: 167 167
Oct  3 10:30:22 compute-0 systemd[1]: libpod-fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914.scope: Deactivated successfully.
Oct  3 10:30:22 compute-0 podman[455179]: 2025-10-03 10:30:22.109896266 +0000 UTC m=+0.194674435 container died fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb3e574d1a7cad98f30dc18be237a84069e5ae7bbfb26156efb327b8818c2048-merged.mount: Deactivated successfully.
Oct  3 10:30:22 compute-0 podman[455179]: 2025-10-03 10:30:22.153081134 +0000 UTC m=+0.237859303 container remove fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:30:22 compute-0 systemd[1]: libpod-conmon-fd17bdd7cfdc7fda7f7ed2ae2b8e874a4fb4701f84f1905a0bc58b92b69a4914.scope: Deactivated successfully.
Oct  3 10:30:22 compute-0 nova_compute[351685]: 2025-10-03 10:30:22.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:22 compute-0 podman[455217]: 2025-10-03 10:30:22.37765795 +0000 UTC m=+0.062025194 container create 847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_agnesi, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:30:22 compute-0 podman[455217]: 2025-10-03 10:30:22.355686533 +0000 UTC m=+0.040053817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:30:22 compute-0 systemd[1]: Started libpod-conmon-847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e.scope.
Oct  3 10:30:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a14c437f57b0740b66df64051853a5b33a4dbd02f7c82dd233038f56cc703a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a14c437f57b0740b66df64051853a5b33a4dbd02f7c82dd233038f56cc703a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a14c437f57b0740b66df64051853a5b33a4dbd02f7c82dd233038f56cc703a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a14c437f57b0740b66df64051853a5b33a4dbd02f7c82dd233038f56cc703a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:30:22 compute-0 podman[455217]: 2025-10-03 10:30:22.544012304 +0000 UTC m=+0.228379588 container init 847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:30:22 compute-0 podman[455217]: 2025-10-03 10:30:22.558741457 +0000 UTC m=+0.243108701 container start 847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:30:22 compute-0 podman[455217]: 2025-10-03 10:30:22.5641217 +0000 UTC m=+0.248488934 container attach 847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:30:22 compute-0 nova_compute[351685]: 2025-10-03 10:30:22.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]: {
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "osd_id": 1,
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "type": "bluestore"
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:    },
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "osd_id": 2,
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "type": "bluestore"
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:    },
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "osd_id": 0,
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:        "type": "bluestore"
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]:    }
Oct  3 10:30:23 compute-0 jovial_agnesi[455234]: }
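[Note] jovial_agnesi prints a second inventory of the same three OSDs, this time keyed by OSD uuid with the device-mapper path, osd_id and store type, in the style of `ceph-volume raw list`. The two listings should agree on the osd_id/fsid pairing; a sketch of that cross-check:

    import json

    def fsid_to_osd_id(raw_list_json: str) -> dict:
        """Map osd fsid -> osd_id from the uuid-keyed listing above."""
        return {fsid: entry["osd_id"]
                for fsid, entry in json.loads(raw_list_json).items()
                if entry.get("type") == "bluestore"}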
Oct  3 10:30:23 compute-0 systemd[1]: libpod-847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e.scope: Deactivated successfully.
Oct  3 10:30:23 compute-0 systemd[1]: libpod-847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e.scope: Consumed 1.052s CPU time.
Oct  3 10:30:23 compute-0 podman[455217]: 2025-10-03 10:30:23.626761741 +0000 UTC m=+1.311129015 container died 847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0a14c437f57b0740b66df64051853a5b33a4dbd02f7c82dd233038f56cc703a-merged.mount: Deactivated successfully.
Oct  3 10:30:23 compute-0 podman[455217]: 2025-10-03 10:30:23.696166142 +0000 UTC m=+1.380533416 container remove 847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_agnesi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:30:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:23 compute-0 systemd[1]: libpod-conmon-847998f744cd71ec5ad5e15b13e3acea4f2522fd29b45f516d4f3c8545ffba0e.scope: Deactivated successfully.
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.730 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.731 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.731 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.732 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.732 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.733 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.752 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct  3 10:30:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.767 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.768 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Image id 37f03e8a-3aed-46a5-8219-fc87e355127e yields fingerprint 8123da205344dbbb79d5d821c9749dc540280b1e _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.769 2 INFO nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] image 37f03e8a-3aed-46a5-8219-fc87e355127e at (/var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e): checking
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.770 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] image 37f03e8a-3aed-46a5-8219-fc87e355127e at (/var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct  3 10:30:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:30:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:30:23 compute-0 nova_compute[351685]: 2025-10-03 10:30:23.774 2 INFO oslo.privsep.daemon [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp9ewvxd3_/privsep.sock']
Oct  3 10:30:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:30:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f5e242fd-fcd7-4d74-8291-1213047a2183 does not exist
Oct  3 10:30:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ee284a45-04e3-4ceb-a167-bdc55853bd88 does not exist
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.509 2 INFO oslo.privsep.daemon [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Spawned new privsep daemon via rootwrap
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.356 4814 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.364 4814 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.369 4814 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.370 4814 INFO oslo.privsep.daemon [-] privsep daemon running as pid 4814
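The sequence above is oslo.privsep cold-starting its privileged helper: the unprivileged nova-compute process forks the daemon via sudo/nova-rootwrap, and the helper (pid 4814) reports root uid/gid plus the exact capability set it will retain. A sketch of how such a context is declared, assuming the oslo.privsep library (all names below are illustrative; nova ships an equivalent sys_admin_pctxt in nova.privsep):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    sys_admin_ctx = priv_context.PrivContext(
        'demo',                          # log/config prefix (illustrative)
        cfg_section='demo_sys_admin',
        pypath=__name__ + '.sys_admin_ctx',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @sys_admin_ctx.entrypoint
    def read_protected(path):
        # Executes inside the privileged daemon (the uid/gid 0/0 process
        # logged above), not in the calling service.
        with open(path) as f:
            return f.read()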
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.626 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.627 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] b43db93c-a4fe-46e9-8418-eedf4f5c135a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.629 2 WARNING nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.630 2 INFO nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Active base files: /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.631 2 INFO nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Removable base files: /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.632 2 INFO nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.633 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.634 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct  3 10:30:24 compute-0 nova_compute[351685]: 2025-10-03 10:30:24.634 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
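The "fingerprint" in these imagecache entries is the SHA-1 hex digest of the Glance image id, which is what names the cached files under /var/lib/nova/instances/_base: note that the empty image id at 10:30:24.626 yields da39a3ee5e6b4b0d3255bfef95601890afd80709, the SHA-1 of an empty string. A one-liner reproduces the mapping:

    import hashlib

    def base_file_fingerprint(image_id: str) -> str:
        # "37f03e8a-3aed-46a5-8219-fc87e355127e"
        #   -> "8123da205344dbbb79d5d821c9749dc540280b1e" (per the log above)
        return hashlib.sha1(image_id.encode('utf-8')).hexdigest()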
Oct  3 10:30:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:30:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:30:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:25 compute-0 podman[455334]: 2025-10-03 10:30:25.815309179 +0000 UTC m=+0.073499273 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 10:30:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.081164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487427081282, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 402, "num_deletes": 250, "total_data_size": 286994, "memory_usage": 296088, "flush_reason": "Manual Compaction"}
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487427089077, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 285629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38298, "largest_seqno": 38699, "table_properties": {"data_size": 283169, "index_size": 560, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 4937, "raw_average_key_size": 15, "raw_value_size": 278378, "raw_average_value_size": 875, "num_data_blocks": 24, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487412, "oldest_key_time": 1759487412, "file_creation_time": 1759487427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 7988 microseconds, and 2472 cpu microseconds.
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.089146) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 285629 bytes OK
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.089172) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.091677) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.091700) EVENT_LOG_v1 {"time_micros": 1759487427091692, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.091724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 284434, prev total WAL file size 284434, number of live WAL files 2.
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.092479) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(278KB)], [86(10MB)]
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487427092567, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11287863, "oldest_snapshot_seqno": -1}
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5700 keys, 10551191 bytes, temperature: kUnknown
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487427162453, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10551191, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10510566, "index_size": 25283, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 146791, "raw_average_key_size": 25, "raw_value_size": 10404735, "raw_average_value_size": 1825, "num_data_blocks": 1021, "num_entries": 5700, "num_filter_entries": 5700, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.163108) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10551191 bytes
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.165227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.3 rd, 150.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(76.5) write-amplify(36.9) OK, records in: 6211, records dropped: 511 output_compression: NoCompression
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.165293) EVENT_LOG_v1 {"time_micros": 1759487427165283, "job": 50, "event": "compaction_finished", "compaction_time_micros": 69961, "compaction_time_cpu_micros": 31382, "output_level": 6, "num_output_files": 1, "total_output_size": 10551191, "num_input_records": 6211, "num_output_records": 5700, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487427165944, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487427168450, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.092345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.168639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.168646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.168647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.168649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:27.168650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
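Every rocksdb EVENT_LOG_v1 record above carries a single-line JSON payload, so the mon's flush and compaction history can be mined straight out of syslog. A small sketch under that assumption (the input path is illustrative):

    import json
    import re

    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

    def rocksdb_events(lines):
        """Yield the parsed JSON payload of each EVENT_LOG_v1 record."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    # Example: total bytes written by compactions seen in this capture.
    with open('/var/log/messages') as f:   # illustrative path
        total = sum(e.get('total_output_size', 0)
                    for e in rocksdb_events(f)
                    if e.get('event') == 'compaction_finished')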
Oct  3 10:30:27 compute-0 nova_compute[351685]: 2025-10-03 10:30:27.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:27 compute-0 nova_compute[351685]: 2025-10-03 10:30:27.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:29 compute-0 podman[157165]: time="2025-10-03T10:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:30:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:30:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9062 "" "Go-http-client/1.1"
Oct  3 10:30:31 compute-0 openstack_network_exporter[367524]: ERROR   10:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:30:31 compute-0 openstack_network_exporter[367524]: ERROR   10:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:30:31 compute-0 openstack_network_exporter[367524]: ERROR   10:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:30:31 compute-0 openstack_network_exporter[367524]: ERROR   10:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:30:31 compute-0 openstack_network_exporter[367524]: ERROR   10:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:30:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:31 compute-0 podman[455353]: 2025-10-03 10:30:31.843226081 +0000 UTC m=+0.084087493 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:30:31 compute-0 podman[455354]: 2025-10-03 10:30:31.853995877 +0000 UTC m=+0.096075428 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct  3 10:30:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:32 compute-0 nova_compute[351685]: 2025-10-03 10:30:32.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:32 compute-0 nova_compute[351685]: 2025-10-03 10:30:32.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:37 compute-0 nova_compute[351685]: 2025-10-03 10:30:37.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:37 compute-0 nova_compute[351685]: 2025-10-03 10:30:37.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:30:41.623 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:30:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:30:41.623 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:30:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:30:41.624 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:30:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:41 compute-0 podman[455398]: 2025-10-03 10:30:41.864134445 +0000 UTC m=+0.113924830 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Oct  3 10:30:41 compute-0 podman[455397]: 2025-10-03 10:30:41.902489393 +0000 UTC m=+0.149948903 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm)
Oct  3 10:30:41 compute-0 podman[455399]: 2025-10-03 10:30:41.907896036 +0000 UTC m=+0.151774291 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  3 10:30:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:42 compute-0 nova_compute[351685]: 2025-10-03 10:30:42.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:42 compute-0 nova_compute[351685]: 2025-10-03 10:30:42.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:45 compute-0 podman[455459]: 2025-10-03 10:30:45.816783619 +0000 UTC m=+0.075038094 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:30:46
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'vms', 'backups', 'cephfs.cephfs.meta']
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:30:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:30:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.185825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487447185873, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 388, "num_deletes": 250, "total_data_size": 290100, "memory_usage": 298224, "flush_reason": "Manual Compaction"}
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct  3 10:30:47 compute-0 nova_compute[351685]: 2025-10-03 10:30:47.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487447434202, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 234785, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38700, "largest_seqno": 39087, "table_properties": {"data_size": 232533, "index_size": 416, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6079, "raw_average_key_size": 20, "raw_value_size": 228062, "raw_average_value_size": 757, "num_data_blocks": 19, "num_entries": 301, "num_filter_entries": 301, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487428, "oldest_key_time": 1759487428, "file_creation_time": 1759487447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 248573 microseconds, and 2661 cpu microseconds.
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.434398) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 234785 bytes OK
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.434423) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.482708) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.482754) EVENT_LOG_v1 {"time_micros": 1759487447482741, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.482783) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 287614, prev total WAL file size 288771, number of live WAL files 2.
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.483636) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353034' seq:72057594037927935, type:22 .. '6D6772737461740031373535' seq:0, type:0; will stop at (end)
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(229KB)], [89(10MB)]
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487447483671, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10785976, "oldest_snapshot_seqno": -1}
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5498 keys, 7550857 bytes, temperature: kUnknown
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487447549203, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 7550857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7516235, "index_size": 19792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 142765, "raw_average_key_size": 25, "raw_value_size": 7418503, "raw_average_value_size": 1349, "num_data_blocks": 793, "num_entries": 5498, "num_filter_entries": 5498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.549481) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 7550857 bytes
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.551595) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.3 rd, 115.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.1 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(78.1) write-amplify(32.2) OK, records in: 6001, records dropped: 503 output_compression: NoCompression
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.551616) EVENT_LOG_v1 {"time_micros": 1759487447551606, "job": 52, "event": "compaction_finished", "compaction_time_micros": 65638, "compaction_time_cpu_micros": 29302, "output_level": 6, "num_output_files": 1, "total_output_size": 7550857, "num_input_records": 6001, "num_output_records": 5498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.483438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.551870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.551875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.551877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.551879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:30:47.551881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487447552451, "job": 0, "event": "table_file_deletion", "file_number": 91}
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:30:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487447554452, "job": 0, "event": "table_file_deletion", "file_number": 89}
Oct  3 10:30:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:47 compute-0 nova_compute[351685]: 2025-10-03 10:30:47.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:30:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:49 compute-0 podman[455480]: 2025-10-03 10:30:49.804830079 +0000 UTC m=+0.060034325 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:30:49 compute-0 podman[455478]: 2025-10-03 10:30:49.823811937 +0000 UTC m=+0.076254894 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:30:49 compute-0 podman[455479]: 2025-10-03 10:30:49.851155082 +0000 UTC m=+0.108986582 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 10:30:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:52 compute-0 nova_compute[351685]: 2025-10-03 10:30:52.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:52 compute-0 nova_compute[351685]: 2025-10-03 10:30:52.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:30:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1502227640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:30:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:30:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1502227640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.0 total, 600.0 interval
Cumulative writes: 8572 writes, 39K keys, 8572 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
Cumulative WAL: 8572 writes, 8572 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1322 writes, 6479 keys, 1322 commit groups, 1.0 writes per commit group, ingest: 8.60 MB, 0.01 MB/s
Interval WAL: 1322 writes, 1322 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     26.2      1.82              0.17        26    0.070       0      0       0.0       0.0
  L6      1/0    7.20 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1     82.4     68.2      2.84              0.63        25    0.114    130K    13K       0.0       0.0
 Sum      1/0    7.20 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1     50.2     51.8      4.66              0.80        51    0.091    130K    13K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     36.2     35.4      1.99              0.24        14    0.142     42K   3595       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     82.4     68.2      2.84              0.63        25    0.114    130K    13K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     26.3      1.81              0.17        25    0.072       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.0 total, 600.0 interval
Flush(GB): cumulative 0.047, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.24 GB write, 0.07 MB/s write, 0.23 GB read, 0.07 MB/s read, 4.7 seconds
Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 2.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 25.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000308 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1629,24.75 MB,8.14048%) FilterBlock(52,371.23 KB,0.119254%) IndexBlock(52,627.05 KB,0.201431%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
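Note: the rocksdb dump above reached syslog as one flattened record; rsyslog escapes embedded control characters octally (`#012` = LF, expanded into real newlines above; the `#033[00m` tails on the nova_compute lines are the same escaping applied to an ANSI reset sequence). A small throwaway helper for decoding such records when post-processing this log (a sketch, not part of any tool named here):

```python
import re

def unescape_syslog(line: str) -> str:
    """Decode rsyslog's octal control-character escapes (#012 = LF, #033 = ESC)."""
    return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), line)

print(unescape_syslog("one#012two"))        # prints two lines
print(repr(unescape_syslog("x#033[00m")))   # ANSI reset decoded to ESC
```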
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
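Note: the pg_autoscaler figures above are internally consistent: each pool's `pg target` equals usage ratio × bias × 300, which matches 100 target PGs per OSD across the 3 OSDs implied by the 60 GiB cluster (an inference from these numbers, not something the log states). A sketch of that arithmetic, with the quantize step simplified to "round up to a power of two"; the real autoscaler also applies per-pool minimums and a 3x change threshold, which is why tiny targets above still sit at 32:

```python
# Assumed: mon_target_pg_per_osd = 100 and 3 OSDs, hence 300 target PGs total.
TARGET_PGS = 100 * 3

def pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * TARGET_PGS

def quantize(target: float, minimum: int = 1) -> int:
    # Simplified: next power of two at or above target, floored at `minimum`.
    n = max(int(target), minimum)
    p = 1
    while p < n:
        p *= 2
    return p

# Pool 'vms' from the log: ratio 0.000551649390343166, bias 1.0
print(pg_target(0.000551649390343166, 1.0))  # -> 0.1654948171029498, as logged
```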
Oct  3 10:30:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:56 compute-0 podman[455536]: 2025-10-03 10:30:56.839854281 +0000 UTC m=+0.100701296 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:30:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:30:57 compute-0 nova_compute[351685]: 2025-10-03 10:30:57.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:57 compute-0 nova_compute[351685]: 2025-10-03 10:30:57.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:30:59 compute-0 nova_compute[351685]: 2025-10-03 10:30:59.637 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:59 compute-0 nova_compute[351685]: 2025-10-03 10:30:59.637 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:30:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:30:59 compute-0 nova_compute[351685]: 2025-10-03 10:30:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:30:59 compute-0 nova_compute[351685]: 2025-10-03 10:30:59.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:30:59 compute-0 nova_compute[351685]: 2025-10-03 10:30:59.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:30:59 compute-0 podman[157165]: time="2025-10-03T10:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:30:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:30:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9061 "" "Go-http-client/1.1"
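Note: these two requests are a client polling the libpod REST API through the Podman socket (the `CONTAINER_HOST=unix:///run/podman/podman.sock` setting logged for podman_exporter below points at the same path). A minimal sketch of the same `containers/json` query over that socket, assuming the default socket location and using HTTP/1.0 to avoid chunked responses:

```python
import json
import socket

# Hypothetical re-issue of the logged query; socket path assumed from the
# podman_exporter configuration recorded further down in this log.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/run/podman/podman.sock")
sock.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
             b"Host: d\r\n\r\n")
buf = b""
while chunk := sock.recv(65536):
    buf += chunk
sock.close()
_headers, _, body = buf.partition(b"\r\n\r\n")
print(len(json.loads(body)), "containers")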
Oct  3 10:31:00 compute-0 nova_compute[351685]: 2025-10-03 10:31:00.150 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:31:00 compute-0 nova_compute[351685]: 2025-10-03 10:31:00.151 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:31:00 compute-0 nova_compute[351685]: 2025-10-03 10:31:00.151 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:31:00 compute-0 nova_compute[351685]: 2025-10-03 10:31:00.151 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:31:01 compute-0 nova_compute[351685]: 2025-10-03 10:31:01.367 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:31:01 compute-0 nova_compute[351685]: 2025-10-03 10:31:01.384 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:31:01 compute-0 nova_compute[351685]: 2025-10-03 10:31:01.385 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:31:01 compute-0 openstack_network_exporter[367524]: ERROR   10:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:31:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:31:01 compute-0 openstack_network_exporter[367524]: ERROR   10:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:31:01 compute-0 openstack_network_exporter[367524]: ERROR   10:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:31:01 compute-0 openstack_network_exporter[367524]: ERROR   10:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:31:01 compute-0 openstack_network_exporter[367524]: ERROR   10:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:31:01 compute-0 openstack_network_exporter[367524]: 
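Note: the `dpif-netdev/pmd-*` appctl commands above only exist for the userspace (netdev) datapath; the port binding logged nearby reports `"datapath_type": "system"`, i.e. the kernel datapath, so these calls fail with "please specify an existing datapath". A sketch of probing for a netdev datapath before asking for PMD stats (assumes `ovs-appctl` on PATH with default socket paths):

```python
import subprocess

# List datapaths (e.g. "system@ovs-system"); only query PMD stats if a
# userspace "netdev@..." datapath actually exists.
dps = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                     capture_output=True, text=True).stdout.split()
if any(dp.startswith("netdev") for dp in dps):
    print(subprocess.check_output(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"], text=True))
else:
    print("no netdev datapath; skipping PMD stats")
```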
Oct  3 10:31:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:01 compute-0 nova_compute[351685]: 2025-10-03 10:31:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:01 compute-0 nova_compute[351685]: 2025-10-03 10:31:01.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:02 compute-0 nova_compute[351685]: 2025-10-03 10:31:02.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:31:02 compute-0 nova_compute[351685]: 2025-10-03 10:31:02.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:31:02 compute-0 podman[455556]: 2025-10-03 10:31:02.841190163 +0000 UTC m=+0.098232938 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:31:02 compute-0 podman[455557]: 2025-10-03 10:31:02.862660211 +0000 UTC m=+0.112802554 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-type=git, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Oct  3 10:31:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:03 compute-0 nova_compute[351685]: 2025-10-03 10:31:03.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:03 compute-0 nova_compute[351685]: 2025-10-03 10:31:03.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:04 compute-0 nova_compute[351685]: 2025-10-03 10:31:04.740 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:04 compute-0 nova_compute[351685]: 2025-10-03 10:31:04.741 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:04 compute-0 nova_compute[351685]: 2025-10-03 10:31:04.793 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:31:04 compute-0 nova_compute[351685]: 2025-10-03 10:31:04.794 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:31:04 compute-0 nova_compute[351685]: 2025-10-03 10:31:04.795 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:31:04 compute-0 nova_compute[351685]: 2025-10-03 10:31:04.796 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:31:04 compute-0 nova_compute[351685]: 2025-10-03 10:31:04.797 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:31:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:31:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010465538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.298 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
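Note: nova-compute shells out to `ceph df` here to size its RBD-backed storage before the resource audit. A sketch of the same probe, reusing the exact command logged above; the key names assume the usual `ceph df --format=json` layout (`stats.total_bytes`, `stats.total_avail_bytes`), which is not shown in this log:

```python
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]
print("avail GiB:", stats["total_avail_bytes"] / 2**30,
      "of", stats["total_bytes"] / 2**30)
```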
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.414 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.415 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.415 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:31:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.768 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.769 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3848MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.769 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.861 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.862 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.863 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:31:05 compute-0 nova_compute[351685]: 2025-10-03 10:31:05.894 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:31:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:31:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214780097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:31:06 compute-0 nova_compute[351685]: 2025-10-03 10:31:06.397 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:31:06 compute-0 nova_compute[351685]: 2025-10-03 10:31:06.406 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:31:06 compute-0 nova_compute[351685]: 2025-10-03 10:31:06.425 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
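Note: the inventory record above is what placement uses to compute schedulable capacity: per resource class, capacity = (total - reserved) × allocation_ratio. Worked through with the logged numbers (standard placement arithmetic, stated here rather than in the log itself):

```python
# Inventory values copied from the scheduler report line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, "capacity:", cap)
# -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```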
Oct  3 10:31:06 compute-0 nova_compute[351685]: 2025-10-03 10:31:06.427 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:31:06 compute-0 nova_compute[351685]: 2025-10-03 10:31:06.427 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:31:06 compute-0 nova_compute[351685]: 2025-10-03 10:31:06.428 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:06 compute-0 nova_compute[351685]: 2025-10-03 10:31:06.429 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  3 10:31:06 compute-0 nova_compute[351685]: 2025-10-03 10:31:06.452 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  3 10:31:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:07 compute-0 nova_compute[351685]: 2025-10-03 10:31:07.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:31:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:07 compute-0 nova_compute[351685]: 2025-10-03 10:31:07.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:31:08 compute-0 nova_compute[351685]: 2025-10-03 10:31:08.437 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:08 compute-0 nova_compute[351685]: 2025-10-03 10:31:08.459 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:10 compute-0 nova_compute[351685]: 2025-10-03 10:31:10.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:31:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:12 compute-0 nova_compute[351685]: 2025-10-03 10:31:12.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:31:12 compute-0 nova_compute[351685]: 2025-10-03 10:31:12.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:31:12 compute-0 podman[455642]: 2025-10-03 10:31:12.854743935 +0000 UTC m=+0.099711144 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct  3 10:31:12 compute-0 podman[455641]: 2025-10-03 10:31:12.88047045 +0000 UTC m=+0.132778144 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc.)
Oct  3 10:31:12 compute-0 podman[455643]: 2025-10-03 10:31:12.896126112 +0000 UTC m=+0.127811516 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  3 10:31:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:31:16 compute-0 podman[455702]: 2025-10-03 10:31:16.829204399 +0000 UTC m=+0.074129676 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 10:31:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:17 compute-0 nova_compute[351685]: 2025-10-03 10:31:17.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:31:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:17 compute-0 nova_compute[351685]: 2025-10-03 10:31:17.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:31:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:20 compute-0 podman[455724]: 2025-10-03 10:31:20.829330603 +0000 UTC m=+0.081158890 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 10:31:20 compute-0 podman[455722]: 2025-10-03 10:31:20.837072551 +0000 UTC m=+0.091979197 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:31:20 compute-0 podman[455723]: 2025-10-03 10:31:20.846969618 +0000 UTC m=+0.088476925 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:31:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:22 compute-0 nova_compute[351685]: 2025-10-03 10:31:22.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:22 compute-0 nova_compute[351685]: 2025-10-03 10:31:22.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:31:22 compute-0 nova_compute[351685]: 2025-10-03 10:31:22.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 10:31:22 compute-0 nova_compute[351685]: 2025-10-03 10:31:22.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:31:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:31:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:31:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:31:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:31:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:31:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1a32cb2c-2c77-4d45-b2dc-d821c88d4782 does not exist
Oct  3 10:31:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 93b1f922-61d5-4c9a-b0a5-4d7e108ef28a does not exist
Oct  3 10:31:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 655df256-2f20-462f-8337-96ef022cf3b2 does not exist
Oct  3 10:31:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:31:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:31:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:31:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:31:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:31:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:31:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:31:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:31:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
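The 10:31:24 burst is the mgr's cephadm module refreshing its state against the mon: it regenerates a minimal ceph.conf, fetches the client.admin and client.bootstrap-osd keyrings, persists mgr/cephadm/osd_remove_queue via config-key, and lists destroyed OSDs; each command appears once as handle_command and once on the audit channel, and the trailing bare from=... lines repeat the same audit entries. The same commands can be issued from Python through librados; a sketch, assuming python3-rados and a readable /etc/ceph/ceph.conf:

    import json
    import rados  # python3-rados binding (assumed installed)

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same command the mgr dispatched above.
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(out.decode())
    cluster.shutdown()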
Oct  3 10:31:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:25 compute-0 podman[456048]: 2025-10-03 10:31:25.745169065 +0000 UTC m=+0.085586402 container create 2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:31:25 compute-0 podman[456048]: 2025-10-03 10:31:25.696957111 +0000 UTC m=+0.037374438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:31:25 compute-0 systemd[1]: Started libpod-conmon-2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6.scope.
Oct  3 10:31:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:31:25 compute-0 podman[456048]: 2025-10-03 10:31:25.860739597 +0000 UTC m=+0.201156954 container init 2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gates, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:31:25 compute-0 podman[456048]: 2025-10-03 10:31:25.877647349 +0000 UTC m=+0.218064656 container start 2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:31:25 compute-0 podman[456048]: 2025-10-03 10:31:25.887954939 +0000 UTC m=+0.228372286 container attach 2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:31:25 compute-0 keen_gates[456063]: 167 167
Oct  3 10:31:25 compute-0 systemd[1]: libpod-2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6.scope: Deactivated successfully.
Oct  3 10:31:25 compute-0 podman[456048]: 2025-10-03 10:31:25.891272005 +0000 UTC m=+0.231689312 container died 2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a2651b2fad2b99d9b1d6f2a9c532cde8f408d092cf45e9e81c5cb05370665ce-merged.mount: Deactivated successfully.
Oct  3 10:31:25 compute-0 podman[456048]: 2025-10-03 10:31:25.977455005 +0000 UTC m=+0.317872312 container remove 2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:31:26 compute-0 systemd[1]: libpod-conmon-2537efd117faffec8813155db5e574277288f1ae1eff3be3bb0ff5590c55ccb6.scope: Deactivated successfully.
Oct  3 10:31:26 compute-0 podman[456085]: 2025-10-03 10:31:26.190454168 +0000 UTC m=+0.063015660 container create 7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:31:26 compute-0 systemd[1]: Started libpod-conmon-7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a.scope.
Oct  3 10:31:26 compute-0 podman[456085]: 2025-10-03 10:31:26.167207273 +0000 UTC m=+0.039768745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:31:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a467f81e57aeb8fdc6463847d199d98e6d09a34850a0dbbbc925b2f6a548894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a467f81e57aeb8fdc6463847d199d98e6d09a34850a0dbbbc925b2f6a548894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a467f81e57aeb8fdc6463847d199d98e6d09a34850a0dbbbc925b2f6a548894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a467f81e57aeb8fdc6463847d199d98e6d09a34850a0dbbbc925b2f6a548894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a467f81e57aeb8fdc6463847d199d98e6d09a34850a0dbbbc925b2f6a548894/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
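The xfs notices are informational: these overlay mounts sit on an XFS filesystem using the classic (non-bigtime) inode timestamp format, which ends at 0x7fffffff seconds after the epoch. The limit checks out:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch, the limit the kernel reports.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00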
Oct  3 10:31:26 compute-0 podman[456085]: 2025-10-03 10:31:26.329614545 +0000 UTC m=+0.202176057 container init 7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:31:26 compute-0 podman[456085]: 2025-10-03 10:31:26.355796824 +0000 UTC m=+0.228358286 container start 7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:31:26 compute-0 podman[456085]: 2025-10-03 10:31:26.36036507 +0000 UTC m=+0.232926522 container attach 7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:31:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:27 compute-0 nova_compute[351685]: 2025-10-03 10:31:27.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:27 compute-0 eloquent_pascal[456101]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:31:27 compute-0 eloquent_pascal[456101]: --> relative data size: 1.0
Oct  3 10:31:27 compute-0 eloquent_pascal[456101]: --> All data devices are unavailable
Oct  3 10:31:27 compute-0 systemd[1]: libpod-7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a.scope: Deactivated successfully.
Oct  3 10:31:27 compute-0 systemd[1]: libpod-7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a.scope: Consumed 1.077s CPU time.
Oct  3 10:31:27 compute-0 podman[456085]: 2025-10-03 10:31:27.508781544 +0000 UTC m=+1.381343026 container died 7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:31:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a467f81e57aeb8fdc6463847d199d98e6d09a34850a0dbbbc925b2f6a548894-merged.mount: Deactivated successfully.
Oct  3 10:31:27 compute-0 podman[456085]: 2025-10-03 10:31:27.601848646 +0000 UTC m=+1.474410108 container remove 7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:31:27 compute-0 systemd[1]: libpod-conmon-7c6d41314c0a3d87d8a1e1c07d2580dd37da7c87ba1f94af37cbaf4237d7318a.scope: Deactivated successfully.
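The eloquent_pascal run is cephadm doing a ceph-volume dry run: it was handed three LVM data devices ("0 physical, 3 LVM"), and "All data devices are unavailable" means none can be consumed, consistent with the listing further below where all three LVs are already tagged to OSDs 0-2. A rough equivalent by hand, with the LV paths taken from that listing and the exact flags treated as an assumption:

    import subprocess

    # Dry-run sketch: "ceph-volume lvm batch --report" prints the same
    # "passed data devices" / "relative data size" summary without
    # creating anything.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=False)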
Oct  3 10:31:27 compute-0 podman[456131]: 2025-10-03 10:31:27.652478387 +0000 UTC m=+0.116109570 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  3 10:31:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:27 compute-0 nova_compute[351685]: 2025-10-03 10:31:27.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:28 compute-0 podman[456301]: 2025-10-03 10:31:28.467208584 +0000 UTC m=+0.062665189 container create 9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:31:28 compute-0 systemd[1]: Started libpod-conmon-9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c.scope.
Oct  3 10:31:28 compute-0 podman[456301]: 2025-10-03 10:31:28.442352047 +0000 UTC m=+0.037808672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:31:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:31:28 compute-0 podman[456301]: 2025-10-03 10:31:28.588300541 +0000 UTC m=+0.183757146 container init 9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:31:28 compute-0 podman[456301]: 2025-10-03 10:31:28.599670596 +0000 UTC m=+0.195127181 container start 9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct  3 10:31:28 compute-0 podman[456301]: 2025-10-03 10:31:28.603619593 +0000 UTC m=+0.199076178 container attach 9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:31:28 compute-0 practical_mcclintock[456317]: 167 167
Oct  3 10:31:28 compute-0 systemd[1]: libpod-9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c.scope: Deactivated successfully.
Oct  3 10:31:28 compute-0 conmon[456317]: conmon 9815351b74c1dc037324 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c.scope/container/memory.events
Oct  3 10:31:28 compute-0 podman[456301]: 2025-10-03 10:31:28.60915138 +0000 UTC m=+0.204607975 container died 9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-76a110b489a10ee7e76f7dcc293f63fff22da8751cf91d1a7a53bafc0d8e1db6-merged.mount: Deactivated successfully.
Oct  3 10:31:28 compute-0 podman[456301]: 2025-10-03 10:31:28.665765213 +0000 UTC m=+0.261221798 container remove 9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:31:28 compute-0 systemd[1]: libpod-conmon-9815351b74c1dc03732480f6afddb0aefb6c7015d454b5dca8eae6159466384c.scope: Deactivated successfully.
Oct  3 10:31:28 compute-0 podman[456341]: 2025-10-03 10:31:28.911526655 +0000 UTC m=+0.084593871 container create 0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:31:28 compute-0 systemd[1]: Started libpod-conmon-0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936.scope.
Oct  3 10:31:28 compute-0 podman[456341]: 2025-10-03 10:31:28.877886867 +0000 UTC m=+0.050954093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:31:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d71b53f0a531a91fe7329c514775afd401e4aa3ce41f5b058310ebe817bd54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d71b53f0a531a91fe7329c514775afd401e4aa3ce41f5b058310ebe817bd54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d71b53f0a531a91fe7329c514775afd401e4aa3ce41f5b058310ebe817bd54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d71b53f0a531a91fe7329c514775afd401e4aa3ce41f5b058310ebe817bd54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:29 compute-0 podman[456341]: 2025-10-03 10:31:29.041671943 +0000 UTC m=+0.214739199 container init 0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ganguly, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:31:29 compute-0 podman[456341]: 2025-10-03 10:31:29.060955972 +0000 UTC m=+0.234023188 container start 0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:31:29 compute-0 podman[456341]: 2025-10-03 10:31:29.075002581 +0000 UTC m=+0.248069807 container attach 0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:31:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:29 compute-0 podman[157165]: time="2025-10-03T10:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:31:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47838 "" "Go-http-client/1.1"
Oct  3 10:31:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9478 "" "Go-http-client/1.1"
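The podman[157165] lines are the podman system service answering libpod REST calls over its unix socket (a Go client polling container lists and stats). The same endpoint can be reached from Python with a stdlib-only unix-socket HTTP connection; the socket path below is the default rootful one and is an assumption:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)  # 200, as in the access log above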
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]: {
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:    "0": [
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:        {
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "devices": [
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "/dev/loop3"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            ],
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_name": "ceph_lv0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_size": "21470642176",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "name": "ceph_lv0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "tags": {
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cluster_name": "ceph",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.crush_device_class": "",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.encrypted": "0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osd_id": "0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.type": "block",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.vdo": "0"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            },
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "type": "block",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "vg_name": "ceph_vg0"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:        }
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:    ],
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:    "1": [
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:        {
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "devices": [
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "/dev/loop4"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            ],
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_name": "ceph_lv1",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_size": "21470642176",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "name": "ceph_lv1",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "tags": {
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cluster_name": "ceph",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.crush_device_class": "",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.encrypted": "0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osd_id": "1",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.type": "block",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.vdo": "0"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            },
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "type": "block",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "vg_name": "ceph_vg1"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:        }
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:    ],
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:    "2": [
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:        {
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "devices": [
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "/dev/loop5"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            ],
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_name": "ceph_lv2",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_size": "21470642176",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "name": "ceph_lv2",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "tags": {
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.cluster_name": "ceph",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.crush_device_class": "",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.encrypted": "0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osd_id": "2",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.type": "block",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:                "ceph.vdo": "0"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            },
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "type": "block",
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:            "vg_name": "ceph_vg2"
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:        }
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]:    ]
Oct  3 10:31:29 compute-0 blissful_ganguly[456358]: }
Oct  3 10:31:29 compute-0 systemd[1]: libpod-0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936.scope: Deactivated successfully.
Oct  3 10:31:29 compute-0 podman[456341]: 2025-10-03 10:31:29.932226738 +0000 UTC m=+1.105293984 container died 0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-46d71b53f0a531a91fe7329c514775afd401e4aa3ce41f5b058310ebe817bd54-merged.mount: Deactivated successfully.
Oct  3 10:31:30 compute-0 podman[456341]: 2025-10-03 10:31:30.01562466 +0000 UTC m=+1.188691866 container remove 0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:31:30 compute-0 systemd[1]: libpod-conmon-0f9281c4cd56ba3d83590f3f2b4c9e1ba1abcbfaea85d2e2ea6a77a1f6ab0936.scope: Deactivated successfully.
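The JSON that blissful_ganguly printed is a per-OSD listing in the style of "ceph-volume lvm list --format json": keys are OSD ids, values the backing LVs with their ceph.* tags (cluster_fsid, osd_fsid, osdspec_affinity=default_drive_group). Reducing it to an osd -> device map takes one line per LV; a trimmed, self-contained sample:

    import json

    # Trimmed sample of the listing above (one OSD shown; the real
    # payload has entries "0", "1", "2").
    raw_json = '{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"]}]}'
    for osd_id, lvs in sorted(json.loads(raw_json).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"][0])
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3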
Oct  3 10:31:30 compute-0 podman[456521]: 2025-10-03 10:31:30.992828659 +0000 UTC m=+0.065105595 container create dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 10:31:31 compute-0 podman[456521]: 2025-10-03 10:31:30.96007078 +0000 UTC m=+0.032347736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:31:31 compute-0 systemd[1]: Started libpod-conmon-dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de.scope.
Oct  3 10:31:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:31:31 compute-0 podman[456521]: 2025-10-03 10:31:31.290793064 +0000 UTC m=+0.363070050 container init dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:31:31 compute-0 podman[456521]: 2025-10-03 10:31:31.306374142 +0000 UTC m=+0.378651088 container start dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kirch, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:31:31 compute-0 beautiful_kirch[456534]: 167 167
Oct  3 10:31:31 compute-0 systemd[1]: libpod-dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de.scope: Deactivated successfully.
Oct  3 10:31:31 compute-0 conmon[456534]: conmon dbf9b3a5f3e79cd50f14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de.scope/container/memory.events
Oct  3 10:31:31 compute-0 openstack_network_exporter[367524]: ERROR   10:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:31:31 compute-0 openstack_network_exporter[367524]: ERROR   10:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:31:31 compute-0 openstack_network_exporter[367524]: ERROR   10:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:31:31 compute-0 openstack_network_exporter[367524]: ERROR   10:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:31:31 compute-0 openstack_network_exporter[367524]: ERROR   10:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
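The openstack_network_exporter errors are socket-discovery failures rather than crashes: its appctl helper looks for ovsdb-server and ovn-northd unix control sockets and finds none, and the dpif-netdev calls fail because this host has no userspace (netdev) datapath; ovn-northd does not run on a compute node, so these recur harmlessly. A sketch of the discovery step, with the conventional control-socket locations assumed:

    from glob import glob

    # Control-socket patterns the exporter is effectively probing for.
    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        hits = glob(pattern)
        print(pattern, "->", hits or "no control socket files found")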
Oct  3 10:31:31 compute-0 podman[456521]: 2025-10-03 10:31:31.423862097 +0000 UTC m=+0.496139063 container attach dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:31:31 compute-0 podman[456521]: 2025-10-03 10:31:31.424784936 +0000 UTC m=+0.497061882 container died dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:31:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1bd4173928d9b7e7698a0b877b9a89a1960c58726d9e6b634d479de2eea18ec-merged.mount: Deactivated successfully.
Oct  3 10:31:31 compute-0 podman[456521]: 2025-10-03 10:31:31.639178083 +0000 UTC m=+0.711455019 container remove dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:31:31 compute-0 systemd[1]: libpod-conmon-dbf9b3a5f3e79cd50f147b603c43063f0554dff18a24942707c56a9e90bd02de.scope: Deactivated successfully.
Oct  3 10:31:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:31 compute-0 podman[456561]: 2025-10-03 10:31:31.870648297 +0000 UTC m=+0.054167906 container create 4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:31:31 compute-0 systemd[1]: Started libpod-conmon-4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19.scope.
Oct  3 10:31:31 compute-0 podman[456561]: 2025-10-03 10:31:31.853818198 +0000 UTC m=+0.037337847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:31:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bed321c278f1171600c68af97afdafab6db2b682d7bbb5847403c4428f6944/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bed321c278f1171600c68af97afdafab6db2b682d7bbb5847403c4428f6944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bed321c278f1171600c68af97afdafab6db2b682d7bbb5847403c4428f6944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bed321c278f1171600c68af97afdafab6db2b682d7bbb5847403c4428f6944/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
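The four xfs warnings above are the kernel noting that these bind-mounted paths sit on a filesystem whose inode timestamps are 32-bit, i.e. valid only up to 0x7fffffff seconds after the Unix epoch. A quick check (plain Python; nothing assumed beyond the constant quoted in the message) confirms what date that is:

```python
from datetime import datetime, timezone

# 0x7fffffff is the 32-bit signed time_t limit cited in the kernel message.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
```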
Oct  3 10:31:31 compute-0 podman[456561]: 2025-10-03 10:31:31.969661179 +0000 UTC m=+0.153180818 container init 4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct  3 10:31:31 compute-0 podman[456561]: 2025-10-03 10:31:31.985047271 +0000 UTC m=+0.168566910 container start 4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:31:31 compute-0 podman[456561]: 2025-10-03 10:31:31.990159235 +0000 UTC m=+0.173678844 container attach 4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
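The pull/create/init/start/attach events above trace one short-lived cephadm helper container through the libpod lifecycle, and each event carries `m=+<seconds>`, podman's monotonic offset since the CLI process started. A minimal sketch (the regex is fitted to this journal's formatting, not any podman API) turns those offsets into per-stage latencies:

```python
import re

# Matches the podman event lines above: monotonic offset (m=+...),
# event verb, and container/image ID. An assumption about this
# journal's layout, not a stable interface.
EVENT_RE = re.compile(
    r"m=\+(?P<mono>[\d.]+)\s+(?:container|image)\s+(?P<verb>\w+)\s+(?P<id>[0-9a-f]{12,64})"
)

def stage_durations(lines):
    """Return (verb, seconds-since-previous-event) pairs."""
    events = []
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events.append((m.group("verb"), float(m.group("mono"))))
    return [
        (verb, mono - events[i - 1][1] if i else 0.0)
        for i, (verb, mono) in enumerate(events)
    ]

sample = [
    "... m=+0.037337847 image pull 0f5473a1e726 ...",
    "... m=+0.054167906 container create 4424b19b42ca ...",
    "... m=+0.153180818 container init 4424b19b42ca ...",
    "... m=+0.168566910 container start 4424b19b42ca ...",
    "... m=+0.173678844 container attach 4424b19b42ca ...",
]
for verb, dt in stage_durations(sample):
    print(f"{verb:<8} +{dt:.3f}s")
```

Most of the visible cost here is the ~0.1 s between create and init, i.e. setting up the overlay mounts logged by the kernel above.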
Oct  3 10:31:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:32 compute-0 nova_compute[351685]: 2025-10-03 10:31:32.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:32 compute-0 nova_compute[351685]: 2025-10-03 10:31:32.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:33 compute-0 lucid_wiles[456576]: {
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "osd_id": 1,
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "type": "bluestore"
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:    },
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "osd_id": 2,
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "type": "bluestore"
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:    },
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "osd_id": 0,
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:        "type": "bluestore"
Oct  3 10:31:33 compute-0 lucid_wiles[456576]:    }
Oct  3 10:31:33 compute-0 lucid_wiles[456576]: }
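The JSON the `lucid_wiles` container just printed looks like `ceph-volume lvm list --format json` output (an inference from cephadm's usual inventory step, not stated in the log): a map from OSD UUID to the bluestore OSD living on each LVM device. A short sketch reduces it to an osd_id → device table:

```python
import json

# Output captured from the lucid_wiles container above, abbreviated
# to a single entry; keys are OSD UUIDs.
raw = """
{
  "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
    "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0,
    "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
    "type": "bluestore"
  }
}
"""
osds = json.loads(raw)
for meta in sorted(osds.values(), key=lambda m: m["osd_id"]):
    print(f"osd.{meta['osd_id']} -> {meta['device']} ({meta['type']})")
```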
Oct  3 10:31:33 compute-0 systemd[1]: libpod-4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19.scope: Deactivated successfully.
Oct  3 10:31:33 compute-0 podman[456561]: 2025-10-03 10:31:33.139599042 +0000 UTC m=+1.323118661 container died 4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:31:33 compute-0 systemd[1]: libpod-4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19.scope: Consumed 1.147s CPU time.
Oct  3 10:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6bed321c278f1171600c68af97afdafab6db2b682d7bbb5847403c4428f6944-merged.mount: Deactivated successfully.
Oct  3 10:31:33 compute-0 podman[456561]: 2025-10-03 10:31:33.21414707 +0000 UTC m=+1.397666699 container remove 4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:31:33 compute-0 systemd[1]: libpod-conmon-4424b19b42ca88cec2155fb883e5e0d244be222048e459545af18dc3164b7d19.scope: Deactivated successfully.
Oct  3 10:31:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:31:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:31:33 compute-0 podman[456618]: 2025-10-03 10:31:33.289766312 +0000 UTC m=+0.100787809 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_id=edpm, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Oct  3 10:31:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:31:33 compute-0 podman[456611]: 2025-10-03 10:31:33.295928039 +0000 UTC m=+0.116892604 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
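Both `health_status` events above come from podman's healthcheck timer; besides `health_status=healthy` they carry `health_failing_streak` and the full `config_data` the EDPM ansible role deployed each container with. A rough way to pull the interesting fields back out of such a line (the key=value regex is an assumption about this journal's formatting):

```python
import re

# Extract selected key=value fields from a podman health_status event;
# values run up to the next comma or closing parenthesis.
line = ("... container health_status be6a85bb54cd (image=..., name=kepler, "
        "health_status=healthy, health_failing_streak=0, ...)")
fields = dict(re.findall(r"(name|health_status|health_failing_streak)=([^,)]*)", line))
print(fields)  # {'name': 'kepler', 'health_status': 'healthy', 'health_failing_streak': '0'}
```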
Oct  3 10:31:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
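The mon_command/audit pairs show the cephadm mgr module persisting the inventory it just gathered under `mgr/cephadm/host.compute-0.devices.0` in the monitor's config-key store. To inspect that blob later, the stock `ceph config-key get` command returns it; a sketch assuming an admin keyring on the host and that the stored value is JSON (which is how cephadm serializes inventory):

```python
import json
import subprocess

# Key name copied verbatim from the mon_command entry above.
KEY = "mgr/cephadm/host.compute-0.devices.0"
out = subprocess.run(
    ["ceph", "config-key", "get", KEY],
    check=True, capture_output=True, text=True,
).stdout
print(json.dumps(json.loads(out), indent=2)[:400])
```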
Oct  3 10:31:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d1c54e51-c221-4a7c-a5e7-321359aad7ff does not exist
Oct  3 10:31:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 02cd0859-5739-48eb-b866-1a53130de1c5 does not exist
Oct  3 10:31:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:31:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.307349) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487494307401, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 638, "num_deletes": 251, "total_data_size": 713880, "memory_usage": 725440, "flush_reason": "Manual Compaction"}
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487494315009, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 707031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39088, "largest_seqno": 39725, "table_properties": {"data_size": 703652, "index_size": 1287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7884, "raw_average_key_size": 19, "raw_value_size": 696811, "raw_average_value_size": 1712, "num_data_blocks": 57, "num_entries": 407, "num_filter_entries": 407, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487447, "oldest_key_time": 1759487447, "file_creation_time": 1759487494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 7732 microseconds, and 3670 cpu microseconds.
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.315078) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 707031 bytes OK
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.315102) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.317440) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.317455) EVENT_LOG_v1 {"time_micros": 1759487494317450, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.317479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 710459, prev total WAL file size 736947, number of live WAL files 2.
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
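Everything rocksdb prints after an `EVENT_LOG_v1` marker is a single-line JSON document, which makes the mon's flush and compaction activity easy to mine programmatically. A small sketch that yields those payloads from journal lines like the ones above:

```python
import json

def rocksdb_events(lines):
    """Yield the JSON payload of each rocksdb EVENT_LOG_v1 line."""
    marker = "EVENT_LOG_v1 "
    for line in lines:
        _, sep, payload = line.partition(marker)
        if sep:
            yield json.loads(payload)

flush = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1759487494307401, '
         '"job": 53, "event": "flush_started", "num_entries": 638}')
for ev in rocksdb_events([flush]):
    print(ev["job"], ev["event"], ev.get("num_entries"))  # 53 flush_started 638
```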
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.318094) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(690KB)], [92(7373KB)]
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487494318142, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 8257888, "oldest_snapshot_seqno": -1}
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5391 keys, 6530979 bytes, temperature: kUnknown
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487494361993, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 6530979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6498036, "index_size": 18369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 141222, "raw_average_key_size": 26, "raw_value_size": 6403155, "raw_average_value_size": 1187, "num_data_blocks": 724, "num_entries": 5391, "num_filter_entries": 5391, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.362350) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 6530979 bytes
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.364127) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.9 rd, 148.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.2 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(20.9) write-amplify(9.2) OK, records in: 5905, records dropped: 514 output_compression: NoCompression
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.364150) EVENT_LOG_v1 {"time_micros": 1759487494364138, "job": 54, "event": "compaction_finished", "compaction_time_micros": 43959, "compaction_time_cpu_micros": 28074, "output_level": 6, "num_output_files": 1, "total_output_size": 6530979, "num_input_records": 5905, "num_output_records": 5391, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
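The amplification figures in JOB 54's summary follow directly from the byte counts in the `compaction_finished` event: write amplification is output bytes over the newly flushed L0 bytes, and the read-write figure adds everything read and written over that same base. Reproducing the arithmetic:

```python
# Byte counts taken from JOB 54's events above.
l0_in = 707_031          # flushed L0 table #94
total_in = 8_257_888     # input_data_size (L0 #94 + L6 #92)
out = 6_530_979          # new L6 table #95

print(f"write-amplify      {out / l0_in:.1f}")               # 9.2
print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # 20.9
```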
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487494364504, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487494366555, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.318003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.366816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.366825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.366826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.366828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:31:34 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:31:34.366830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:31:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:37 compute-0 nova_compute[351685]: 2025-10-03 10:31:37.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:37 compute-0 nova_compute[351685]: 2025-10-03 10:31:37.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.890 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.891 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
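The two manager messages say there are more pollsters in the `pollsters` source than worker threads, so the ThreadPoolExecutor visible in the registration lines below ends up draining them one at a time. A toy model of that serialization (the names here are illustrative, not ceilometer code):

```python
from concurrent.futures import ThreadPoolExecutor

# Many pollsters, one worker thread: polls run back to back rather
# than concurrently, which is exactly what the warning above predicts.
pollsters = ["network.outgoing.packets.drop",
             "network.outgoing.packets.error",
             "disk.device.capacity"]

with ThreadPoolExecutor(max_workers=1) as pool:
    for name in pollsters:
        pool.submit(print, f"polling {name}")
```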
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.891 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.900 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
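Discovery hands every pollster the instance record logged above; it is a Python dict repr rather than JSON (single quotes), so `ast.literal_eval` is the safe way to re-parse it from a captured log line. A trimmed sketch:

```python
import ast

# The instance record from discover_libvirt_polling above, trimmed.
record = ("{'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', "
          "'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, "
          "'ephemeral': 1, 'swap': 0}, "
          "'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', "
          "'status': 'active'}")
inst = ast.literal_eval(record)
print(inst["OS-EXT-SRV-ATTR:instance_name"], inst["flavor"]["name"])
# instance-00000001 m1.small
```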
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.901 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.901 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:31:40.901814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.910 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.913 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:31:40.912831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:31:40.915062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.939 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.939 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.940 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
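The three `disk.device.capacity` samples line up with the discovery record: two devices of exactly 1 GiB match the flavor's `disk=1` and `ephemeral=1`, while the 485376-byte third device is presumably the config drive (an inference, not stated in the log). Checking the numbers:

```python
# The two large capacity samples above are exactly 1 GiB; the third
# device is far smaller and likely the config drive.
assert 1073741824 == 1 * 1024**3
print(1073741824 / 1024**3, "GiB")  # 1.0 GiB
print(485376 / 1024, "KiB")         # 474.0 KiB
```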
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.940 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:31:40.940808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.998 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.001 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.002 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:31:41.002569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.003 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
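The `disk.device.read.latency` volumes look like libvirt's cumulative read-time counters in nanoseconds (an assumption based on their magnitude relative to the request counts polled next); converted, the busiest device has spent roughly 1.35 s servicing reads so far:

```python
# First device's latency sample from above, interpreted as nanoseconds.
ns = 1_351_272_306
print(f"{ns / 1e9:.3f} s cumulative read time")  # 1.351 s
```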
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:31:41.007754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:31:41.012930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:31:41.019037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
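
For disk.device.usage and disk.device.allocation the first two devices report 1073741824 bytes (1 GiB) and the third 485376 bytes. libvirt exposes all three per-device sizing figures through blockInfo(); which field maps to which meter is an assumption in this sketch, not the pollster's actual code:

import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # assumed URI
dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")

# blockInfo() returns (capacity, allocation, physical), all in bytes.
capacity, allocation, physical = dom.blockInfo("vda")  # assumed device name
print("disk.device.capacity volume:", capacity)
print("disk.device.allocation volume:", allocation)
print("disk.device.usage volume:", physical)  # assumed mapping to 'usage'
conn.close()
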
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:31:41.024499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
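
Each cycle also logs a coordination check that resolves to group name [None], meaning this agent is not partitioning work and polls every local instance itself. When workload partitioning is enabled, agents typically share resources through a tooz hash ring; a hedged sketch of that pattern, with backend URL, member id and group name all invented for illustration:

from tooz import coordination

# Assumed backend and member id; real deployments configure these.
coordinator = coordination.get_coordinator("memcached://127.0.0.1:11211", b"compute-0")
coordinator.start(start_heart=True)

# join_partitioned_group() returns a hash-ring partitioner for the group.
partitioner = coordinator.join_partitioned_group("telemetry-pollsters")

resource_id = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"
if b"compute-0" in partitioner.members_for_object(resource_id):
    print("this member polls", resource_id)  # the ring assigned it to us

coordinator.stop()
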
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:31:41.029210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
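
The power.state volume of 1 matches libvirt's VIR_DOMAIN_RUNNING. A minimal sketch of mapping a domain's state to this meter (not the actual PowerStatePollster code; URI assumed):

import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # assumed URI
dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")

state, reason = dom.state()            # e.g. (1, ...) for a running guest
print("power.state volume:", state)    # libvirt.VIR_DOMAIN_RUNNING == 1
conn.close()
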
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:31:41.056136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.058 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:31:41.058436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
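
A .delta meter reports the change in a cumulative counter since the previous poll, so an interface with no new traffic yields volume 0 as above. A minimal, self-contained sketch of that bookkeeping (the cache layout is an assumption, not ceilometer's internal structure):

previous = {}  # (instance_id, interface) -> last cumulative reading

def delta_sample(instance_id, iface, cumulative):
    """Return the increase since the previous poll; 0 on the first poll."""
    key = (instance_id, iface)
    last = previous.get(key)
    previous[key] = cumulative
    return 0 if last is None else max(0, cumulative - last)

print(delta_sample("b43db93c", "tap0", 2856))  # first poll -> 0
print(delta_sample("b43db93c", "tap0", 2856))  # counter unchanged -> 0
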
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.061 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:31:41.060491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:31:41.062356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:31:41.063547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:31:41.064818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.065 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.066 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:31:41.065949) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:31:41.067411) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 56810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
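
The cpu volume of 56810000000 is cumulative guest CPU time in nanoseconds, which libvirt reports in the domain info tuple. A sketch of reading it, again not the actual CPUPollster implementation:

import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # assumed URI
dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")

# info() returns (state, maxMem KiB, memory KiB, nrVirtCpu, cpuTime ns).
state, max_mem, mem, nr_vcpu, cpu_time_ns = dom.info()
print("cpu volume:", cpu_time_ns)  # cumulative ns, e.g. 56810000000
conn.close()
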
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.068 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.069 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.070 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:31:41.068759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.072 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.073 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:31:41.070441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:31:41.072055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:31:41.073989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
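
memory.usage is reported in MB, and the fractional value 48.81640625 is exactly 49988 KiB / 1024, pointing at libvirt's KiB-denominated memory statistics. Which memoryStats() field the pollster uses is an assumption in this sketch:

import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # assumed URI
dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")

stats = dom.memoryStats()         # dict of counters, values in KiB
usage_kib = stats.get("rss", 0)   # assumed field choice
print("memory.usage volume:", usage_kib / 1024.0)  # MB, e.g. 48.81640625
conn.close()
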
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:31:41 compute-0 nova_compute[351685]: 2025-10-03 10:31:41.074 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:31:41.076070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.077 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.078 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:31:41.078029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:31:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:31:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
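
Throughout the cycle, the "Pollster heartbeat update" lines come from polling worker process 14 while the matching "Updated heartbeat for ..." lines come from process 12, so heartbeats are handed off to a separate status-tracking process (which also explains the occasionally out-of-order timestamps). A hedged sketch of such a hand-off; the queue mechanism here is an assumption, not ceilometer's actual implementation:

import datetime
import multiprocessing

def status_process(queue):
    # Mirrors the 'Updated heartbeat for ...' lines emitted by process 12.
    heartbeats = {}
    while True:
        name, ts = queue.get()
        if name is None:          # shutdown sentinel
            break
        heartbeats[name] = ts
        print(f"Updated heartbeat for {name} ({ts.isoformat()})")

if __name__ == "__main__":
    q = multiprocessing.Queue()
    proc = multiprocessing.Process(target=status_process, args=(q,))
    proc.start()
    # A polling worker reports a heartbeat after each pollster run.
    q.put(("disk.device.read.requests", datetime.datetime.now(datetime.timezone.utc)))
    q.put((None, None))
    proc.join()
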
Oct  3 10:31:41 compute-0 nova_compute[351685]: 2025-10-03 10:31:41.096 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 10:31:41 compute-0 nova_compute[351685]: 2025-10-03 10:31:41.097 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:31:41 compute-0 nova_compute[351685]: 2025-10-03 10:31:41.099 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:31:41 compute-0 nova_compute[351685]: 2025-10-03 10:31:41.127 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
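[editor's note] The acquire/wait/hold bookkeeping in the three lockutils lines above is emitted by oslo.concurrency itself. A small sketch of the same pattern using the real lockutils.lock context manager (the per-instance lock name is taken from the log; the body is illustrative):

```python
# Sketch of the oslo.concurrency locking pattern seen above; the
# "Acquiring lock ... / acquired ... / released" DEBUG lines are logged
# by lockutils around the context manager.
from oslo_concurrency import lockutils

def query_driver_power_state_and_sync(instance_uuid):
    with lockutils.lock(instance_uuid):
        # Under the per-instance lock, compare the hypervisor's power
        # state with the DB record and reconcile them (illustrative body).
        pass
```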
Oct  3 10:31:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:31:41.625 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:31:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:31:41.626 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:31:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:31:41.627 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:31:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:42 compute-0 nova_compute[351685]: 2025-10-03 10:31:42.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:42 compute-0 nova_compute[351685]: 2025-10-03 10:31:42.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:43 compute-0 podman[456713]: 2025-10-03 10:31:43.848701516 +0000 UTC m=+0.099099515 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS)
Oct  3 10:31:43 compute-0 podman[456712]: 2025-10-03 10:31:43.869827333 +0000 UTC m=+0.124657945 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 10:31:43 compute-0 podman[456714]: 2025-10-03 10:31:43.878324605 +0000 UTC m=+0.128953131 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  3 10:31:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:31:46
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.log', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms']
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:31:46 compute-0 ceph-mgr[192071]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3262515590
Oct  3 10:31:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:47 compute-0 nova_compute[351685]: 2025-10-03 10:31:47.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:47 compute-0 podman[456775]: 2025-10-03 10:31:47.83562093 +0000 UTC m=+0.094860949 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:31:47 compute-0 nova_compute[351685]: 2025-10-03 10:31:47.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:51 compute-0 podman[456794]: 2025-10-03 10:31:51.845748774 +0000 UTC m=+0.092767092 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:31:51 compute-0 podman[456795]: 2025-10-03 10:31:51.854600528 +0000 UTC m=+0.093669631 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  3 10:31:51 compute-0 podman[456796]: 2025-10-03 10:31:51.890675264 +0000 UTC m=+0.123642572 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 10:31:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:52 compute-0 nova_compute[351685]: 2025-10-03 10:31:52.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:52 compute-0 nova_compute[351685]: 2025-10-03 10:31:52.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:31:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/95397802' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:31:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:31:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/95397802' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
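[editor's note] The autoscaler numbers above decompose cleanly: each pool's raw PG target is its used-space ratio × bias × a cluster-wide PG budget. The budget here is evidently 300, consistent with the default mon_target_pg_per_osd of 100 across 3 OSDs, though that split is an assumption about this cluster. A quick check in Python:

```python
# Reproduce the pg_autoscaler targets logged above. BUDGET = 300 is an
# assumption (e.g. mon_target_pg_per_osd 100 * 3 OSDs); ratio and bias
# are taken verbatim from the log lines.
BUDGET = 300

def raw_pg_target(usage_ratio, bias, budget=BUDGET):
    return usage_ratio * bias * budget

print(raw_pg_target(0.000551649390343166, 1.0))   # vms -> 0.1654948171029498
print(raw_pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950771635
# The raw target is then quantized to a power of two and clamped by the
# pool's minimum/current pg_num, which is why these tiny targets stay at 32.
```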
Oct  3 10:31:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:31:57 compute-0 nova_compute[351685]: 2025-10-03 10:31:57.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:57 compute-0 podman[456857]: 2025-10-03 10:31:57.860756083 +0000 UTC m=+0.109868869 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi)
Oct  3 10:31:57 compute-0 nova_compute[351685]: 2025-10-03 10:31:57.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:31:58 compute-0 nova_compute[351685]: 2025-10-03 10:31:58.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:31:58 compute-0 nova_compute[351685]: 2025-10-03 10:31:58.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:31:59 compute-0 nova_compute[351685]: 2025-10-03 10:31:59.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:31:59 compute-0 nova_compute[351685]: 2025-10-03 10:31:59.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:31:59 compute-0 nova_compute[351685]: 2025-10-03 10:31:59.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:31:59 compute-0 podman[157165]: time="2025-10-03T10:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:31:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:31:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:31:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9060 "" "Go-http-client/1.1"
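[editor's note] These two podman[157165] lines are the libpod REST API being queried over the podman socket, the same unix:///run/podman/podman.sock the podman_exporter below is configured with. A stdlib-only sketch of the containers/json call, with the socket path taken as an assumption:

```python
# Minimal stdlib sketch of the libpod REST call logged above
# (GET /v4.9.3/libpod/containers/json over the podman unix socket).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

# Socket path mirrors the CONTAINER_HOST setting seen later in this log;
# adjust for rootless podman.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])
```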
Oct  3 10:32:00 compute-0 nova_compute[351685]: 2025-10-03 10:32:00.144 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:32:00 compute-0 nova_compute[351685]: 2025-10-03 10:32:00.146 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:32:00 compute-0 nova_compute[351685]: 2025-10-03 10:32:00.146 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:32:00 compute-0 nova_compute[351685]: 2025-10-03 10:32:00.146 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:32:01 compute-0 openstack_network_exporter[367524]: ERROR   10:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:32:01 compute-0 openstack_network_exporter[367524]: ERROR   10:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:32:01 compute-0 openstack_network_exporter[367524]: ERROR   10:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:32:01 compute-0 openstack_network_exporter[367524]: ERROR   10:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:32:01 compute-0 openstack_network_exporter[367524]: ERROR   10:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
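[editor's note] The exporter errors above come down to missing control sockets: ovs-appctl-style tooling locates a daemon's control socket at <rundir>/<daemon>.<pid>.ctl, and ovn-northd does not run on a compute node at all, so those failures are expected here. A sketch of that precondition check (the rundir matches the /run/openvswitch mount in the container configs; treat it as an assumption elsewhere):

```python
# Check for the ovs-appctl control sockets the exporter is looking for:
# <rundir>/<daemon>.<pid>.ctl, created by each daemon at startup.
import glob

def find_ctl_sockets(rundir="/run/openvswitch"):
    return {
        "ovsdb-server": glob.glob(f"{rundir}/ovsdb-server.*.ctl"),
        "ovs-vswitchd": glob.glob(f"{rundir}/ovs-vswitchd.*.ctl"),
    }

# Empty lists correspond to the "no control socket files found" errors
# above; ovn-northd would live under the OVN rundir on a controller node.
print(find_ctl_sockets())
```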
Oct  3 10:32:01 compute-0 nova_compute[351685]: 2025-10-03 10:32:01.461 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:32:01 compute-0 nova_compute[351685]: 2025-10-03 10:32:01.474 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:32:01 compute-0 nova_compute[351685]: 2025-10-03 10:32:01.475 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
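[editor's note] The cache-refresh line above carries the whole network_info structure inline. A short walk over that structure (copied from the log, trimmed to the relevant fields) to pull out the fixed and floating addresses:

```python
# Extract fixed/floating IPs from the network_info logged above;
# the nesting (vif -> network -> subnets -> ips) is copied from the log.
network_info = [{
    "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
    "address": "fa:16:3e:a9:40:5c",
    "network": {"subnets": [{"ips": [{
        "address": "192.168.0.158", "type": "fixed",
        "floating_ips": [{"address": "192.168.122.250", "type": "floating"}],
    }]}]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], floats)
# -> a8897fbc-9fd1-4981-b049-6e702bcb7e2d 192.168.0.158 ['192.168.122.250']
```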
Oct  3 10:32:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:02 compute-0 nova_compute[351685]: 2025-10-03 10:32:02.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:02 compute-0 nova_compute[351685]: 2025-10-03 10:32:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:32:02 compute-0 nova_compute[351685]: 2025-10-03 10:32:02.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:03 compute-0 nova_compute[351685]: 2025-10-03 10:32:03.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:32:03 compute-0 nova_compute[351685]: 2025-10-03 10:32:03.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:32:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:03 compute-0 podman[456878]: 2025-10-03 10:32:03.862048825 +0000 UTC m=+0.103850907 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, container_name=kepler, managed_by=edpm_ansible)
Oct  3 10:32:03 compute-0 podman[456877]: 2025-10-03 10:32:03.868301265 +0000 UTC m=+0.118864378 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:32:04 compute-0 nova_compute[351685]: 2025-10-03 10:32:04.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:32:04 compute-0 nova_compute[351685]: 2025-10-03 10:32:04.758 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:32:04 compute-0 nova_compute[351685]: 2025-10-03 10:32:04.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:32:04 compute-0 nova_compute[351685]: 2025-10-03 10:32:04.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:32:04 compute-0 nova_compute[351685]: 2025-10-03 10:32:04.759 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:32:04 compute-0 nova_compute[351685]: 2025-10-03 10:32:04.759 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:32:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:32:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3050445050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.239 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
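[editor's note] The resource tracker shells out to exactly the ceph df command logged above to size the RBD-backed storage; the mon audit lines show the same {"prefix": "df", "format": "json"} command arriving at the monitor. A minimal sketch of the call and the totals it reads back:

```python
# Run the exact command from the log and parse its JSON output.
import json
import subprocess

out = subprocess.check_output([
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
stats = json.loads(out)
# "stats" carries cluster-wide totals; per-pool usage is under "pools".
print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
```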
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.328 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.328 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.328 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.688 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.690 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3841MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.691 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.691 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:32:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.835 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.836 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:32:05 compute-0 nova_compute[351685]: 2025-10-03 10:32:05.836 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:32:06 compute-0 nova_compute[351685]: 2025-10-03 10:32:06.000 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:32:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:32:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4032523103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:32:06 compute-0 nova_compute[351685]: 2025-10-03 10:32:06.503 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:32:06 compute-0 nova_compute[351685]: 2025-10-03 10:32:06.513 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:32:06 compute-0 nova_compute[351685]: 2025-10-03 10:32:06.528 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
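[editor's note] The inventory payload above fixes placement's effective capacity per resource class as (total - reserved) × allocation_ratio. Worked out for the logged values:

```python
# Effective schedulable capacity for the inventory logged above,
# using placement's standard (total - reserved) * allocation_ratio rule.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```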
Oct  3 10:32:06 compute-0 nova_compute[351685]: 2025-10-03 10:32:06.530 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:32:06 compute-0 nova_compute[351685]: 2025-10-03 10:32:06.530 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:32:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:07 compute-0 nova_compute[351685]: 2025-10-03 10:32:07.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:07 compute-0 nova_compute[351685]: 2025-10-03 10:32:07.530 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:32:07 compute-0 nova_compute[351685]: 2025-10-03 10:32:07.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:32:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:07 compute-0 nova_compute[351685]: 2025-10-03 10:32:07.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:11 compute-0 nova_compute[351685]: 2025-10-03 10:32:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:32:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:12 compute-0 nova_compute[351685]: 2025-10-03 10:32:12.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:12 compute-0 nova_compute[351685]: 2025-10-03 10:32:12.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:14 compute-0 podman[456963]: 2025-10-03 10:32:14.815181976 +0000 UTC m=+0.107797803 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 10:32:14 compute-0 podman[456962]: 2025-10-03 10:32:14.819743923 +0000 UTC m=+0.110425908 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git)
Oct  3 10:32:14 compute-0 podman[456964]: 2025-10-03 10:32:14.873616198 +0000 UTC m=+0.157998531 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
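
The three podman health_status events above share one key=value layout; a minimal sketch (assuming journal lines in exactly this syslog format, with the image/name/health_status field order shown here) for pulling the container name and health state out with Python:

    import re

    # Field order (image, name, health_status) is assumed from the
    # health_status events in this log; other podman events differ.
    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
        r"health_status=(?P<status>[^,)]+)"
    )

    def parse_health(line: str):
        m = HEALTH_RE.search(line)
        return m.groupdict() if m else None

    # e.g. parse_health(journal_line) for the 10:32:14 ovn_controller entry
    # returns {'cid': 'e1a9...', 'image': '...', 'name': 'ovn_controller',
    # 'status': 'healthy'}.
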
Oct  3 10:32:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:32:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:17 compute-0 nova_compute[351685]: 2025-10-03 10:32:17.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:17 compute-0 nova_compute[351685]: 2025-10-03 10:32:17.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:18 compute-0 podman[457025]: 2025-10-03 10:32:18.812100409 +0000 UTC m=+0.073984911 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  3 10:32:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:22 compute-0 nova_compute[351685]: 2025-10-03 10:32:22.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:22 compute-0 podman[457045]: 2025-10-03 10:32:22.838630855 +0000 UTC m=+0.083914858 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:32:22 compute-0 podman[457047]: 2025-10-03 10:32:22.839776002 +0000 UTC m=+0.084695574 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  3 10:32:22 compute-0 podman[457046]: 2025-10-03 10:32:22.851605231 +0000 UTC m=+0.101208252 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001)
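
The node_exporter command line above narrows the systemd collector with --collector.systemd.unit-include; a quick check of which unit names that pattern keeps (regex copied from the flag above; node_exporter is assumed to apply it as a full match):

    import re

    # Regex lifted from --collector.systemd.unit-include above; the
    # anchors reflect node_exporter's assumed full-match semantics.
    UNIT_INCLUDE = re.compile(r"^(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service$")

    for unit in ("openvswitch.service", "ovsdb-server.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"):
        print(unit, bool(UNIT_INCLUDE.match(unit)))
    # sshd.service prints False; the rest print True.
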
Oct  3 10:32:22 compute-0 nova_compute[351685]: 2025-10-03 10:32:22.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:27 compute-0 nova_compute[351685]: 2025-10-03 10:32:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:27 compute-0 nova_compute[351685]: 2025-10-03 10:32:27.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:28 compute-0 podman[457106]: 2025-10-03 10:32:28.836525607 +0000 UTC m=+0.088606569 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Oct  3 10:32:29 compute-0 podman[157165]: time="2025-10-03T10:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:32:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:32:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9070 "" "Go-http-client/1.1"
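
The two GET requests above arrive over podman's API socket (the service answering as podman[157165]); a minimal sketch of issuing the same containers/json query from Python, assuming the socket path /run/podman/podman.sock that the podman_exporter config below uses:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain http.client over an AF_UNIX socket (libpod API)."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Endpoint and query string as polled at 10:32:29 above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))
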
Oct  3 10:32:31 compute-0 openstack_network_exporter[367524]: ERROR   10:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:32:31 compute-0 openstack_network_exporter[367524]: ERROR   10:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:32:31 compute-0 openstack_network_exporter[367524]: ERROR   10:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:32:31 compute-0 openstack_network_exporter[367524]: ERROR   10:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:32:31 compute-0 openstack_network_exporter[367524]: ERROR   10:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
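
The appctl errors above mean the exporter found no *.ctl control sockets for ovn-northd or ovsdb-server; on an EDPM compute node ovn-northd normally runs in the control plane, so its missing socket is likely expected here. A host-side spot check, using the directories this exporter mounts as /run/ovn and /run/openvswitch (host paths taken from its volume list at 10:32:14 and assumed correct for this deployment):

    import glob

    # Host directories behind the exporter's /run/ovn and /run/openvswitch
    # mounts (see the openstack_network_exporter config_data above).
    for d in ("/var/lib/openvswitch/ovn", "/var/run/openvswitch"):
        ctl = glob.glob(f"{d}/*.ctl")
        print(d, "->", ctl if ctl else "no control sockets")
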
Oct  3 10:32:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:32 compute-0 nova_compute[351685]: 2025-10-03 10:32:32.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:32 compute-0 nova_compute[351685]: 2025-10-03 10:32:32.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:34 compute-0 podman[457224]: 2025-10-03 10:32:34.003536665 +0000 UTC m=+0.074078403 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:32:34 compute-0 podman[457225]: 2025-10-03 10:32:34.049458887 +0000 UTC m=+0.114188319 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:32:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:32:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:32:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:32:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:32:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:32:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:32:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 49f8ae79-2b93-42ac-9ad6-9419aafaf449 does not exist
Oct  3 10:32:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5c2b4805-1c5a-484c-ba46-5385cf064ed6 does not exist
Oct  3 10:32:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5673ec01-08a3-400a-ab50-22b9f2fcd684 does not exist
Oct  3 10:32:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:32:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:32:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:32:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:32:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:32:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:32:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:32:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:32:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
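
The audit trail above shows the mgr dispatching mon commands as JSON objects; the same call can be issued from the rados Python binding (a sketch, assuming python3-rados plus a readable /etc/ceph/ceph.conf and admin keyring on the node):

    import json
    import rados  # python3-rados, packaged with Ceph

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same command the mgr dispatched at 10:32:34 above.
    cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
    ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, json.loads(out) if out else errs)
    cluster.shutdown()
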
Oct  3 10:32:35 compute-0 podman[457434]: 2025-10-03 10:32:35.510459723 +0000 UTC m=+0.061811461 container create 23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:32:35 compute-0 systemd[1]: Started libpod-conmon-23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c.scope.
Oct  3 10:32:35 compute-0 podman[457434]: 2025-10-03 10:32:35.486448494 +0000 UTC m=+0.037800232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:32:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:32:35 compute-0 podman[457434]: 2025-10-03 10:32:35.642177342 +0000 UTC m=+0.193529100 container init 23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:32:35 compute-0 podman[457434]: 2025-10-03 10:32:35.652782212 +0000 UTC m=+0.204133940 container start 23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 10:32:35 compute-0 podman[457434]: 2025-10-03 10:32:35.657527773 +0000 UTC m=+0.208879531 container attach 23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:32:35 compute-0 nice_kepler[457450]: 167 167
Oct  3 10:32:35 compute-0 systemd[1]: libpod-23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c.scope: Deactivated successfully.
Oct  3 10:32:35 compute-0 podman[457434]: 2025-10-03 10:32:35.662478523 +0000 UTC m=+0.213830251 container died 23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-648255cd6e19687fcc053e39850e8fcce5b3c7c75a88549863ea480a6ec92aeb-merged.mount: Deactivated successfully.
Oct  3 10:32:35 compute-0 podman[457434]: 2025-10-03 10:32:35.723775586 +0000 UTC m=+0.275127324 container remove 23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:32:35 compute-0 systemd[1]: libpod-conmon-23360d8ddca1d2bb61b5c6c8799521d2f26e21a0c72401e7649cb0e29885e99c.scope: Deactivated successfully.
Oct  3 10:32:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:35 compute-0 podman[457473]: 2025-10-03 10:32:35.926269431 +0000 UTC m=+0.060430586 container create 98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:32:35 compute-0 systemd[1]: Started libpod-conmon-98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa.scope.
Oct  3 10:32:35 compute-0 podman[457473]: 2025-10-03 10:32:35.904665259 +0000 UTC m=+0.038826444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:32:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a60a13afb152e0b6e9ce99b119319e5de6bd8d2538c05a23214887a755a7d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a60a13afb152e0b6e9ce99b119319e5de6bd8d2538c05a23214887a755a7d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a60a13afb152e0b6e9ce99b119319e5de6bd8d2538c05a23214887a755a7d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a60a13afb152e0b6e9ce99b119319e5de6bd8d2538c05a23214887a755a7d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a60a13afb152e0b6e9ce99b119319e5de6bd8d2538c05a23214887a755a7d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:36 compute-0 podman[457473]: 2025-10-03 10:32:36.048150686 +0000 UTC m=+0.182311861 container init 98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 10:32:36 compute-0 podman[457473]: 2025-10-03 10:32:36.067919209 +0000 UTC m=+0.202080364 container start 98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 10:32:36 compute-0 podman[457473]: 2025-10-03 10:32:36.072985361 +0000 UTC m=+0.207146536 container attach 98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 10:32:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:37 compute-0 adoring_kapitsa[457490]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:32:37 compute-0 adoring_kapitsa[457490]: --> relative data size: 1.0
Oct  3 10:32:37 compute-0 adoring_kapitsa[457490]: --> All data devices are unavailable
Oct  3 10:32:37 compute-0 systemd[1]: libpod-98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa.scope: Deactivated successfully.
Oct  3 10:32:37 compute-0 systemd[1]: libpod-98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa.scope: Consumed 1.102s CPU time.
Oct  3 10:32:37 compute-0 podman[457473]: 2025-10-03 10:32:37.231203009 +0000 UTC m=+1.365364184 container died 98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 10:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-36a60a13afb152e0b6e9ce99b119319e5de6bd8d2538c05a23214887a755a7d0-merged.mount: Deactivated successfully.
Oct  3 10:32:37 compute-0 podman[457473]: 2025-10-03 10:32:37.301694887 +0000 UTC m=+1.435856042 container remove 98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 10:32:37 compute-0 systemd[1]: libpod-conmon-98dac69ec8b943ef62c602b2fb8e9cdb4e387e58fadf635bf8de22a40dbca3aa.scope: Deactivated successfully.
Oct  3 10:32:37 compute-0 nova_compute[351685]: 2025-10-03 10:32:37.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:37 compute-0 nova_compute[351685]: 2025-10-03 10:32:37.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:38 compute-0 podman[457671]: 2025-10-03 10:32:38.236664954 +0000 UTC m=+0.060434946 container create 71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct  3 10:32:38 compute-0 systemd[1]: Started libpod-conmon-71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5.scope.
Oct  3 10:32:38 compute-0 podman[457671]: 2025-10-03 10:32:38.213669058 +0000 UTC m=+0.037439010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:32:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:32:38 compute-0 podman[457671]: 2025-10-03 10:32:38.379559272 +0000 UTC m=+0.203329244 container init 71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pascal, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 10:32:38 compute-0 podman[457671]: 2025-10-03 10:32:38.390989718 +0000 UTC m=+0.214759680 container start 71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pascal, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:32:38 compute-0 podman[457671]: 2025-10-03 10:32:38.396471343 +0000 UTC m=+0.220241395 container attach 71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pascal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:32:38 compute-0 suspicious_pascal[457687]: 167 167
Oct  3 10:32:38 compute-0 systemd[1]: libpod-71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5.scope: Deactivated successfully.
Oct  3 10:32:38 compute-0 podman[457671]: 2025-10-03 10:32:38.401080901 +0000 UTC m=+0.224850853 container died 71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-43319ba24d5abed3ffc8e889e39b3dfc16eff31bed31e445dacf74f9e4b2bdab-merged.mount: Deactivated successfully.
Oct  3 10:32:38 compute-0 podman[457671]: 2025-10-03 10:32:38.449873074 +0000 UTC m=+0.273643026 container remove 71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pascal, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:32:38 compute-0 systemd[1]: libpod-conmon-71cdb02d5e2d6711a0f65018ad178f206a03454b683d7be4856b425eda7296e5.scope: Deactivated successfully.
Oct  3 10:32:38 compute-0 podman[457712]: 2025-10-03 10:32:38.645217001 +0000 UTC m=+0.056330556 container create 327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:32:38 compute-0 systemd[1]: Started libpod-conmon-327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da.scope.
Oct  3 10:32:38 compute-0 podman[457712]: 2025-10-03 10:32:38.621107539 +0000 UTC m=+0.032221114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:32:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334146142a6283c49eed48019b237997e284d4b85a884a07e24d82e5f5a29a8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334146142a6283c49eed48019b237997e284d4b85a884a07e24d82e5f5a29a8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334146142a6283c49eed48019b237997e284d4b85a884a07e24d82e5f5a29a8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334146142a6283c49eed48019b237997e284d4b85a884a07e24d82e5f5a29a8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:38 compute-0 podman[457712]: 2025-10-03 10:32:38.760990399 +0000 UTC m=+0.172103974 container init 327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_faraday, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:32:38 compute-0 podman[457712]: 2025-10-03 10:32:38.775407981 +0000 UTC m=+0.186521536 container start 327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 10:32:38 compute-0 podman[457712]: 2025-10-03 10:32:38.780660559 +0000 UTC m=+0.191774124 container attach 327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]: {
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:    "0": [
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:        {
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "devices": [
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "/dev/loop3"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            ],
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_name": "ceph_lv0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_size": "21470642176",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "name": "ceph_lv0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "tags": {
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cluster_name": "ceph",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.crush_device_class": "",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.encrypted": "0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osd_id": "0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.type": "block",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.vdo": "0"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            },
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "type": "block",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "vg_name": "ceph_vg0"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:        }
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:    ],
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:    "1": [
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:        {
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "devices": [
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "/dev/loop4"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            ],
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_name": "ceph_lv1",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_size": "21470642176",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "name": "ceph_lv1",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "tags": {
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cluster_name": "ceph",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.crush_device_class": "",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.encrypted": "0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osd_id": "1",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.type": "block",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.vdo": "0"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            },
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "type": "block",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "vg_name": "ceph_vg1"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:        }
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:    ],
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:    "2": [
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:        {
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "devices": [
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "/dev/loop5"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            ],
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_name": "ceph_lv2",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_size": "21470642176",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "name": "ceph_lv2",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "tags": {
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.cluster_name": "ceph",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.crush_device_class": "",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.encrypted": "0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osd_id": "2",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.type": "block",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:                "ceph.vdo": "0"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            },
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "type": "block",
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:            "vg_name": "ceph_vg2"
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:        }
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]:    ]
Oct  3 10:32:39 compute-0 ecstatic_faraday[457728]: }
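The JSON block above matches the shape of `ceph-volume lvm list --format json` output — cephadm appears to be inventorying the three LVM-backed OSDs via a one-shot container. A minimal sketch of consuming that shape (the helper name and the assumption about the block's origin are mine):

```python
import json

# Minimal sketch, assuming the block above is `ceph-volume lvm list --format json`
# output: a dict keyed by OSD id, each value a list of LV records with tags.
# summarize_lvm_list is a hypothetical helper, not part of ceph-volume.
def summarize_lvm_list(raw: str) -> None:
    data = json.loads(raw)
    cluster_fsids = set()
    for osd_id, lvs in sorted(data.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            cluster_fsids.add(tags.get("ceph.cluster_fsid"))
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"size={int(lv['lv_size'])} osd_fsid={tags.get('ceph.osd_fsid')}")
    # All three OSDs above report the same cluster_fsid (9b4e8c9a-...).
    assert len(cluster_fsids) == 1, f"mixed cluster fsids: {cluster_fsids}"
```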
Oct  3 10:32:39 compute-0 systemd[1]: libpod-327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da.scope: Deactivated successfully.
Oct  3 10:32:39 compute-0 podman[457712]: 2025-10-03 10:32:39.657863447 +0000 UTC m=+1.068977042 container died 327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-334146142a6283c49eed48019b237997e284d4b85a884a07e24d82e5f5a29a8b-merged.mount: Deactivated successfully.
Oct  3 10:32:39 compute-0 podman[457712]: 2025-10-03 10:32:39.74320718 +0000 UTC m=+1.154320735 container remove 327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 10:32:39 compute-0 systemd[1]: libpod-conmon-327beff022e03db7d6ef1f29bfc5a6c4292cb1897b1c936a764256b8212b42da.scope: Deactivated successfully.
Oct  3 10:32:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:40 compute-0 podman[457888]: 2025-10-03 10:32:40.589130585 +0000 UTC m=+0.061224682 container create 42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_colden, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:32:40 compute-0 systemd[1]: Started libpod-conmon-42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b.scope.
Oct  3 10:32:40 compute-0 podman[457888]: 2025-10-03 10:32:40.566039986 +0000 UTC m=+0.038134113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:32:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:32:40 compute-0 podman[457888]: 2025-10-03 10:32:40.711486304 +0000 UTC m=+0.183580421 container init 42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_colden, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:32:40 compute-0 podman[457888]: 2025-10-03 10:32:40.723978035 +0000 UTC m=+0.196072132 container start 42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:32:40 compute-0 great_colden[457905]: 167 167
Oct  3 10:32:40 compute-0 systemd[1]: libpod-42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b.scope: Deactivated successfully.
Oct  3 10:32:40 compute-0 podman[457888]: 2025-10-03 10:32:40.733851551 +0000 UTC m=+0.205945668 container attach 42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_colden, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:32:40 compute-0 podman[457888]: 2025-10-03 10:32:40.7344387 +0000 UTC m=+0.206532797 container died 42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 10:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-44ff43d8adb73602eefafe2d379b12dde917dfe89879ba887ba1c3cfa4826f6a-merged.mount: Deactivated successfully.
Oct  3 10:32:40 compute-0 podman[457888]: 2025-10-03 10:32:40.817757248 +0000 UTC m=+0.289851345 container remove 42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_colden, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:32:40 compute-0 systemd[1]: libpod-conmon-42a4889a40bb5ae9f1f300b670aa8ce953bec9ad04c3ec41c978e6efd5a3205b.scope: Deactivated successfully.
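The `great_colden` container lived just long enough to print `167 167` before podman tore it down. That pattern is consistent with cephadm probing the image for the ceph user's uid/gid (167:167 in Red Hat-built Ceph images), though the exact command is not visible in the log. A hedged reproduction of such a probe:

```python
import subprocess

# Hedged reproduction of a uid/gid probe; the stat target and the claim that
# this is what printed "167 167" above are assumptions.
IMAGE = "quay.io/ceph/ceph@sha256:..."  # the full digest appears in the log lines above

def probe_uid_gid(image: str) -> tuple[int, int]:
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    return int(out[0]), int(out[1])  # expect (167, 167) for the ceph user/group
```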
Oct  3 10:32:41 compute-0 podman[457931]: 2025-10-03 10:32:41.034332265 +0000 UTC m=+0.060514859 container create f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:32:41 compute-0 systemd[1]: Started libpod-conmon-f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced.scope.
Oct  3 10:32:41 compute-0 podman[457931]: 2025-10-03 10:32:41.010193202 +0000 UTC m=+0.036375776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:32:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f46abbb7ebf14c4f2f57bd482b5de78842a1f192174f1483b086cd7f80f71f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f46abbb7ebf14c4f2f57bd482b5de78842a1f192174f1483b086cd7f80f71f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f46abbb7ebf14c4f2f57bd482b5de78842a1f192174f1483b086cd7f80f71f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f46abbb7ebf14c4f2f57bd482b5de78842a1f192174f1483b086cd7f80f71f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:32:41 compute-0 podman[457931]: 2025-10-03 10:32:41.164119102 +0000 UTC m=+0.190301676 container init f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:32:41 compute-0 podman[457931]: 2025-10-03 10:32:41.186997615 +0000 UTC m=+0.213180169 container start f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:32:41 compute-0 podman[457931]: 2025-10-03 10:32:41.191678105 +0000 UTC m=+0.217860669 container attach f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 10:32:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:32:41.626 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:32:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:32:41.627 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:32:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:32:41.628 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:32:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:42 compute-0 amazing_easley[457946]: {
Oct  3 10:32:42 compute-0 amazing_easley[457946]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "osd_id": 1,
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "type": "bluestore"
Oct  3 10:32:42 compute-0 amazing_easley[457946]:    },
Oct  3 10:32:42 compute-0 amazing_easley[457946]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "osd_id": 2,
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "type": "bluestore"
Oct  3 10:32:42 compute-0 amazing_easley[457946]:    },
Oct  3 10:32:42 compute-0 amazing_easley[457946]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "osd_id": 0,
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:32:42 compute-0 amazing_easley[457946]:        "type": "bluestore"
Oct  3 10:32:42 compute-0 amazing_easley[457946]:    }
Oct  3 10:32:42 compute-0 amazing_easley[457946]: }
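This second JSON block has the shape of `ceph-volume raw list` output: records keyed by OSD fsid, each naming the device-mapper node for one of the LVs listed earlier. A small sketch for cross-checking it against the `lvm list`-style block (helper name is mine):

```python
import json

# Hedged sketch: invert the fsid-keyed records above into osd_id -> device.
def by_osd_id(raw: str) -> dict[int, str]:
    return {
        rec["osd_id"]: rec["device"]
        for rec in json.loads(raw).values()
        if rec.get("type") == "bluestore"
    }

# Expected from the log: {0: '/dev/mapper/ceph_vg0-ceph_lv0',
#                         1: '/dev/mapper/ceph_vg1-ceph_lv1',
#                         2: '/dev/mapper/ceph_vg2-ceph_lv2'}
```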
Oct  3 10:32:42 compute-0 systemd[1]: libpod-f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced.scope: Deactivated successfully.
Oct  3 10:32:42 compute-0 systemd[1]: libpod-f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced.scope: Consumed 1.006s CPU time.
Oct  3 10:32:42 compute-0 podman[457980]: 2025-10-03 10:32:42.254921751 +0000 UTC m=+0.039114473 container died f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:32:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3f46abbb7ebf14c4f2f57bd482b5de78842a1f192174f1483b086cd7f80f71f-merged.mount: Deactivated successfully.
Oct  3 10:32:42 compute-0 podman[457980]: 2025-10-03 10:32:42.354078097 +0000 UTC m=+0.138270809 container remove f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_easley, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:32:42 compute-0 nova_compute[351685]: 2025-10-03 10:32:42.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:42 compute-0 systemd[1]: libpod-conmon-f5a5e6fa087ee1aa0038c8e92a7f4f89d4692efb790c22e753bd8234f9165ced.scope: Deactivated successfully.
Oct  3 10:32:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:32:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:32:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:32:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:32:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c3ea332d-283c-4342-98be-3a62dc3c6fb5 does not exist
Oct  3 10:32:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c500caa4-5130-4d03-9a56-67ce1245cce8 does not exist
Oct  3 10:32:42 compute-0 nova_compute[351685]: 2025-10-03 10:32:42.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:32:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:32:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:45 compute-0 podman[458044]: 2025-10-03 10:32:45.861136928 +0000 UTC m=+0.117985850 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Oct  3 10:32:45 compute-0 podman[458045]: 2025-10-03 10:32:45.870223619 +0000 UTC m=+0.113355642 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:32:45 compute-0 podman[458046]: 2025-10-03 10:32:45.899699913 +0000 UTC m=+0.141183883 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
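Each health_status event above carries the container's edpm_ansible config_data, whose 'healthcheck' entry pairs a test command with a mount for the healthcheck script. As a rough illustration only — the flag mapping below is my assumption, not edpm_ansible's implementation — such an entry could translate into podman options like:

```python
# Rough illustration: map a config_data healthcheck entry onto podman flags.
# The mapping is an assumption, not edpm_ansible's actual code.
def healthcheck_args(config_data: dict) -> list[str]:
    hc = config_data["healthcheck"]
    return [
        "--health-cmd", hc["test"],
        "--volume", f"{hc['mount']}:/openstack:ro,z",
    ]

print(healthcheck_args(
    {"healthcheck": {"test": "/openstack/healthcheck",
                     "mount": "/var/lib/openstack/healthchecks/ovn_controller"}}))
```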
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:32:46
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'backups', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.rgw.root']
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
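"prepared 0/10 changes" means the upmap optimizer prepared none of the up-to-10 changes it was allowed — expected with all 321 PGs active+clean on a small, evenly loaded cluster. One way to confirm the balancer state (a sketch assuming the `ceph` CLI and an admin keyring are reachable on this host):

```python
import json
import subprocess

# Sketch: query the balancer that logged the lines above.
status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "-f", "json"],
    check=True, capture_output=True, text=True).stdout)
print(status.get("active"), status.get("mode"))  # e.g. True upmap
```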
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:32:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:32:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:47 compute-0 nova_compute[351685]: 2025-10-03 10:32:47.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:47 compute-0 nova_compute[351685]: 2025-10-03 10:32:47.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:49 compute-0 podman[458109]: 2025-10-03 10:32:49.882848306 +0000 UTC m=+0.120312065 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111] ** DB Stats **
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: Uptime(secs): 3600.2 total, 600.0 interval
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: Cumulative writes: 8230 writes, 31K keys, 8230 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: Cumulative WAL: 8230 writes, 1988 syncs, 4.14 writes per sync, written: 0.02 GB, 0.01 MB/s
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: Interval writes: 212 writes, 331 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Oct  3 10:32:50 compute-0 ceph-osd[205584]: rocksdb: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
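The WAL figures in that dump are internally consistent, for instance:

```python
# Arithmetic check of the WAL lines above.
writes, syncs = 8230, 1988
print(round(writes / syncs, 2))  # 4.14 writes per sync, as logged
print(round(212 / 106, 2))       # 2.0, matching the interval figure of 2.00
```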
Oct  3 10:32:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:52 compute-0 nova_compute[351685]: 2025-10-03 10:32:52.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:52 compute-0 nova_compute[351685]: 2025-10-03 10:32:52.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:53 compute-0 podman[458126]: 2025-10-03 10:32:53.80514987 +0000 UTC m=+0.062817512 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:32:53 compute-0 podman[458128]: 2025-10-03 10:32:53.848374545 +0000 UTC m=+0.097789043 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 10:32:53 compute-0 podman[458127]: 2025-10-03 10:32:53.864191752 +0000 UTC m=+0.117465354 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  3 10:32:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:32:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4179578394' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:32:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:32:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4179578394' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
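The two audit entries above show a client authenticating as 'client.openstack' polling cluster capacity — the usual pattern for an OpenStack storage service checking free space and pool quota. A hedged sketch of issuing the same mon commands via python-rados (the conffile path is an assumption):

```python
import json
import rados  # python-rados

# Hedged sketch of what likely produced the audit entries above: "df" and
# "osd pool get-quota" issued as mon commands by client.openstack.
with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
    _, df_out, _ = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    _, quota_out, _ = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota",
                    "pool": "volumes", "format": "json"}), b"")
    print(json.loads(df_out)["stats"]["total_bytes"])
    print(json.loads(quota_out))
```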
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
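The "pg target" values in the autoscaler pass above are reproducible as usage_fraction × bias × 300, with the result then quantized to a power of two; the factor 300 is plausibly 3 OSDs × the default mon_target_pg_per_osd of 100, though that decomposition is an inference, not stated in the log:

```python
# Reproduce the autoscaler arithmetic from the lines above.
# target = usage_fraction * bias * C, with C = 300 on this cluster
# (plausibly 3 OSDs * mon_target_pg_per_osd=100 -- an inference).
for pool, frac, bias in [
    (".mgr",               7.185749983720779e-06, 1.0),
    ("vms",                0.000551649390343166,  1.0),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
]:
    print(pool, frac * bias * 300)  # matches the logged "pg target" values
```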
Oct  3 10:32:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:32:57 compute-0 nova_compute[351685]: 2025-10-03 10:32:57.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111] ** DB Stats **
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: Uptime(secs): 3600.1 total, 600.0 interval
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: Cumulative writes: 9124 writes, 34K keys, 9124 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: Cumulative WAL: 9124 writes, 2258 syncs, 4.04 writes per sync, written: 0.02 GB, 0.01 MB/s
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: Interval writes: 180 writes, 281 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Oct  3 10:32:57 compute-0 ceph-osd[206733]: rocksdb: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:32:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:57 compute-0 nova_compute[351685]: 2025-10-03 10:32:57.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:32:59 compute-0 nova_compute[351685]: 2025-10-03 10:32:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:32:59 compute-0 nova_compute[351685]: 2025-10-03 10:32:59.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:32:59 compute-0 nova_compute[351685]: 2025-10-03 10:32:59.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:32:59 compute-0 podman[157165]: time="2025-10-03T10:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:32:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:32:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9070 "" "Go-http-client/1.1"
Oct  3 10:32:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:32:59 compute-0 podman[458187]: 2025-10-03 10:32:59.841689299 +0000 UTC m=+0.096427950 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:33:00 compute-0 nova_compute[351685]: 2025-10-03 10:33:00.382 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:33:00 compute-0 nova_compute[351685]: 2025-10-03 10:33:00.383 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:33:00 compute-0 nova_compute[351685]: 2025-10-03 10:33:00.383 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:33:00 compute-0 nova_compute[351685]: 2025-10-03 10:33:00.383 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:33:01 compute-0 openstack_network_exporter[367524]: ERROR   10:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:33:01 compute-0 openstack_network_exporter[367524]: ERROR   10:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:33:01 compute-0 openstack_network_exporter[367524]: ERROR   10:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Oct  3 10:33:01 compute-0 openstack_network_exporter[367524]: ERROR   10:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath

Oct  3 10:33:01 compute-0 openstack_network_exporter[367524]: ERROR   10:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
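Editor's note: these recurring openstack_network_exporter errors mean the exporter cannot find control sockets for ovn-northd or ovsdb-server. On a compute node that is likely benign noise: ovn-northd normally runs on control-plane nodes, and the exporter simply probes for sockets that do not exist here. A short sketch of the same existence check; the directory paths are assumptions based on common OVS/OVN defaults, not the exporter's actual Go logic (appctl.go):

    import glob

    # Assumed default runtime directories; the exporter's real search logic
    # lives in its Go source (appctl.go) and may differ.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")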
Oct  3 10:33:01 compute-0 nova_compute[351685]: 2025-10-03 10:33:01.634 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:33:01 compute-0 nova_compute[351685]: 2025-10-03 10:33:01.654 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:33:01 compute-0 nova_compute[351685]: 2025-10-03 10:33:01.655 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:33:01 compute-0 nova_compute[351685]: 2025-10-03 10:33:01.655 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:01 compute-0 nova_compute[351685]: 2025-10-03 10:33:01.656 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
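Editor's note: the block above is one pass of nova-compute's periodic tasks: _heal_instance_info_cache refreshes one instance's network info cache, then _reclaim_queued_deletes short-circuits because reclaim_instance_interval is not set. A minimal, illustrative sketch of the oslo.service pattern that produces these "Running periodic task" lines (not nova's actual manager code), assuming oslo.service and oslo.config are installed:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    # Registered here only so the sketch is self-contained; nova defines
    # reclaim_instance_interval among its own config options.
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # nova refreshes one instance's info cache per pass

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # matches the "skipping..." DEBUG line above

    # run_periodic_tasks() is what logs "Running periodic task ..." at DEBUG.
    Manager().run_periodic_tasks(context=None)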
Oct  3 10:33:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:02 compute-0 nova_compute[351685]: 2025-10-03 10:33:02.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:02 compute-0 nova_compute[351685]: 2025-10-03 10:33:02.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 7309 writes, 28K keys, 7309 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7309 writes, 1613 syncs, 4.53 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 281 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:33:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.754 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.754 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
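Editor's note: the Acquiring/acquired/released triplet above is oslo.concurrency's lock decorator logging its wait and hold times at DEBUG. A minimal sketch of that pattern; the lock name matches the log, the function body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Work done under the lock; lockutils emits the acquire/release
        # DEBUG lines (with waited/held timings) seen above.
        pass

    clean_compute_node_cache()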
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.755 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:33:04 compute-0 nova_compute[351685]: 2025-10-03 10:33:04.755 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:33:04 compute-0 podman[458206]: 2025-10-03 10:33:04.824475023 +0000 UTC m=+0.077787233 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:33:04 compute-0 podman[458207]: 2025-10-03 10:33:04.866949813 +0000 UTC m=+0.117765433 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, version=9.4, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, vcs-type=git, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 10:33:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:33:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/68240151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.250 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
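Editor's note: with RBD-backed ephemeral storage, the resource tracker shells out to ceph df rather than statting a local filesystem, which is the subprocess pair logged above (and the matching mon_command dispatch on the ceph-mon side). A sketch of the same call, assuming a reachable cluster and the client.openstack keyring; the JSON layout shown is that of recent ceph releases:

    import json

    from oslo_concurrency import processutils

    # Same command line as the logged CMD; processutils raises
    # ProcessExecutionError on a non-zero exit instead of returning rc.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    # "stats" holds cluster-wide byte totals.
    print(stats["stats"]["total_avail_bytes"])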
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.335 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.336 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.337 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:33:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.765 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.767 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3841MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:33:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.838 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.839 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.839 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.864 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.907 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.907 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.926 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.945 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 10:33:05 compute-0 nova_compute[351685]: 2025-10-03 10:33:05.973 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:33:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:33:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000381932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:33:06 compute-0 nova_compute[351685]: 2025-10-03 10:33:06.430 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:33:06 compute-0 nova_compute[351685]: 2025-10-03 10:33:06.439 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:33:06 compute-0 nova_compute[351685]: 2025-10-03 10:33:06.460 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
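Editor's note: the inventory dict above is what placement turns into schedulable capacity, using capacity = (total - reserved) * allocation_ratio. Plugging in this host's reported numbers:

    # Worked example using the inventory reported above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2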
Oct  3 10:33:06 compute-0 nova_compute[351685]: 2025-10-03 10:33:06.462 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:33:06 compute-0 nova_compute[351685]: 2025-10-03 10:33:06.462 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:33:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:07 compute-0 nova_compute[351685]: 2025-10-03 10:33:07.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:07 compute-0 nova_compute[351685]: 2025-10-03 10:33:07.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:08 compute-0 nova_compute[351685]: 2025-10-03 10:33:08.461 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:08 compute-0 nova_compute[351685]: 2025-10-03 10:33:08.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:10 compute-0 nova_compute[351685]: 2025-10-03 10:33:10.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:12 compute-0 nova_compute[351685]: 2025-10-03 10:33:12.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:12 compute-0 nova_compute[351685]: 2025-10-03 10:33:12.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:12 compute-0 nova_compute[351685]: 2025-10-03 10:33:12.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:33:16 compute-0 podman[458295]: 2025-10-03 10:33:16.872950355 +0000 UTC m=+0.125097157 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:33:16 compute-0 podman[458296]: 2025-10-03 10:33:16.877211112 +0000 UTC m=+0.118045632 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:33:16 compute-0 podman[458297]: 2025-10-03 10:33:16.878820463 +0000 UTC m=+0.119102065 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  3 10:33:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:17 compute-0 nova_compute[351685]: 2025-10-03 10:33:17.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:17 compute-0 nova_compute[351685]: 2025-10-03 10:33:17.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:20 compute-0 podman[458360]: 2025-10-03 10:33:20.81718932 +0000 UTC m=+0.072605227 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:33:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:22 compute-0 nova_compute[351685]: 2025-10-03 10:33:22.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:22 compute-0 nova_compute[351685]: 2025-10-03 10:33:22.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:24 compute-0 podman[458377]: 2025-10-03 10:33:24.855480381 +0000 UTC m=+0.099597151 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:33:24 compute-0 podman[458378]: 2025-10-03 10:33:24.85980411 +0000 UTC m=+0.106242155 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:33:24 compute-0 podman[458379]: 2025-10-03 10:33:24.863633972 +0000 UTC m=+0.095644225 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:33:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:27 compute-0 nova_compute[351685]: 2025-10-03 10:33:27.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:27 compute-0 nova_compute[351685]: 2025-10-03 10:33:27.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:29 compute-0 podman[157165]: time="2025-10-03T10:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:33:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:33:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9069 "" "Go-http-client/1.1"
Oct  3 10:33:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:30 compute-0 podman[458436]: 2025-10-03 10:33:30.822288587 +0000 UTC m=+0.081141000 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 10:33:31 compute-0 openstack_network_exporter[367524]: ERROR   10:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:33:31 compute-0 openstack_network_exporter[367524]: ERROR   10:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:33:31 compute-0 openstack_network_exporter[367524]: ERROR   10:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:33:31 compute-0 openstack_network_exporter[367524]: ERROR   10:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Oct  3 10:33:31 compute-0 openstack_network_exporter[367524]: ERROR   10:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:33:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:32 compute-0 nova_compute[351685]: 2025-10-03 10:33:32.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:32 compute-0 nova_compute[351685]: 2025-10-03 10:33:32.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:35 compute-0 podman[458456]: 2025-10-03 10:33:35.846382817 +0000 UTC m=+0.103057362 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:33:35 compute-0 podman[458457]: 2025-10-03 10:33:35.853596199 +0000 UTC m=+0.099053184 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, build-date=2024-09-18T21:23:30, release-0.7.12=, architecture=x86_64, container_name=kepler, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, distribution-scope=public)
Oct  3 10:33:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:37 compute-0 nova_compute[351685]: 2025-10-03 10:33:37.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:37 compute-0 nova_compute[351685]: 2025-10-03 10:33:37.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
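The repeated ovsdbapp "[POLLIN] on fd 25 __log_wakeup" lines come from the OVS Python poller inside nova_compute: the OVSDB client blocks in poll(2) on its database socket and logs every readable wakeup at DEBUG. A self-contained illustration of the same mechanism with Python's select.poll (the pipe and loop are illustrative, not nova's actual code):

```python
import os
import select

# Create a pipe so we have a real file descriptor to wait on.
r_fd, w_fd = os.pipe()
os.write(w_fd, b"ping")

poller = select.poll()
poller.register(r_fd, select.POLLIN)   # wake up when the fd becomes readable

for fd, event in poller.poll(1000):    # timeout in milliseconds
    if event & select.POLLIN:
        # This readable-wakeup is what ovs.poller logs as "[POLLIN] on fd N".
        print(f"[POLLIN] on fd {fd}: {os.read(fd, 16)!r}")

os.close(r_fd)
os.close(w_fd)
```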
Oct  3 10:33:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
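As a sanity check on the pgmap digests above: 78 MiB of logical data against 260 MiB raw used is roughly consistent with 3x replication (78 x 3 = 234 MiB) plus per-OSD allocation overhead. The replication factor is an assumption here, not stated in the log:

```python
data_mib = 78          # logical data reported by pgmap
used_mib = 260         # raw space used across OSDs
replicas = 3           # assumed replication factor (Ceph's common default)

expected = data_mib * replicas
overhead = used_mib - expected
print(f"expected ~{expected} MiB, observed {used_mib} MiB "
      f"(~{overhead} MiB allocation overhead)")
```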
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.891 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.891 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
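The run of "Registering pollster" lines above shows the agent submitting every pollster from the [pollsters] source to one shared ThreadPoolExecutor; per the earlier "[1] threads" message they execute serially despite the pool. A conceptual sketch of that registration pattern with concurrent.futures (the pollster callables are stand-ins, not ceilometer's real classes):

```python
from concurrent.futures import ThreadPoolExecutor, wait

def make_pollster(name):
    def poll():
        # A real pollster would run discovery and emit samples here.
        return f"{name}: polled"
    return poll

pollsters = [make_pollster(n) for n in (
    "network.outgoing.packets.drop",
    "disk.device.capacity",
    "disk.device.read.bytes",
)]

# One worker thread, matching the "[1] threads" message in the log:
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(p) for p in pollsters]  # register for execution
    done, _ = wait(futures)
    for f in done:
        print(f.result())
```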
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.900 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:33:40.900890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.906 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.908 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:33:40.908941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.909 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:33:40.911552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.945 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.946 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.946 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
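The three disk.device.capacity samples line up with the m1.small flavor from the discovery payload: disk=1 and ephemeral=1 give two devices of exactly 1 GiB (1073741824 bytes), while the small 485376-byte device is presumably the config drive (an assumption, not stated in the log). The arithmetic:

```python
GIB = 1024 ** 3                 # 1 GiB in bytes
assert GIB == 1073741824        # matches the two large capacity samples above

flavor = {"disk": 1, "ephemeral": 1}   # from the discovery payload above
for name, gib in flavor.items():
    print(name, gib * GIB)      # both print 1073741824
```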
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.949 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:40.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:33:40.948772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.001 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.005 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.009 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:33:41.005178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:33:41.007048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:33:41.008493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:33:41.010131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:33:41.011528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:33:41.013024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.034 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
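The power.state volume of 1 is libvirt's domain state code for a running guest (virDomainState: 1 = VIR_DOMAIN_RUNNING), which matches the instance's 'OS-EXT-STS:vm_state': 'running' in the discovery payload. For reference:

```python
# virDomainState values from the libvirt API
LIBVIRT_DOMAIN_STATE = {
    0: "NOSTATE",
    1: "RUNNING",      # the power.state volume seen above
    2: "BLOCKED",
    3: "PAUSED",
    4: "SHUTDOWN",
    5: "SHUTOFF",
    6: "CRASHED",
    7: "PMSUSPENDED",
}

print(LIBVIRT_DOMAIN_STATE[1])  # -> RUNNING
```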
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.035 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.036 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.039 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:33:41.036428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.040 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:33:41.040218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.041 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.041 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.043 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:33:41.043671) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.045 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
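network.incoming.bytes.rate is skipped because rate pollsters derive a value from the delta between the current cumulative counter and a previously cached sample; with no new resources this cycle there is nothing to diff. A minimal sketch of that derivation (function name and inputs are illustrative, not ceilometer's internals):

```python
def bytes_rate(prev_bytes, prev_ts, cur_bytes, cur_ts):
    """Rate in bytes/s from two cumulative counter readings."""
    elapsed = cur_ts - prev_ts
    if elapsed <= 0:
        return None  # can't compute a rate without a positive interval
    return (cur_bytes - prev_bytes) / elapsed

# e.g. two readings taken 10 s apart:
print(bytes_rate(1_000_000, 100.0, 1_500_000, 110.0))  # -> 50000.0 bytes/s
```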
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.046 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.046 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:33:41.046501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.048 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:33:41.048683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.051 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.051 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:33:41.050940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:33:41.052624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 58410000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:33:41.054035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
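[Note] The "Updated heartbeat for <meter> (<timestamp>)" lines are emitted by a different worker (the column after the log level reads 12 rather than 14), so heartbeat bookkeeping runs asynchronously and can land a line or two after "Finished polling". A small sketch (same stdin assumption as above) pairs those lines up and flags any heartbeat that fails to advance:

    import re
    import sys
    from datetime import datetime

    rx = re.compile(r"Updated heartbeat for (\S+) \((\S+)\)")
    last = {}
    for line in sys.stdin:
        m = rx.search(line)
        if not m:
            continue
        meter, ts = m.group(1), datetime.fromisoformat(m.group(2))
        if meter in last and ts <= last[meter]:
            print(f"stale heartbeat for {meter}: {ts} <= {last[meter]}")
        last[meter] = ts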
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:33:41.056063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.057 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:33:41.057842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:33:41.059531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
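[Note] Per ceilometer's documented meter units (an assumption about this deployment, not stated in the log itself), cpu is cumulative guest CPU time in nanoseconds and memory.usage is resident memory in MB, so the sample volumes above decode directly:

    # values copied from the _stats_to_sample lines above
    cpu_ns = 58410000000
    memory_mb = 48.81640625
    print(cpu_ns / 1e9, "s of guest CPU time")  # 58.41 s
    print(memory_mb, "MB resident")             # ~48.8 MB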
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
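[Note] When discovery yields nothing new for a pollster, the manager skips it instead of polling, as network.outgoing.bytes.rate is skipped above. A sketch (same stdin assumption) tallies how often each meter is skipped:

    import re
    import sys
    from collections import Counter

    rx = re.compile(r"Skip pollster (\S+), no new resources found")
    skips = Counter(m.group(1) for line in sys.stdin if (m := rx.search(line)))
    for meter, n in skips.most_common():
        print(meter, n)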
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:33:41.061139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:33:41.063170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:33:41.065062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:33:41.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
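[Note] The burst of "Finished processing pollster [...]" lines closes out the whole polling task in execute_polling_task_processing. A sketch (stdin assumption as before) cross-checks that every meter that started polling in the cycle also appears in this trailing block:

    import re
    import sys

    started, done = set(), set()
    for line in sys.stdin:
        m = re.search(r"Polling pollster (\S+) in the context", line)
        if m:
            started.add(m.group(1))
        m = re.search(r"Finished processing pollster \[(\S+)\]", line)
        if m:
            done.add(m.group(1))
    print("unfinished:", sorted(started - done))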
Oct  3 10:33:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:33:41.627 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:33:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:33:41.628 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:33:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:33:41.628 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
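[Note] The ovn_metadata_agent's child-process check is serialized with an oslo.concurrency lock, and lockutils logs both the wait and hold times. A sketch (stdin assumption) extracts those durations to spot contention:

    import re
    import sys

    rx = re.compile(r'Lock "(\S+)" (?:acquired|"released") .* (waited|held) ([0-9.]+)s')
    for line in sys.stdin:
        m = rx.search(line)
        if m:
            print(m.group(1), m.group(2), float(m.group(3)))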
Oct  3 10:33:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:42 compute-0 nova_compute[351685]: 2025-10-03 10:33:42.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:33:42 compute-0 nova_compute[351685]: 2025-10-03 10:33:42.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:33:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
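[Note] ceph-mgr emits a pgmap digest every couple of seconds; here all 321 PGs are active+clean. A sketch (stdin assumption) parses those lines and only speaks up when some PGs leave active+clean:

    import re
    import sys

    rx = re.compile(r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean")
    for line in sys.stdin:
        m = rx.search(line)
        if m:
            version, total, clean = map(int, m.groups())
            if clean != total:
                print(f"pgmap v{version}: only {clean}/{total} pgs active+clean")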
Oct  3 10:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 64501556-7420-438c-8077-968a08f1ef8c does not exist
Oct  3 10:33:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e8d6fbd5-85a0-4442-9c2d-345fd3dbde62 does not exist
Oct  3 10:33:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7f93ca1b-fc3e-443e-9aaf-98f490caf472 does not exist
Oct  3 10:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:33:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:33:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
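[Note] These handle_command/audit pairs show the cephadm mgr module (entity mgr.compute-0.vtkhde) driving the mon: config-key writes, minimal-conf generation, auth key fetches, and an osd tree query. Some audit lines elide the cmd payload entirely (e.g. after config-key set); the sketch below (stdin assumption) therefore only summarizes the lines whose cmd=[...] body is intact JSON:

    import json
    import re
    import sys
    from collections import Counter

    rx = re.compile(r"cmd=(\[.*\]): dispatch")
    prefixes = Counter()
    for line in sys.stdin:
        m = rx.search(line)
        if not m:
            continue
        try:
            for cmd in json.loads(m.group(1)):
                prefixes[cmd.get("prefix", "?")] += 1
        except json.JSONDecodeError:
            pass  # non-JSON command bodies are skipped
    print(prefixes.most_common())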
Oct  3 10:33:45 compute-0 podman[458888]: 2025-10-03 10:33:45.276310869 +0000 UTC m=+0.064265629 container create 0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_antonelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:33:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:33:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:33:45 compute-0 systemd[1]: Started libpod-conmon-0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba.scope.
Oct  3 10:33:45 compute-0 podman[458888]: 2025-10-03 10:33:45.252186076 +0000 UTC m=+0.040140866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:33:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:33:45 compute-0 podman[458888]: 2025-10-03 10:33:45.405817947 +0000 UTC m=+0.193772727 container init 0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:33:45 compute-0 podman[458888]: 2025-10-03 10:33:45.424656561 +0000 UTC m=+0.212611321 container start 0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_antonelli, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:33:45 compute-0 podman[458888]: 2025-10-03 10:33:45.429739563 +0000 UTC m=+0.217694403 container attach 0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:33:45 compute-0 happy_antonelli[458903]: 167 167
Oct  3 10:33:45 compute-0 systemd[1]: libpod-0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba.scope: Deactivated successfully.
Oct  3 10:33:45 compute-0 podman[458888]: 2025-10-03 10:33:45.433864686 +0000 UTC m=+0.221819456 container died 0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-412c1ede30c4e6864b001becd6f55b7aa291f2fa8e41a75e98d5aa47f01deb5e-merged.mount: Deactivated successfully.
Oct  3 10:33:45 compute-0 podman[458888]: 2025-10-03 10:33:45.500873642 +0000 UTC m=+0.288828392 container remove 0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:33:45 compute-0 systemd[1]: libpod-conmon-0811ef0f33d4060061c2e74d10bdc09de6850f1ae5dcbf70dfaeb8a922f47eba.scope: Deactivated successfully.
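[Note] The happy_antonelli container above is a one-shot cephadm helper: podman logs the full create -> init -> start -> attach -> died -> remove lifecycle within a fraction of a second, with random names per run. A sketch (stdin assumption; ids shortened to 12 hex chars for readability) reconstructs each container's event sequence:

    import re
    import sys
    from collections import defaultdict

    rx = re.compile(r"container (create|init|start|attach|died|remove) ([0-9a-f]{12})")
    events = defaultdict(list)
    for line in sys.stdin:
        m = rx.search(line)
        if m:
            events[m.group(2)].append(m.group(1))
    for cid, evs in events.items():
        print(cid, "->", " ".join(evs))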
Oct  3 10:33:45 compute-0 podman[458927]: 2025-10-03 10:33:45.723990978 +0000 UTC m=+0.066864302 container create e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:33:45 compute-0 systemd[1]: Started libpod-conmon-e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75.scope.
Oct  3 10:33:45 compute-0 podman[458927]: 2025-10-03 10:33:45.703207862 +0000 UTC m=+0.046081176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:33:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2e484b4601c4981fa2bfe029c66241f0bfa25980a0add4b5fda75d232646ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2e484b4601c4981fa2bfe029c66241f0bfa25980a0add4b5fda75d232646ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2e484b4601c4981fa2bfe029c66241f0bfa25980a0add4b5fda75d232646ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2e484b4601c4981fa2bfe029c66241f0bfa25980a0add4b5fda75d232646ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd2e484b4601c4981fa2bfe029c66241f0bfa25980a0add4b5fda75d232646ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
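[Note] The 0x7fffffff in these xfs warnings is the 32-bit time_t maximum (the filesystems were likely created without the XFS bigtime feature, an assumption the log itself does not confirm). A one-liner recovers the exact date the kernel is warning about:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00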
Oct  3 10:33:45 compute-0 podman[458927]: 2025-10-03 10:33:45.859104396 +0000 UTC m=+0.201977740 container init e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:33:45 compute-0 podman[458927]: 2025-10-03 10:33:45.882721602 +0000 UTC m=+0.225594956 container start e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:33:45 compute-0 podman[458927]: 2025-10-03 10:33:45.89012548 +0000 UTC m=+0.232998794 container attach e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:33:46
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images']
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
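[Note] The balancer ran in upmap mode with max misplaced 0.05 (it will not set more than 5% of data in motion at once) and "prepared 0/10 changes", i.e. the PG distribution across the listed pools already needed no optimization. A sketch (stdin assumption) watches for the rounds that do queue changes:

    import re
    import sys

    for line in sys.stdin:
        m = re.search(r"prepared (\d+)/(\d+) changes", line)
        if m and int(m.group(1)) > 0:
            print(f"balancer queued {m.group(1)} of {m.group(2)} possible changes")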
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:33:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:33:47 compute-0 infallible_williams[458943]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:33:47 compute-0 infallible_williams[458943]: --> relative data size: 1.0
Oct  3 10:33:47 compute-0 infallible_williams[458943]: --> All data devices are unavailable
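[Note] The infallible_williams helper is a ceph-volume device scan: 0 physical and 3 LVM candidates were passed in and all were reported unavailable, presumably because they are already consumed by existing OSDs (an inference; the log does not state the reason). A sketch (stdin assumption) flags runs that end with every candidate rejected:

    import re
    import sys

    devices = None
    for line in sys.stdin:
        m = re.search(r"passed data devices: (\d+) physical, (\d+) LVM", line)
        if m:
            devices = tuple(map(int, m.groups()))
        if "All data devices are unavailable" in line and devices:
            print(f"no usable devices out of {devices[0]} physical / {devices[1]} LVM")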
Oct  3 10:33:47 compute-0 systemd[1]: libpod-e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75.scope: Deactivated successfully.
Oct  3 10:33:47 compute-0 systemd[1]: libpod-e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75.scope: Consumed 1.200s CPU time.
Oct  3 10:33:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:47 compute-0 podman[458972]: 2025-10-03 10:33:47.242900579 +0000 UTC m=+0.067242605 container died e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd2e484b4601c4981fa2bfe029c66241f0bfa25980a0add4b5fda75d232646ed-merged.mount: Deactivated successfully.
Oct  3 10:33:47 compute-0 podman[458973]: 2025-10-03 10:33:47.301755114 +0000 UTC m=+0.108711563 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9-minimal)
Oct  3 10:33:47 compute-0 podman[458977]: 2025-10-03 10:33:47.322611893 +0000 UTC m=+0.128473366 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:33:47 compute-0 podman[458972]: 2025-10-03 10:33:47.343439969 +0000 UTC m=+0.167781985 container remove e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_williams, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:33:47 compute-0 podman[458980]: 2025-10-03 10:33:47.351001852 +0000 UTC m=+0.156150023 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 10:33:47 compute-0 systemd[1]: libpod-conmon-e40f13d5cd66a502750b1a1327483e907e2117bdc096b0448d716fb90a08cf75.scope: Deactivated successfully.
Oct  3 10:33:47 compute-0 nova_compute[351685]: 2025-10-03 10:33:47.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:33:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:47 compute-0 nova_compute[351685]: 2025-10-03 10:33:47.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:33:48 compute-0 podman[459191]: 2025-10-03 10:33:48.23791461 +0000 UTC m=+0.073396212 container create 9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jackson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:33:48 compute-0 podman[459191]: 2025-10-03 10:33:48.197738083 +0000 UTC m=+0.033219705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:33:48 compute-0 systemd[1]: Started libpod-conmon-9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9.scope.
Oct  3 10:33:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:33:48 compute-0 podman[459191]: 2025-10-03 10:33:48.356029544 +0000 UTC m=+0.191511166 container init 9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:33:48 compute-0 podman[459191]: 2025-10-03 10:33:48.368610666 +0000 UTC m=+0.204092278 container start 9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jackson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:33:48 compute-0 gifted_jackson[459207]: 167 167
Oct  3 10:33:48 compute-0 systemd[1]: libpod-9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9.scope: Deactivated successfully.
Oct  3 10:33:48 compute-0 podman[459191]: 2025-10-03 10:33:48.40930865 +0000 UTC m=+0.244790262 container attach 9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jackson, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:33:48 compute-0 podman[459191]: 2025-10-03 10:33:48.410667493 +0000 UTC m=+0.246149105 container died 9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jackson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct  3 10:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4f307ffe16116d9d1b6fb0f2725e5a46e0adaace6d08445647398978d9a217e-merged.mount: Deactivated successfully.
Oct  3 10:33:48 compute-0 podman[459191]: 2025-10-03 10:33:48.699408102 +0000 UTC m=+0.534889694 container remove 9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:33:48 compute-0 systemd[1]: libpod-conmon-9f29118599fc8195f548ecdebb49ed474dc341eb7db7e382564688f59406a9f9.scope: Deactivated successfully.
Oct  3 10:33:49 compute-0 podman[459232]: 2025-10-03 10:33:48.934397399 +0000 UTC m=+0.039263229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:33:49 compute-0 podman[459232]: 2025-10-03 10:33:49.113523226 +0000 UTC m=+0.218389036 container create 05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:33:49 compute-0 systemd[1]: Started libpod-conmon-05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673.scope.
Oct  3 10:33:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a019fc7114bacb7c99296b8c1a4e7a30ba411ae0cd89e59f9c76b2bf2b590f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a019fc7114bacb7c99296b8c1a4e7a30ba411ae0cd89e59f9c76b2bf2b590f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a019fc7114bacb7c99296b8c1a4e7a30ba411ae0cd89e59f9c76b2bf2b590f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a019fc7114bacb7c99296b8c1a4e7a30ba411ae0cd89e59f9c76b2bf2b590f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:49 compute-0 podman[459232]: 2025-10-03 10:33:49.468126734 +0000 UTC m=+0.572992594 container init 05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 10:33:49 compute-0 podman[459232]: 2025-10-03 10:33:49.485423188 +0000 UTC m=+0.590288998 container start 05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:33:49 compute-0 podman[459232]: 2025-10-03 10:33:49.524072136 +0000 UTC m=+0.628937966 container attach 05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_payne, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:33:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:50 compute-0 thirsty_payne[459247]: {
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:    "0": [
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:        {
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "devices": [
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "/dev/loop3"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            ],
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_name": "ceph_lv0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_size": "21470642176",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "name": "ceph_lv0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "tags": {
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cluster_name": "ceph",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.crush_device_class": "",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.encrypted": "0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osd_id": "0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.type": "block",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.vdo": "0"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            },
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "type": "block",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "vg_name": "ceph_vg0"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:        }
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:    ],
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:    "1": [
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:        {
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "devices": [
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "/dev/loop4"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            ],
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_name": "ceph_lv1",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_size": "21470642176",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "name": "ceph_lv1",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "tags": {
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cluster_name": "ceph",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.crush_device_class": "",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.encrypted": "0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osd_id": "1",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.type": "block",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.vdo": "0"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            },
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "type": "block",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "vg_name": "ceph_vg1"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:        }
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:    ],
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:    "2": [
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:        {
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "devices": [
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "/dev/loop5"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            ],
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_name": "ceph_lv2",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_size": "21470642176",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "name": "ceph_lv2",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "tags": {
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.cluster_name": "ceph",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.crush_device_class": "",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.encrypted": "0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osd_id": "2",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.type": "block",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:                "ceph.vdo": "0"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            },
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "type": "block",
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:            "vg_name": "ceph_vg2"
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:        }
Oct  3 10:33:50 compute-0 thirsty_payne[459247]:    ]
Oct  3 10:33:50 compute-0 thirsty_payne[459247]: }
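The JSON document closed above (printed by the one-shot cephadm container thirsty_payne) has the shape of ceph-volume lvm list --format json output: top-level keys are OSD ids, each holding a list of logical-volume records carrying the ceph.* tags. A minimal parsing sketch under that assumption; the file name and variable names are illustrative, not from the log:

    import json

    # Hypothetical file holding the JSON payload logged above.
    with open("lvm_list.json") as f:
        inventory = json.load(f)

    # Keys are OSD ids ("0", "1", "2"); each record names the backing LV,
    # its physical device(s), and the ceph.* tags.
    for osd_id, lvs in sorted(inventory.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]),
                  lv["tags"]["ceph.osd_fsid"])

For the records above this prints one line per OSD, e.g. 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 25b10821-47d4-4e0b-9b6d-d16a0463c4d0.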
Oct  3 10:33:50 compute-0 systemd[1]: libpod-05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673.scope: Deactivated successfully.
Oct  3 10:33:50 compute-0 podman[459232]: 2025-10-03 10:33:50.437282666 +0000 UTC m=+1.542148486 container died 05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0a019fc7114bacb7c99296b8c1a4e7a30ba411ae0cd89e59f9c76b2bf2b590f-merged.mount: Deactivated successfully.
Oct  3 10:33:50 compute-0 podman[459232]: 2025-10-03 10:33:50.673449361 +0000 UTC m=+1.778315171 container remove 05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:33:50 compute-0 systemd[1]: libpod-conmon-05322db9925b5466782c10dcf7c4d8ef5333b4b6b9bb6add700d79124c567673.scope: Deactivated successfully.
Oct  3 10:33:51 compute-0 podman[459293]: 2025-10-03 10:33:51.018426351 +0000 UTC m=+0.086961337 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:33:51 compute-0 podman[459426]: 2025-10-03 10:33:51.663655917 +0000 UTC m=+0.044436974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:33:51 compute-0 podman[459426]: 2025-10-03 10:33:51.775835391 +0000 UTC m=+0.156616418 container create bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:33:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:51 compute-0 systemd[1]: Started libpod-conmon-bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee.scope.
Oct  3 10:33:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:33:52 compute-0 podman[459426]: 2025-10-03 10:33:52.078460925 +0000 UTC m=+0.459242252 container init bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:33:52 compute-0 podman[459426]: 2025-10-03 10:33:52.09143298 +0000 UTC m=+0.472213997 container start bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banzai, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:33:52 compute-0 systemd[1]: libpod-bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee.scope: Deactivated successfully.
Oct  3 10:33:52 compute-0 vibrant_banzai[459442]: 167 167
Oct  3 10:33:52 compute-0 conmon[459442]: conmon bd6bfaf711f5ae995e5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee.scope/container/memory.events
Oct  3 10:33:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:52 compute-0 podman[459426]: 2025-10-03 10:33:52.187699433 +0000 UTC m=+0.568480480 container attach bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banzai, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 10:33:52 compute-0 podman[459426]: 2025-10-03 10:33:52.188664765 +0000 UTC m=+0.569445822 container died bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banzai, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:33:52 compute-0 nova_compute[351685]: 2025-10-03 10:33:52.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa143da8bada8cad797cb90182ed6c6ee5248808139c9c1b7c3e8b8211c74843-merged.mount: Deactivated successfully.
Oct  3 10:33:52 compute-0 podman[459426]: 2025-10-03 10:33:52.630729284 +0000 UTC m=+1.011510301 container remove bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:33:52 compute-0 systemd[1]: libpod-conmon-bd6bfaf711f5ae995e5a35a4dc55b57f210059fba3254262a36b8453ad0a3eee.scope: Deactivated successfully.
Oct  3 10:33:52 compute-0 podman[459466]: 2025-10-03 10:33:52.840045289 +0000 UTC m=+0.043480334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:33:52 compute-0 nova_compute[351685]: 2025-10-03 10:33:52.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:33:53 compute-0 podman[459466]: 2025-10-03 10:33:53.006899943 +0000 UTC m=+0.210334928 container create 4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:33:53 compute-0 systemd[1]: Started libpod-conmon-4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6.scope.
Oct  3 10:33:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0b1c3484a273cc3103dfb0abeb72b4d641c4be39d0e56fdbecfaf00138d4d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0b1c3484a273cc3103dfb0abeb72b4d641c4be39d0e56fdbecfaf00138d4d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0b1c3484a273cc3103dfb0abeb72b4d641c4be39d0e56fdbecfaf00138d4d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0b1c3484a273cc3103dfb0abeb72b4d641c4be39d0e56fdbecfaf00138d4d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:33:53 compute-0 podman[459466]: 2025-10-03 10:33:53.358127144 +0000 UTC m=+0.561562119 container init 4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:33:53 compute-0 podman[459466]: 2025-10-03 10:33:53.376896725 +0000 UTC m=+0.580331670 container start 4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:33:53 compute-0 podman[459466]: 2025-10-03 10:33:53.419796049 +0000 UTC m=+0.623231054 container attach 4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:33:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:33:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1147972044' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:33:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:33:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1147972044' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
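The two dispatch entries above are the audited form of capacity polling by the client.openstack key: a df and an osd pool get-quota against the volumes pool, both requesting JSON. A minimal sketch of issuing the same queries from a script, assuming a node with the ceph CLI and a suitable keyring:

    import json
    import subprocess

    # Equivalent of the audited mon commands {"prefix":"df"} and
    # {"prefix":"osd pool get-quota","pool":"volumes"}.
    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format", "json"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))

    # Field names follow current ceph JSON output; treat them as indicative.
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))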
Oct  3 10:33:54 compute-0 blissful_fermi[459482]: {
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "osd_id": 1,
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "type": "bluestore"
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:    },
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "osd_id": 2,
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "type": "bluestore"
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:    },
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "osd_id": 0,
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:        "type": "bluestore"
Oct  3 10:33:54 compute-0 blissful_fermi[459482]:    }
Oct  3 10:33:54 compute-0 blissful_fermi[459482]: }
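This second JSON document (from blissful_fermi) is keyed by OSD fsid instead, each record naming the bluestore block device and the numeric osd_id; it resembles ceph-volume raw list output. A companion sketch under the same assumptions as the previous one:

    import json

    with open("raw_list.json") as f:  # hypothetical file with the payload above
        osds = json.load(f)

    # Keys are OSD fsids; invert into an osd_id -> device map.
    by_id = {rec["osd_id"]: rec["device"] for rec in osds.values()}
    for osd_id in sorted(by_id):
        print(osd_id, by_id[osd_id])  # e.g. 0 /dev/mapper/ceph_vg0-ceph_lv0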
Oct  3 10:33:54 compute-0 systemd[1]: libpod-4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6.scope: Deactivated successfully.
Oct  3 10:33:54 compute-0 systemd[1]: libpod-4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6.scope: Consumed 1.139s CPU time.
Oct  3 10:33:54 compute-0 conmon[459482]: conmon 4e3309f78249e8b12e7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6.scope/container/memory.events
Oct  3 10:33:54 compute-0 podman[459466]: 2025-10-03 10:33:54.540740573 +0000 UTC m=+1.744175528 container died 4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:33:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c0b1c3484a273cc3103dfb0abeb72b4d641c4be39d0e56fdbecfaf00138d4d6-merged.mount: Deactivated successfully.
Oct  3 10:33:55 compute-0 podman[459466]: 2025-10-03 10:33:55.024020683 +0000 UTC m=+2.227455638 container remove 4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:33:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:33:55 compute-0 podman[459527]: 2025-10-03 10:33:55.109746378 +0000 UTC m=+0.139728196 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:33:55 compute-0 podman[459529]: 2025-10-03 10:33:55.112457735 +0000 UTC m=+0.141916386 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3)
Oct  3 10:33:55 compute-0 podman[459528]: 2025-10-03 10:33:55.115012256 +0000 UTC m=+0.141804532 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:33:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:33:55 compute-0 systemd[1]: libpod-conmon-4e3309f78249e8b12e7b78ee46404681bffe964ed7719f000164d249cc7c24e6.scope: Deactivated successfully.
Oct  3 10:33:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 75b6f16b-1b3a-460b-a4fd-08f542910f13 does not exist
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 18c7b371-904d-4723-8910-aa05abc41a56 does not exist
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:33:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:33:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:33:57 compute-0 nova_compute[351685]: 2025-10-03 10:33:57.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:33:57 compute-0 nova_compute[351685]: 2025-10-03 10:33:57.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:33:59 compute-0 nova_compute[351685]: 2025-10-03 10:33:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:33:59 compute-0 nova_compute[351685]: 2025-10-03 10:33:59.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:33:59 compute-0 nova_compute[351685]: 2025-10-03 10:33:59.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:33:59 compute-0 podman[157165]: time="2025-10-03T10:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:33:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:33:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9069 "" "Go-http-client/1.1"
Oct  3 10:33:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:00 compute-0 nova_compute[351685]: 2025-10-03 10:34:00.155 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:34:00 compute-0 nova_compute[351685]: 2025-10-03 10:34:00.156 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:34:00 compute-0 nova_compute[351685]: 2025-10-03 10:34:00.156 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:34:00 compute-0 nova_compute[351685]: 2025-10-03 10:34:00.156 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:34:01 compute-0 openstack_network_exporter[367524]: ERROR   10:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:34:01 compute-0 openstack_network_exporter[367524]: ERROR   10:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:34:01 compute-0 openstack_network_exporter[367524]: ERROR   10:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:34:01 compute-0 openstack_network_exporter[367524]: ERROR   10:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:34:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:34:01 compute-0 openstack_network_exporter[367524]: ERROR   10:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:34:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:34:01 compute-0 nova_compute[351685]: 2025-10-03 10:34:01.480 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:34:01 compute-0 nova_compute[351685]: 2025-10-03 10:34:01.539 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:34:01 compute-0 nova_compute[351685]: 2025-10-03 10:34:01.540 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:34:01 compute-0 nova_compute[351685]: 2025-10-03 10:34:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:34:01 compute-0 nova_compute[351685]: 2025-10-03 10:34:01.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:34:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:01 compute-0 podman[459636]: 2025-10-03 10:34:01.876939701 +0000 UTC m=+0.126159843 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:34:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:02 compute-0 nova_compute[351685]: 2025-10-03 10:34:02.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:02 compute-0 nova_compute[351685]: 2025-10-03 10:34:02.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:03 compute-0 nova_compute[351685]: 2025-10-03 10:34:03.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:34:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:04 compute-0 nova_compute[351685]: 2025-10-03 10:34:04.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:34:05 compute-0 nova_compute[351685]: 2025-10-03 10:34:05.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:34:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:06 compute-0 nova_compute[351685]: 2025-10-03 10:34:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:34:06 compute-0 podman[459655]: 2025-10-03 10:34:06.825017297 +0000 UTC m=+0.082509224 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:34:06 compute-0 nova_compute[351685]: 2025-10-03 10:34:06.828 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:34:06 compute-0 nova_compute[351685]: 2025-10-03 10:34:06.828 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:34:06 compute-0 nova_compute[351685]: 2025-10-03 10:34:06.828 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:34:06 compute-0 nova_compute[351685]: 2025-10-03 10:34:06.829 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:34:06 compute-0 nova_compute[351685]: 2025-10-03 10:34:06.829 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:34:06 compute-0 podman[459656]: 2025-10-03 10:34:06.835144831 +0000 UTC m=+0.093292330 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, managed_by=edpm_ansible, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 10:34:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:34:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3240986847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.369 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.449 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.449 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.450 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.763 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.765 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3833MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.765 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.765 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:34:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.852 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.852 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.852 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.883 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:34:07 compute-0 nova_compute[351685]: 2025-10-03 10:34:07.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:34:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267978747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:34:08 compute-0 nova_compute[351685]: 2025-10-03 10:34:08.344 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:34:08 compute-0 nova_compute[351685]: 2025-10-03 10:34:08.351 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:34:08 compute-0 nova_compute[351685]: 2025-10-03 10:34:08.369 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:34:08 compute-0 nova_compute[351685]: 2025-10-03 10:34:08.371 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:34:08 compute-0 nova_compute[351685]: 2025-10-03 10:34:08.372 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:34:09 compute-0 nova_compute[351685]: 2025-10-03 10:34:09.372 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:34:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:10 compute-0 nova_compute[351685]: 2025-10-03 10:34:10.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:34:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:12 compute-0 nova_compute[351685]: 2025-10-03 10:34:12.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:12 compute-0 nova_compute[351685]: 2025-10-03 10:34:12.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:34:12 compute-0 nova_compute[351685]: 2025-10-03 10:34:12.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:34:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:17 compute-0 nova_compute[351685]: 2025-10-03 10:34:17.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:17 compute-0 podman[459739]: 2025-10-03 10:34:17.85277825 +0000 UTC m=+0.094795418 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  3 10:34:17 compute-0 podman[459738]: 2025-10-03 10:34:17.858519994 +0000 UTC m=+0.115551222 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-type=git)
Oct  3 10:34:17 compute-0 podman[459740]: 2025-10-03 10:34:17.916417658 +0000 UTC m=+0.153211238 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 10:34:17 compute-0 nova_compute[351685]: 2025-10-03 10:34:17.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:21 compute-0 podman[459804]: 2025-10-03 10:34:21.806866939 +0000 UTC m=+0.066321985 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 10:34:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:22 compute-0 nova_compute[351685]: 2025-10-03 10:34:22.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:22 compute-0 nova_compute[351685]: 2025-10-03 10:34:22.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:25 compute-0 podman[459824]: 2025-10-03 10:34:25.826204449 +0000 UTC m=+0.089507037 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:34:25 compute-0 podman[459825]: 2025-10-03 10:34:25.839380402 +0000 UTC m=+0.095570122 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  3 10:34:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:25 compute-0 podman[459826]: 2025-10-03 10:34:25.844682382 +0000 UTC m=+0.098616560 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct  3 10:34:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:27 compute-0 nova_compute[351685]: 2025-10-03 10:34:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:27 compute-0 nova_compute[351685]: 2025-10-03 10:34:27.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:29 compute-0 podman[157165]: time="2025-10-03T10:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:34:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:34:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9074 "" "Go-http-client/1.1"
Oct  3 10:34:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:31 compute-0 openstack_network_exporter[367524]: ERROR   10:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:34:31 compute-0 openstack_network_exporter[367524]: ERROR   10:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:34:31 compute-0 openstack_network_exporter[367524]: ERROR   10:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:34:31 compute-0 openstack_network_exporter[367524]: ERROR   10:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:34:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:34:31 compute-0 openstack_network_exporter[367524]: ERROR   10:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:34:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:34:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:32 compute-0 nova_compute[351685]: 2025-10-03 10:34:32.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:32 compute-0 podman[459882]: 2025-10-03 10:34:32.873628646 +0000 UTC m=+0.115989026 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:34:32 compute-0 nova_compute[351685]: 2025-10-03 10:34:32.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Oct  3 10:34:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Oct  3 10:34:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:37 compute-0 nova_compute[351685]: 2025-10-03 10:34:37.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:34:37 compute-0 podman[459902]: 2025-10-03 10:34:37.846851697 +0000 UTC m=+0.101379798 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:34:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Oct  3 10:34:37 compute-0 podman[459903]: 2025-10-03 10:34:37.863599224 +0000 UTC m=+0.107238867 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, version=9.4, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, build-date=2024-09-18T21:23:30, release-0.7.12=, vcs-type=git)
Oct  3 10:34:37 compute-0 nova_compute[351685]: 2025-10-03 10:34:37.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Oct  3 10:34:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:34:41.629 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:34:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:34:41.629 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:34:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:34:41.630 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:34:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct  3 10:34:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:42 compute-0 nova_compute[351685]: 2025-10-03 10:34:42.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:42 compute-0 nova_compute[351685]: 2025-10-03 10:34:42.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct  3 10:34:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:34:46
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'vms', 'backups', 'volumes', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:34:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:34:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:47 compute-0 nova_compute[351685]: 2025-10-03 10:34:47.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct  3 10:34:47 compute-0 nova_compute[351685]: 2025-10-03 10:34:47.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:48 compute-0 podman[459946]: 2025-10-03 10:34:48.835398391 +0000 UTC m=+0.091928816 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64)
Oct  3 10:34:48 compute-0 podman[459953]: 2025-10-03 10:34:48.869549664 +0000 UTC m=+0.111695628 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:34:48 compute-0 podman[459947]: 2025-10-03 10:34:48.869960957 +0000 UTC m=+0.115593572 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:34:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct  3 10:34:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 11 op/s
Oct  3 10:34:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:52 compute-0 nova_compute[351685]: 2025-10-03 10:34:52.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:52 compute-0 podman[460009]: 2025-10-03 10:34:52.847368246 +0000 UTC m=+0.105291074 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:34:52 compute-0 nova_compute[351685]: 2025-10-03 10:34:52.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:34:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/124736803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:34:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:34:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/124736803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
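[editor's note] The two mon_commands above are the periodic capacity probe issued as client.openstack (most likely Cinder's RBD driver polling pool usage and quota). The same queries can be reproduced from any node with a client keyring; a sketch, assuming the ceph CLI is available:

    import json
    import subprocess

    def ceph(*args):
        # Dispatches the same mon commands the audit log shows.
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    df = ceph("df")                                      # {"prefix":"df"}
    quota = ceph("osd", "pool", "get-quota", "volumes")  # {"prefix":"osd pool get-quota"}
    print(df["stats"]["total_avail_bytes"], quota["quota_max_bytes"])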
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
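[editor's note] Every pg target above is reproducible as raw-usage ratio × bias × the root's PG budget. Assuming 3 OSDs behind the 60 GiB shown in the pgmap lines and the default mon_target_pg_per_osd of 100 (both configurable), that budget is 300, which matches each line of this run exactly:

    # Reproduce the pg_autoscaler targets logged above.
    ROOT_PG_BUDGET = 3 * 100  # assumed: 3 OSDs x mon_target_pg_per_osd=100
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (0.000551649390343166,   1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * ROOT_PG_BUDGET)
    # vms -> 0.1654948171029498, exactly as logged. The module then
    # quantizes to a power of two, subject to per-pool minimums and a
    # change threshold, hence "quantized to 32 (current 32)".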
Oct  3 10:34:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:56 compute-0 podman[460182]: 2025-10-03 10:34:56.624216389 +0000 UTC m=+0.091836803 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:34:56 compute-0 podman[460183]: 2025-10-03 10:34:56.649141708 +0000 UTC m=+0.116923427 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:34:56 compute-0 podman[460184]: 2025-10-03 10:34:56.652509925 +0000 UTC m=+0.101346146 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:34:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:34:57 compute-0 podman[460356]: 2025-10-03 10:34:57.268820025 +0000 UTC m=+0.053668260 container create 3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 10:34:57 compute-0 systemd[1]: Started libpod-conmon-3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a.scope.
Oct  3 10:34:57 compute-0 podman[460356]: 2025-10-03 10:34:57.243803064 +0000 UTC m=+0.028651319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:34:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:34:57 compute-0 podman[460356]: 2025-10-03 10:34:57.374001865 +0000 UTC m=+0.158850100 container init 3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:34:57 compute-0 podman[460356]: 2025-10-03 10:34:57.385269176 +0000 UTC m=+0.170117411 container start 3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:34:57 compute-0 podman[460356]: 2025-10-03 10:34:57.389816361 +0000 UTC m=+0.174664626 container attach 3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:34:57 compute-0 priceless_satoshi[460372]: 167 167
Oct  3 10:34:57 compute-0 systemd[1]: libpod-3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a.scope: Deactivated successfully.
Oct  3 10:34:57 compute-0 podman[460356]: 2025-10-03 10:34:57.393366495 +0000 UTC m=+0.178214730 container died 3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce368b795c84e97985746399cca392c362096d9e8a45120da998dd54f5b05c11-merged.mount: Deactivated successfully.
Oct  3 10:34:57 compute-0 nova_compute[351685]: 2025-10-03 10:34:57.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:57 compute-0 podman[460356]: 2025-10-03 10:34:57.458521681 +0000 UTC m=+0.243369916 container remove 3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:34:57 compute-0 systemd[1]: libpod-conmon-3a924a3c59d39acb2da21b2d6c89719a0c52babf2b0b6b70d61cdb15905c2c4a.scope: Deactivated successfully.
Oct  3 10:34:57 compute-0 podman[460395]: 2025-10-03 10:34:57.653121845 +0000 UTC m=+0.053096992 container create d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:34:57 compute-0 systemd[1]: Started libpod-conmon-d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd.scope.
Oct  3 10:34:57 compute-0 podman[460395]: 2025-10-03 10:34:57.632310588 +0000 UTC m=+0.032285755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:34:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72eb5af62f39c3b63152f3986378424d8f5c1c2c0e30905e1bf1e9b16c83542/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72eb5af62f39c3b63152f3986378424d8f5c1c2c0e30905e1bf1e9b16c83542/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72eb5af62f39c3b63152f3986378424d8f5c1c2c0e30905e1bf1e9b16c83542/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72eb5af62f39c3b63152f3986378424d8f5c1c2c0e30905e1bf1e9b16c83542/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:34:57 compute-0 podman[460395]: 2025-10-03 10:34:57.777398266 +0000 UTC m=+0.177373433 container init d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 10:34:57 compute-0 podman[460395]: 2025-10-03 10:34:57.792396965 +0000 UTC m=+0.192372122 container start d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:34:57 compute-0 podman[460395]: 2025-10-03 10:34:57.797311213 +0000 UTC m=+0.197286390 container attach d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:34:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:34:57 compute-0 nova_compute[351685]: 2025-10-03 10:34:57.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:34:59 compute-0 podman[157165]: time="2025-10-03T10:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:34:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47867 "" "Go-http-client/1.1"
Oct  3 10:34:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9501 "" "Go-http-client/1.1"
Oct  3 10:34:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:00 compute-0 interesting_robinson[460412]: [
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:    {
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        "available": false,
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        "ceph_device": false,
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        "lsm_data": {},
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        "lvs": [],
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        "path": "/dev/sr0",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        "rejected_reasons": [
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "Insufficient space (<5GB)",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "Has a FileSystem"
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        ],
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        "sys_api": {
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "actuators": null,
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "device_nodes": "sr0",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "devname": "sr0",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "human_readable_size": "482.00 KB",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "id_bus": "ata",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "model": "QEMU DVD-ROM",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "nr_requests": "2",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "parent": "/dev/sr0",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "partitions": {},
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "path": "/dev/sr0",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "removable": "1",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "rev": "2.5+",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "ro": "0",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "rotational": "0",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "sas_address": "",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "sas_device_handle": "",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "scheduler_mode": "mq-deadline",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "sectors": 0,
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "sectorsize": "2048",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "size": 493568.0,
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "support_discard": "2048",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "type": "disk",
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:            "vendor": "QEMU"
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:        }
Oct  3 10:35:00 compute-0 interesting_robinson[460412]:    }
Oct  3 10:35:00 compute-0 interesting_robinson[460412]: ]
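[editor's note] The JSON block above is a ceph-volume inventory report, produced inside the short-lived interesting_robinson container that cephadm just ran: /dev/sr0 is rejected for being under 5 GB and already carrying a filesystem. The same report can be regenerated and filtered directly; a sketch, assuming ceph-volume is installed on the host (cephadm normally runs it containerized, as above):

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"])
    for dev in json.loads(raw):
        if dev["available"]:
            print("usable:", dev["path"])
        else:
            print("rejected:", dev["path"], "-",
                  "; ".join(dev["rejected_reasons"]))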
Oct  3 10:35:00 compute-0 systemd[1]: libpod-d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd.scope: Deactivated successfully.
Oct  3 10:35:00 compute-0 systemd[1]: libpod-d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd.scope: Consumed 2.428s CPU time.
Oct  3 10:35:00 compute-0 podman[460395]: 2025-10-03 10:35:00.092367144 +0000 UTC m=+2.492342301 container died d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a72eb5af62f39c3b63152f3986378424d8f5c1c2c0e30905e1bf1e9b16c83542-merged.mount: Deactivated successfully.
Oct  3 10:35:00 compute-0 podman[460395]: 2025-10-03 10:35:00.160432344 +0000 UTC m=+2.560407501 container remove d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_robinson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:35:00 compute-0 systemd[1]: libpod-conmon-d40b8e34df25ea7d02ebce1869fc5539a26196768736a0b5edded256a1a532cd.scope: Deactivated successfully.
Oct  3 10:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:35:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:35:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:35:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:35:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:35:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e74df12c-a836-426d-a143-7d5141c078cf does not exist
Oct  3 10:35:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 24b08cb0-63a4-488c-9ecc-b7263d8dca87 does not exist
Oct  3 10:35:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 63559ff2-1ad9-417b-bfe7-498643c8be5a does not exist
Oct  3 10:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:35:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:35:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:35:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:35:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
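[editor's note] The config-key set above is cephadm caching that device report in the mon store under mgr/cephadm/host.compute-0.devices.0. It can be read back against the same key (the key name is taken from the handle_command line; the stored value's structure is not asserted here):

    import subprocess

    raw = subprocess.check_output(
        ["ceph", "config-key", "get",
         "mgr/cephadm/host.compute-0.devices.0"])
    print(len(raw), "bytes of cached inventory")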
Oct  3 10:35:01 compute-0 podman[463090]: 2025-10-03 10:35:01.14595011 +0000 UTC m=+0.066186940 container create 8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:35:01 compute-0 systemd[1]: Started libpod-conmon-8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800.scope.
Oct  3 10:35:01 compute-0 podman[463090]: 2025-10-03 10:35:01.120110114 +0000 UTC m=+0.040346984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:35:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:35:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:35:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:35:01 compute-0 podman[463090]: 2025-10-03 10:35:01.274976623 +0000 UTC m=+0.195213463 container init 8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:35:01 compute-0 podman[463090]: 2025-10-03 10:35:01.28829658 +0000 UTC m=+0.208533390 container start 8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_albattani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:35:01 compute-0 podman[463090]: 2025-10-03 10:35:01.292846376 +0000 UTC m=+0.213083196 container attach 8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_albattani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:35:01 compute-0 blissful_albattani[463105]: 167 167
Oct  3 10:35:01 compute-0 systemd[1]: libpod-8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800.scope: Deactivated successfully.
Oct  3 10:35:01 compute-0 podman[463090]: 2025-10-03 10:35:01.300347476 +0000 UTC m=+0.220584346 container died 8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa9c441bf2ad655d3464d5d0836b2c1a58b1cc21d3a0a70d33c662ff20e3ba4e-merged.mount: Deactivated successfully.
Oct  3 10:35:01 compute-0 podman[463090]: 2025-10-03 10:35:01.363136098 +0000 UTC m=+0.283372928 container remove 8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:35:01 compute-0 systemd[1]: libpod-conmon-8b95b95a5f3e79b44a7b01ea59b556ef51010c4de91d9e2d8cfc0fc7841ed800.scope: Deactivated successfully.
Oct  3 10:35:01 compute-0 openstack_network_exporter[367524]: ERROR   10:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:35:01 compute-0 openstack_network_exporter[367524]: ERROR   10:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:35:01 compute-0 openstack_network_exporter[367524]: ERROR   10:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:35:01 compute-0 openstack_network_exporter[367524]: ERROR   10:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:35:01 compute-0 openstack_network_exporter[367524]: ERROR   10:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
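[editor's note] The exporter errors above mean it found no *.ctl control sockets for ovsdb-server or ovn-northd (the latter does not normally run on a compute node, so that part is expected). A quick check against the host paths the openstack_network_exporter container mounts, per its config_data earlier:

    import glob

    # ovs-appctl-style daemons expose unix control sockets named *.ctl.
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none found")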
Oct  3 10:35:01 compute-0 podman[463130]: 2025-10-03 10:35:01.581331626 +0000 UTC m=+0.063841735 container create 1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 10:35:01 compute-0 systemd[1]: Started libpod-conmon-1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f.scope.
Oct  3 10:35:01 compute-0 podman[463130]: 2025-10-03 10:35:01.557397589 +0000 UTC m=+0.039907728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:35:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47336ecd9b88c62249126fe1196645b584b0aa916925a1f8f5d77cb3aa2a8b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47336ecd9b88c62249126fe1196645b584b0aa916925a1f8f5d77cb3aa2a8b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47336ecd9b88c62249126fe1196645b584b0aa916925a1f8f5d77cb3aa2a8b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47336ecd9b88c62249126fe1196645b584b0aa916925a1f8f5d77cb3aa2a8b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f47336ecd9b88c62249126fe1196645b584b0aa916925a1f8f5d77cb3aa2a8b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:01 compute-0 podman[463130]: 2025-10-03 10:35:01.710016318 +0000 UTC m=+0.192526447 container init 1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:35:01 compute-0 podman[463130]: 2025-10-03 10:35:01.727806938 +0000 UTC m=+0.210317047 container start 1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:35:01 compute-0 nova_compute[351685]: 2025-10-03 10:35:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:35:01 compute-0 nova_compute[351685]: 2025-10-03 10:35:01.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:35:01 compute-0 nova_compute[351685]: 2025-10-03 10:35:01.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:35:01 compute-0 podman[463130]: 2025-10-03 10:35:01.733063887 +0000 UTC m=+0.215574016 container attach 1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:35:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:02 compute-0 nova_compute[351685]: 2025-10-03 10:35:02.113 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:35:02 compute-0 nova_compute[351685]: 2025-10-03 10:35:02.114 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:35:02 compute-0 nova_compute[351685]: 2025-10-03 10:35:02.114 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:35:02 compute-0 nova_compute[351685]: 2025-10-03 10:35:02.114 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:35:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:02 compute-0 nova_compute[351685]: 2025-10-03 10:35:02.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:35:02 compute-0 quirky_fermat[463146]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:35:02 compute-0 quirky_fermat[463146]: --> relative data size: 1.0
Oct  3 10:35:02 compute-0 quirky_fermat[463146]: --> All data devices are unavailable
Oct  3 10:35:02 compute-0 systemd[1]: libpod-1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f.scope: Deactivated successfully.
Oct  3 10:35:02 compute-0 systemd[1]: libpod-1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f.scope: Consumed 1.173s CPU time.
Oct  3 10:35:02 compute-0 podman[463130]: 2025-10-03 10:35:02.979072307 +0000 UTC m=+1.461582426 container died 1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:35:02 compute-0 nova_compute[351685]: 2025-10-03 10:35:02.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:35:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f47336ecd9b88c62249126fe1196645b584b0aa916925a1f8f5d77cb3aa2a8b0-merged.mount: Deactivated successfully.
Oct  3 10:35:03 compute-0 podman[463130]: 2025-10-03 10:35:03.071208248 +0000 UTC m=+1.553718367 container remove 1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermat, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:35:03 compute-0 systemd[1]: libpod-conmon-1192069febd8af7df8f3aee190afbcc86ca476c6883dc8a37a76c6409f7b3f0f.scope: Deactivated successfully.
Oct  3 10:35:03 compute-0 podman[463176]: 2025-10-03 10:35:03.133067389 +0000 UTC m=+0.114198329 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  3 10:35:03 compute-0 nova_compute[351685]: 2025-10-03 10:35:03.394 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:35:03 compute-0 nova_compute[351685]: 2025-10-03 10:35:03.408 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:35:03 compute-0 nova_compute[351685]: 2025-10-03 10:35:03.408 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:35:03 compute-0 nova_compute[351685]: 2025-10-03 10:35:03.409 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:35:03 compute-0 nova_compute[351685]: 2025-10-03 10:35:03.409 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:35:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:04 compute-0 podman[463345]: 2025-10-03 10:35:04.037644773 +0000 UTC m=+0.061626975 container create a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 10:35:04 compute-0 systemd[1]: Started libpod-conmon-a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43.scope.
Oct  3 10:35:04 compute-0 podman[463345]: 2025-10-03 10:35:04.018324734 +0000 UTC m=+0.042306946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:35:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:35:04 compute-0 podman[463345]: 2025-10-03 10:35:04.143727121 +0000 UTC m=+0.167709393 container init a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:35:04 compute-0 podman[463345]: 2025-10-03 10:35:04.15586808 +0000 UTC m=+0.179850292 container start a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:35:04 compute-0 podman[463345]: 2025-10-03 10:35:04.161150189 +0000 UTC m=+0.185132381 container attach a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:35:04 compute-0 bold_lehmann[463361]: 167 167
Oct  3 10:35:04 compute-0 systemd[1]: libpod-a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43.scope: Deactivated successfully.
Oct  3 10:35:04 compute-0 podman[463345]: 2025-10-03 10:35:04.16615012 +0000 UTC m=+0.190132332 container died a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea8f58792785cb92a7bacedec925e0d34204d0c4fd088237b0f63968d1d28f4-merged.mount: Deactivated successfully.
Oct  3 10:35:04 compute-0 podman[463345]: 2025-10-03 10:35:04.234764297 +0000 UTC m=+0.258746499 container remove a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:35:04 compute-0 systemd[1]: libpod-conmon-a2ded10caabddbb03ecac60c55c3303bc5bd476e7c09e71eb74db5bcf99e7a43.scope: Deactivated successfully.
Oct  3 10:35:04 compute-0 podman[463387]: 2025-10-03 10:35:04.527464222 +0000 UTC m=+0.093360151 container create e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:35:04 compute-0 podman[463387]: 2025-10-03 10:35:04.493018389 +0000 UTC m=+0.058914388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:35:04 compute-0 systemd[1]: Started libpod-conmon-e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386.scope.
Oct  3 10:35:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d5e24e99730c0a0c98d297aad9197bac3b49fb47d5499169d433d996d8d2d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d5e24e99730c0a0c98d297aad9197bac3b49fb47d5499169d433d996d8d2d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d5e24e99730c0a0c98d297aad9197bac3b49fb47d5499169d433d996d8d2d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d5e24e99730c0a0c98d297aad9197bac3b49fb47d5499169d433d996d8d2d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:04 compute-0 podman[463387]: 2025-10-03 10:35:04.694823393 +0000 UTC m=+0.260719332 container init e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 10:35:04 compute-0 podman[463387]: 2025-10-03 10:35:04.714048109 +0000 UTC m=+0.279944038 container start e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:35:04 compute-0 podman[463387]: 2025-10-03 10:35:04.720561468 +0000 UTC m=+0.286457447 container attach e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:35:05 compute-0 condescending_euclid[463403]: {
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:    "0": [
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:        {
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "devices": [
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "/dev/loop3"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            ],
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_name": "ceph_lv0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_size": "21470642176",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "name": "ceph_lv0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "tags": {
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cluster_name": "ceph",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.crush_device_class": "",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.encrypted": "0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osd_id": "0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.type": "block",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.vdo": "0"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            },
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "type": "block",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "vg_name": "ceph_vg0"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:        }
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:    ],
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:    "1": [
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:        {
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "devices": [
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "/dev/loop4"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            ],
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_name": "ceph_lv1",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_size": "21470642176",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "name": "ceph_lv1",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "tags": {
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cluster_name": "ceph",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.crush_device_class": "",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.encrypted": "0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osd_id": "1",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.type": "block",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.vdo": "0"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            },
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "type": "block",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "vg_name": "ceph_vg1"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:        }
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:    ],
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:    "2": [
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:        {
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "devices": [
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "/dev/loop5"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            ],
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_name": "ceph_lv2",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_size": "21470642176",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "name": "ceph_lv2",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "tags": {
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.cluster_name": "ceph",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.crush_device_class": "",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.encrypted": "0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osd_id": "2",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.type": "block",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:                "ceph.vdo": "0"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            },
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "type": "block",
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:            "vg_name": "ceph_vg2"
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:        }
Oct  3 10:35:05 compute-0 condescending_euclid[463403]:    ]
Oct  3 10:35:05 compute-0 condescending_euclid[463403]: }
Oct  3 10:35:05 compute-0 systemd[1]: libpod-e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386.scope: Deactivated successfully.
Oct  3 10:35:05 compute-0 podman[463387]: 2025-10-03 10:35:05.558465036 +0000 UTC m=+1.124360935 container died e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d5e24e99730c0a0c98d297aad9197bac3b49fb47d5499169d433d996d8d2d7-merged.mount: Deactivated successfully.
Oct  3 10:35:05 compute-0 podman[463387]: 2025-10-03 10:35:05.622081303 +0000 UTC m=+1.187977202 container remove e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:35:05 compute-0 systemd[1]: libpod-conmon-e618165dcaee3fb30703b0dc3035f3d1cc5f555324f915e5433c87f7dd0e9386.scope: Deactivated successfully.
Oct  3 10:35:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:06 compute-0 podman[463561]: 2025-10-03 10:35:06.556204364 +0000 UTC m=+0.069641132 container create a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:35:06 compute-0 podman[463561]: 2025-10-03 10:35:06.532873316 +0000 UTC m=+0.046310114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:35:06 compute-0 systemd[1]: Started libpod-conmon-a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59.scope.
Oct  3 10:35:06 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:35:06 compute-0 podman[463561]: 2025-10-03 10:35:06.669563564 +0000 UTC m=+0.183000382 container init a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:35:06 compute-0 podman[463561]: 2025-10-03 10:35:06.679543265 +0000 UTC m=+0.192980033 container start a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:35:06 compute-0 podman[463561]: 2025-10-03 10:35:06.684981239 +0000 UTC m=+0.198418007 container attach a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:35:06 compute-0 festive_turing[463577]: 167 167
Oct  3 10:35:06 compute-0 systemd[1]: libpod-a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59.scope: Deactivated successfully.
Oct  3 10:35:06 compute-0 podman[463561]: 2025-10-03 10:35:06.688478021 +0000 UTC m=+0.201914799 container died a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-70058c1755bedba8d9c96c78627ac4872c882cddf8a758abb19025e558a16ce3-merged.mount: Deactivated successfully.
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:35:06 compute-0 podman[463561]: 2025-10-03 10:35:06.732930475 +0000 UTC m=+0.246367253 container remove a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:35:06 compute-0 systemd[1]: libpod-conmon-a07277e4429baa29ddc8f5ab268d0321c000f5de2e3caf55ec0144527f688c59.scope: Deactivated successfully.
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.758 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.758 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:35:06 compute-0 nova_compute[351685]: 2025-10-03 10:35:06.758 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:35:06 compute-0 podman[463603]: 2025-10-03 10:35:06.977503488 +0000 UTC m=+0.077400190 container create 4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:35:07 compute-0 systemd[1]: Started libpod-conmon-4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd.scope.
Oct  3 10:35:07 compute-0 podman[463603]: 2025-10-03 10:35:06.943595302 +0000 UTC m=+0.043492094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:35:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdd74688478fc6f4c7c776620912f24c3161e5684d2eacebe484a2c66641de5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdd74688478fc6f4c7c776620912f24c3161e5684d2eacebe484a2c66641de5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdd74688478fc6f4c7c776620912f24c3161e5684d2eacebe484a2c66641de5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdd74688478fc6f4c7c776620912f24c3161e5684d2eacebe484a2c66641de5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:35:07 compute-0 podman[463603]: 2025-10-03 10:35:07.085368273 +0000 UTC m=+0.185265025 container init 4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:35:07 compute-0 podman[463603]: 2025-10-03 10:35:07.100934582 +0000 UTC m=+0.200831284 container start 4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:35:07 compute-0 podman[463603]: 2025-10-03 10:35:07.105406015 +0000 UTC m=+0.205302737 container attach 4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 10:35:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:35:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319055018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.238 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.337 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.337 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.338 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.729 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.730 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3760MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.731 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.731 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.816 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.816 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.817 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.867 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
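The resource tracker sizes its Ceph-backed storage by shelling out to the exact command logged above. A standalone equivalent (the "stats"/"total_avail_bytes" field names are as emitted by recent Ceph releases; adjust if your version differs):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(f"avail: {stats['total_avail_bytes'] / 1024**3:.1f} GiB "
          f"of {stats['total_bytes'] / 1024**3:.1f} GiB")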
Oct  3 10:35:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:07 compute-0 nova_compute[351685]: 2025-10-03 10:35:07.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:08 compute-0 funny_benz[463635]: {
Oct  3 10:35:08 compute-0 funny_benz[463635]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "osd_id": 1,
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "type": "bluestore"
Oct  3 10:35:08 compute-0 funny_benz[463635]:    },
Oct  3 10:35:08 compute-0 funny_benz[463635]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "osd_id": 2,
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "type": "bluestore"
Oct  3 10:35:08 compute-0 funny_benz[463635]:    },
Oct  3 10:35:08 compute-0 funny_benz[463635]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "osd_id": 0,
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:35:08 compute-0 funny_benz[463635]:        "type": "bluestore"
Oct  3 10:35:08 compute-0 funny_benz[463635]:    }
Oct  3 10:35:08 compute-0 funny_benz[463635]: }
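The JSON map above (printed by the short-lived funny_benz ceph container, which podman tears down immediately afterwards) has the shape of ceph-volume's JSON listing: OSD uuid keyed to device, id, and store type. Reading it back is plain json work; here with one entry inlined from the log:

    import json

    raw = """{
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
        "type": "bluestore"
      }
    }"""
    for osd_uuid, info in json.loads(raw).items():
        print(f"osd.{info['osd_id']} on {info['device']} ({info['type']})")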
Oct  3 10:35:08 compute-0 systemd[1]: libpod-4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd.scope: Deactivated successfully.
Oct  3 10:35:08 compute-0 systemd[1]: libpod-4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd.scope: Consumed 1.121s CPU time.
Oct  3 10:35:08 compute-0 conmon[463635]: conmon 4e0f914d7210bbc365ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd.scope/container/memory.events
Oct  3 10:35:08 compute-0 podman[463603]: 2025-10-03 10:35:08.273413886 +0000 UTC m=+1.373310588 container died 4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:35:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-efdd74688478fc6f4c7c776620912f24c3161e5684d2eacebe484a2c66641de5-merged.mount: Deactivated successfully.
Oct  3 10:35:08 compute-0 podman[463603]: 2025-10-03 10:35:08.341122695 +0000 UTC m=+1.441019387 container remove 4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_benz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:35:08 compute-0 systemd[1]: libpod-conmon-4e0f914d7210bbc365ef320e760b1fbe8d7291a25cecff4cc4b6a5a984a480fd.scope: Deactivated successfully.
Oct  3 10:35:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:35:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096146730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:35:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:35:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:35:08 compute-0 podman[463693]: 2025-10-03 10:35:08.411840661 +0000 UTC m=+0.102935628 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vcs-type=git, container_name=kepler, release=1214.1726694543)
Oct  3 10:35:08 compute-0 nova_compute[351685]: 2025-10-03 10:35:08.414 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:35:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:08 compute-0 podman[463690]: 2025-10-03 10:35:08.417960137 +0000 UTC m=+0.107903238 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:35:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9ad01993-420b-407b-8dc5-ef441a885ca9 does not exist
Oct  3 10:35:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 167df641-976f-4b30-a5d1-e91bde181419 does not exist
Oct  3 10:35:08 compute-0 nova_compute[351685]: 2025-10-03 10:35:08.425 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:35:08 compute-0 nova_compute[351685]: 2025-10-03 10:35:08.438 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
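Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the numbers above give 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32  MEMORY_MB: 7167  DISK_GB: 52.2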
Oct  3 10:35:08 compute-0 nova_compute[351685]: 2025-10-03 10:35:08.440 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:35:08 compute-0 nova_compute[351685]: 2025-10-03 10:35:08.440 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
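The lockutils pair brackets the whole update: "waited 0.000s" is the time spent acquiring "compute_resources", and "held 0.709s" is the time inside the critical section. The same instrumentation reduced to the standard library (a sketch, not oslo_concurrency's implementation):

    import threading
    import time
    from contextlib import contextmanager

    _locks: dict[str, threading.Lock] = {}

    @contextmanager
    def timed_lock(name: str):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        with lock:
            print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("compute_resources"):
        time.sleep(0.1)  # _update_available_resource() would run here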
Oct  3 10:35:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:35:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:11 compute-0 nova_compute[351685]: 2025-10-03 10:35:11.439 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:35:11 compute-0 nova_compute[351685]: 2025-10-03 10:35:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:35:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:12 compute-0 nova_compute[351685]: 2025-10-03 10:35:12.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:12 compute-0 nova_compute[351685]: 2025-10-03 10:35:12.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:13 compute-0 nova_compute[351685]: 2025-10-03 10:35:13.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:35:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:14 compute-0 nova_compute[351685]: 2025-10-03 10:35:14.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:35:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:35:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:17 compute-0 nova_compute[351685]: 2025-10-03 10:35:17.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:18 compute-0 nova_compute[351685]: 2025-10-03 10:35:17.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:19 compute-0 podman[463793]: 2025-10-03 10:35:19.834337937 +0000 UTC m=+0.086963817 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, managed_by=edpm_ansible)
Oct  3 10:35:19 compute-0 podman[463794]: 2025-10-03 10:35:19.871308811 +0000 UTC m=+0.118215048 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 10:35:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:19 compute-0 podman[463795]: 2025-10-03 10:35:19.897847391 +0000 UTC m=+0.135205801 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Oct  3 10:35:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:22 compute-0 nova_compute[351685]: 2025-10-03 10:35:22.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:23 compute-0 nova_compute[351685]: 2025-10-03 10:35:23.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:23 compute-0 podman[463858]: 2025-10-03 10:35:23.844335231 +0000 UTC m=+0.104409865 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 10:35:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.401114) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487724401149, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 251, "total_data_size": 3507471, "memory_usage": 3569776, "flush_reason": "Manual Compaction"}
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487724433855, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 3430444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39726, "largest_seqno": 41773, "table_properties": {"data_size": 3420958, "index_size": 6044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18553, "raw_average_key_size": 20, "raw_value_size": 3402330, "raw_average_value_size": 3678, "num_data_blocks": 268, "num_entries": 925, "num_filter_entries": 925, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487494, "oldest_key_time": 1759487494, "file_creation_time": 1759487724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 33094 microseconds, and 11437 cpu microseconds.
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.434205) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 3430444 bytes OK
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.434472) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.437229) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.437306) EVENT_LOG_v1 {"time_micros": 1759487724437299, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.437330) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3498908, prev total WAL file size 3498908, number of live WAL files 2.
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.441385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(3350KB)], [95(6377KB)]
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487724441521, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9961423, "oldest_snapshot_seqno": -1}
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5802 keys, 8224727 bytes, temperature: kUnknown
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487724506892, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 8224727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8187333, "index_size": 21796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 150348, "raw_average_key_size": 25, "raw_value_size": 8083507, "raw_average_value_size": 1393, "num_data_blocks": 867, "num_entries": 5802, "num_filter_entries": 5802, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.507189) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 8224727 bytes
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.509833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.2 rd, 125.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 6316, records dropped: 514 output_compression: NoCompression
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.509855) EVENT_LOG_v1 {"time_micros": 1759487724509844, "job": 56, "event": "compaction_finished", "compaction_time_micros": 65453, "compaction_time_cpu_micros": 41304, "output_level": 6, "num_output_files": 1, "total_output_size": 8224727, "num_input_records": 6316, "num_output_records": 5802, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
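The amplification figures in the compaction summary follow from the byte counts in the surrounding events: write-amplify is bytes written divided by the new level-0 input, and read-write-amplify additionally counts all bytes read. Re-deriving the log's 2.4 and 5.3:

    l0_input    = 3_430_444   # table #97, the freshly flushed L0 file
    total_input = 9_961_423   # input_data_size from compaction_started
    output      = 8_224_727   # table #98 written to L6

    print(f"write-amplify      = {output / l0_input:.1f}")                  # 2.4
    print(f"read-write-amplify = {(total_input + output) / l0_input:.1f}")  # 5.3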
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487724510977, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487724512569, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.440992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.512726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.512733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.512737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.512921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:35:24 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:35:24.512924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:35:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:26 compute-0 podman[463878]: 2025-10-03 10:35:26.824562278 +0000 UTC m=+0.084203318 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:35:26 compute-0 podman[463880]: 2025-10-03 10:35:26.842136412 +0000 UTC m=+0.096225264 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  3 10:35:26 compute-0 podman[463879]: 2025-10-03 10:35:26.843559237 +0000 UTC m=+0.102065420 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 10:35:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:27 compute-0 nova_compute[351685]: 2025-10-03 10:35:27.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:28 compute-0 nova_compute[351685]: 2025-10-03 10:35:28.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:29 compute-0 podman[157165]: time="2025-10-03T10:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:35:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:35:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9072 "" "Go-http-client/1.1"
Oct  3 10:35:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:31 compute-0 openstack_network_exporter[367524]: ERROR   10:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:35:31 compute-0 openstack_network_exporter[367524]: ERROR   10:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:35:31 compute-0 openstack_network_exporter[367524]: ERROR   10:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:35:31 compute-0 openstack_network_exporter[367524]: ERROR   10:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:35:31 compute-0 openstack_network_exporter[367524]: ERROR   10:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:35:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:32 compute-0 nova_compute[351685]: 2025-10-03 10:35:32.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:33 compute-0 nova_compute[351685]: 2025-10-03 10:35:33.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:33 compute-0 podman[463940]: 2025-10-03 10:35:33.898843065 +0000 UTC m=+0.143714555 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.license=GPLv2)
Oct  3 10:35:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:37 compute-0 nova_compute[351685]: 2025-10-03 10:35:37.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:38 compute-0 nova_compute[351685]: 2025-10-03 10:35:38.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:38 compute-0 podman[463961]: 2025-10-03 10:35:38.837334771 +0000 UTC m=+0.085298223 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:35:38 compute-0 podman[463962]: 2025-10-03 10:35:38.85355831 +0000 UTC m=+0.092142942 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64)
Oct  3 10:35:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.891 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.892 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.892 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
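Note: every "Registering pollster [<stevedore.extension.Extension ...>]" line above refers to a pollster loaded as a setuptools entry point via stevedore. A hedged sketch of that loading step; the namespace 'ceilometer.poll.compute' is my assumption of the compute-agent entry-point group, not confirmed by this log:

```python
# Enumerate pollster plugins the way the agent's extension manager would.
from stevedore import extension

mgr = extension.ExtensionManager(
    namespace="ceilometer.poll.compute",  # assumed entry-point namespace
    invoke_on_load=True,   # instantiate each pollster class, as the agent does
)
for ext in mgr:
    # ext is the stevedore.extension.Extension object shown in the log lines;
    # ext.name is the meter name, ext.obj the pollster instance.
    print(ext.name, type(ext.obj).__name__)
```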
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.902 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
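Note: the instance-data line above is produced by libvirt-based discovery; the Nova details (name, flavor, tenant) ride along as domain metadata under the LIBVIRT_METADATA_URI namespace seen in the kepler config_data earlier. A minimal sketch of that lookup, assuming read-only access to the local libvirt socket:

```python
# List running domains and fetch the Nova metadata element per domain.
import libvirt  # python3-libvirt

NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"

conn = libvirt.openReadOnly("qemu:///system")
for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS, 0)
    # dom.UUIDString() matches the 'id' field (b43db93c-...) in the log entry;
    # the returned XML carries the name/flavor/owner details shown there.
    print(dom.name(), dom.UUIDString(), len(xml))
```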
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
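Note: the coordination check above reports group name [None] and hashrings [None], so this agent polls every local instance itself. An illustrative sketch in pure Python (not tooz, which ceilometer actually uses) of what a coordinated setup would do instead, hashing each resource onto one agent:

```python
# Toy ownership function: with coordination enabled, only the agent that
# "owns" a resource id would poll it. Agent names here are hypothetical.
import hashlib

def owner(resource_id: str, agents: list[str]) -> str:
    digest = int(hashlib.sha256(resource_id.encode()).hexdigest(), 16)
    return agents[digest % len(agents)]

print(owner("b43db93c-a4fe-46e9-8418-eedf4f5c135a", ["compute-0", "compute-1"]))
```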
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.903 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:35:40.903684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.913 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
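Note: the network.outgoing.packets.drop volume: 0 sample above comes from per-NIC counters the hypervisor keeps. A sketch of where such counters live, using libvirt's interfaceStats() (the device lookup via the domain XML is my illustration, not the pollster's exact code path):

```python
# Read cumulative NIC counters for the instance discovered above.
import libvirt
from xml.etree import ElementTree

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000001")  # OS-EXT-SRV-ATTR:instance_name above

tree = ElementTree.fromstring(dom.XMLDesc(0))
for iface in tree.findall("./devices/interface/target"):
    dev = iface.get("dev")
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats(dev)
    # tx_drop corresponds to network.outgoing.packets.drop and tx_errs to
    # network.outgoing.packets.error, both 0 in the samples logged here.
    print(dev, tx_drop, tx_errs)
```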
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.914 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.915 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.916 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:35:40.915434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.917 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.918 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:35:40.918628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.946 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.947 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.947 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
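Note: the three disk.device.capacity samples above (1073741824, 1073741824, 485376) are one per disk device of the VM. A sketch using libvirt's blockInfo(), which returns capacity, allocation, and physical size per device; mapping those fields to the capacity/allocation/usage meters is my reading of the pollster names in this log, not confirmed here:

```python
# Report capacity/allocation/physical for each disk target of the instance.
import libvirt
from xml.etree import ElementTree

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000001")

tree = ElementTree.fromstring(dom.XMLDesc(0))
for disk in tree.findall("./devices/disk/target"):
    dev = disk.get("dev")
    capacity, allocation, physical = dom.blockInfo(dev)
    # capacity ~ disk.device.capacity, allocation ~ disk.device.allocation,
    # physical ~ disk.device.usage (assumed correspondence).
    print(dev, capacity, allocation, physical)
```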
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.948 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:35:40.948756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:40.999 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:35:40.999484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.000 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.001 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.001 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
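Note: the large read-latency volumes above (e.g. 1351272306) are cumulative nanosecond counters, not per-cycle averages. A sketch of the underlying source via libvirt's blockStatsFlags(), which exposes total read/write times per device ("vda" is assumed to be the first disk target):

```python
# Fetch cumulative block I/O timing counters for one disk device.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000001")

stats = dom.blockStatsFlags("vda")  # assumed first disk target in the domain XML
# rd_total_times / wr_total_times are cumulative nanoseconds, matching the
# monotonically large volumes logged for disk.device.read.latency and
# disk.device.write.latency in this cycle.
print(stats["rd_total_times"], stats["wr_total_times"],
      stats["rd_operations"], stats["wr_operations"])
```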
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.003 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:35:41.003541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:35:41.007188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.009 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:35:41.009683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:35:41.012029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.014 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:35:41.014399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
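Note: the power.state volume: 1 sample above is a libvirt domain-state code, and 1 is VIR_DOMAIN_RUNNING, consistent with the 'OS-EXT-STS:vm_state': 'running' field in the discovery payload. A minimal sketch:

```python
# Read the domain state code behind the power.state sample.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000001")

state, reason = dom.state()
print(state, state == libvirt.VIR_DOMAIN_RUNNING)  # 1, True for a running VM
```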
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.038 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:35:41.039073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.040 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.040 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:35:41.040655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.041 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.041 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.041 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:35:41.042092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
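Note: unlike the cumulative meters, the *.delta sample above (volume: 0) is the difference between two readings of a cumulative counter. An illustrative sketch (not ceilometer internals) of that bookkeeping, clamping to zero on the first sample or after a counter reset:

```python
# Derive a delta meter from cumulative counter readings, per resource.
_previous: dict[str, int] = {}

def delta(resource_id: str, cumulative: int) -> int:
    prev = _previous.get(resource_id)
    _previous[resource_id] = cumulative
    if prev is None or cumulative < prev:  # first sample or counter reset
        return 0
    return cumulative - prev

print(delta("b43db93c/rx_bytes", 1000))  # 0: first cycle, nothing to diff
print(delta("b43db93c/rx_bytes", 1000))  # 0: no traffic since last cycle
print(delta("b43db93c/rx_bytes", 1500))  # 500
```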
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.043 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.043 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:35:41.043486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:35:41.044492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:35:41.045573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.046 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.046 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.046 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:35:41.046522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 60030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:35:41.047573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.048 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:35:41.048603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.049 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.049 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:35:41.049690) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:35:41.050789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.051 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:35:41.052087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:35:41.053350) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.054 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:35:41.054436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:35:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
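The ceilometer entries above trace one polling cycle end to end: for each meter the agent runs discovery, checks whether the pollster belongs to a coordinated source (all of them log a coordination group of None here), records a heartbeat, emits a sample, and logs completion; rate pollsters with nothing to compare against are skipped. A minimal sketch of that per-pollster loop, using illustrative names rather than ceilometer's real classes:

    import datetime

    class Pollster:
        def __init__(self, name, coordination_group=None):
            self.name = name
            self.coordination_group = coordination_group  # None => no coordination

        def get_samples(self, resources):
            # Real pollsters read libvirt/OVS statistics; here we just
            # emit one sample per discovered resource set.
            return [{"meter": self.name, "volume": len(resources)}]

    def run_polling_cycle(pollsters, discover_resources):
        heartbeats = {}
        for pollster in pollsters:
            resources = discover_resources()  # "Executing discovery process"
            if not resources:
                print(f"Skip pollster {pollster.name}, no new resources found this cycle")
                continue
            if pollster.coordination_group is not None:
                # A coordinated source would consult a hashring here; every
                # pollster in this log has group None, so nothing happens.
                pass
            heartbeats[pollster.name] = datetime.datetime.utcnow().isoformat()
            for sample in pollster.get_samples(resources):
                print(f"{sample['meter']} volume: {sample['volume']}")
            print(f"Finished processing pollster [{pollster.name}].")
        return heartbeats

    run_polling_cycle(
        [Pollster("cpu"), Pollster("memory.usage")],
        discover_resources=lambda: ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"],
    )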
Oct  3 10:35:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:35:41.630 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:35:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:35:41.630 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:35:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:35:41.631 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
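The ovn_metadata_agent trio above (acquiring / acquired / released) is oslo.concurrency's named-lock logging around ProcessMonitor._check_child_processes. The same pattern with the real oslo_concurrency.lockutils API; the lock name matches the log, the function body is a stand-in:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Liveness checks run under the named lock so that respawn logic
        # never executes concurrently; oslo emits the acquire/held/release
        # DEBUG lines seen above around each call.
        pass

    # Equivalent explicit context-manager form. Do not nest it with the
    # decorated call: these internal named locks are not reentrant.
    with lockutils.lock("_check_child_processes"):
        pass  # critical section

    check_child_processes()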
Oct  3 10:35:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:42 compute-0 nova_compute[351685]: 2025-10-03 10:35:42.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:43 compute-0 nova_compute[351685]: 2025-10-03 10:35:43.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:35:46
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:35:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:35:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:47 compute-0 nova_compute[351685]: 2025-10-03 10:35:47.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:48 compute-0 nova_compute[351685]: 2025-10-03 10:35:48.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:50 compute-0 podman[464005]: 2025-10-03 10:35:50.851929621 +0000 UTC m=+0.088599869 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true)
Oct  3 10:35:50 compute-0 podman[464006]: 2025-10-03 10:35:50.879798284 +0000 UTC m=+0.127848297 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  3 10:35:50 compute-0 podman[464004]: 2025-10-03 10:35:50.887616514 +0000 UTC m=+0.140438129 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Oct  3 10:35:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:52 compute-0 nova_compute[351685]: 2025-10-03 10:35:52.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:53 compute-0 nova_compute[351685]: 2025-10-03 10:35:53.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:35:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300751676' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:35:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:35:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300751676' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
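The two mon_commands above ({"prefix":"df"} and {"prefix":"osd pool get-quota"} for the volumes pool) are what the client.openstack consumer at 192.168.122.10 sends to size that pool. The same calls can be issued with the python3-rados binding; the conffile path is the conventional default and the client's keyring is assumed to be configured:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
            # mon_command takes the JSON command string plus an input buffer
            # and returns (return_code, output_bytes, error_string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out[:80])
    finally:
        cluster.shutdown()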
Oct  3 10:35:54 compute-0 podman[464069]: 2025-10-03 10:35:54.828625779 +0000 UTC m=+0.086993538 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
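The pg_autoscaler numbers above are self-consistent: each raw PG target equals usage_ratio x bias x 300, a multiplier inferred from the logged values themselves and consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference, not something read from the cluster config). Checking a few of the logged pairs:

    import math

    def pg_target(usage_ratio, bias, base_pgs=300):
        # Raw, pre-quantization PG target as printed in the log lines;
        # the autoscaler then rounds to a power of two and applies
        # thresholds before changing pg_num.
        return usage_ratio * bias * base_pgs

    assert math.isclose(pg_target(0.000551649390343166, 1.0), 0.1654948171029498)      # vms
    assert math.isclose(pg_target(5.087256625643029e-07, 4.0), 0.0006104707950771635)  # cephfs.cephfs.meta
    assert math.isclose(pg_target(7.185749983720779e-06, 1.0), 0.0021557249951162337)  # .mgr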
Oct  3 10:35:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:35:57 compute-0 nova_compute[351685]: 2025-10-03 10:35:57.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:57 compute-0 podman[464089]: 2025-10-03 10:35:57.841215162 +0000 UTC m=+0.102985159 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:35:57 compute-0 podman[464090]: 2025-10-03 10:35:57.859365073 +0000 UTC m=+0.110467508 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:35:57 compute-0 podman[464091]: 2025-10-03 10:35:57.869994054 +0000 UTC m=+0.104166857 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:35:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:35:58 compute-0 nova_compute[351685]: 2025-10-03 10:35:58.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:35:59 compute-0 podman[157165]: time="2025-10-03T10:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:35:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:35:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9065 "" "Go-http-client/1.1"
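The two GETs above are the podman system service answering its libpod REST API over a unix socket (the Go-http-client user agent is the metrics collector). The same container-list query can be issued without third-party dependencies; the socket path below is the rootful default and is an assumption for this host:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")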
Oct  3 10:35:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:01 compute-0 openstack_network_exporter[367524]: ERROR   10:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:36:01 compute-0 openstack_network_exporter[367524]: ERROR   10:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:36:01 compute-0 openstack_network_exporter[367524]: ERROR   10:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:36:01 compute-0 openstack_network_exporter[367524]: ERROR   10:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:36:01 compute-0 openstack_network_exporter[367524]: ERROR   10:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
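The exporter errors above all reduce to one cause: no *.ctl control sockets where it looks. That is expected on a node that runs ovn-controller but not ovn-northd, and whose OVS uses the kernel (system) datapath, for which the dpif-netdev/pmd appctl calls have nothing to report. A quick existence check against the conventional runtime directories (paths assumed, not read from the exporter's config):

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "none found")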
Oct  3 10:36:01 compute-0 nova_compute[351685]: 2025-10-03 10:36:01.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:01 compute-0 nova_compute[351685]: 2025-10-03 10:36:01.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:36:01 compute-0 nova_compute[351685]: 2025-10-03 10:36:01.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:36:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:02 compute-0 nova_compute[351685]: 2025-10-03 10:36:02.185 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:36:02 compute-0 nova_compute[351685]: 2025-10-03 10:36:02.186 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:36:02 compute-0 nova_compute[351685]: 2025-10-03 10:36:02.186 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:36:02 compute-0 nova_compute[351685]: 2025-10-03 10:36:02.188 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:36:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:02 compute-0 nova_compute[351685]: 2025-10-03 10:36:02.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:03 compute-0 nova_compute[351685]: 2025-10-03 10:36:03.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:03 compute-0 nova_compute[351685]: 2025-10-03 10:36:03.098 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:36:03 compute-0 nova_compute[351685]: 2025-10-03 10:36:03.113 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:36:03 compute-0 nova_compute[351685]: 2025-10-03 10:36:03.114 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
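The heal cycle above follows a named-lock pattern: acquire "refresh_cache-<uuid>", force-refresh the cache, release. A minimal sketch of that pattern with oslo.concurrency; refresh_network_info() is a hypothetical stand-in for the Neutron round trip:

    from oslo_concurrency import lockutils

    def refresh_network_info(instance_uuid):
        # Hypothetical stand-in for the Neutron query in the log.
        print(f"refreshed network info for {instance_uuid}")

    def heal_instance_info_cache(instance_uuid):
        # Same named-lock convention as the log: "refresh_cache-<uuid>".
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            refresh_network_info(instance_uuid)

    heal_instance_info_cache("b43db93c-a4fe-46e9-8418-eedf4f5c135a")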
Oct  3 10:36:03 compute-0 nova_compute[351685]: 2025-10-03 10:36:03.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:03 compute-0 nova_compute[351685]: 2025-10-03 10:36:03.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:36:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:04 compute-0 podman[464151]: 2025-10-03 10:36:04.805872465 +0000 UTC m=+0.072814234 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 10:36:05 compute-0 nova_compute[351685]: 2025-10-03 10:36:05.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:06 compute-0 nova_compute[351685]: 2025-10-03 10:36:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:06 compute-0 nova_compute[351685]: 2025-10-03 10:36:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:07 compute-0 nova_compute[351685]: 2025-10-03 10:36:07.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:07 compute-0 nova_compute[351685]: 2025-10-03 10:36:07.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:07 compute-0 nova_compute[351685]: 2025-10-03 10:36:07.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:36:07 compute-0 nova_compute[351685]: 2025-10-03 10:36:07.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:36:07 compute-0 nova_compute[351685]: 2025-10-03 10:36:07.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:36:07 compute-0 nova_compute[351685]: 2025-10-03 10:36:07.757 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:36:07 compute-0 nova_compute[351685]: 2025-10-03 10:36:07.758 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:36:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:36:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475134045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.286 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
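The resource audit shells out to `ceph df` through oslo_concurrency.processutils, exactly as logged above (returned 0 in 0.529s). A minimal sketch of the same probe; the parsed field follows the `ceph df --format=json` schema:

    # Reproduce nova's storage-pool probe and parse the JSON reply.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"])  # cluster-wide capacity in bytes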
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.367 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.368 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.368 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.726 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.728 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3818MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.729 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.729 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.791 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.792 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.792 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:36:08 compute-0 nova_compute[351685]: 2025-10-03 10:36:08.830 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:36:09 compute-0 podman[464269]: 2025-10-03 10:36:09.008388631 +0000 UTC m=+0.100727897 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 10:36:09 compute-0 podman[464268]: 2025-10-03 10:36:09.010289002 +0000 UTC m=+0.108846588 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
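The health_status events above are podman's periodic healthcheck running the configured test ('/openstack/healthcheck ...') and recording health_status=healthy. The same check can be fired on demand; a minimal sketch using a container name taken from the log:

    # `podman healthcheck run NAME` executes the container's configured
    # test and exits 0 when healthy.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "kepler"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")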
Oct  3 10:36:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:36:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:36:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3362073173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:36:09 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:36:09 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:09 compute-0 nova_compute[351685]: 2025-10-03 10:36:09.290 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:36:09 compute-0 nova_compute[351685]: 2025-10-03 10:36:09.303 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:36:09 compute-0 nova_compute[351685]: 2025-10-03 10:36:09.322 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
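Placement derives schedulable capacity from this inventory as usable = (total - reserved) * allocation_ratio. A worked example with the numbers above:

    # Capacity implied by the logged inventory dict.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2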
Oct  3 10:36:09 compute-0 nova_compute[351685]: 2025-10-03 10:36:09.325 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:36:09 compute-0 nova_compute[351685]: 2025-10-03 10:36:09.326 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:36:09 compute-0 nova_compute[351685]: 2025-10-03 10:36:09.327 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:36:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:36:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:36:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 35ab7c90-10d6-4091-8d06-47efbcf08bce does not exist
Oct  3 10:36:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f5de62bf-6874-4fdc-a364-de398eac8aff does not exist
Oct  3 10:36:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 172c3205-446c-4b2e-83da-742224dc2b1c does not exist
Oct  3 10:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:36:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:36:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:36:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:36:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:36:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:36:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
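Each handle_command/audit pair above is one mon_command round trip: the client submits a JSON body with a "prefix" and the monitor dispatches it. A minimal sketch via the librados Python binding, assuming the client.openstack keyring is resolvable from ceph.conf:

    # Send the same {"prefix": "df"} command the audit log records.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(ret, json.loads(outbuf)["stats"]["total_bytes"])
    cluster.shutdown()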
Oct  3 10:36:11 compute-0 podman[464645]: 2025-10-03 10:36:11.083356143 +0000 UTC m=+0.071549123 container create c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 10:36:11 compute-0 podman[464645]: 2025-10-03 10:36:11.049944093 +0000 UTC m=+0.038137133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:36:11 compute-0 systemd[1]: Started libpod-conmon-c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a.scope.
Oct  3 10:36:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:36:11 compute-0 podman[464645]: 2025-10-03 10:36:11.201118676 +0000 UTC m=+0.189311706 container init c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:36:11 compute-0 podman[464645]: 2025-10-03 10:36:11.210856247 +0000 UTC m=+0.199049237 container start c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:36:11 compute-0 podman[464645]: 2025-10-03 10:36:11.217693806 +0000 UTC m=+0.205886786 container attach c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:36:11 compute-0 elastic_ardinghelli[464661]: 167 167
Oct  3 10:36:11 compute-0 systemd[1]: libpod-c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a.scope: Deactivated successfully.
Oct  3 10:36:11 compute-0 podman[464645]: 2025-10-03 10:36:11.226719166 +0000 UTC m=+0.214912156 container died c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:36:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b6ca2466796aa7e1b84fc4750f163e04898686507e0b810355b8ddca7313e5-merged.mount: Deactivated successfully.
Oct  3 10:36:11 compute-0 podman[464645]: 2025-10-03 10:36:11.303694661 +0000 UTC m=+0.291887661 container remove c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 10:36:11 compute-0 systemd[1]: libpod-conmon-c0a5af62fb45495f463feefb388bbdf4b2a9b093cf19b199e3ce99583f82667a.scope: Deactivated successfully.
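The create/init/start/attach/died/remove sequence above is cephadm probing the ceph image with a one-shot container; the "167 167" it prints is the ceph uid/gid. A minimal sketch of the same pattern, where the stat invocation is an assumption (the log does not show the command) and the image digest is copied from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container; --rm makes podman remove it on exit,
    # matching the died/remove events in the log.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())  # e.g. "167 167"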
Oct  3 10:36:11 compute-0 podman[464685]: 2025-10-03 10:36:11.583779292 +0000 UTC m=+0.077196253 container create 7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:36:11 compute-0 systemd[1]: Started libpod-conmon-7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f.scope.
Oct  3 10:36:11 compute-0 podman[464685]: 2025-10-03 10:36:11.553874694 +0000 UTC m=+0.047291655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:36:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7852c1d885cbf8e5b9b19a29cdd37f792ff128c71204249f1473f3efb2087d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7852c1d885cbf8e5b9b19a29cdd37f792ff128c71204249f1473f3efb2087d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7852c1d885cbf8e5b9b19a29cdd37f792ff128c71204249f1473f3efb2087d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7852c1d885cbf8e5b9b19a29cdd37f792ff128c71204249f1473f3efb2087d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7852c1d885cbf8e5b9b19a29cdd37f792ff128c71204249f1473f3efb2087d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:11 compute-0 podman[464685]: 2025-10-03 10:36:11.739791599 +0000 UTC m=+0.233208540 container init 7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goodall, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 10:36:11 compute-0 podman[464685]: 2025-10-03 10:36:11.762073503 +0000 UTC m=+0.255490434 container start 7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 10:36:11 compute-0 podman[464685]: 2025-10-03 10:36:11.76760326 +0000 UTC m=+0.261020191 container attach 7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:36:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:12 compute-0 nova_compute[351685]: 2025-10-03 10:36:12.342 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:12 compute-0 nova_compute[351685]: 2025-10-03 10:36:12.344 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:12 compute-0 nova_compute[351685]: 2025-10-03 10:36:12.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:12 compute-0 vigorous_goodall[464701]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:36:12 compute-0 vigorous_goodall[464701]: --> relative data size: 1.0
Oct  3 10:36:12 compute-0 vigorous_goodall[464701]: --> All data devices are unavailable
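The report above is cephadm's ceph-volume pass concluding that all three LVM data devices are already consumed by existing OSDs, so nothing new can be deployed. An illustrative availability test (not ceph-volume's actual implementation):

    # A device already tagged with ceph.osd_id is unavailable for reuse.
    def deployable(devices):
        return [d for d in devices
                if d["kind"] == "lvm" and not d["tags"].get("ceph.osd_id")]

    devs = [{"kind": "lvm", "tags": {"ceph.osd_id": "0"}},
            {"kind": "lvm", "tags": {"ceph.osd_id": "1"}},
            {"kind": "lvm", "tags": {"ceph.osd_id": "2"}}]
    print(deployable(devs))  # [] -> "All data devices are unavailable"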
Oct  3 10:36:12 compute-0 systemd[1]: libpod-7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f.scope: Deactivated successfully.
Oct  3 10:36:12 compute-0 systemd[1]: libpod-7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f.scope: Consumed 1.117s CPU time.
Oct  3 10:36:12 compute-0 podman[464730]: 2025-10-03 10:36:12.991352217 +0000 UTC m=+0.036213071 container died 7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 10:36:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7852c1d885cbf8e5b9b19a29cdd37f792ff128c71204249f1473f3efb2087d4-merged.mount: Deactivated successfully.
Oct  3 10:36:13 compute-0 nova_compute[351685]: 2025-10-03 10:36:13.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:13 compute-0 podman[464730]: 2025-10-03 10:36:13.072185056 +0000 UTC m=+0.117045880 container remove 7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_goodall, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 10:36:13 compute-0 systemd[1]: libpod-conmon-7957c8d20c6302f92c2e7eb95c2b534cbb8af89878d7e7732e3aece76d44126f.scope: Deactivated successfully.
Oct  3 10:36:13 compute-0 podman[464879]: 2025-10-03 10:36:13.896997146 +0000 UTC m=+0.059495787 container create 7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:36:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:13 compute-0 systemd[1]: Started libpod-conmon-7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291.scope.
Oct  3 10:36:13 compute-0 podman[464879]: 2025-10-03 10:36:13.872282984 +0000 UTC m=+0.034781655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:36:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:36:14 compute-0 podman[464879]: 2025-10-03 10:36:14.000902924 +0000 UTC m=+0.163401555 container init 7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_greider, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:36:14 compute-0 podman[464879]: 2025-10-03 10:36:14.012342051 +0000 UTC m=+0.174840682 container start 7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:36:14 compute-0 romantic_greider[464895]: 167 167
Oct  3 10:36:14 compute-0 systemd[1]: libpod-7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291.scope: Deactivated successfully.
Oct  3 10:36:14 compute-0 podman[464879]: 2025-10-03 10:36:14.017824636 +0000 UTC m=+0.180323357 container attach 7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_greider, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:36:14 compute-0 conmon[464895]: conmon 7d9f73a40f98bb0681ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291.scope/container/memory.events
Oct  3 10:36:14 compute-0 podman[464879]: 2025-10-03 10:36:14.019066765 +0000 UTC m=+0.181565416 container died 7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_greider, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 10:36:14 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 10:36:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-12e512233a9b515622295e1e0f608dcd7d106ea2eea58f5337f94ffa59440b93-merged.mount: Deactivated successfully.
Oct  3 10:36:14 compute-0 podman[464879]: 2025-10-03 10:36:14.067505207 +0000 UTC m=+0.230003828 container remove 7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_greider, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:36:14 compute-0 systemd[1]: libpod-conmon-7d9f73a40f98bb0681ae34caefd05377e5ed72282c551ce229068fc702180291.scope: Deactivated successfully.
Oct  3 10:36:14 compute-0 podman[464919]: 2025-10-03 10:36:14.293793345 +0000 UTC m=+0.066728958 container create 97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:36:14 compute-0 systemd[1]: Started libpod-conmon-97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b.scope.
Oct  3 10:36:14 compute-0 podman[464919]: 2025-10-03 10:36:14.269226469 +0000 UTC m=+0.042162102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:36:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c302ed4c77ea3cf8c363d63b7bdd27eb23d639e83c3dc7eb5309be04f486d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c302ed4c77ea3cf8c363d63b7bdd27eb23d639e83c3dc7eb5309be04f486d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c302ed4c77ea3cf8c363d63b7bdd27eb23d639e83c3dc7eb5309be04f486d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40c302ed4c77ea3cf8c363d63b7bdd27eb23d639e83c3dc7eb5309be04f486d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:14 compute-0 podman[464919]: 2025-10-03 10:36:14.430521354 +0000 UTC m=+0.203456977 container init 97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 10:36:14 compute-0 podman[464919]: 2025-10-03 10:36:14.453837282 +0000 UTC m=+0.226772865 container start 97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:36:14 compute-0 podman[464919]: 2025-10-03 10:36:14.458415988 +0000 UTC m=+0.231351581 container attach 97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:36:14 compute-0 nova_compute[351685]: 2025-10-03 10:36:14.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]: {
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:    "0": [
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:        {
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "devices": [
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "/dev/loop3"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            ],
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_name": "ceph_lv0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_size": "21470642176",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "name": "ceph_lv0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "tags": {
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cluster_name": "ceph",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.crush_device_class": "",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.encrypted": "0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osd_id": "0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.type": "block",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.vdo": "0"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            },
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "type": "block",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "vg_name": "ceph_vg0"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:        }
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:    ],
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:    "1": [
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:        {
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "devices": [
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "/dev/loop4"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            ],
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_name": "ceph_lv1",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_size": "21470642176",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "name": "ceph_lv1",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "tags": {
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cluster_name": "ceph",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.crush_device_class": "",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.encrypted": "0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osd_id": "1",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.type": "block",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.vdo": "0"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            },
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "type": "block",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "vg_name": "ceph_vg1"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:        }
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:    ],
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:    "2": [
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:        {
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "devices": [
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "/dev/loop5"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            ],
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_name": "ceph_lv2",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_size": "21470642176",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "name": "ceph_lv2",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "tags": {
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.cluster_name": "ceph",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.crush_device_class": "",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.encrypted": "0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osd_id": "2",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.type": "block",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:                "ceph.vdo": "0"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            },
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "type": "block",
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:            "vg_name": "ceph_vg2"
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:        }
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]:    ]
Oct  3 10:36:15 compute-0 recursing_archimedes[464934]: }
Oct  3 10:36:15 compute-0 systemd[1]: libpod-97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b.scope: Deactivated successfully.
Oct  3 10:36:15 compute-0 podman[464919]: 2025-10-03 10:36:15.307718942 +0000 UTC m=+1.080654525 container died 97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-40c302ed4c77ea3cf8c363d63b7bdd27eb23d639e83c3dc7eb5309be04f486d7-merged.mount: Deactivated successfully.
Oct  3 10:36:15 compute-0 podman[464919]: 2025-10-03 10:36:15.386690412 +0000 UTC m=+1.159625995 container remove 97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:36:15 compute-0 systemd[1]: libpod-conmon-97c837451000b073d6b6f6e35de3928f580a983b870c3d2ad1e3d6e62c53c19b.scope: Deactivated successfully.
Oct  3 10:36:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:36:16 compute-0 podman[465094]: 2025-10-03 10:36:16.198338129 +0000 UTC m=+0.050045954 container create 5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:36:16 compute-0 systemd[1]: Started libpod-conmon-5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e.scope.
Oct  3 10:36:16 compute-0 podman[465094]: 2025-10-03 10:36:16.180386543 +0000 UTC m=+0.032094388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:36:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:36:16 compute-0 podman[465094]: 2025-10-03 10:36:16.315516432 +0000 UTC m=+0.167224267 container init 5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclaren, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:36:16 compute-0 podman[465094]: 2025-10-03 10:36:16.327773335 +0000 UTC m=+0.179481160 container start 5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:36:16 compute-0 podman[465094]: 2025-10-03 10:36:16.332516047 +0000 UTC m=+0.184223902 container attach 5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclaren, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:36:16 compute-0 lucid_mclaren[465107]: 167 167
Oct  3 10:36:16 compute-0 systemd[1]: libpod-5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e.scope: Deactivated successfully.
Oct  3 10:36:16 compute-0 conmon[465107]: conmon 5c1ab6bf9482afc96559 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e.scope/container/memory.events
Oct  3 10:36:16 compute-0 podman[465113]: 2025-10-03 10:36:16.399814862 +0000 UTC m=+0.041921023 container died 5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-01323e041a7a446fc144f57da9e1cbdbdd7c8e3edae39e6e7e2e27f912b7b5d6-merged.mount: Deactivated successfully.
Oct  3 10:36:16 compute-0 podman[465113]: 2025-10-03 10:36:16.449887256 +0000 UTC m=+0.091993367 container remove 5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:36:16 compute-0 systemd[1]: libpod-conmon-5c1ab6bf9482afc96559954e4744b68da7b999906c6355734721e17f53585e1e.scope: Deactivated successfully.
Oct  3 10:36:16 compute-0 podman[465135]: 2025-10-03 10:36:16.69384515 +0000 UTC m=+0.065867421 container create a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  3 10:36:16 compute-0 nova_compute[351685]: 2025-10-03 10:36:16.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:16 compute-0 nova_compute[351685]: 2025-10-03 10:36:16.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 10:36:16 compute-0 nova_compute[351685]: 2025-10-03 10:36:16.747 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 10:36:16 compute-0 systemd[1]: Started libpod-conmon-a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631.scope.
Oct  3 10:36:16 compute-0 podman[465135]: 2025-10-03 10:36:16.669298844 +0000 UTC m=+0.041321135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:36:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98639cad6dde9911a67dbeb51705d4f59df6130419a55453284a14714d165e49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98639cad6dde9911a67dbeb51705d4f59df6130419a55453284a14714d165e49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98639cad6dde9911a67dbeb51705d4f59df6130419a55453284a14714d165e49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98639cad6dde9911a67dbeb51705d4f59df6130419a55453284a14714d165e49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:36:16 compute-0 podman[465135]: 2025-10-03 10:36:16.839815496 +0000 UTC m=+0.211837787 container init a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 10:36:16 compute-0 podman[465135]: 2025-10-03 10:36:16.862182352 +0000 UTC m=+0.234204653 container start a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 10:36:16 compute-0 podman[465135]: 2025-10-03 10:36:16.876849921 +0000 UTC m=+0.248872182 container attach a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:36:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:17 compute-0 nova_compute[351685]: 2025-10-03 10:36:17.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]: {
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "osd_id": 1,
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "type": "bluestore"
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:    },
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "osd_id": 2,
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "type": "bluestore"
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:    },
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "osd_id": 0,
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:        "type": "bluestore"
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]:    }
Oct  3 10:36:17 compute-0 suspicious_satoshi[465152]: }
Oct  3 10:36:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:17 compute-0 systemd[1]: libpod-a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631.scope: Deactivated successfully.
Oct  3 10:36:17 compute-0 systemd[1]: libpod-a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631.scope: Consumed 1.081s CPU time.
Oct  3 10:36:17 compute-0 podman[465135]: 2025-10-03 10:36:17.945881884 +0000 UTC m=+1.317904195 container died a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:36:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-98639cad6dde9911a67dbeb51705d4f59df6130419a55453284a14714d165e49-merged.mount: Deactivated successfully.
Oct  3 10:36:18 compute-0 nova_compute[351685]: 2025-10-03 10:36:18.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:18 compute-0 podman[465135]: 2025-10-03 10:36:18.064619186 +0000 UTC m=+1.436641447 container remove a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:36:18 compute-0 systemd[1]: libpod-conmon-a1a0eff0e75d9a248b051b68cc2672f5b9f13f61f733ea14faf4cfccd9aa1631.scope: Deactivated successfully.
Oct  3 10:36:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:36:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:36:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ebb52ccc-744e-4a9a-8a31-0c4b4ce42a3f does not exist
Oct  3 10:36:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 742ad2f7-45ee-4b1e-b98b-73e3da6b90a5 does not exist
Oct  3 10:36:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:36:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:21 compute-0 podman[465246]: 2025-10-03 10:36:21.818522055 +0000 UTC m=+0.078463574 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc.)
Oct  3 10:36:21 compute-0 podman[465247]: 2025-10-03 10:36:21.818935759 +0000 UTC m=+0.078974101 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm)
Oct  3 10:36:21 compute-0 podman[465248]: 2025-10-03 10:36:21.850981246 +0000 UTC m=+0.108802686 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:36:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:22 compute-0 nova_compute[351685]: 2025-10-03 10:36:22.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:23 compute-0 nova_compute[351685]: 2025-10-03 10:36:23.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:24 compute-0 nova_compute[351685]: 2025-10-03 10:36:24.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:36:24 compute-0 nova_compute[351685]: 2025-10-03 10:36:24.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 10:36:25 compute-0 podman[465311]: 2025-10-03 10:36:25.81296877 +0000 UTC m=+0.073357431 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:36:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:27 compute-0 nova_compute[351685]: 2025-10-03 10:36:27.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:28 compute-0 nova_compute[351685]: 2025-10-03 10:36:28.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:28 compute-0 podman[465329]: 2025-10-03 10:36:28.832517627 +0000 UTC m=+0.079653853 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:36:28 compute-0 podman[465330]: 2025-10-03 10:36:28.842045953 +0000 UTC m=+0.090557292 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:36:28 compute-0 podman[465331]: 2025-10-03 10:36:28.843100876 +0000 UTC m=+0.075472338 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  3 10:36:29 compute-0 podman[157165]: time="2025-10-03T10:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:36:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:36:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9071 "" "Go-http-client/1.1"
Oct  3 10:36:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:31 compute-0 openstack_network_exporter[367524]: ERROR   10:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:36:31 compute-0 openstack_network_exporter[367524]: ERROR   10:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:36:31 compute-0 openstack_network_exporter[367524]: ERROR   10:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:36:31 compute-0 openstack_network_exporter[367524]: ERROR   10:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:36:31 compute-0 openstack_network_exporter[367524]: ERROR   10:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:36:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:32 compute-0 nova_compute[351685]: 2025-10-03 10:36:32.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:33 compute-0 nova_compute[351685]: 2025-10-03 10:36:33.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:35 compute-0 podman[465389]: 2025-10-03 10:36:35.812788364 +0000 UTC m=+0.081321056 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:36:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:37 compute-0 nova_compute[351685]: 2025-10-03 10:36:37.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:38 compute-0 nova_compute[351685]: 2025-10-03 10:36:38.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:36:39 compute-0 podman[465407]: 2025-10-03 10:36:39.855447928 +0000 UTC m=+0.104953714 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:36:39 compute-0 podman[465408]: 2025-10-03 10:36:39.901199582 +0000 UTC m=+0.141083169 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc.)
Oct  3 10:36:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:36:41.631 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:36:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:36:41.632 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:36:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:36:41.633 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
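The acquire/wait/release triple above is the standard trace oslo.concurrency emits around neutron's ProcessMonitor._check_child_processes. A minimal sketch of the same pattern, assuming only the public oslo.concurrency API (pip install oslo.concurrency):

    from oslo_concurrency import lockutils

    # The decorator produces the same three journal lines seen above:
    # "Acquiring lock ...", "Lock ... acquired ... waited Ns",
    # "Lock ... released ... held Ns".
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # runs with the named in-process lock held

    check_child_processes()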
Oct  3 10:36:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
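The mon line above is periodic cache autotuning; its figures are raw byte counts. Converting them for readability (values copied verbatim from the line):

    # 1020054731 B ~ 972.8 MiB total cache budget,
    # 348127232 B = 332.0 MiB inc/full alloc, 318767104 B = 304.0 MiB kv alloc.
    for name, v in [("cache_size", 1020054731),
                    ("inc/full_alloc", 348127232),
                    ("kv_alloc", 318767104)]:
        print(f"{name}: {v / 2**20:.1f} MiB")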
Oct  3 10:36:42 compute-0 nova_compute[351685]: 2025-10-03 10:36:42.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:36:43 compute-0 nova_compute[351685]: 2025-10-03 10:36:43.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:36:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:36:46
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', 'default.rgw.log', 'backups', 'default.rgw.control', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
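The balancer block above is one automatic optimization pass: mode upmap, a 5% max-misplaced budget, eleven candidate pools, and 0/10 changes prepared, meaning the PG distribution already satisfies the balancer. The same state can be checked interactively; a sketch using the stock ceph CLI (requires admin access to the cluster):

    import subprocess

    # Prints the balancer's active flag, mode and last optimize plan
    # (auto_2025-10-03_10:36:46 in the lines above).
    subprocess.run(["ceph", "balancer", "status"], check=True)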
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:36:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
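TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler are rbd_support threads in ceph-mgr reloading per-pool schedules; the empty start_after= values indicate none are defined. Were any defined, the stock rbd CLI could list them per pool (pool names taken from the lines above; a sketch, not output from this system):

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        # Trash-purge schedules for the pool; empty output means none.
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--pool", pool])
        # Mirror snapshot schedules likewise.
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool])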
Oct  3 10:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:47 compute-0 nova_compute[351685]: 2025-10-03 10:36:47.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:36:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:48 compute-0 nova_compute[351685]: 2025-10-03 10:36:48.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:36:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:52 compute-0 nova_compute[351685]: 2025-10-03 10:36:52.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:36:52 compute-0 podman[465453]: 2025-10-03 10:36:52.827433784 +0000 UTC m=+0.080027463 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2)
Oct  3 10:36:52 compute-0 podman[465452]: 2025-10-03 10:36:52.83573004 +0000 UTC m=+0.082350729 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct  3 10:36:52 compute-0 podman[465454]: 2025-10-03 10:36:52.869900095 +0000 UTC m=+0.121362659 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 10:36:53 compute-0 nova_compute[351685]: 2025-10-03 10:36:53.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:36:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:36:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1379347005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:36:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:36:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1379347005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
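The two audited mon commands above are a Ceph-backed OpenStack client polling capacity: a cluster-wide df plus a per-pool quota query against volumes. They can be reproduced with the stock CLI using the same client identity the audit lines show (the df key names below follow the usual ceph df --format json layout):

    import json, subprocess

    def mon_json(*args):
        # Same identity and conf path as client.openstack in the audit above.
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    df = mon_json("df")                                      # {"prefix":"df"}
    quota = mon_json("osd", "pool", "get-quota", "volumes")  # per-pool quota
    print(df["stats"]["total_avail_bytes"], quota)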
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
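Each pg_autoscaler line above computes a raw PG target as capacity ratio x bias x a cluster-wide PG budget, then quantizes to a power of two (leaving pg_num alone unless the change is large enough). From these numbers the budget works out to exactly 300 for every pool, which would be consistent with the default mon_target_pg_per_osd=100 across three OSDs backing this 60 GiB cluster; that mapping is an inference, not something the log states. Checking two of the lines:

    # Ratios copied verbatim from the pg_autoscaler lines above.
    budget = 300  # implied: pg_target / (ratio * bias) is 300 for every pool

    print(7.185749983720779e-06 * 1.0 * budget)  # 0.0021557249951162337 -> 1  ('.mgr')
    print(5.087256625643029e-07 * 4.0 * budget)  # 0.0006104707950771635 -> 16 ('cephfs.cephfs.meta')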
Oct  3 10:36:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:56 compute-0 podman[465515]: 2025-10-03 10:36:56.821690641 +0000 UTC m=+0.088006809 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:36:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:36:57 compute-0 nova_compute[351685]: 2025-10-03 10:36:57.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:36:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:36:58 compute-0 nova_compute[351685]: 2025-10-03 10:36:58.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:36:59 compute-0 podman[157165]: time="2025-10-03T10:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:36:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:36:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9072 "" "Go-http-client/1.1"
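The two GET lines above are the podman system service answering libpod REST calls on its unix socket (the podman_exporter's CONTAINER_HOST points at /run/podman/podman.sock per its config_data). A minimal standard-library sketch issuing the same containers/json request over that socket:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; the host part is ignored."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(conn.getresponse().read()), "bytes")  # ~46 kB in the log above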
Oct  3 10:36:59 compute-0 podman[465535]: 2025-10-03 10:36:59.852897122 +0000 UTC m=+0.113465665 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:36:59 compute-0 podman[465536]: 2025-10-03 10:36:59.857013414 +0000 UTC m=+0.096921215 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct  3 10:36:59 compute-0 podman[465534]: 2025-10-03 10:36:59.876897611 +0000 UTC m=+0.125288324 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:36:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:01 compute-0 openstack_network_exporter[367524]: ERROR   10:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:37:01 compute-0 openstack_network_exporter[367524]: ERROR   10:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:37:01 compute-0 openstack_network_exporter[367524]: ERROR   10:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:37:01 compute-0 openstack_network_exporter[367524]: ERROR   10:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:37:01 compute-0 openstack_network_exporter[367524]: ERROR   10:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
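The errors above are openstack-network-exporter failing to reach local daemons over their ovs-appctl control sockets (ovsdb-server, ovn-northd), plus two dpif-netdev queries that reached ovs-vswitchd but found no userspace datapath, the usual situation on a kernel-datapath compute node. ovn-northd does not run on a compute node at all (it lives on the control plane), so those two lines are expected noise here. The sockets it looks for are plain *.ctl files; a quick existence check (paths are the defaults this container mounts per its config_data; adjust for other layouts):

    import glob

    # ovs-appctl control sockets are named <daemon>.<pid>.ctl in the run dirs.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none found")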
Oct  3 10:37:01 compute-0 nova_compute[351685]: 2025-10-03 10:37:01.742 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:01 compute-0 nova_compute[351685]: 2025-10-03 10:37:01.742 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:37:01 compute-0 nova_compute[351685]: 2025-10-03 10:37:01.743 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:37:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:02 compute-0 nova_compute[351685]: 2025-10-03 10:37:02.161 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:37:02 compute-0 nova_compute[351685]: 2025-10-03 10:37:02.161 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:37:02 compute-0 nova_compute[351685]: 2025-10-03 10:37:02.162 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:37:02 compute-0 nova_compute[351685]: 2025-10-03 10:37:02.162 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:37:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:02 compute-0 nova_compute[351685]: 2025-10-03 10:37:02.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:03 compute-0 nova_compute[351685]: 2025-10-03 10:37:03.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:03 compute-0 nova_compute[351685]: 2025-10-03 10:37:03.652 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:37:03 compute-0 nova_compute[351685]: 2025-10-03 10:37:03.952 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:37:03 compute-0 nova_compute[351685]: 2025-10-03 10:37:03.953 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:37:03 compute-0 nova_compute[351685]: 2025-10-03 10:37:03.954 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:03 compute-0 nova_compute[351685]: 2025-10-03 10:37:03.954 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:37:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:06 compute-0 nova_compute[351685]: 2025-10-03 10:37:06.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:06 compute-0 nova_compute[351685]: 2025-10-03 10:37:06.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:06 compute-0 podman[465593]: 2025-10-03 10:37:06.838777511 +0000 UTC m=+0.092946909 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0)
Oct  3 10:37:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:07 compute-0 nova_compute[351685]: 2025-10-03 10:37:07.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:07 compute-0 nova_compute[351685]: 2025-10-03 10:37:07.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:08 compute-0 nova_compute[351685]: 2025-10-03 10:37:08.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:08 compute-0 nova_compute[351685]: 2025-10-03 10:37:08.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:08 compute-0 nova_compute[351685]: 2025-10-03 10:37:08.777 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:37:08 compute-0 nova_compute[351685]: 2025-10-03 10:37:08.778 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:37:08 compute-0 nova_compute[351685]: 2025-10-03 10:37:08.778 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:37:08 compute-0 nova_compute[351685]: 2025-10-03 10:37:08.780 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:37:08 compute-0 nova_compute[351685]: 2025-10-03 10:37:08.780 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:37:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:37:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4073631716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:37:09 compute-0 nova_compute[351685]: 2025-10-03 10:37:09.322 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:37:09 compute-0 nova_compute[351685]: 2025-10-03 10:37:09.580 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:37:09 compute-0 nova_compute[351685]: 2025-10-03 10:37:09.580 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:37:09 compute-0 nova_compute[351685]: 2025-10-03 10:37:09.580 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:37:09 compute-0 nova_compute[351685]: 2025-10-03 10:37:09.940 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:37:09 compute-0 nova_compute[351685]: 2025-10-03 10:37:09.941 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3836MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:37:09 compute-0 nova_compute[351685]: 2025-10-03 10:37:09.941 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:37:09 compute-0 nova_compute[351685]: 2025-10-03 10:37:09.942 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:37:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:10 compute-0 nova_compute[351685]: 2025-10-03 10:37:10.356 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:37:10 compute-0 nova_compute[351685]: 2025-10-03 10:37:10.357 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:37:10 compute-0 nova_compute[351685]: 2025-10-03 10:37:10.357 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:37:10 compute-0 nova_compute[351685]: 2025-10-03 10:37:10.501 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:37:10 compute-0 podman[465654]: 2025-10-03 10:37:10.816724695 +0000 UTC m=+0.080532700 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:37:10 compute-0 podman[465655]: 2025-10-03 10:37:10.833029407 +0000 UTC m=+0.093971000 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, architecture=x86_64, release-0.7.12=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, io.buildah.version=1.29.0)
Oct  3 10:37:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:37:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/172358993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:37:11 compute-0 nova_compute[351685]: 2025-10-03 10:37:11.140 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:37:11 compute-0 nova_compute[351685]: 2025-10-03 10:37:11.154 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:37:11 compute-0 nova_compute[351685]: 2025-10-03 10:37:11.231 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:37:11 compute-0 nova_compute[351685]: 2025-10-03 10:37:11.233 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:37:11 compute-0 nova_compute[351685]: 2025-10-03 10:37:11.233 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
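The inventory reported to Placement a few lines up determines schedulable capacity via capacity = (total - reserved) * allocation_ratio. Checking it against the logged values:

    # Values copied from the "Inventory has not changed" line above.
    vcpu = (8 - 0) * 4.0       # 32 schedulable VCPUs (1 allocated per the audit)
    ram  = (7679 - 512) * 1.0  # 7167 MB schedulable memory
    disk = (59 - 1) * 0.9      # 52.2 GB schedulable disk
    print(vcpu, ram, disk)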
Oct  3 10:37:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:12 compute-0 nova_compute[351685]: 2025-10-03 10:37:12.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:13 compute-0 nova_compute[351685]: 2025-10-03 10:37:13.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:14 compute-0 nova_compute[351685]: 2025-10-03 10:37:14.234 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:14 compute-0 nova_compute[351685]: 2025-10-03 10:37:14.235 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:14 compute-0 nova_compute[351685]: 2025-10-03 10:37:14.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:37:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:37:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:17 compute-0 nova_compute[351685]: 2025-10-03 10:37:17.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:18 compute-0 nova_compute[351685]: 2025-10-03 10:37:18.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 10:37:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:37:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:37:19 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:37:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:37:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:37:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:37:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:37:19 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b28b1d75-d661-476e-b5fc-0944d3877361 does not exist
Oct  3 10:37:19 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4d893ba8-f8dc-4bb4-ad03-23840d86a45f does not exist
Oct  3 10:37:19 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4426ca8f-ac13-45ab-b2db-1c63aba7301e does not exist
Oct  3 10:37:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:37:19 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:37:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:37:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:37:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:37:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:37:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:37:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:37:19 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:37:19 compute-0 nova_compute[351685]: 2025-10-03 10:37:19.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:37:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:20 compute-0 podman[465965]: 2025-10-03 10:37:20.30915984 +0000 UTC m=+0.040392635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:37:20 compute-0 podman[465965]: 2025-10-03 10:37:20.719109691 +0000 UTC m=+0.450342446 container create 50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:37:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:37:21 compute-0 systemd[1]: Started libpod-conmon-50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729.scope.
Oct  3 10:37:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:37:21 compute-0 podman[465965]: 2025-10-03 10:37:21.255989977 +0000 UTC m=+0.987222782 container init 50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ardinghelli, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:37:21 compute-0 podman[465965]: 2025-10-03 10:37:21.27666351 +0000 UTC m=+1.007896255 container start 50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ardinghelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:37:21 compute-0 sleepy_ardinghelli[465981]: 167 167
Oct  3 10:37:21 compute-0 systemd[1]: libpod-50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729.scope: Deactivated successfully.
Oct  3 10:37:21 compute-0 conmon[465981]: conmon 50354566abb407e9cc5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729.scope/container/memory.events
Oct  3 10:37:21 compute-0 podman[465965]: 2025-10-03 10:37:21.411145377 +0000 UTC m=+1.142378162 container attach 50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ardinghelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:37:21 compute-0 podman[465965]: 2025-10-03 10:37:21.411742117 +0000 UTC m=+1.142974852 container died 50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:37:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb8b6a5728211068586636ac0945505753cc776aef03e3cafdf7bd8969936c34-merged.mount: Deactivated successfully.
Oct  3 10:37:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:22 compute-0 podman[465965]: 2025-10-03 10:37:22.104314489 +0000 UTC m=+1.835547194 container remove 50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_ardinghelli, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:37:22 compute-0 systemd[1]: libpod-conmon-50354566abb407e9cc5e4dbde8462b4c6bf4d573e4b58290de104ff23dbfd729.scope: Deactivated successfully.
Oct  3 10:37:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:22 compute-0 podman[466004]: 2025-10-03 10:37:22.283486398 +0000 UTC m=+0.028437451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:37:22 compute-0 podman[466004]: 2025-10-03 10:37:22.387111717 +0000 UTC m=+0.132062790 container create 81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:37:22 compute-0 nova_compute[351685]: 2025-10-03 10:37:22.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:22 compute-0 systemd[1]: Started libpod-conmon-81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c.scope.
Oct  3 10:37:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7096b16baa38719868d0656c802e72264caff6dcf7ea0171a4e60e8e1c782f13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7096b16baa38719868d0656c802e72264caff6dcf7ea0171a4e60e8e1c782f13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7096b16baa38719868d0656c802e72264caff6dcf7ea0171a4e60e8e1c782f13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7096b16baa38719868d0656c802e72264caff6dcf7ea0171a4e60e8e1c782f13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7096b16baa38719868d0656c802e72264caff6dcf7ea0171a4e60e8e1c782f13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:22 compute-0 podman[466004]: 2025-10-03 10:37:22.842179534 +0000 UTC m=+0.587130587 container init 81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_montalcini, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:37:22 compute-0 podman[466004]: 2025-10-03 10:37:22.875180611 +0000 UTC m=+0.620131684 container start 81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:37:23 compute-0 podman[466004]: 2025-10-03 10:37:23.036442975 +0000 UTC m=+0.781394118 container attach 81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:37:23 compute-0 nova_compute[351685]: 2025-10-03 10:37:23.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:23 compute-0 podman[466032]: 2025-10-03 10:37:23.838643961 +0000 UTC m=+0.088988711 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true)
Oct  3 10:37:23 compute-0 podman[466031]: 2025-10-03 10:37:23.863181477 +0000 UTC m=+0.110235172 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm)
Oct  3 10:37:23 compute-0 podman[466034]: 2025-10-03 10:37:23.905829653 +0000 UTC m=+0.138358443 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  3 10:37:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:24 compute-0 brave_montalcini[466018]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:37:24 compute-0 brave_montalcini[466018]: --> relative data size: 1.0
Oct  3 10:37:24 compute-0 brave_montalcini[466018]: --> All data devices are unavailable
Oct  3 10:37:24 compute-0 systemd[1]: libpod-81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c.scope: Deactivated successfully.
Oct  3 10:37:24 compute-0 systemd[1]: libpod-81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c.scope: Consumed 1.124s CPU time.
Oct  3 10:37:24 compute-0 podman[466004]: 2025-10-03 10:37:24.058861924 +0000 UTC m=+1.803813007 container died 81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:37:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7096b16baa38719868d0656c802e72264caff6dcf7ea0171a4e60e8e1c782f13-merged.mount: Deactivated successfully.
Oct  3 10:37:25 compute-0 podman[466004]: 2025-10-03 10:37:25.246510445 +0000 UTC m=+2.991461488 container remove 81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:37:25 compute-0 systemd[1]: libpod-conmon-81b7a1cb14bfa23921d8d41daa2d5bfc51c34f61440ae5d78f204259c9ad1f6c.scope: Deactivated successfully.
Oct  3 10:37:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:26 compute-0 podman[466258]: 2025-10-03 10:37:26.176831494 +0000 UTC m=+0.124483089 container create d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:37:26 compute-0 podman[466258]: 2025-10-03 10:37:26.090373785 +0000 UTC m=+0.038025370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:37:26 compute-0 systemd[1]: Started libpod-conmon-d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade.scope.
Oct  3 10:37:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:37:26 compute-0 podman[466258]: 2025-10-03 10:37:26.467200784 +0000 UTC m=+0.414852379 container init d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:37:26 compute-0 podman[466258]: 2025-10-03 10:37:26.480433998 +0000 UTC m=+0.428085553 container start d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:37:26 compute-0 magical_diffie[466274]: 167 167
Oct  3 10:37:26 compute-0 systemd[1]: libpod-d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade.scope: Deactivated successfully.
Oct  3 10:37:26 compute-0 podman[466258]: 2025-10-03 10:37:26.562863558 +0000 UTC m=+0.510515113 container attach d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:37:26 compute-0 podman[466258]: 2025-10-03 10:37:26.563179199 +0000 UTC m=+0.510830754 container died d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6205376bcd56551d01a2b2737010d9a2b2749f96ba272fec3578494dcb43dbcb-merged.mount: Deactivated successfully.
Oct  3 10:37:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:27 compute-0 podman[466258]: 2025-10-03 10:37:27.404835676 +0000 UTC m=+1.352487241 container remove d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:37:27 compute-0 systemd[1]: libpod-conmon-d5072c338d2f4badb227302e0121736fd98ec582544d7145f80230a383e1fade.scope: Deactivated successfully.
Oct  3 10:37:27 compute-0 podman[466290]: 2025-10-03 10:37:27.513425914 +0000 UTC m=+0.629722420 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:37:27 compute-0 nova_compute[351685]: 2025-10-03 10:37:27.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:27 compute-0 podman[466315]: 2025-10-03 10:37:27.604719608 +0000 UTC m=+0.045940743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:37:27 compute-0 podman[466315]: 2025-10-03 10:37:27.867686881 +0000 UTC m=+0.308907986 container create 03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:37:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:28 compute-0 systemd[1]: Started libpod-conmon-03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81.scope.
Oct  3 10:37:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:37:28 compute-0 nova_compute[351685]: 2025-10-03 10:37:28.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffcd7de9fb52ba00ea139006e57d9d8e487f5861836f1b03b6a18f2fb789041/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffcd7de9fb52ba00ea139006e57d9d8e487f5861836f1b03b6a18f2fb789041/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffcd7de9fb52ba00ea139006e57d9d8e487f5861836f1b03b6a18f2fb789041/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dffcd7de9fb52ba00ea139006e57d9d8e487f5861836f1b03b6a18f2fb789041/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:28 compute-0 podman[466315]: 2025-10-03 10:37:28.224265882 +0000 UTC m=+0.665487017 container init 03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 10:37:28 compute-0 podman[466315]: 2025-10-03 10:37:28.23481382 +0000 UTC m=+0.676034925 container start 03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:37:28 compute-0 podman[466315]: 2025-10-03 10:37:28.378840653 +0000 UTC m=+0.820061818 container attach 03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]: {
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:    "0": [
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:        {
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "devices": [
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "/dev/loop3"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            ],
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_name": "ceph_lv0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_size": "21470642176",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "name": "ceph_lv0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "tags": {
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cluster_name": "ceph",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.crush_device_class": "",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.encrypted": "0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osd_id": "0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.type": "block",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.vdo": "0"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            },
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "type": "block",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "vg_name": "ceph_vg0"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:        }
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:    ],
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:    "1": [
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:        {
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "devices": [
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "/dev/loop4"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            ],
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_name": "ceph_lv1",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_size": "21470642176",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "name": "ceph_lv1",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "tags": {
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cluster_name": "ceph",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.crush_device_class": "",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.encrypted": "0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osd_id": "1",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.type": "block",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.vdo": "0"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            },
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "type": "block",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "vg_name": "ceph_vg1"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:        }
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:    ],
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:    "2": [
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:        {
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "devices": [
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "/dev/loop5"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            ],
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_name": "ceph_lv2",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_size": "21470642176",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "name": "ceph_lv2",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "tags": {
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.cluster_name": "ceph",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.crush_device_class": "",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.encrypted": "0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osd_id": "2",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.type": "block",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:                "ceph.vdo": "0"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            },
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "type": "block",
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:            "vg_name": "ceph_vg2"
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:        }
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]:    ]
Oct  3 10:37:29 compute-0 blissful_driscoll[466330]: }
Oct  3 10:37:29 compute-0 systemd[1]: libpod-03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81.scope: Deactivated successfully.
Oct  3 10:37:29 compute-0 podman[466315]: 2025-10-03 10:37:29.095065454 +0000 UTC m=+1.536286579 container died 03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 10:37:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dffcd7de9fb52ba00ea139006e57d9d8e487f5861836f1b03b6a18f2fb789041-merged.mount: Deactivated successfully.
Oct  3 10:37:29 compute-0 podman[157165]: time="2025-10-03T10:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:37:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:30 compute-0 podman[466315]: 2025-10-03 10:37:30.008819082 +0000 UTC m=+2.450040217 container remove 03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_driscoll, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 10:37:30 compute-0 systemd[1]: libpod-conmon-03bc7aa217a59af21d5b16e85a7c4fd39b766c5bbf19ca6fb99186fb263ded81.scope: Deactivated successfully.
Oct  3 10:37:30 compute-0 podman[157165]: @ - - [03/Oct/2025:10:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47830 "" "Go-http-client/1.1"
Oct  3 10:37:30 compute-0 podman[157165]: @ - - [03/Oct/2025:10:37:30 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9070 "" "Go-http-client/1.1"
Oct  3 10:37:30 compute-0 podman[466361]: 2025-10-03 10:37:30.248196099 +0000 UTC m=+0.098165865 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:37:30 compute-0 podman[466366]: 2025-10-03 10:37:30.248875161 +0000 UTC m=+0.107766012 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:37:30 compute-0 podman[466357]: 2025-10-03 10:37:30.264142681 +0000 UTC m=+0.124191250 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
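The three health_status=healthy events above are emitted by podman's periodic healthcheck timers; each container's config_data wires a healthcheck that bind-mounts /var/lib/openstack/healthchecks/<name> and runs /openstack/healthcheck inside the container. A minimal sketch of reproducing the same check by hand, assuming podman is on PATH and the caller runs as the user that owns these containers:

    #!/usr/bin/env python3
    # Run podman's built-in healthcheck for the three containers seen
    # above; `podman healthcheck run` exits 0 when the check passes.
    import subprocess

    for name in ("multipathd", "iscsid", "node_exporter"):  # names from the log
        r = subprocess.run(["podman", "healthcheck", "run", name],
                           capture_output=True, text=True)
        print(f"{name}: {'healthy' if r.returncode == 0 else 'unhealthy'}")

An exit code of 0 is what the journal records as health_status=healthy with health_failing_streak=0.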
Oct  3 10:37:31 compute-0 podman[466548]: 2025-10-03 10:37:30.911700081 +0000 UTC m=+0.051098087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:37:31 compute-0 podman[466548]: 2025-10-03 10:37:31.015350041 +0000 UTC m=+0.154748027 container create ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:37:31 compute-0 systemd[1]: Started libpod-conmon-ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e.scope.
Oct  3 10:37:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:37:31 compute-0 podman[466548]: 2025-10-03 10:37:31.230676108 +0000 UTC m=+0.370074194 container init ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:37:31 compute-0 podman[466548]: 2025-10-03 10:37:31.244761639 +0000 UTC m=+0.384159655 container start ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_liskov, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 10:37:31 compute-0 brave_liskov[466563]: 167 167
Oct  3 10:37:31 compute-0 systemd[1]: libpod-ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e.scope: Deactivated successfully.
Oct  3 10:37:31 compute-0 podman[466548]: 2025-10-03 10:37:31.280555386 +0000 UTC m=+0.419953422 container attach ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:37:31 compute-0 podman[466548]: 2025-10-03 10:37:31.282491638 +0000 UTC m=+0.421889664 container died ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:37:31 compute-0 openstack_network_exporter[367524]: ERROR   10:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:37:31 compute-0 openstack_network_exporter[367524]: ERROR   10:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:37:31 compute-0 openstack_network_exporter[367524]: ERROR   10:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:37:31 compute-0 openstack_network_exporter[367524]: ERROR   10:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:37:31 compute-0 openstack_network_exporter[367524]: ERROR   10:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4ec24905d7e7a1a483d12084491248f195f1d3410342249c31ace8143f8c7e4-merged.mount: Deactivated successfully.
Oct  3 10:37:31 compute-0 podman[466548]: 2025-10-03 10:37:31.783039661 +0000 UTC m=+0.922437647 container remove ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_liskov, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:37:31 compute-0 systemd[1]: libpod-conmon-ecf187f23d94bc1b2f4186ff934bdaa245c362263f20dba66ba388fcf969a19e.scope: Deactivated successfully.
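The ecf187f2... container lives for barely a second: create, init, start, attach, died, remove, with systemd starting and tearing down the matching libpod-conmon-*.scope around it. That is the signature of a one-shot `podman run --rm` invocation (here cephadm probing the host with the digest-pinned ceph image). A sketch that produces the same event sequence, with an illustrative tag in place of the digest from the log:

    #!/usr/bin/env python3
    # A one-shot `podman run --rm` yields the create/init/start/attach/
    # died/remove sequence recorded above. Image tag is illustrative.
    import subprocess

    subprocess.run(["podman", "run", "--rm",
                    "quay.io/ceph/ceph:v18", "true"], check=True)
    # Show the lifecycle events it just generated, then exit:
    subprocess.run(["podman", "events", "--since", "1m", "--stream=false"])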
Oct  3 10:37:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
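The pgmap DBG lines are ceph-mgr's periodic cluster digest: pg states, payload size, raw usage, and free capacity. A small sketch for extracting those fields when grepping a journal, with the line format assumed from this log rather than from any stable interface:

    #!/usr/bin/env python3
    # Parse a ceph-mgr pgmap digest line like the one above.
    import re

    line = ("pgmap v2106: 321 pgs: 321 active+clean; "
            "78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: .*?; (.+?) data, "
                  r"(.+?) used, (.+?) / (.+?) avail", line)
    if m:
        version, pgs, data, used, avail, total = m.groups()
        print(f"pgmap v{version}: {pgs} pgs, {data} data, "
              f"{used} used, {avail} of {total} free")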
Oct  3 10:37:32 compute-0 podman[466587]: 2025-10-03 10:37:31.966014072 +0000 UTC m=+0.041304614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:37:32 compute-0 podman[466587]: 2025-10-03 10:37:32.131802183 +0000 UTC m=+0.207092675 container create 1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:37:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:32 compute-0 systemd[1]: Started libpod-conmon-1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5.scope.
Oct  3 10:37:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f8dea11199415e4bd536aeecfc2acafe4d17c7e66e9181c4118a2bcaf09157/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f8dea11199415e4bd536aeecfc2acafe4d17c7e66e9181c4118a2bcaf09157/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f8dea11199415e4bd536aeecfc2acafe4d17c7e66e9181c4118a2bcaf09157/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f8dea11199415e4bd536aeecfc2acafe4d17c7e66e9181c4118a2bcaf09157/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
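These kernel messages note that the overlay's XFS backing store uses 32-bit inode timestamps, which run out at 2038-01-19 (0x7fffffff); XFS filesystems created with the bigtime feature extend that limit to 2486. A sketch for checking a mount, assuming xfsprogs is installed and new enough to report the flag (older versions do not print it); the path is illustrative:

    #!/usr/bin/env python3
    # Check whether an XFS filesystem carries the "bigtime" feature.
    import subprocess

    out = subprocess.run(["xfs_info", "/var/lib/containers"],
                         capture_output=True, text=True).stdout
    if "bigtime=1" in out:
        print("bigtime enabled: timestamps reach past 2038")
    else:
        print("bigtime disabled or not reported: timestamps end in 2038")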
Oct  3 10:37:32 compute-0 nova_compute[351685]: 2025-10-03 10:37:32.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
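The recurring __log_wakeup DEBUG lines are ovsdbapp's OVS IDL loop noticing readable data on its OVSDB connection (fd 25). The primitive underneath is ovs.poller from the OVS Python bindings; a self-contained sketch of the same wakeup, assuming those bindings are installed:

    #!/usr/bin/env python3
    # The mechanism behind "[POLLIN] on fd 25": ovs.poller.Poller blocks
    # until a registered fd becomes readable.
    import select
    import socket
    from ovs import poller

    a, b = socket.socketpair()
    b.send(b"wake")                       # make `a` readable
    p = poller.Poller()
    p.fd_wait(a.fileno(), select.POLLIN)  # same POLLIN event as in the log
    p.block()                             # returns once the fd wakes
    print("woke up:", a.recv(4))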
Oct  3 10:37:32 compute-0 podman[466587]: 2025-10-03 10:37:32.660316841 +0000 UTC m=+0.735607423 container init 1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 10:37:32 compute-0 podman[466587]: 2025-10-03 10:37:32.672457409 +0000 UTC m=+0.747747941 container start 1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:37:32 compute-0 podman[466587]: 2025-10-03 10:37:32.717929756 +0000 UTC m=+0.793220268 container attach 1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:37:33 compute-0 nova_compute[351685]: 2025-10-03 10:37:33.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]: {
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "osd_id": 1,
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "type": "bluestore"
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:    },
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "osd_id": 2,
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "type": "bluestore"
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:    },
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "osd_id": 0,
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:        "type": "bluestore"
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]:    }
Oct  3 10:37:33 compute-0 hungry_satoshi[466603]: }
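The hungry_satoshi output is a JSON inventory of this host's three BlueStore OSDs, keyed by osd_uuid, all in cluster fsid 9b4e8c9a-5555-5510-a631-4742a1182561 on LVM-backed devices; the shape matches what `ceph-volume raw list --format json` prints. A sketch that recovers the JSON from journal lines like the ones above and tabulates it; the field names come straight from the log:

    #!/usr/bin/env python3
    # Strip the journald prefix from the hungry_satoshi lines above and
    # rebuild the OSD inventory JSON.
    import json
    import re

    def extract_inventory(journal_lines):
        payload = [m.group(1)
                   for line in journal_lines
                   if (m := re.search(r"hungry_satoshi\[\d+\]: (.*)$", line))]
        return json.loads("\n".join(payload))

    # osds = extract_inventory(open("messages").read().splitlines())
    # for uuid, osd in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    #     print(f"osd.{osd['osd_id']}  {osd['device']}  {osd['type']}")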
Oct  3 10:37:33 compute-0 systemd[1]: libpod-1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5.scope: Deactivated successfully.
Oct  3 10:37:33 compute-0 podman[466587]: 2025-10-03 10:37:33.837561538 +0000 UTC m=+1.912852070 container died 1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:37:33 compute-0 systemd[1]: libpod-1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5.scope: Consumed 1.159s CPU time.
Oct  3 10:37:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-81f8dea11199415e4bd536aeecfc2acafe4d17c7e66e9181c4118a2bcaf09157-merged.mount: Deactivated successfully.
Oct  3 10:37:34 compute-0 podman[466587]: 2025-10-03 10:37:34.29969876 +0000 UTC m=+2.374989302 container remove 1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_satoshi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:37:34 compute-0 systemd[1]: libpod-conmon-1468bc9965179ecc5f8e69e02d705e29d54d5de0f2a5a6304be91955a871dff5.scope: Deactivated successfully.
Oct  3 10:37:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:37:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:37:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:37:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
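These audit lines show the mon accepting config-key writes from the cephadm mgr module, which caches each host's scanned device inventory under keys like mgr/cephadm/host.compute-0.devices.0. A sketch for reading the cache back with the ceph CLI, assuming an admin keyring is available and (an assumption) that the stored value is JSON:

    #!/usr/bin/env python3
    # Read back the device cache cephadm just stored (key from the log).
    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=2)[:400])  # value assumed JSON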
Oct  3 10:37:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 78353061-ded9-4aee-8072-5e437c5b05cc does not exist
Oct  3 10:37:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f7f8322b-325b-4be1-8e2c-388667e34f4a does not exist
Oct  3 10:37:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:37:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:37:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:37 compute-0 nova_compute[351685]: 2025-10-03 10:37:37.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:37 compute-0 podman[466699]: 2025-10-03 10:37:37.834661853 +0000 UTC m=+0.080044985 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:37:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:38 compute-0 nova_compute[351685]: 2025-10-03 10:37:38.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:37:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.892 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.893 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
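The two DEBUG lines say the [pollsters] source has more pollsters than worker threads and will be processed with a single thread, so pollsters run serially and a cycle takes the sum of their runtimes. A toy illustration of that effect with concurrent.futures (the timings are made up):

    #!/usr/bin/env python3
    # With max_workers=1 (matching "[1] threads" above), five pollsters
    # take ~5x one pollster's runtime instead of running in parallel.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)        # stand-in for one pollster's work
        return name

    t0 = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:
        list(pool.map(pollster, [f"pollster-{i}" for i in range(5)]))
    print(f"cycle took {time.monotonic() - t0:.2f}s for 5 pollsters")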
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b76840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.901 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
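The discovery line shows ceilometer's local_instances discovery resolving instance-00000001 (nova server b43db93c-...) on this hypervisor; the source of such data is the local libvirt daemon. A minimal sketch of enumerating domains the same way, assuming libvirt-python and a local qemu hypervisor:

    #!/usr/bin/env python3
    # Enumerate local libvirt domains, read-only, as instance discovery does.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        print(dom.name(), dom.UUIDString(),
              "running" if dom.isActive() else "stopped")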
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.901 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.902 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.902 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:37:40.902108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.908 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
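network.outgoing.packets.drop and .error are cumulative per-vNIC counters; in libvirt they come from interfaceStats(), which returns an eight-tuple whose last two entries are tx_errs and tx_drop. A sketch, with an illustrative tap device name:

    #!/usr/bin/env python3
    # The raw counters behind the two samples above (both 0 here).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")   # name from the log
    (rx_bytes, rx_pkts, rx_errs, rx_drop,
     tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats("tap0")
    print("tx_drop:", tx_drop, "tx_errs:", tx_errs)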
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.909 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.910 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:37:40.910071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.910 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.911 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:37:40.911425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.933 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.934 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.934 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
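The three capacity samples line up with the flavor in the discovery data: a 1 GiB root disk and a 1 GiB ephemeral disk (both 1073741824 bytes), plus a small 485376-byte device. In libvirt these numbers are the capacity field of blockInfo(), which returns capacity, allocation, and physical size in bytes; the device names below are illustrative:

    #!/usr/bin/env python3
    # disk.device.capacity/usage map onto libvirt blockInfo() output.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    for dev in ("vda", "vdb"):                 # illustrative device names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "capacity:", capacity, "allocation:", allocation)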
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.935 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.935 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:37:40.935337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.976 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.977 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.979 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.980 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.982 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
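The read.latency values (e.g. 1351272306) are cumulative nanosecond totals, which is why they dwarf the byte and request counters sampled alongside them; libvirt exposes them as rd_total_times in blockStatsFlags(), next to rd_bytes and rd_operations. A sketch, again with an illustrative device name:

    #!/usr/bin/env python3
    # The cumulative counters behind disk.device.read.{bytes,requests,latency}.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    stats = dom.blockStatsFlags("vda")         # illustrative device name
    print("rd_bytes:", stats["rd_bytes"],
          "rd_operations:", stats["rd_operations"],
          "rd_total_times(ns):", stats["rd_total_times"])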
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.983 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:37:40.980539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:37:40.984763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.984 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.988 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.989 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.989 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:37:40.989145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.990 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.991 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.992 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.992 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:37:40.993205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.993 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.993 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.996 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:37:40.996871) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.998 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.998 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:40.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.000 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.000 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:37:41.000834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.024 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.026 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:37:41.024642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:37:41.026198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.028 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:37:41.027572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:37:41.028644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:37:41.029433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.030 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.030 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.030 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:37:41.030334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:37:41.031128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 61570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:37:41.032025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.033 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.033 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:37:41.032818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:37:41.034124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.034 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.035 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.035 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:37:41.035085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.038 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:37:41.036086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:37:41.037064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:37:41.037870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.038 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.039 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:37:41.040 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:37:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:37:41.631 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:37:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:37:41.632 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:37:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:37:41.632 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:37:41 compute-0 podman[466720]: 2025-10-03 10:37:41.808969308 +0000 UTC m=+0.074931671 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:37:41 compute-0 podman[466721]: 2025-10-03 10:37:41.810796676 +0000 UTC m=+0.076246212 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, version=9.4, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64)
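Each podman health_status event above is the result of the configured healthcheck test (`/openstack/healthcheck ...`) being run inside the container; health_failing_streak=0 means no consecutive failures. The same state can be read back with the standard `podman inspect` Go template; a sketch, with the container name taken from the log and the parsing illustrative:

```python
# Sketch: read the health state reported by the events above via the
# standard `podman inspect` CLI.
import json
import subprocess

def health_state(container: str) -> dict:
    out = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{json .State.Health}}', container])
    return json.loads(out)

state = health_state('podman_exporter')
print(state['Status'], state['FailingStreak'])  # expect: healthy 0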
Oct  3 10:37:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:42 compute-0 nova_compute[351685]: 2025-10-03 10:37:42.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:43 compute-0 nova_compute[351685]: 2025-10-03 10:37:43.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:37:46
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control']
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
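"prepared 0/10 changes" means the balancer evaluated up to 10 upmap moves across the listed pools and found none worth making: with all 321 PGs active+clean and nothing near the 5% max-misplaced budget, the cluster is already balanced. The module's state can be confirmed with the standard `ceph balancer status` command; a hedged sketch, with the JSON key names as assumptions:

```python
# Sketch: confirm the balancer state described above.
import json
import subprocess

status = json.loads(subprocess.check_output(
    ['ceph', 'balancer', 'status', '--format', 'json']))
print(status.get('mode'), status.get('active'))  # expect: upmap, True
```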
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:37:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:37:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:47 compute-0 nova_compute[351685]: 2025-10-03 10:37:47.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:48 compute-0 nova_compute[351685]: 2025-10-03 10:37:48.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:52 compute-0 nova_compute[351685]: 2025-10-03 10:37:52.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:53 compute-0 nova_compute[351685]: 2025-10-03 10:37:53.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:37:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1125247469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:37:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:37:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1125247469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
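The two audit entries above show client.openstack (a consumer at 192.168.122.10) issuing `df` and `osd pool get-quota` as JSON mon commands over librados, which is how capacity and quota are polled without shelling out. A sketch of the same calls through the python-rados binding, with conffile path and client name per the log:

```python
# Sketch of the mon commands audited above, issued via python-rados.
# mon_command() takes a JSON command string and returns (ret, outbuf, outs).
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
try:
    _, out, _ = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    df = json.loads(out)
    _, out, _ = cluster.mon_command(
        json.dumps({'prefix': 'osd pool get-quota',
                    'pool': 'volumes', 'format': 'json'}), b'')
    quota = json.loads(out)
finally:
    cluster.shutdown()
```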
Oct  3 10:37:54 compute-0 podman[466762]: 2025-10-03 10:37:54.846527722 +0000 UTC m=+0.102612397 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Oct  3 10:37:54 compute-0 podman[466763]: 2025-10-03 10:37:54.864491768 +0000 UTC m=+0.106746790 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:37:54 compute-0 podman[466764]: 2025-10-03 10:37:54.903157737 +0000 UTC m=+0.140120320 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
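Every "pg target" above is capacity_ratio × bias × a cluster PG budget. A budget of 300, i.e. the default mon_target_pg_per_osd (100) times this cluster's 3 OSDs, reproduces each logged value exactly (e.g. 'vms': 0.000551649... × 1.0 × 300 ≈ 0.1655); the result is then quantized up to a power of two with per-pool floors visible in the log (1 for .mgr, 16 for the cephfs metadata pool, 32 elsewhere). A worked check, with the budget derivation as the stated assumption:

```python
# Worked check of the pg_autoscaler arithmetic above. Assumption: budget =
# mon_target_pg_per_osd (default 100) * 3 OSDs = 300, matching the log.
def pg_target(capacity_ratio, bias, budget=300):
    return capacity_ratio * bias * budget

def quantize(target, floor):
    n = floor  # per-pool floor observed in the log: 1, 16 or 32
    while n < target:
        n *= 2
    return n

print(pg_target(0.000551649390343166, 1.0))   # ~0.16549 ('vms')
print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061 ('cephfs.cephfs.meta')
print(quantize(pg_target(7.185749983720779e-06, 1.0), 1))  # 1 ('.mgr')
```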
Oct  3 10:37:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:37:57 compute-0 nova_compute[351685]: 2025-10-03 10:37:57.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:57 compute-0 podman[466825]: 2025-10-03 10:37:57.872147344 +0000 UTC m=+0.122398461 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:37:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:37:58 compute-0 nova_compute[351685]: 2025-10-03 10:37:58.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:37:59 compute-0 podman[157165]: time="2025-10-03T10:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:37:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:37:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9082 "" "Go-http-client/1.1"
Oct  3 10:37:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:00 compute-0 podman[466843]: 2025-10-03 10:38:00.860082558 +0000 UTC m=+0.105010945 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:38:00 compute-0 podman[466845]: 2025-10-03 10:38:00.878844299 +0000 UTC m=+0.119064494 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:38:00 compute-0 podman[466844]: 2025-10-03 10:38:00.904427838 +0000 UTC m=+0.141110740 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:38:01 compute-0 openstack_network_exporter[367524]: ERROR   10:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:38:01 compute-0 openstack_network_exporter[367524]: ERROR   10:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:38:01 compute-0 openstack_network_exporter[367524]: ERROR   10:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:38:01 compute-0 openstack_network_exporter[367524]: ERROR   10:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:38:01 compute-0 openstack_network_exporter[367524]: ERROR   10:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
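The exporter errors above mean its ovs-appctl-style calls found no daemon control sockets: ovsdb-server and ovn-northd expose a `<daemon>.<pid>.ctl` socket in their run directory, and ovn-northd simply does not run on a compute node. The two dpif-netdev failures are expected too, since this host uses the kernel OVS datapath and has no userspace (PMD) datapath to report on. A sketch of the socket discovery being attempted, assuming default run-directory paths:

```python
# Sketch of the probe behind "no control socket files found": OVS/OVN
# daemons create <daemon>.<pid>.ctl in their run dir. Paths are defaults.
import glob
from typing import Optional

def control_socket(daemon: str,
                   rundir: str = '/var/run/openvswitch') -> Optional[str]:
    matches = glob.glob(f'{rundir}/{daemon}.*.ctl')
    return matches[0] if matches else None

print(control_socket('ovsdb-server'))                # None -> first error above
print(control_socket('ovn-northd', '/var/run/ovn'))  # None on a compute node
```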
Oct  3 10:38:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.239712) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487882239766, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1492, "num_deletes": 257, "total_data_size": 2350033, "memory_usage": 2385568, "flush_reason": "Manual Compaction"}
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487882256835, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 2305432, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41774, "largest_seqno": 43265, "table_properties": {"data_size": 2298488, "index_size": 4015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14150, "raw_average_key_size": 19, "raw_value_size": 2284586, "raw_average_value_size": 3155, "num_data_blocks": 180, "num_entries": 724, "num_filter_entries": 724, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487725, "oldest_key_time": 1759487725, "file_creation_time": 1759487882, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 17212 microseconds, and 8141 cpu microseconds.
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.256913) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 2305432 bytes OK
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.256943) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.259052) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.259067) EVENT_LOG_v1 {"time_micros": 1759487882259062, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.259623) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2343484, prev total WAL file size 2343484, number of live WAL files 2.
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.260729) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353039' seq:72057594037927935, type:22 .. '6C6F676D0031373632' seq:0, type:0; will stop at (end)
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(2251KB)], [98(8031KB)]
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487882260782, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 10530159, "oldest_snapshot_seqno": -1}
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6000 keys, 10425473 bytes, temperature: kUnknown
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487882318657, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10425473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10383951, "index_size": 25412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15045, "raw_key_size": 155320, "raw_average_key_size": 25, "raw_value_size": 10273776, "raw_average_value_size": 1712, "num_data_blocks": 1022, "num_entries": 6000, "num_filter_entries": 6000, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487882, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.318987) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10425473 bytes
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.321357) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.6 rd, 179.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.8 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 6526, records dropped: 526 output_compression: NoCompression
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.321394) EVENT_LOG_v1 {"time_micros": 1759487882321374, "job": 58, "event": "compaction_finished", "compaction_time_micros": 57977, "compaction_time_cpu_micros": 27506, "output_level": 6, "num_output_files": 1, "total_output_size": 10425473, "num_input_records": 6526, "num_output_records": 6000, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487882322622, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487882326341, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.260604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.326675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.326684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.326687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.326690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:38:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:38:02.326693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
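The RocksDB burst above is the ceph-mon trimming its store: a ~2.3 MB memtable is flushed to L0 table #100, then a manual compaction merges #100 with L6 table #98 into #101, dropping 526 dead records and deleting the input tables and the old WAL. The EVENT_LOG_v1 payloads are plain JSON, so the same stats can be pulled out of a captured log; a small sketch:

```python
# Sketch: extract the RocksDB EVENT_LOG_v1 JSON payloads (flush/compaction
# events like the ones above) from a captured syslog file.
import json
import re
import sys

EVENT = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

with open(sys.argv[1]) as fh:
    for line in fh:
        m = EVENT.search(line)
        if m:
            ev = json.loads(m.group(1))
            if ev.get('event') in ('flush_finished', 'compaction_finished'):
                print(ev['job'], ev['event'], ev.get('lsm_state'))
```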
Oct  3 10:38:02 compute-0 nova_compute[351685]: 2025-10-03 10:38:02.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:02 compute-0 nova_compute[351685]: 2025-10-03 10:38:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:38:02 compute-0 nova_compute[351685]: 2025-10-03 10:38:02.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:38:02 compute-0 nova_compute[351685]: 2025-10-03 10:38:02.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:38:03 compute-0 nova_compute[351685]: 2025-10-03 10:38:03.076 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:38:03 compute-0 nova_compute[351685]: 2025-10-03 10:38:03.076 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:38:03 compute-0 nova_compute[351685]: 2025-10-03 10:38:03.077 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:38:03 compute-0 nova_compute[351685]: 2025-10-03 10:38:03.077 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:38:03 compute-0 nova_compute[351685]: 2025-10-03 10:38:03.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:04 compute-0 nova_compute[351685]: 2025-10-03 10:38:04.434 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:38:04 compute-0 nova_compute[351685]: 2025-10-03 10:38:04.455 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:38:04 compute-0 nova_compute[351685]: 2025-10-03 10:38:04.456 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:38:04 compute-0 nova_compute[351685]: 2025-10-03 10:38:04.457 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:38:04 compute-0 nova_compute[351685]: 2025-10-03 10:38:04.457 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
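This block is one beat of nova-compute's periodic task loop: _heal_instance_info_cache takes the per-instance refresh_cache lock, forces a network-info refresh from Neutron for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a, writes the cache back, and _reclaim_queued_deletes then no-ops because reclaim_instance_interval <= 0. The "Running periodic task" lines come from oslo.service's periodic_task machinery; a minimal sketch of that pattern, with the spacing value illustrative:

```python
# Minimal sketch of the oslo.service periodic-task pattern that emits the
# "Running periodic task ComputeManager.<name>" lines above.
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)  # interval is illustrative
    def _heal_instance_info_cache(self, context):
        pass  # the real task refreshes one instance's network info per pass

Manager().run_periodic_tasks(context=None)
```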
Oct  3 10:38:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:07 compute-0 nova_compute[351685]: 2025-10-03 10:38:07.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:08 compute-0 nova_compute[351685]: 2025-10-03 10:38:08.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:08 compute-0 nova_compute[351685]: 2025-10-03 10:38:08.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:38:08 compute-0 nova_compute[351685]: 2025-10-03 10:38:08.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:38:08 compute-0 podman[466903]: 2025-10-03 10:38:08.839617173 +0000 UTC m=+0.087202614 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 10:38:09 compute-0 nova_compute[351685]: 2025-10-03 10:38:09.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:38:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:10 compute-0 nova_compute[351685]: 2025-10-03 10:38:10.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:38:10 compute-0 nova_compute[351685]: 2025-10-03 10:38:10.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:38:10 compute-0 nova_compute[351685]: 2025-10-03 10:38:10.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:38:10 compute-0 nova_compute[351685]: 2025-10-03 10:38:10.752 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:38:10 compute-0 nova_compute[351685]: 2025-10-03 10:38:10.752 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:38:10 compute-0 nova_compute[351685]: 2025-10-03 10:38:10.752 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:38:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:38:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11902065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.247 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
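The resource audit gathers Ceph capacity by shelling out to `ceph df --format=json` through oslo.concurrency's processutils (the mon `df` dispatch logged just above is this very call), and it returned 0 in ~0.5 s. A sketch of the same call; processutils.execute is the standard API, and the JSON keys follow the usual `ceph df` output:

```python
# Sketch of the shell-out logged above. processutils.execute returns
# (stdout, stderr) and raises ProcessExecutionError on a non-zero exit.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print(stats['stats']['total_avail_bytes'] / 1024 ** 3)  # free GiB
```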
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.331 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.332 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.332 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.692 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.695 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3837MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.696 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.696 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.774 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.775 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.775 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.789 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.812 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.812 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
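This inventory payload is what sizes the resource provider in placement: per resource class the schedulable capacity is (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7167 MB of RAM and 52.2 GB of disk to the scheduler. Checking that arithmetic against the numbers above (a sketch of the formula, not nova code):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32
    # MEMORY_MB: 7167
    # DISK_GB: 52.2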
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.828 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.849 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 10:38:11 compute-0 nova_compute[351685]: 2025-10-03 10:38:11.889 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:38:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:38:12 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314981375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:38:12 compute-0 nova_compute[351685]: 2025-10-03 10:38:12.384 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
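The processutils pair at 10:38:11.889 and 10:38:12.384 brackets a 0.495 s round trip in which nova shells out to the ceph CLI as client.openstack (matching the mon audit line above) and parses the JSON to learn pool capacity. Roughly equivalent standalone code, stdlib only; the stats keys are the usual `ceph df` JSON fields and are noted here as an assumption:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("bytes available:", stats["total_avail_bytes"])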
Oct  3 10:38:12 compute-0 nova_compute[351685]: 2025-10-03 10:38:12.392 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:38:12 compute-0 nova_compute[351685]: 2025-10-03 10:38:12.519 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:38:12 compute-0 nova_compute[351685]: 2025-10-03 10:38:12.521 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:38:12 compute-0 nova_compute[351685]: 2025-10-03 10:38:12.521 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:38:12 compute-0 nova_compute[351685]: 2025-10-03 10:38:12.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
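The recurring "[POLLIN] on fd 25" debug lines throughout this section are the OVS IDL's event loop waking up because the ovsdb connection has readable data; ovsdbapp logs one line per wakeup. The shape of that loop, sketched with the stdlib select module (a pipe stands in for the ovsdb socket):

    import os, select

    r, w = os.pipe()                      # stand-in for the ovsdb socket
    poller = select.poll()
    poller.register(r, select.POLLIN)

    os.write(w, b"update")                # simulate ovsdb sending an update
    for fd, events in poller.poll(1000):  # timeout in milliseconds
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")  # matches the vlog line above
            os.read(fd, 4096)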
Oct  3 10:38:12 compute-0 podman[466966]: 2025-10-03 10:38:12.837499444 +0000 UTC m=+0.103541718 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:38:12 compute-0 podman[466967]: 2025-10-03 10:38:12.840014184 +0000 UTC m=+0.099352153 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_id=edpm, release-0.7.12=, build-date=2024-09-18T21:23:30)
Oct  3 10:38:13 compute-0 nova_compute[351685]: 2025-10-03 10:38:13.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:14 compute-0 nova_compute[351685]: 2025-10-03 10:38:14.522 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:38:14 compute-0 nova_compute[351685]: 2025-10-03 10:38:14.523 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:38:14 compute-0 nova_compute[351685]: 2025-10-03 10:38:14.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
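The three lines above are oslo.service's periodic-task dispatcher firing ComputeManager methods whose interval has elapsed. A stripped-down version of that dispatch loop (illustrative only; the real run_periodic_tasks lives in oslo_service/periodic_task.py as the paths show):

    import time

    class PeriodicTasks:
        def __init__(self):
            self._tasks = []          # [name, spacing_s, fn, last_run]

        def register(self, name, spacing, fn):
            self._tasks.append([name, spacing, fn, 0.0])

        def run_periodic_tasks(self):
            now = time.monotonic()
            for task in self._tasks:
                name, spacing, fn, last = task
                if now - last >= spacing:
                    print(f"Running periodic task ComputeManager.{name}")
                    fn()
                    task[3] = now

    mgr = PeriodicTasks()
    mgr.register("_poll_volume_usage", 60, lambda: None)
    mgr.run_periodic_tasks()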
Oct  3 10:38:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:38:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:17 compute-0 nova_compute[351685]: 2025-10-03 10:38:17.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:18 compute-0 nova_compute[351685]: 2025-10-03 10:38:18.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:22 compute-0 nova_compute[351685]: 2025-10-03 10:38:22.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:23 compute-0 nova_compute[351685]: 2025-10-03 10:38:23.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:25 compute-0 podman[467010]: 2025-10-03 10:38:25.841207983 +0000 UTC m=+0.089382323 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:38:25 compute-0 podman[467011]: 2025-10-03 10:38:25.865195021 +0000 UTC m=+0.120592163 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 10:38:25 compute-0 podman[467009]: 2025-10-03 10:38:25.878714124 +0000 UTC m=+0.130044496 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 10:38:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:27 compute-0 nova_compute[351685]: 2025-10-03 10:38:27.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:28 compute-0 nova_compute[351685]: 2025-10-03 10:38:28.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:28 compute-0 podman[467068]: 2025-10-03 10:38:28.822740903 +0000 UTC m=+0.088600750 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:38:29 compute-0 podman[157165]: time="2025-10-03T10:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:38:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:38:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9080 "" "Go-http-client/1.1"
Oct  3 10:38:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:31 compute-0 openstack_network_exporter[367524]: ERROR   10:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:38:31 compute-0 openstack_network_exporter[367524]: ERROR   10:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:38:31 compute-0 openstack_network_exporter[367524]: ERROR   10:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:38:31 compute-0 openstack_network_exporter[367524]: ERROR   10:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:38:31 compute-0 openstack_network_exporter[367524]: ERROR   10:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
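These exporter errors mean no ovs-appctl-style control sockets were found: ovsdb-server, ovs-vswitchd and ovn-northd each expose a <name>.<pid>.ctl file under their run directory, and the exporter found none it could dial (unsurprising for ovn-northd, which runs on control-plane hosts, not computes). A quick presence check, with the usual default socket directories assumed:

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")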
Oct  3 10:38:31 compute-0 podman[467086]: 2025-10-03 10:38:31.831858565 +0000 UTC m=+0.091834122 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:38:31 compute-0 podman[467088]: 2025-10-03 10:38:31.882463276 +0000 UTC m=+0.122311378 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct  3 10:38:31 compute-0 podman[467087]: 2025-10-03 10:38:31.897857389 +0000 UTC m=+0.142708002 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:38:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:32 compute-0 nova_compute[351685]: 2025-10-03 10:38:32.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:33 compute-0 nova_compute[351685]: 2025-10-03 10:38:33.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:38:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:38:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:38:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:38:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:38:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:38:35 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 815b6bbe-c577-4644-b054-36fce3000e08 does not exist
Oct  3 10:38:35 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6d49e347-137f-4079-9502-004f1119cb05 does not exist
Oct  3 10:38:35 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3e58d691-8787-4722-98fe-c541ce3170ec does not exist
Oct  3 10:38:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:38:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:38:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:38:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:38:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:38:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
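Each handle_command line above is the monitor dispatching a JSON mon command identical to what the CLI would send, with the audit channel recording who sent it. With the python-rados bindings (their availability and use of the client.admin keyring are assumptions here), the cephadm "osd tree destroyed" query can be reproduced directly:

    import json, rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b"")   # same payload the mon logged
    print(ret, json.loads(out) if out else errs)
    cluster.shutdown()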
Oct  3 10:38:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:36 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:38:36 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:38:36 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:38:36 compute-0 podman[467413]: 2025-10-03 10:38:36.404365995 +0000 UTC m=+0.050034935 container create 554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 10:38:36 compute-0 systemd[1]: Started libpod-conmon-554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4.scope.
Oct  3 10:38:36 compute-0 podman[467413]: 2025-10-03 10:38:36.382277347 +0000 UTC m=+0.027946317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:38:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:38:36 compute-0 podman[467413]: 2025-10-03 10:38:36.497741975 +0000 UTC m=+0.143410915 container init 554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:38:36 compute-0 podman[467413]: 2025-10-03 10:38:36.507022192 +0000 UTC m=+0.152691112 container start 554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:38:36 compute-0 podman[467413]: 2025-10-03 10:38:36.51100822 +0000 UTC m=+0.156677140 container attach 554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 10:38:36 compute-0 condescending_montalcini[467427]: 167 167
Oct  3 10:38:36 compute-0 systemd[1]: libpod-554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4.scope: Deactivated successfully.
Oct  3 10:38:36 compute-0 podman[467413]: 2025-10-03 10:38:36.514375527 +0000 UTC m=+0.160044437 container died 554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 10:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c73fd99f9ce84331d3328506a1ea40cac76c72ba828ceefa4c75c9501adc6a-merged.mount: Deactivated successfully.
Oct  3 10:38:36 compute-0 podman[467413]: 2025-10-03 10:38:36.572972784 +0000 UTC m=+0.218641704 container remove 554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:38:36 compute-0 systemd[1]: libpod-conmon-554a7129be2d21f258a643cba513e2470910c5ac656455f818530435f073f6f4.scope: Deactivated successfully.
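The create → init → start → attach → died → remove run of events above, with "167 167" as the container's only output, is the trail of a short-lived `podman run --rm` probe against the pinned ceph image (167 is the ceph uid/gid inside the image). A sketch of an equivalent one-off invocation; the stat command is an assumed stand-in for whatever cephadm actually execs:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # --rm yields exactly the create/init/start/attach/died/remove event
    # sequence journald recorded above, then deletes the container.
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True,
    ).stdout
    print(out.strip())  # e.g. "167 167"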
Oct  3 10:38:36 compute-0 podman[467452]: 2025-10-03 10:38:36.770949456 +0000 UTC m=+0.062060989 container create ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:38:36 compute-0 systemd[1]: Started libpod-conmon-ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525.scope.
Oct  3 10:38:36 compute-0 podman[467452]: 2025-10-03 10:38:36.745676347 +0000 UTC m=+0.036787940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:38:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f48cda1a518c3ec0299b10f03f5a9a033cd18a0b438841163c0d613fe918d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f48cda1a518c3ec0299b10f03f5a9a033cd18a0b438841163c0d613fe918d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f48cda1a518c3ec0299b10f03f5a9a033cd18a0b438841163c0d613fe918d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f48cda1a518c3ec0299b10f03f5a9a033cd18a0b438841163c0d613fe918d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0f48cda1a518c3ec0299b10f03f5a9a033cd18a0b438841163c0d613fe918d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
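The 0x7fffffff in these XFS remount notices is the 32-bit time_t ceiling: without the bigtime feature, XFS inode timestamps cannot represent instants past it. Decoding the constant shows the familiar y2038 boundary:

    from datetime import datetime, timezone

    limit = 0x7fffffff                      # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00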
Oct  3 10:38:36 compute-0 podman[467452]: 2025-10-03 10:38:36.905990011 +0000 UTC m=+0.197101614 container init ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:38:36 compute-0 podman[467452]: 2025-10-03 10:38:36.924715922 +0000 UTC m=+0.215827455 container start ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:38:36 compute-0 podman[467452]: 2025-10-03 10:38:36.961606213 +0000 UTC m=+0.252717776 container attach ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:38:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:37 compute-0 nova_compute[351685]: 2025-10-03 10:38:37.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:38 compute-0 charming_mcnulty[467467]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:38:38 compute-0 charming_mcnulty[467467]: --> relative data size: 1.0
Oct  3 10:38:38 compute-0 charming_mcnulty[467467]: --> All data devices are unavailable
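That output is ceph-volume's drive-group evaluation running inside the throwaway container: three LVM data devices were offered, all already consumed, so nothing new is deployable. Device availability can be inspected directly with `ceph-volume inventory`; the JSON field names below are the commonly emitted ones and should be treated as assumptions:

    import json, subprocess

    report = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for dev in report:
        print(dev["path"], "available:", dev["available"],
              dev.get("rejected_reasons", []))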
Oct  3 10:38:38 compute-0 systemd[1]: libpod-ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525.scope: Deactivated successfully.
Oct  3 10:38:38 compute-0 systemd[1]: libpod-ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525.scope: Consumed 1.085s CPU time.
Oct  3 10:38:38 compute-0 podman[467452]: 2025-10-03 10:38:38.074744157 +0000 UTC m=+1.365855690 container died ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:38:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0f48cda1a518c3ec0299b10f03f5a9a033cd18a0b438841163c0d613fe918d-merged.mount: Deactivated successfully.
Oct  3 10:38:38 compute-0 nova_compute[351685]: 2025-10-03 10:38:38.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:38 compute-0 podman[467452]: 2025-10-03 10:38:38.141387432 +0000 UTC m=+1.432498965 container remove ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mcnulty, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:38:38 compute-0 systemd[1]: libpod-conmon-ac6f1df5f8aa4e03462f980ffa509c977545c50216e9773c42beacc26ed23525.scope: Deactivated successfully.
Oct  3 10:38:39 compute-0 podman[467646]: 2025-10-03 10:38:39.114188432 +0000 UTC m=+0.061202442 container create c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:38:39 compute-0 systemd[1]: Started libpod-conmon-c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d.scope.
Oct  3 10:38:39 compute-0 podman[467646]: 2025-10-03 10:38:39.092676672 +0000 UTC m=+0.039690692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:38:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:38:39 compute-0 podman[467646]: 2025-10-03 10:38:39.211163397 +0000 UTC m=+0.158177417 container init c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:38:39 compute-0 podman[467646]: 2025-10-03 10:38:39.221483508 +0000 UTC m=+0.168497508 container start c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:38:39 compute-0 podman[467646]: 2025-10-03 10:38:39.225892669 +0000 UTC m=+0.172906669 container attach c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:38:39 compute-0 lucid_stonebraker[467663]: 167 167
Oct  3 10:38:39 compute-0 systemd[1]: libpod-c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d.scope: Deactivated successfully.
Oct  3 10:38:39 compute-0 podman[467646]: 2025-10-03 10:38:39.228884795 +0000 UTC m=+0.175898795 container died c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:38:39 compute-0 podman[467660]: 2025-10-03 10:38:39.25495438 +0000 UTC m=+0.091522722 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.build-date=20251001)
Oct  3 10:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-63f5bd39f40ceff59e9d031c67a564d49e8d0e1b7b3d76654e0532c9b674e01c-merged.mount: Deactivated successfully.
Oct  3 10:38:39 compute-0 podman[467646]: 2025-10-03 10:38:39.283520805 +0000 UTC m=+0.230534805 container remove c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:38:39 compute-0 systemd[1]: libpod-conmon-c0a4a6dd0bf71ce42aa4e9d481453497be9386cd119df3f35432aa84c782269d.scope: Deactivated successfully.
Oct  3 10:38:39 compute-0 podman[467706]: 2025-10-03 10:38:39.485305958 +0000 UTC m=+0.069008321 container create 2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:38:39 compute-0 podman[467706]: 2025-10-03 10:38:39.456508886 +0000 UTC m=+0.040211339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:38:39 compute-0 systemd[1]: Started libpod-conmon-2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a.scope.
Oct  3 10:38:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555345dfa7a5da8afae7f0b38dbc9147e4e3eaf89bca618f33cb65a808744df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555345dfa7a5da8afae7f0b38dbc9147e4e3eaf89bca618f33cb65a808744df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555345dfa7a5da8afae7f0b38dbc9147e4e3eaf89bca618f33cb65a808744df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555345dfa7a5da8afae7f0b38dbc9147e4e3eaf89bca618f33cb65a808744df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
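The 0x7fffffff in these xfs remount warnings is simply INT32_MAX seconds after the Unix epoch, i.e. the classic year-2038 limit for xfs timestamps on filesystems without the bigtime feature. A quick check (plain Python, not part of the log):

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 seconds after the epoch; xfs timestamps
    # without bigtime are only representable up to this instant.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00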
Oct  3 10:38:39 compute-0 podman[467706]: 2025-10-03 10:38:39.621552612 +0000 UTC m=+0.205255015 container init 2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:38:39 compute-0 podman[467706]: 2025-10-03 10:38:39.638067372 +0000 UTC m=+0.221769745 container start 2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:38:39 compute-0 podman[467706]: 2025-10-03 10:38:39.643042231 +0000 UTC m=+0.226744644 container attach 2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:38:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:40 compute-0 romantic_clarke[467723]: {
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:    "0": [
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:        {
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "devices": [
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "/dev/loop3"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            ],
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_name": "ceph_lv0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_size": "21470642176",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "name": "ceph_lv0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "tags": {
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cluster_name": "ceph",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.crush_device_class": "",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.encrypted": "0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osd_id": "0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.type": "block",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.vdo": "0"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            },
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "type": "block",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "vg_name": "ceph_vg0"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:        }
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:    ],
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:    "1": [
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:        {
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "devices": [
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "/dev/loop4"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            ],
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_name": "ceph_lv1",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_size": "21470642176",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "name": "ceph_lv1",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "tags": {
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cluster_name": "ceph",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.crush_device_class": "",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.encrypted": "0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osd_id": "1",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.type": "block",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.vdo": "0"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            },
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "type": "block",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "vg_name": "ceph_vg1"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:        }
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:    ],
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:    "2": [
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:        {
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "devices": [
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "/dev/loop5"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            ],
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_name": "ceph_lv2",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_size": "21470642176",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "name": "ceph_lv2",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "tags": {
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.cluster_name": "ceph",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.crush_device_class": "",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.encrypted": "0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osd_id": "2",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.type": "block",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:                "ceph.vdo": "0"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            },
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "type": "block",
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:            "vg_name": "ceph_vg2"
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:        }
Oct  3 10:38:40 compute-0 romantic_clarke[467723]:    ]
Oct  3 10:38:40 compute-0 romantic_clarke[467723]: }
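The JSON that romantic_clarke just printed has the shape of `ceph-volume lvm list --format json` output: one key per OSD id, each holding the LV records for that OSD. A minimal sketch that walks it, assuming the blob above were saved to a hypothetical lvm_list.json:

    import json

    # Hypothetical file holding the container output printed above.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    total_bytes = 0
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            total_bytes += int(lv["lv_size"])
            print(osd_id, lv["lv_path"], lv["devices"][0],
                  lv["tags"]["ceph.osd_fsid"])

    # 3 x 21470642176 bytes = 64411926528 (~60 GiB), which matches both the
    # pg_autoscaler capacity figure and the "60 GiB / 60 GiB avail" pgmap
    # entries elsewhere in this log.
    print(total_bytes)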
Oct  3 10:38:40 compute-0 systemd[1]: libpod-2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a.scope: Deactivated successfully.
Oct  3 10:38:40 compute-0 podman[467732]: 2025-10-03 10:38:40.575531639 +0000 UTC m=+0.042023147 container died 2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-e555345dfa7a5da8afae7f0b38dbc9147e4e3eaf89bca618f33cb65a808744df-merged.mount: Deactivated successfully.
Oct  3 10:38:40 compute-0 podman[467732]: 2025-10-03 10:38:40.660859502 +0000 UTC m=+0.127350950 container remove 2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:38:40 compute-0 systemd[1]: libpod-conmon-2cda12934a963b29d1fd404380408a9fa75477e57ddefdb7c6276baa3e66f05a.scope: Deactivated successfully.
Oct  3 10:38:41 compute-0 podman[467880]: 2025-10-03 10:38:41.464522403 +0000 UTC m=+0.058957989 container create 2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:38:41 compute-0 systemd[1]: Started libpod-conmon-2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9.scope.
Oct  3 10:38:41 compute-0 podman[467880]: 2025-10-03 10:38:41.436159455 +0000 UTC m=+0.030595081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:38:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:38:41 compute-0 podman[467880]: 2025-10-03 10:38:41.576584123 +0000 UTC m=+0.171019689 container init 2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:38:41 compute-0 podman[467880]: 2025-10-03 10:38:41.588526426 +0000 UTC m=+0.182961992 container start 2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:38:41 compute-0 podman[467880]: 2025-10-03 10:38:41.592463511 +0000 UTC m=+0.186899097 container attach 2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:38:41 compute-0 nostalgic_heyrovsky[467896]: 167 167
Oct  3 10:38:41 compute-0 systemd[1]: libpod-2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9.scope: Deactivated successfully.
Oct  3 10:38:41 compute-0 podman[467880]: 2025-10-03 10:38:41.596391797 +0000 UTC m=+0.190827373 container died 2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42b78aa8195cba681cf258e0ba0b3fea0d51d6442d1fdbfe59cd28cdfc8c69b-merged.mount: Deactivated successfully.
Oct  3 10:38:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:38:41.632 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:38:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:38:41.634 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:38:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:38:41.636 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:38:41 compute-0 podman[467880]: 2025-10-03 10:38:41.644336913 +0000 UTC m=+0.238772479 container remove 2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:38:41 compute-0 systemd[1]: libpod-conmon-2e10b4df90193411aa2a8951035a39b89a8d77ad7607bd949a11b85b883a94b9.scope: Deactivated successfully.
Oct  3 10:38:41 compute-0 podman[467920]: 2025-10-03 10:38:41.870502167 +0000 UTC m=+0.072586476 container create e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_merkle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:38:41 compute-0 podman[467920]: 2025-10-03 10:38:41.844615198 +0000 UTC m=+0.046699547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:38:41 compute-0 systemd[1]: Started libpod-conmon-e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2.scope.
Oct  3 10:38:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a206c7acd4212078b9936cbf8a4105c8e8588218c9167f59694b53c068ee68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a206c7acd4212078b9936cbf8a4105c8e8588218c9167f59694b53c068ee68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a206c7acd4212078b9936cbf8a4105c8e8588218c9167f59694b53c068ee68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a206c7acd4212078b9936cbf8a4105c8e8588218c9167f59694b53c068ee68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:38:41 compute-0 podman[467920]: 2025-10-03 10:38:41.999098186 +0000 UTC m=+0.201182505 container init e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:38:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:42 compute-0 podman[467920]: 2025-10-03 10:38:42.02357601 +0000 UTC m=+0.225660319 container start e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 10:38:42 compute-0 podman[467920]: 2025-10-03 10:38:42.028708435 +0000 UTC m=+0.230792814 container attach e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_merkle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 10:38:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:42 compute-0 nova_compute[351685]: 2025-10-03 10:38:42.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:43 compute-0 crazy_merkle[467935]: {
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "osd_id": 1,
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "type": "bluestore"
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:    },
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "osd_id": 2,
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "type": "bluestore"
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:    },
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "osd_id": 0,
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:        "type": "bluestore"
Oct  3 10:38:43 compute-0 crazy_merkle[467935]:    }
Oct  3 10:38:43 compute-0 crazy_merkle[467935]: }
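This second blob, keyed by OSD UUID rather than OSD id, has the shape of `ceph-volume raw list` output. The two listings should agree: each LV record's ceph.osd_fsid tag names a key here, and the device-mapper path names the same VG/LV. A sketch of that cross-check, reusing the hypothetical captures:

    import json

    # Hypothetical files holding the two container outputs above.
    lvm = json.load(open("lvm_list.json"))  # keyed by OSD id
    raw = json.load(open("raw_list.json"))  # keyed by OSD UUID

    for osd_id, lvs in lvm.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        # e.g. OSD 0 -> 25b10821-... -> /dev/mapper/ceph_vg0-ceph_lv0
        assert raw[fsid]["osd_id"] == int(osd_id)
        print(osd_id, fsid, raw[fsid]["device"])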
Oct  3 10:38:43 compute-0 nova_compute[351685]: 2025-10-03 10:38:43.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:43 compute-0 systemd[1]: libpod-e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2.scope: Deactivated successfully.
Oct  3 10:38:43 compute-0 systemd[1]: libpod-e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2.scope: Consumed 1.094s CPU time.
Oct  3 10:38:43 compute-0 podman[467968]: 2025-10-03 10:38:43.185724464 +0000 UTC m=+0.041004854 container died e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a206c7acd4212078b9936cbf8a4105c8e8588218c9167f59694b53c068ee68-merged.mount: Deactivated successfully.
Oct  3 10:38:43 compute-0 podman[467968]: 2025-10-03 10:38:43.587909097 +0000 UTC m=+0.443189477 container remove e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_merkle, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:38:43 compute-0 systemd[1]: libpod-conmon-e1053ad0ec10373d37884245c9de5888038542b417f9290b3328f8817330a8e2.scope: Deactivated successfully.
Oct  3 10:38:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:38:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:38:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:38:43 compute-0 podman[467969]: 2025-10-03 10:38:43.663140157 +0000 UTC m=+0.510496583 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:38:43 compute-0 podman[467970]: 2025-10-03 10:38:43.669396207 +0000 UTC m=+0.511792845 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, release-0.7.12=, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:38:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:38:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1b5c8e65-1971-4f73-b4bc-8a65203f3530 does not exist
Oct  3 10:38:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0da3d80b-68bc-4ec9-a214-8cfb9f9e226a does not exist
Oct  3 10:38:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:38:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:38:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:38:46
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.data']
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:38:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:38:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:47 compute-0 nova_compute[351685]: 2025-10-03 10:38:47.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:48 compute-0 nova_compute[351685]: 2025-10-03 10:38:48.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:52 compute-0 nova_compute[351685]: 2025-10-03 10:38:52.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:53 compute-0 nova_compute[351685]: 2025-10-03 10:38:53.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:38:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1123763474' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:38:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:38:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1123763474' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:38:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:38:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
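These pg_autoscaler figures are reproducible by hand: each logged pg target is capacity ratio x bias x a PG budget, and with the module's default mon_target_pg_per_osd of 100 times the three OSDs seen above (a 300-PG budget; the per-OSD default is an assumption, though it is consistent with every line here) the numbers work out exactly, before being quantized to a power of two. For example:

    # Ratios and biases copied verbatim from the pg_autoscaler lines above;
    # the 100-PGs-per-OSD default and the 3-OSD count are assumptions.
    budget = 100 * 3
    pools = {
        "vms":                (0.000551649390343166, 1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * budget)
    # vms -> ~0.165495, images -> ~0.0760036, meta -> ~0.000610471,
    # matching the logged "pg target" values before quantization.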
Oct  3 10:38:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:56 compute-0 podman[468073]: 2025-10-03 10:38:56.844197766 +0000 UTC m=+0.101073238 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct  3 10:38:56 compute-0 podman[468072]: 2025-10-03 10:38:56.858107482 +0000 UTC m=+0.119906433 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public)
Oct  3 10:38:56 compute-0 podman[468074]: 2025-10-03 10:38:56.876603264 +0000 UTC m=+0.129875001 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:38:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:38:57 compute-0 nova_compute[351685]: 2025-10-03 10:38:57.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:38:58 compute-0 nova_compute[351685]: 2025-10-03 10:38:58.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:38:59 compute-0 podman[157165]: time="2025-10-03T10:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:38:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:38:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9080 "" "Go-http-client/1.1"
Oct  3 10:38:59 compute-0 podman[468135]: 2025-10-03 10:38:59.832714379 +0000 UTC m=+0.078495205 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:39:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:01 compute-0 openstack_network_exporter[367524]: ERROR   10:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:39:01 compute-0 openstack_network_exporter[367524]: ERROR   10:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:39:01 compute-0 openstack_network_exporter[367524]: ERROR   10:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:39:01 compute-0 openstack_network_exporter[367524]: ERROR   10:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:39:01 compute-0 openstack_network_exporter[367524]: ERROR   10:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:39:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:02 compute-0 nova_compute[351685]: 2025-10-03 10:39:02.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:02 compute-0 podman[468156]: 2025-10-03 10:39:02.841307114 +0000 UTC m=+0.075146507 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  3 10:39:02 compute-0 podman[468154]: 2025-10-03 10:39:02.85926257 +0000 UTC m=+0.107497885 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:39:02 compute-0 podman[468155]: 2025-10-03 10:39:02.886695718 +0000 UTC m=+0.120322085 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  3 10:39:03 compute-0 nova_compute[351685]: 2025-10-03 10:39:03.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:04 compute-0 nova_compute[351685]: 2025-10-03 10:39:04.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:04 compute-0 nova_compute[351685]: 2025-10-03 10:39:04.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:39:04 compute-0 nova_compute[351685]: 2025-10-03 10:39:04.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:39:05 compute-0 nova_compute[351685]: 2025-10-03 10:39:05.450 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:39:05 compute-0 nova_compute[351685]: 2025-10-03 10:39:05.451 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:39:05 compute-0 nova_compute[351685]: 2025-10-03 10:39:05.451 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:39:05 compute-0 nova_compute[351685]: 2025-10-03 10:39:05.452 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:39:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:06 compute-0 nova_compute[351685]: 2025-10-03 10:39:06.978 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:39:07 compute-0 nova_compute[351685]: 2025-10-03 10:39:07.011 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:39:07 compute-0 nova_compute[351685]: 2025-10-03 10:39:07.011 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:39:07 compute-0 nova_compute[351685]: 2025-10-03 10:39:07.012 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:07 compute-0 nova_compute[351685]: 2025-10-03 10:39:07.012 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:39:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.292282) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487947292309, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 763, "num_deletes": 250, "total_data_size": 989460, "memory_usage": 1003872, "flush_reason": "Manual Compaction"}
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487947300825, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 625436, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43266, "largest_seqno": 44028, "table_properties": {"data_size": 622142, "index_size": 1138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8720, "raw_average_key_size": 20, "raw_value_size": 615177, "raw_average_value_size": 1454, "num_data_blocks": 51, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487882, "oldest_key_time": 1759487882, "file_creation_time": 1759487947, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 8600 microseconds, and 3185 cpu microseconds.
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.300880) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 625436 bytes OK
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.300898) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.304859) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.304875) EVENT_LOG_v1 {"time_micros": 1759487947304870, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.304894) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 985597, prev total WAL file size 985597, number of live WAL files 2.
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.306348) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373534' seq:72057594037927935, type:22 .. '6D6772737461740032303035' seq:0, type:0; will stop at (end)
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(610KB)], [101(10181KB)]
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487947306448, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11050909, "oldest_snapshot_seqno": -1}
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5938 keys, 8106277 bytes, temperature: kUnknown
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487947361193, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8106277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8069178, "index_size": 21181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 154187, "raw_average_key_size": 25, "raw_value_size": 7964084, "raw_average_value_size": 1341, "num_data_blocks": 848, "num_entries": 5938, "num_filter_entries": 5938, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487947, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.361660) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8106277 bytes
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.364666) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.0 rd, 147.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(30.6) write-amplify(13.0) OK, records in: 6423, records dropped: 485 output_compression: NoCompression
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.364703) EVENT_LOG_v1 {"time_micros": 1759487947364687, "job": 60, "event": "compaction_finished", "compaction_time_micros": 54977, "compaction_time_cpu_micros": 26571, "output_level": 6, "num_output_files": 1, "total_output_size": 8106277, "num_input_records": 6423, "num_output_records": 5938, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487947365150, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487947368212, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.306098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.368688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.368722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.368725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.368727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:07.368729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:07 compute-0 nova_compute[351685]: 2025-10-03 10:39:07.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:08 compute-0 nova_compute[351685]: 2025-10-03 10:39:08.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:09 compute-0 podman[468217]: 2025-10-03 10:39:09.883974892 +0000 UTC m=+0.129932433 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Oct  3 10:39:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:10 compute-0 nova_compute[351685]: 2025-10-03 10:39:10.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:10 compute-0 nova_compute[351685]: 2025-10-03 10:39:10.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.567524) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487951567557, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 291, "num_deletes": 251, "total_data_size": 69902, "memory_usage": 75368, "flush_reason": "Manual Compaction"}
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487951571713, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 69375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44029, "largest_seqno": 44319, "table_properties": {"data_size": 67447, "index_size": 156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4987, "raw_average_key_size": 18, "raw_value_size": 63684, "raw_average_value_size": 234, "num_data_blocks": 7, "num_entries": 272, "num_filter_entries": 272, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487947, "oldest_key_time": 1759487947, "file_creation_time": 1759487951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 4564 microseconds, and 1734 cpu microseconds.
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.572080) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 69375 bytes OK
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.572110) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.575040) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.575063) EVENT_LOG_v1 {"time_micros": 1759487951575056, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.575086) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 67758, prev total WAL file size 67758, number of live WAL files 2.
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.576729) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(67KB)], [104(7916KB)]
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487951576823, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 8175652, "oldest_snapshot_seqno": -1}
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5701 keys, 6439585 bytes, temperature: kUnknown
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487951635720, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 6439585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6405666, "index_size": 18577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 149955, "raw_average_key_size": 26, "raw_value_size": 6306240, "raw_average_value_size": 1106, "num_data_blocks": 729, "num_entries": 5701, "num_filter_entries": 5701, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759487951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.636503) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 6439585 bytes
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.638500) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.6 rd, 109.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.7 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(210.7) write-amplify(92.8) OK, records in: 6210, records dropped: 509 output_compression: NoCompression
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.638552) EVENT_LOG_v1 {"time_micros": 1759487951638533, "job": 62, "event": "compaction_finished", "compaction_time_micros": 59000, "compaction_time_cpu_micros": 29995, "output_level": 6, "num_output_files": 1, "total_output_size": 6439585, "num_input_records": 6210, "num_output_records": 5701, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487951638884, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759487951642214, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.575866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.642758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.642766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.642769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.642772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:11 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:39:11.642775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:39:11 compute-0 nova_compute[351685]: 2025-10-03 10:39:11.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:11 compute-0 nova_compute[351685]: 2025-10-03 10:39:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:11 compute-0 nova_compute[351685]: 2025-10-03 10:39:11.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:39:11 compute-0 nova_compute[351685]: 2025-10-03 10:39:11.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:39:11 compute-0 nova_compute[351685]: 2025-10-03 10:39:11.758 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:39:11 compute-0 nova_compute[351685]: 2025-10-03 10:39:11.758 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:39:11 compute-0 nova_compute[351685]: 2025-10-03 10:39:11.759 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:39:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:39:12 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/925064969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.264 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:39:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.356 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.356 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.357 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.726 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.728 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3823MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.728 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.728 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.810 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.811 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.811 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:39:12 compute-0 nova_compute[351685]: 2025-10-03 10:39:12.851 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:39:13 compute-0 nova_compute[351685]: 2025-10-03 10:39:13.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:39:13 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127478764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:39:13 compute-0 nova_compute[351685]: 2025-10-03 10:39:13.312 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:39:13 compute-0 nova_compute[351685]: 2025-10-03 10:39:13.319 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:39:13 compute-0 nova_compute[351685]: 2025-10-03 10:39:13.335 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:39:13 compute-0 nova_compute[351685]: 2025-10-03 10:39:13.338 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:39:13 compute-0 nova_compute[351685]: 2025-10-03 10:39:13.338 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:39:13 compute-0 podman[468282]: 2025-10-03 10:39:13.818965798 +0000 UTC m=+0.074307190 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, io.openshift.expose-services=, io.openshift.tags=base rhel9, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc.)
Oct  3 10:39:13 compute-0 podman[468281]: 2025-10-03 10:39:13.822916095 +0000 UTC m=+0.086368247 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
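Both health_status events come from podman running each container's configured healthcheck (the 'test' command in config_data) and recording the outcome. A small sketch, not part of the deployment, that reads the same state back via `podman inspect`; container names are taken from the log:

```python
import json
import subprocess

def health(container: str) -> str:
    # `podman inspect` returns a JSON array with one object per container.
    out = subprocess.run(
        ["podman", "inspect", container],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    # Containers with a healthcheck expose Health.Status and a FailingStreak
    # counter, mirroring health_status=healthy / health_failing_streak=0 above.
    return state.get("Health", {}).get("Status", "no healthcheck")

for name in ("kepler", "podman_exporter"):
    print(name, health(name))
```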
Oct  3 10:39:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
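These pgmap DBG lines recur every couple of seconds for the rest of this capture. A hypothetical parser for them (the regex mirrors the exact line format shown here):

```python
import re

# Hypothetical helper, not part of ceph: extract the figures from the
# "pgmap vN: ..." debug lines repeated throughout this log.
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<detail>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v2157: 321 pgs: 321 active+clean; 78 MiB data, "
        "260 MiB used, 60 GiB / 60 GiB avail")
m = PGMAP.match(line)
assert m and m["pgs"] == "321" and m["detail"] == "321 active+clean"
print(m.groupdict())
```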
Oct  3 10:39:15 compute-0 nova_compute[351685]: 2025-10-03 10:39:15.339 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:15 compute-0 nova_compute[351685]: 2025-10-03 10:39:15.340 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:15 compute-0 nova_compute[351685]: 2025-10-03 10:39:15.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:39:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
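The monitor re-splits its memory budget periodically. Assuming inc_alloc, full_alloc, and kv_alloc are carved out of cache_size (the incremental-osdmap, full-osdmap, and rocksdb caches, which the figures are consistent with), a quick arithmetic check:

```python
# Figures copied from the _set_new_cache_sizes line above.
cache_size = 1020054731
inc_alloc = full_alloc = 348127232          # 332 MiB each
kv_alloc = 318767104                        # 304 MiB

allocated = inc_alloc + full_alloc + kv_alloc
print(f"{allocated} of {cache_size} bytes allocated "
      f"({allocated / cache_size:.1%})")    # -> ~99.5% of the budget
```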
Oct  3 10:39:17 compute-0 nova_compute[351685]: 2025-10-03 10:39:17.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
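The recurring "[POLLIN] on fd 25" DEBUG lines are the OVSDB IDL's event loop waking because its connection to ovsdb-server became readable. An illustration of the same readiness pattern using the stdlib select module (not the actual ovs.poller API):

```python
import select
import socket

# Two connected sockets stand in for the IDL's ovsdb-server connection.
a, b = socket.socketpair()
poller = select.poll()
poller.register(a.fileno(), select.POLLIN)

b.send(b"update")                      # peer writes -> fd becomes readable
for fd, events in poller.poll(1000):
    if events & select.POLLIN:
        # Analogous to the __log_wakeup line: the loop wakes, reads, repeats.
        print(f"[POLLIN] on fd {fd}:", a.recv(16))
```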
Oct  3 10:39:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:18 compute-0 nova_compute[351685]: 2025-10-03 10:39:18.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:22 compute-0 nova_compute[351685]: 2025-10-03 10:39:22.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:23 compute-0 nova_compute[351685]: 2025-10-03 10:39:23.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:23 compute-0 nova_compute[351685]: 2025-10-03 10:39:23.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:39:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:27 compute-0 nova_compute[351685]: 2025-10-03 10:39:27.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:27 compute-0 podman[468323]: 2025-10-03 10:39:27.828383179 +0000 UTC m=+0.078694261 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 10:39:27 compute-0 podman[468324]: 2025-10-03 10:39:27.85558615 +0000 UTC m=+0.093391171 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 10:39:27 compute-0 podman[468325]: 2025-10-03 10:39:27.918687862 +0000 UTC m=+0.156487103 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 10:39:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:28 compute-0 nova_compute[351685]: 2025-10-03 10:39:28.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:29 compute-0 podman[157165]: time="2025-10-03T10:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:39:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:39:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9077 "" "Go-http-client/1.1"
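The two GET requests are podman_exporter scraping the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config above). A sketch of the same containers/json call from Python; the endpoint path and socket path are both taken from the log:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that speaks HTTP over a UNIX domain socket."""
    def __init__(self, path: str):
        super().__init__("localhost")   # host is unused for a unix socket
        self._path = path

    def connect(self) -> None:
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```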
Oct  3 10:39:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:30 compute-0 podman[468384]: 2025-10-03 10:39:30.808362659 +0000 UTC m=+0.074886501 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  3 10:39:31 compute-0 openstack_network_exporter[367524]: ERROR   10:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:39:31 compute-0 openstack_network_exporter[367524]: ERROR   10:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:39:31 compute-0 openstack_network_exporter[367524]: ERROR   10:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:39:31 compute-0 openstack_network_exporter[367524]: ERROR   10:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:39:31 compute-0 openstack_network_exporter[367524]: ERROR   10:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
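The ovn-northd errors are expected on a compute node: northd runs on the control plane, so its *.ctl control socket never exists here. The ovsdb-server and datapath errors suggest the exporter is also probing components that are absent or not exposed through its mounted run directories. A quick way to see which control sockets actually exist (directories assumed from the exporter's volume mounts logged above):

```python
from glob import glob

# Compute nodes normally expose ovs-vswitchd / local ovsdb / ovn-controller
# sockets, but never an ovn-northd one.
for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
    print(pattern, "->", glob(pattern))
```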
Oct  3 10:39:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:32 compute-0 nova_compute[351685]: 2025-10-03 10:39:32.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:33 compute-0 nova_compute[351685]: 2025-10-03 10:39:33.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:33 compute-0 podman[468402]: 2025-10-03 10:39:33.838227265 +0000 UTC m=+0.094240689 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
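node_exporter's systemd collector is restricted with --collector.systemd.unit-include. A quick check of what that pattern (copied from the command line above) matches, assuming the anchored matching node_exporter applies to its collector regexes:

```python
import re

# Pattern copied from the node_exporter config_data above.
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ("edpm_nova_compute.service", "openvswitch.service",
             "virtqemud.service", "sshd.service"):
    print(unit, bool(unit_include.fullmatch(unit)))
# Only the first three match; unrelated units like sshd are skipped.
```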
Oct  3 10:39:33 compute-0 podman[468403]: 2025-10-03 10:39:33.84304706 +0000 UTC m=+0.093362332 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:39:33 compute-0 podman[468405]: 2025-10-03 10:39:33.853523135 +0000 UTC m=+0.085710886 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3)
Oct  3 10:39:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:37 compute-0 nova_compute[351685]: 2025-10-03 10:39:37.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:38 compute-0 nova_compute[351685]: 2025-10-03 10:39:38.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:40 compute-0 podman[468462]: 2025-10-03 10:39:40.8334362 +0000 UTC m=+0.080729066 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.892 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.893 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
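These two DEBUG lines describe the agent's execution model: every pollster from the [pollsters] source is queued onto a worker pool that here has a single thread, which is why the manager warns the cycle may run longer than the polling interval. A minimal sketch of that pattern (pollster names taken from the cycle below; the real agent also wires in discovery, caches, and heartbeats):

```python
from concurrent.futures import ThreadPoolExecutor

pollsters = ["network.outgoing.packets.drop", "network.outgoing.packets.error",
             "disk.device.capacity", "disk.device.read.bytes"]

def poll(name: str) -> str:
    return f"polled {name}"          # stand-in for the real pollster call

# max_workers=1 mirrors "Processing pollsters ... with [1] threads": the
# pollsters run strictly one after another on the single worker.
with ThreadPoolExecutor(max_workers=1) as executor:
    for result in executor.map(poll, pollsters):
        print(result)
```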
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.893 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92dccd40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.900 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
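This discovery payload is what every pollster in the rest of the cycle consumes. A trimmed copy showing the fields the samples below are keyed on (values copied from the log):

```python
# Subset of the discover_libvirt_polling payload logged above.
instance = {
    "id": "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
    "name": "test_0",
    "OS-EXT-SRV-ATTR:instance_name": "instance-00000001",
    "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512,
               "disk": 1, "ephemeral": 1, "swap": 0},
}

# Each sample below is tagged <instance id>/<meter name>:
print(f"{instance['id']}/disk.device.capacity")
```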
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.901 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:39:40.901109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.907 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.908 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.908 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.908 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:39:40.908384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.909 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:39:40.909820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.909 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.934 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.934 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.935 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
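The three capacity samples line up with the flavor from the discovery payload: two 1 GiB block devices for the 1 GiB root disk and 1 GiB ephemeral disk, plus a much smaller third device the log does not identify (a config drive would be a plausible guess). The arithmetic:

```python
GiB = 1024 ** 3
samples = [1073741824, 1073741824, 485376]   # volumes from the samples above
for v in samples:
    print(f"{v} bytes = {v / GiB:.6f} GiB")
assert samples[0] == 1 * GiB                  # matches flavor disk=1
```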
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.935 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.935 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:39:40.935924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.980 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.981 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:39:40.980987) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.981 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.982 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.982 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.982 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:39:40.982437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.983 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.983 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.983 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:39:40.983801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.985 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.985 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:39:40.985519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.986 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.986 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.986 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.987 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.988 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.988 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.988 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:39:40.986970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:40.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:39:40.988526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
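[annotation] The power.state sample above reports volume 1. Ceilometer forwards the instance power state as a bare integer; a minimal decoder, assuming the conventional nova.compute.power_state values (this mapping is an assumption reproduced from memory, not taken from this log, so verify it against the deployed nova release):

# Illustrative decoder for ceilometer power.state samples.
# Assumed mapping per nova.compute.power_state; check your nova version.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",    # matches the "volume: 1" sample above
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

def describe_power_state(volume: int) -> str:
    return POWER_STATES.get(volume, f"UNKNOWN({volume})")

print(describe_power_state(1))  # -> RUNNING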
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:39:41.013284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.015 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:39:41.015757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.017 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.018 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:39:41.018192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.019 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:39:41.020570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:39:41.021370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 63170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.024 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:39:41.022381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:39:41.023170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:39:41.023946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:39:41.024864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:39:41.025886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.026 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:39:41.026840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.027 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:39:41.027812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:39:41.028858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:39:41.029781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.030 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.030 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.031 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:39:41.032 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
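[annotation] The block above is one complete polling cycle: for each pollster the manager runs discovery, checks whether the source requires coordination (no hashrings are configured here, so it never does), records a heartbeat, converts the libvirt stats to samples, and finally marks the pollster processed. A minimal sketch of that control flow, with simplified stand-ins for ceilometer's AgentManager and pollster extensions (all class and method names below are illustrative, not ceilometer's actual API):

# Sketch of the per-pollster cycle visible in the log above.
# Illustrative only; the real code lives in ceilometer/polling/manager.py.
from datetime import datetime, timezone

class Pollster:
    def __init__(self, name):
        self.name = name

    def get_samples(self, resources):
        # Stand-in for _stats_to_sample(): one numeric volume per resource.
        return [(res, 0) for res in resources]

class AgentManager:
    def __init__(self, pollsters):
        self.pollsters = pollsters
        self.heartbeats = {}

    def discover(self):
        # Stand-in for the [local_instances] discovery method.
        return ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"]

    def needs_coordination(self, pollster):
        # The log shows no hashrings configured, so this is always False here.
        return False

    def run_cycle(self):
        for pollster in self.pollsters:
            resources = self.discover()          # "Executing discovery process"
            if self.needs_coordination(pollster):
                continue                         # would defer to the hashring
            self.heartbeats[pollster.name] = datetime.now(timezone.utc)
            for res, volume in pollster.get_samples(resources):
                print(f"{res}/{pollster.name} volume: {volume}")
            print(f"Finished polling pollster {pollster.name}")

AgentManager([Pollster("disk.device.usage"), Pollster("cpu")]).run_cycle()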
Oct  3 10:39:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:39:41.635 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:39:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:39:41.635 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:39:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:39:41.635 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
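
The three oslo_concurrency lines above trace one guarded call: "Acquiring lock", "acquired :: waited 0.000s", "released :: held 0.000s". A minimal sketch of the pattern that produces this trace, using oslo.concurrency's lockutils decorator (a real API); the monitor function body here is hypothetical:

    # Sketch only: reproduces the acquire/release trace seen in the journal,
    # assuming oslo.concurrency is installed and DEBUG logging is enabled.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # lockutils logs "Acquiring lock ..." before entry, then
        # 'Lock ... acquired ... :: waited Ns' once held, and
        # 'Lock ... "released" ... :: held Ns' on exit, all at DEBUG.
        pass
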
Oct  3 10:39:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:42 compute-0 nova_compute[351685]: 2025-10-03 10:39:42.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:43 compute-0 nova_compute[351685]: 2025-10-03 10:39:43.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:44 compute-0 podman[468507]: 2025-10-03 10:39:44.051071601 +0000 UTC m=+0.071896493 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:39:44 compute-0 podman[468508]: 2025-10-03 10:39:44.129501623 +0000 UTC m=+0.144675874 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, version=9.4, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
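
Both health_status=healthy events above come from podman's periodic healthchecks of the edpm-managed containers, whose 'healthcheck' test commands appear in the config_data. A hedged sketch of an equivalent manual probe; `podman healthcheck run` and `podman inspect --format` are real CLI calls, though the .State.Health.Status field path may vary across podman versions:

    # Sketch: manually probe the two containers named in the log above.
    import subprocess

    for name in ("podman_exporter", "kepler"):
        # Exit code 0 means the container's configured healthcheck passed.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        status = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True,
        ).stdout.strip()
        print(name, "exit:", rc, "status:", status)
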
Oct  3 10:39:45 compute-0 podman[468694]: 2025-10-03 10:39:45.034438859 +0000 UTC m=+0.105477679 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:39:45 compute-0 podman[468694]: 2025-10-03 10:39:45.12877009 +0000 UTC m=+0.199808920 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:39:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:39:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:39:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:39:46
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:39:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:39:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:39:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:39:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:39:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:39:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:39:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:39:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 64f50b22-1b96-4ce0-a267-71a36dd03917 does not exist
Oct  3 10:39:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8a2f0479-0184-42fa-968b-77e4bd4c520d does not exist
Oct  3 10:39:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2481b7a8-0454-48f3-b539-bc1df47c5e31 does not exist
Oct  3 10:39:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:39:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:39:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:39:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:39:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:39:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:39:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:47 compute-0 nova_compute[351685]: 2025-10-03 10:39:47.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:48 compute-0 podman[469114]: 2025-10-03 10:39:47.944355935 +0000 UTC m=+0.045086095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:39:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:48 compute-0 nova_compute[351685]: 2025-10-03 10:39:48.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:39:48 compute-0 podman[469114]: 2025-10-03 10:39:48.398658156 +0000 UTC m=+0.499388256 container create 6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:39:48 compute-0 systemd[1]: Started libpod-conmon-6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e.scope.
Oct  3 10:39:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:39:48 compute-0 podman[469114]: 2025-10-03 10:39:48.533990321 +0000 UTC m=+0.634720441 container init 6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:39:48 compute-0 podman[469114]: 2025-10-03 10:39:48.552886826 +0000 UTC m=+0.653616896 container start 6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:39:48 compute-0 podman[469114]: 2025-10-03 10:39:48.559914922 +0000 UTC m=+0.660645062 container attach 6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:39:48 compute-0 elastic_agnesi[469130]: 167 167
Oct  3 10:39:48 compute-0 systemd[1]: libpod-6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e.scope: Deactivated successfully.
Oct  3 10:39:48 compute-0 podman[469114]: 2025-10-03 10:39:48.56639786 +0000 UTC m=+0.667127930 container died 6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:39:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac5a3848a750aa5b6d7676ccf50103dafe3394abf46819ee3e3d27b9f13fbf4b-merged.mount: Deactivated successfully.
Oct  3 10:39:48 compute-0 podman[469114]: 2025-10-03 10:39:48.637845928 +0000 UTC m=+0.738576008 container remove 6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:39:48 compute-0 systemd[1]: libpod-conmon-6d6a4a8a3e164033a0e215a0be7eaf592639026e57f62b0c6a7734112de2c95e.scope: Deactivated successfully.
Oct  3 10:39:48 compute-0 podman[469153]: 2025-10-03 10:39:48.873996022 +0000 UTC m=+0.056784960 container create 88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 10:39:48 compute-0 systemd[1]: Started libpod-conmon-88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc.scope.
Oct  3 10:39:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:39:48 compute-0 podman[469153]: 2025-10-03 10:39:48.852114301 +0000 UTC m=+0.034903279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808549cd63719ba6a68fd73d17a6463d309173c94cf5ee5d8127b2ffffd7f3a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808549cd63719ba6a68fd73d17a6463d309173c94cf5ee5d8127b2ffffd7f3a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808549cd63719ba6a68fd73d17a6463d309173c94cf5ee5d8127b2ffffd7f3a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808549cd63719ba6a68fd73d17a6463d309173c94cf5ee5d8127b2ffffd7f3a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808549cd63719ba6a68fd73d17a6463d309173c94cf5ee5d8127b2ffffd7f3a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:49 compute-0 podman[469153]: 2025-10-03 10:39:49.018923494 +0000 UTC m=+0.201712492 container init 88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elbakyan, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:39:49 compute-0 podman[469153]: 2025-10-03 10:39:49.03504073 +0000 UTC m=+0.217829718 container start 88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elbakyan, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:39:49 compute-0 podman[469153]: 2025-10-03 10:39:49.041741885 +0000 UTC m=+0.224530863 container attach 88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 10:39:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:50 compute-0 sweet_elbakyan[469169]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:39:50 compute-0 sweet_elbakyan[469169]: --> relative data size: 1.0
Oct  3 10:39:50 compute-0 sweet_elbakyan[469169]: --> All data devices are unavailable
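
The sweet_elbakyan container is a short-lived cephadm run of ceph-volume: it was passed 3 LVM data devices, and "All data devices are unavailable" means none are eligible for new OSDs (they already carry OSD data, as the listing further below confirms). A hedged sketch for inspecting why devices are rejected; `ceph-volume inventory --format json` is a real subcommand, but the "available"/"rejected_reasons" field names should be verified against your ceph-volume version:

    # Sketch: list devices ceph-volume will not consume, with reasons.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev["path"], "rejected:",
                  ", ".join(dev.get("rejected_reasons", [])))
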
Oct  3 10:39:50 compute-0 systemd[1]: libpod-88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc.scope: Deactivated successfully.
Oct  3 10:39:50 compute-0 systemd[1]: libpod-88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc.scope: Consumed 1.223s CPU time.
Oct  3 10:39:50 compute-0 podman[469153]: 2025-10-03 10:39:50.325903657 +0000 UTC m=+1.508692655 container died 88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-808549cd63719ba6a68fd73d17a6463d309173c94cf5ee5d8127b2ffffd7f3a5-merged.mount: Deactivated successfully.
Oct  3 10:39:50 compute-0 podman[469153]: 2025-10-03 10:39:50.403811872 +0000 UTC m=+1.586600820 container remove 88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:39:50 compute-0 systemd[1]: libpod-conmon-88611eb19b0ad4b53f89f273b66f65c362a6cdf012169c6e7df23e5de3bc3edc.scope: Deactivated successfully.
Oct  3 10:39:51 compute-0 podman[469348]: 2025-10-03 10:39:51.302054423 +0000 UTC m=+0.104700294 container create cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:39:51 compute-0 podman[469348]: 2025-10-03 10:39:51.227773645 +0000 UTC m=+0.030419536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:39:51 compute-0 systemd[1]: Started libpod-conmon-cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7.scope.
Oct  3 10:39:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:39:51 compute-0 podman[469348]: 2025-10-03 10:39:51.436043455 +0000 UTC m=+0.238689326 container init cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:39:51 compute-0 podman[469348]: 2025-10-03 10:39:51.449820507 +0000 UTC m=+0.252466378 container start cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:39:51 compute-0 podman[469348]: 2025-10-03 10:39:51.45427959 +0000 UTC m=+0.256925461 container attach cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:39:51 compute-0 pensive_gould[469364]: 167 167
Oct  3 10:39:51 compute-0 systemd[1]: libpod-cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7.scope: Deactivated successfully.
Oct  3 10:39:51 compute-0 podman[469348]: 2025-10-03 10:39:51.457975038 +0000 UTC m=+0.260620909 container died cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-754d8c2ce17dc3c82648fa9eedf1b08c3858791b88e34075088d90ffe75e5c92-merged.mount: Deactivated successfully.
Oct  3 10:39:51 compute-0 podman[469348]: 2025-10-03 10:39:51.508570349 +0000 UTC m=+0.311216220 container remove cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:39:51 compute-0 systemd[1]: libpod-conmon-cb1e3b3e5be417be5793c5e4c06d71a038094c000e078784963526b70608b6f7.scope: Deactivated successfully.
Oct  3 10:39:51 compute-0 podman[469387]: 2025-10-03 10:39:51.751478999 +0000 UTC m=+0.089385894 container create 3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:39:51 compute-0 podman[469387]: 2025-10-03 10:39:51.719628379 +0000 UTC m=+0.057535354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:39:51 compute-0 systemd[1]: Started libpod-conmon-3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49.scope.
Oct  3 10:39:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13ad787a5e72f6d7984af275dfd39ee3f162420360182181d8c7edbb7b9b304/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13ad787a5e72f6d7984af275dfd39ee3f162420360182181d8c7edbb7b9b304/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13ad787a5e72f6d7984af275dfd39ee3f162420360182181d8c7edbb7b9b304/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a13ad787a5e72f6d7984af275dfd39ee3f162420360182181d8c7edbb7b9b304/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:51 compute-0 podman[469387]: 2025-10-03 10:39:51.913693635 +0000 UTC m=+0.251600550 container init 3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:39:51 compute-0 podman[469387]: 2025-10-03 10:39:51.9251021 +0000 UTC m=+0.263008985 container start 3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:39:51 compute-0 podman[469387]: 2025-10-03 10:39:51.929600994 +0000 UTC m=+0.267507929 container attach 3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:39:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:52 compute-0 nova_compute[351685]: 2025-10-03 10:39:52.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]: {
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:    "0": [
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:        {
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "devices": [
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "/dev/loop3"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            ],
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_name": "ceph_lv0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_size": "21470642176",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "name": "ceph_lv0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "tags": {
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cluster_name": "ceph",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.crush_device_class": "",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.encrypted": "0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osd_id": "0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.type": "block",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.vdo": "0"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            },
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "type": "block",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "vg_name": "ceph_vg0"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:        }
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:    ],
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:    "1": [
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:        {
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "devices": [
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "/dev/loop4"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            ],
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_name": "ceph_lv1",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_size": "21470642176",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "name": "ceph_lv1",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "tags": {
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cluster_name": "ceph",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.crush_device_class": "",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.encrypted": "0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osd_id": "1",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.type": "block",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.vdo": "0"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            },
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "type": "block",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "vg_name": "ceph_vg1"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:        }
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:    ],
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:    "2": [
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:        {
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "devices": [
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "/dev/loop5"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            ],
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_name": "ceph_lv2",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_size": "21470642176",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "name": "ceph_lv2",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "tags": {
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.cluster_name": "ceph",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.crush_device_class": "",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.encrypted": "0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osd_id": "2",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.type": "block",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:                "ceph.vdo": "0"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            },
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "type": "block",
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:            "vg_name": "ceph_vg2"
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:        }
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]:    ]
Oct  3 10:39:52 compute-0 determined_goldwasser[469404]: }
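
The JSON emitted by determined_goldwasser matches the shape of `ceph-volume lvm list --format json`: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags expanded under "tags". A small sketch that reduces it to one line per OSD; the input file name is hypothetical:

    # Sketch: summarize the per-OSD LVM layout from the JSON above,
    # saved locally as ceph_volume_lvm_list.json (hypothetical name).
    import json

    with open("ceph_volume_lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"size={int(lv['lv_size']) / 2**30:.1f} GiB")
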
Oct  3 10:39:52 compute-0 systemd[1]: libpod-3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49.scope: Deactivated successfully.
Oct  3 10:39:52 compute-0 podman[469387]: 2025-10-03 10:39:52.786922435 +0000 UTC m=+1.124829420 container died 3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a13ad787a5e72f6d7984af275dfd39ee3f162420360182181d8c7edbb7b9b304-merged.mount: Deactivated successfully.
Oct  3 10:39:52 compute-0 podman[469387]: 2025-10-03 10:39:52.884901063 +0000 UTC m=+1.222807948 container remove 3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 10:39:52 compute-0 systemd[1]: libpod-conmon-3233f718d13b318d483ede2c9ca743c78adf4a90ab53b7fad9b70f6cb0848a49.scope: Deactivated successfully.
Oct  3 10:39:53 compute-0 nova_compute[351685]: 2025-10-03 10:39:53.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:53 compute-0 podman[469565]: 2025-10-03 10:39:53.752223133 +0000 UTC m=+0.053186344 container create 3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:39:53 compute-0 systemd[1]: Started libpod-conmon-3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117.scope.
Oct  3 10:39:53 compute-0 podman[469565]: 2025-10-03 10:39:53.734375032 +0000 UTC m=+0.035338273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:39:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:39:53 compute-0 podman[469565]: 2025-10-03 10:39:53.883584241 +0000 UTC m=+0.184547472 container init 3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 10:39:53 compute-0 podman[469565]: 2025-10-03 10:39:53.893570781 +0000 UTC m=+0.194534002 container start 3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:39:53 compute-0 podman[469565]: 2025-10-03 10:39:53.898083036 +0000 UTC m=+0.199046307 container attach 3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:39:53 compute-0 goofy_kare[469581]: 167 167
Oct  3 10:39:53 compute-0 systemd[1]: libpod-3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117.scope: Deactivated successfully.
Oct  3 10:39:53 compute-0 podman[469565]: 2025-10-03 10:39:53.905496963 +0000 UTC m=+0.206460174 container died 3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:39:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e446b7ee493d2439f794147336ffdf36c9a8b0af48f4a8b8e3df5dfadc15ec0-merged.mount: Deactivated successfully.
Oct  3 10:39:53 compute-0 podman[469565]: 2025-10-03 10:39:53.961165557 +0000 UTC m=+0.262128788 container remove 3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_kare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:39:53 compute-0 systemd[1]: libpod-conmon-3479c1d9414441efde49c0d3c7d7255a483b3907132853ff2f9f87000baac117.scope: Deactivated successfully.
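The create/init/start/attach/died/remove sequence above is a one-shot probe container: cephadm routinely launches throwaway containers from the managed image, captures their stdout, and removes them. Here the only output ("goofy_kare[469581]: 167 167") is a uid/gid pair matching the ceph user and group id 167 used in the upstream images. A minimal sketch of that style of probe; the stat invocation is a hypothetical reconstruction, since the actual command line is not logged:

    import subprocess

    # Hypothetical reconstruction of the one-shot probe: run a throwaway
    # container from the same image and print an owner uid/gid, which this
    # log shows as "167 167" (the ceph user/group in upstream images).
    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    out = subprocess.run(
        ['podman', 'run', '--rm', image, 'stat', '-c', '%u %g', '/var/lib/ceph'],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())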
Oct  3 10:39:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:39:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2260900508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:39:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:39:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2260900508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
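The two mon_command entries above show client.openstack polling cluster capacity ("df") and the quota on the "volumes" pool; the audit channel records the dispatch of each JSON command. The same calls can be issued through the librados Python binding; a sketch, assuming a readable /etc/ceph/ceph.conf and a keyring for client.openstack:

    import json
    import rados

    # Connect as the identity seen in the audit log above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    for payload in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
        # mon_command takes the JSON command string and an input buffer and
        # returns (retcode, output bytes, error string).
        ret, out, err = cluster.mon_command(json.dumps(payload), b'')
        print(payload["prefix"], '->', ret, out[:80])
    cluster.shutdown()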
Oct  3 10:39:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:54 compute-0 podman[469604]: 2025-10-03 10:39:54.246395333 +0000 UTC m=+0.106448181 container create 7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:39:54 compute-0 podman[469604]: 2025-10-03 10:39:54.196401661 +0000 UTC m=+0.056454519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:39:54 compute-0 systemd[1]: Started libpod-conmon-7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea.scope.
Oct  3 10:39:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891fefbd7babf987f35aef5a366197c24e107a83975e22cc5d98f7d05493964a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891fefbd7babf987f35aef5a366197c24e107a83975e22cc5d98f7d05493964a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891fefbd7babf987f35aef5a366197c24e107a83975e22cc5d98f7d05493964a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891fefbd7babf987f35aef5a366197c24e107a83975e22cc5d98f7d05493964a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:39:54 compute-0 podman[469604]: 2025-10-03 10:39:54.437641078 +0000 UTC m=+0.297693906 container init 7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_meninsky, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:39:54 compute-0 podman[469604]: 2025-10-03 10:39:54.454860459 +0000 UTC m=+0.314913277 container start 7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:39:54 compute-0 podman[469604]: 2025-10-03 10:39:54.461140581 +0000 UTC m=+0.321193409 container attach 7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]: {
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "osd_id": 1,
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "type": "bluestore"
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:    },
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "osd_id": 2,
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "type": "bluestore"
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:    },
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "osd_id": 0,
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:        "type": "bluestore"
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]:    }
Oct  3 10:39:55 compute-0 heuristic_meninsky[469620]: }
Oct  3 10:39:55 compute-0 systemd[1]: libpod-7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea.scope: Deactivated successfully.
Oct  3 10:39:55 compute-0 systemd[1]: libpod-7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea.scope: Consumed 1.099s CPU time.
Oct  3 10:39:55 compute-0 podman[469604]: 2025-10-03 10:39:55.558145028 +0000 UTC m=+1.418197846 container died 7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_meninsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:39:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-891fefbd7babf987f35aef5a366197c24e107a83975e22cc5d98f7d05493964a-merged.mount: Deactivated successfully.
Oct  3 10:39:55 compute-0 podman[469604]: 2025-10-03 10:39:55.620510666 +0000 UTC m=+1.480563484 container remove 7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:39:55 compute-0 systemd[1]: libpod-conmon-7abd709dbf26a236947e9912db4a7e975095a0cc634f37b74300ec9812e02cea.scope: Deactivated successfully.
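The JSON printed by heuristic_meninsky above is a device inventory: a map keyed by osd_uuid giving each OSD's id, LV device, and bluestore type, all under the single cluster fsid 9b4e8c9a-5555-5510-a631-4742a1182561. The config-key writes that follow (mgr/cephadm/host.compute-0.devices.0) suggest cephadm caches this result. A sketch of consuming that output, using one entry excerpted from the log:

    import json

    # Excerpt of the stdout captured from the one-shot container above.
    raw_output = '''{
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
        "type": "bluestore"
      }
    }'''

    inventory = json.loads(raw_output)
    for osd_uuid, osd in sorted(inventory.items(),
                                key=lambda kv: kv[1]['osd_id']):
        print(f"osd.{osd['osd_id']} on {osd['device']} ({osd['type']})")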
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
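The pg_autoscaler targets above are consistent with pg_target = usage_ratio * bias * (OSD count * mon_target_pg_per_osd), i.e. a factor of 300 here given 3 OSDs and the default target of 100 PGs per OSD, before quantization to a power of two (subject to per-pool minimums, which is why near-zero ratios still land on 16 or 32). A sketch of that arithmetic, reproducing two of the logged values; the 3 OSDs and the default of 100 are inferences from this log, not values the autoscaler prints:

    osds, target_per_osd = 3, 100

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * osds * target_per_osd

    print(pg_target(0.000551649390343166, 1.0))   # 'vms' -> 0.1654948171029498
    print(pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta' -> ~0.00061047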
Oct  3 10:39:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:39:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:39:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0767f473-8ffa-485f-a441-8d64772d5680 does not exist
Oct  3 10:39:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev dc34fb11-ea5f-4fb9-9858-4dcd8f5468a2 does not exist
Oct  3 10:39:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:39:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:39:57 compute-0 nova_compute[351685]: 2025-10-03 10:39:57.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:39:58 compute-0 nova_compute[351685]: 2025-10-03 10:39:58.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:39:58 compute-0 podman[469716]: 2025-10-03 10:39:58.898663495 +0000 UTC m=+0.142718252 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Oct  3 10:39:58 compute-0 podman[469717]: 2025-10-03 10:39:58.900598117 +0000 UTC m=+0.140046017 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 10:39:58 compute-0 podman[469718]: 2025-10-03 10:39:58.923437279 +0000 UTC m=+0.170063368 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
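The three health_status=healthy events above are podman health checks firing for the EDPM-managed containers; each config_data block names the test command that runs inside the container (e.g. '/openstack/healthcheck'). The same check can be invoked on demand; a sketch using container names taken from this log:

    import subprocess

    # Trigger the configured healthcheck once per container; podman exits
    # with status 0 when the container is healthy.
    for name in ('openstack_network_exporter', 'ceilometer_agent_compute',
                 'ovn_controller'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')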
Oct  3 10:39:59 compute-0 podman[157165]: time="2025-10-03T10:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:39:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:39:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9075 "" "Go-http-client/1.1"
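The podman[157165] lines above are the libpod REST API service logging a poller (a Go client, per the Go-http-client User-Agent) listing containers and fetching stats. A sketch of the same list call over the API socket, assuming the default root socket path /run/podman/podman.sock:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a unix socket, enough for the libpod REST API."""
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')  # assumed default path
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), 'bytes of container JSON')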
Oct  3 10:40:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:01 compute-0 openstack_network_exporter[367524]: ERROR   10:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:40:01 compute-0 openstack_network_exporter[367524]: ERROR   10:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:40:01 compute-0 openstack_network_exporter[367524]: ERROR   10:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:40:01 compute-0 openstack_network_exporter[367524]: ERROR   10:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:40:01 compute-0 openstack_network_exporter[367524]: ERROR   10:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
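The exporter errors above recur because appctl finds no control socket for the daemons it probes, likely because ovn-northd does not run on a compute node and the OVS database server's rundir is not visible where the exporter looks; the dpif-netdev/pmd-*-show calls additionally require a userspace (DPDK) datapath, which this kernel-datapath host does not have. A sketch of the equivalent socket check, assuming the conventional rundir layout:

    import glob

    # OVS/OVN daemons expose appctl control sockets as <name>.<pid>.ctl in
    # their rundir; absence produces the "no control socket files found"
    # errors above.
    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits or 'missing (expected on a compute-only node)')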
Oct  3 10:40:01 compute-0 podman[469777]: 2025-10-03 10:40:01.847951781 +0000 UTC m=+0.098802475 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct  3 10:40:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:02 compute-0 nova_compute[351685]: 2025-10-03 10:40:02.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:03 compute-0 nova_compute[351685]: 2025-10-03 10:40:03.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:04 compute-0 nova_compute[351685]: 2025-10-03 10:40:04.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:04 compute-0 nova_compute[351685]: 2025-10-03 10:40:04.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:40:04 compute-0 nova_compute[351685]: 2025-10-03 10:40:04.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:40:04 compute-0 podman[469795]: 2025-10-03 10:40:04.860553815 +0000 UTC m=+0.104134936 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:40:04 compute-0 podman[469797]: 2025-10-03 10:40:04.865839364 +0000 UTC m=+0.104297760 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 10:40:04 compute-0 podman[469796]: 2025-10-03 10:40:04.868778109 +0000 UTC m=+0.106577725 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:40:05 compute-0 nova_compute[351685]: 2025-10-03 10:40:05.462 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:40:05 compute-0 nova_compute[351685]: 2025-10-03 10:40:05.462 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:40:05 compute-0 nova_compute[351685]: 2025-10-03 10:40:05.463 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:40:05 compute-0 nova_compute[351685]: 2025-10-03 10:40:05.463 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:40:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:06 compute-0 nova_compute[351685]: 2025-10-03 10:40:06.911 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:40:06 compute-0 nova_compute[351685]: 2025-10-03 10:40:06.927 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:40:06 compute-0 nova_compute[351685]: 2025-10-03 10:40:06.927 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
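The cache heal above rewrites instance_info_cache with the full network_info structure: one OVN-bound OVS port (tapa8897fbc-9f) on bridge br-int, fixed address 192.168.0.158 with floating ip 192.168.122.250, MTU 1442 over a tunneled network. A sketch of extracting the addressing from that structure, with field names as they appear in the logged JSON:

    import json

    # Trimmed excerpt of the network_info list logged above.
    cached = '''[{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
                  "network": {"subnets": [{"ips": [{"address": "192.168.0.158",
                    "floating_ips": [{"address": "192.168.122.250"}]}]}]}}]'''

    for vif in json.loads(cached):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], '->', floats or 'no floating ip')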
Oct  3 10:40:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:07 compute-0 nova_compute[351685]: 2025-10-03 10:40:07.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:07 compute-0 nova_compute[351685]: 2025-10-03 10:40:07.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:07 compute-0 nova_compute[351685]: 2025-10-03 10:40:07.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:40:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:08 compute-0 nova_compute[351685]: 2025-10-03 10:40:08.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:10 compute-0 nova_compute[351685]: 2025-10-03 10:40:10.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:11 compute-0 nova_compute[351685]: 2025-10-03 10:40:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:11 compute-0 nova_compute[351685]: 2025-10-03 10:40:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:11 compute-0 nova_compute[351685]: 2025-10-03 10:40:11.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:40:11 compute-0 nova_compute[351685]: 2025-10-03 10:40:11.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:40:11 compute-0 nova_compute[351685]: 2025-10-03 10:40:11.769 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:40:11 compute-0 nova_compute[351685]: 2025-10-03 10:40:11.769 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:40:11 compute-0 nova_compute[351685]: 2025-10-03 10:40:11.770 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:40:11 compute-0 podman[469852]: 2025-10-03 10:40:11.909491118 +0000 UTC m=+0.151103671 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:40:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:40:12 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333566588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.239 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
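The resource audit shells out to ceph df (the Running cmd / CMD returned pair above, with the matching dispatch in the monitor's audit log) to size the RBD-backed storage. A sketch of the same probe outside nova, assuming the ceph CLI and the client.openstack keyring are available on the host:

    import json
    import subprocess

    # Same command nova logs above; --id selects the client.openstack identity.
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    print('total:', stats['total_bytes'], 'avail:', stats['total_avail_bytes'])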
Oct  3 10:40:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.335 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.335 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.335 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.790 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.791 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3802MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.791 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.791 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.897 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.897 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.898 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:40:12 compute-0 nova_compute[351685]: 2025-10-03 10:40:12.944 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:40:13 compute-0 nova_compute[351685]: 2025-10-03 10:40:13.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:40:13 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061327689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:40:13 compute-0 nova_compute[351685]: 2025-10-03 10:40:13.415 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:40:13 compute-0 nova_compute[351685]: 2025-10-03 10:40:13.422 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:40:13 compute-0 nova_compute[351685]: 2025-10-03 10:40:13.435 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:40:13 compute-0 nova_compute[351685]: 2025-10-03 10:40:13.438 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:40:13 compute-0 nova_compute[351685]: 2025-10-03 10:40:13.438 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:40:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:14 compute-0 nova_compute[351685]: 2025-10-03 10:40:14.440 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:14 compute-0 nova_compute[351685]: 2025-10-03 10:40:14.440 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:14 compute-0 podman[469917]: 2025-10-03 10:40:14.783016298 +0000 UTC m=+0.077846784 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 10:40:14 compute-0 podman[469916]: 2025-10-03 10:40:14.803896077 +0000 UTC m=+0.107718281 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:40:15 compute-0 nova_compute[351685]: 2025-10-03 10:40:15.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:40:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:17 compute-0 nova_compute[351685]: 2025-10-03 10:40:17.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:17 compute-0 nova_compute[351685]: 2025-10-03 10:40:17.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:40:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:18 compute-0 nova_compute[351685]: 2025-10-03 10:40:18.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:22 compute-0 nova_compute[351685]: 2025-10-03 10:40:22.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:23 compute-0 nova_compute[351685]: 2025-10-03 10:40:23.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:27 compute-0 nova_compute[351685]: 2025-10-03 10:40:27.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:28 compute-0 nova_compute[351685]: 2025-10-03 10:40:28.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:29 compute-0 podman[157165]: time="2025-10-03T10:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:40:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:40:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9080 "" "Go-http-client/1.1"
Oct  3 10:40:29 compute-0 podman[469959]: 2025-10-03 10:40:29.818732113 +0000 UTC m=+0.082409770 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, config_id=edpm, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct  3 10:40:29 compute-0 podman[469960]: 2025-10-03 10:40:29.821113739 +0000 UTC m=+0.079930761 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930)
Oct  3 10:40:29 compute-0 podman[469961]: 2025-10-03 10:40:29.883319802 +0000 UTC m=+0.127372951 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:40:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:31 compute-0 openstack_network_exporter[367524]: ERROR   10:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:40:31 compute-0 openstack_network_exporter[367524]: ERROR   10:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:40:31 compute-0 openstack_network_exporter[367524]: ERROR   10:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:40:31 compute-0 openstack_network_exporter[367524]: ERROR   10:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:40:31 compute-0 openstack_network_exporter[367524]: ERROR   10:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:40:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:32 compute-0 nova_compute[351685]: 2025-10-03 10:40:32.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:32 compute-0 podman[470021]: 2025-10-03 10:40:32.825541011 +0000 UTC m=+0.082335127 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:40:33 compute-0 nova_compute[351685]: 2025-10-03 10:40:33.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:35 compute-0 podman[470038]: 2025-10-03 10:40:35.804137896 +0000 UTC m=+0.067969517 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:40:35 compute-0 podman[470039]: 2025-10-03 10:40:35.816784492 +0000 UTC m=+0.077494954 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:40:35 compute-0 podman[470040]: 2025-10-03 10:40:35.836773982 +0000 UTC m=+0.095612583 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Oct  3 10:40:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:37 compute-0 nova_compute[351685]: 2025-10-03 10:40:37.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:38 compute-0 nova_compute[351685]: 2025-10-03 10:40:38.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:40:41.636 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:40:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:40:41.637 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:40:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:40:41.637 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:40:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:42 compute-0 nova_compute[351685]: 2025-10-03 10:40:42.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:42 compute-0 podman[470096]: 2025-10-03 10:40:42.87574619 +0000 UTC m=+0.130977146 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  3 10:40:43 compute-0 nova_compute[351685]: 2025-10-03 10:40:43.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:45 compute-0 podman[470116]: 2025-10-03 10:40:45.85770739 +0000 UTC m=+0.108934870 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct  3 10:40:45 compute-0 podman[470115]: 2025-10-03 10:40:45.857750601 +0000 UTC m=+0.104413225 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:40:46
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images', '.mgr', 'vms']
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:40:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:40:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:47 compute-0 nova_compute[351685]: 2025-10-03 10:40:47.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:48 compute-0 nova_compute[351685]: 2025-10-03 10:40:48.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:52 compute-0 nova_compute[351685]: 2025-10-03 10:40:52.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:53 compute-0 nova_compute[351685]: 2025-10-03 10:40:53.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:40:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/82635092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:40:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:40:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/82635092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:40:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:40:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 9894 writes, 44K keys, 9894 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
Cumulative WAL: 9894 writes, 9894 syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1322 writes, 5980 keys, 1322 commit groups, 1.0 writes per commit group, ingest: 8.62 MB, 0.01 MB/s
Interval WAL: 1322 writes, 1322 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.8      1.89              0.20        31    0.061       0      0       0.0       0.0
  L6      1/0    6.14 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3     89.6     74.1      3.12              0.78        30    0.104    161K    16K       0.0       0.0
 Sum      1/0    6.14 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     55.8     57.0      5.01              0.98        61    0.082    161K    16K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.6    129.8    126.8      0.35              0.18        10    0.035     31K   2548       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     89.6     74.1      3.12              0.78        30    0.104    161K    16K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     28.9      1.88              0.20        30    0.063       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4200.0 total, 600.0 interval
Flush(GB): cumulative 0.053, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.28 GB write, 0.07 MB/s write, 0.27 GB read, 0.07 MB/s read, 5.0 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.08 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 32.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.00032 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2076,31.54 MB,10.3735%) FilterBlock(62,459.05 KB,0.147463%) IndexBlock(62,754.48 KB,0.242369%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:40:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:40:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:40:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:40:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:40:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:40:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:40:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:40:56 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e03f80ba-d978-442d-85c0-1d7f82f5eea5 does not exist
Oct  3 10:40:56 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e76470ff-2525-45eb-bb1c-c5901dc2542b does not exist
Oct  3 10:40:56 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1ace602d-f40c-4309-963a-403463e23bc9 does not exist
Oct  3 10:40:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:40:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:40:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:40:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:40:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:40:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:40:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:40:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:40:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:40:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:40:57 compute-0 nova_compute[351685]: 2025-10-03 10:40:57.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:57 compute-0 podman[470422]: 2025-10-03 10:40:57.839968632 +0000 UTC m=+0.053625988 container create 76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:40:57 compute-0 systemd[1]: Started libpod-conmon-76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf.scope.
Oct  3 10:40:57 compute-0 podman[470422]: 2025-10-03 10:40:57.817394879 +0000 UTC m=+0.031052215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:40:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:40:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:40:58 compute-0 nova_compute[351685]: 2025-10-03 10:40:58.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:40:58 compute-0 podman[470422]: 2025-10-03 10:40:58.222398502 +0000 UTC m=+0.436055878 container init 76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct  3 10:40:58 compute-0 podman[470422]: 2025-10-03 10:40:58.240540313 +0000 UTC m=+0.454197649 container start 76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cerf, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:40:58 compute-0 wizardly_cerf[470437]: 167 167
Oct  3 10:40:58 compute-0 systemd[1]: libpod-76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf.scope: Deactivated successfully.
Oct  3 10:40:58 compute-0 podman[470422]: 2025-10-03 10:40:58.311880328 +0000 UTC m=+0.525537684 container attach 76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:40:58 compute-0 podman[470422]: 2025-10-03 10:40:58.312992093 +0000 UTC m=+0.526649449 container died 76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:40:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-99a0a69197a4bceca34721164ca50724fb6f616b04f674ef1d07b57806317dd8-merged.mount: Deactivated successfully.
Oct  3 10:40:58 compute-0 podman[470422]: 2025-10-03 10:40:58.431502229 +0000 UTC m=+0.645159585 container remove 76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 10:40:58 compute-0 systemd[1]: libpod-conmon-76e30ca1f89771f99a1775e5509f30ce7f0a8b31fa7e812e0008401716fd5edf.scope: Deactivated successfully.
Oct  3 10:40:58 compute-0 podman[470461]: 2025-10-03 10:40:58.664445111 +0000 UTC m=+0.064617411 container create 272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:40:58 compute-0 systemd[1]: Started libpod-conmon-272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce.scope.
Oct  3 10:40:58 compute-0 podman[470461]: 2025-10-03 10:40:58.645880606 +0000 UTC m=+0.046052926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:40:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a7a8ff75748a367a3143167afa3cae2cffe1b96b7447ca8325d478ed89183/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a7a8ff75748a367a3143167afa3cae2cffe1b96b7447ca8325d478ed89183/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a7a8ff75748a367a3143167afa3cae2cffe1b96b7447ca8325d478ed89183/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a7a8ff75748a367a3143167afa3cae2cffe1b96b7447ca8325d478ed89183/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202a7a8ff75748a367a3143167afa3cae2cffe1b96b7447ca8325d478ed89183/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:40:58 compute-0 podman[470461]: 2025-10-03 10:40:58.803610328 +0000 UTC m=+0.203782638 container init 272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:40:58 compute-0 podman[470461]: 2025-10-03 10:40:58.816954865 +0000 UTC m=+0.217127175 container start 272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:40:58 compute-0 podman[470461]: 2025-10-03 10:40:58.821623115 +0000 UTC m=+0.221795425 container attach 272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:40:59 compute-0 podman[157165]: time="2025-10-03T10:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:40:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47978 "" "Go-http-client/1.1"
Oct  3 10:40:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9505 "" "Go-http-client/1.1"
Oct  3 10:40:59 compute-0 relaxed_varahamihira[470478]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:40:59 compute-0 relaxed_varahamihira[470478]: --> relative data size: 1.0
Oct  3 10:40:59 compute-0 relaxed_varahamihira[470478]: --> All data devices are unavailable
Oct  3 10:40:59 compute-0 systemd[1]: libpod-272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce.scope: Deactivated successfully.
Oct  3 10:40:59 compute-0 podman[470461]: 2025-10-03 10:40:59.895090829 +0000 UTC m=+1.295263129 container died 272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 10:40:59 compute-0 systemd[1]: libpod-272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce.scope: Consumed 1.002s CPU time.
Oct  3 10:40:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-202a7a8ff75748a367a3143167afa3cae2cffe1b96b7447ca8325d478ed89183-merged.mount: Deactivated successfully.
Oct  3 10:40:59 compute-0 podman[470461]: 2025-10-03 10:40:59.970582267 +0000 UTC m=+1.370754567 container remove 272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:40:59 compute-0 systemd[1]: libpod-conmon-272e9ce794c8f168588bb1a53ba461359fb116ce03cb1d1df906490e9f9c04ce.scope: Deactivated successfully.
Oct  3 10:41:00 compute-0 podman[470511]: 2025-10-03 10:41:00.0493604 +0000 UTC m=+0.109995334 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 10:41:00 compute-0 podman[470507]: 2025-10-03 10:41:00.057428318 +0000 UTC m=+0.119979754 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350)
Oct  3 10:41:00 compute-0 podman[470512]: 2025-10-03 10:41:00.080741615 +0000 UTC m=+0.133377963 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 10:41:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:00 compute-0 podman[470723]: 2025-10-03 10:41:00.824816268 +0000 UTC m=+0.061456279 container create b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:41:00 compute-0 systemd[1]: Started libpod-conmon-b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f.scope.
Oct  3 10:41:00 compute-0 podman[470723]: 2025-10-03 10:41:00.7998804 +0000 UTC m=+0.036520451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:41:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:41:00 compute-0 podman[470723]: 2025-10-03 10:41:00.939503682 +0000 UTC m=+0.176143713 container init b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:41:00 compute-0 podman[470723]: 2025-10-03 10:41:00.951800386 +0000 UTC m=+0.188440387 container start b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_edison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:41:00 compute-0 podman[470723]: 2025-10-03 10:41:00.956022061 +0000 UTC m=+0.192662082 container attach b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_edison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 10:41:00 compute-0 nifty_edison[470738]: 167 167
Oct  3 10:41:00 compute-0 systemd[1]: libpod-b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f.scope: Deactivated successfully.
Oct  3 10:41:00 compute-0 podman[470723]: 2025-10-03 10:41:00.961978191 +0000 UTC m=+0.198618212 container died b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:41:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba8ed5fe8dc5ad5b751f68cc42bbc819c9426458e7ccc01bb3ec6034eca8bc8e-merged.mount: Deactivated successfully.
Oct  3 10:41:01 compute-0 podman[470723]: 2025-10-03 10:41:01.00876529 +0000 UTC m=+0.245405291 container remove b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:41:01 compute-0 systemd[1]: libpod-conmon-b35513fc152921795e4b61e57d228c40fbfced13c16987d3bc7586e1a2c3a03f.scope: Deactivated successfully.
Oct  3 10:41:01 compute-0 podman[470760]: 2025-10-03 10:41:01.206843305 +0000 UTC m=+0.062850384 container create 956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:41:01 compute-0 systemd[1]: Started libpod-conmon-956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470.scope.
Oct  3 10:41:01 compute-0 podman[470760]: 2025-10-03 10:41:01.181320027 +0000 UTC m=+0.037327166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:41:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9675abe596f95e69d3adc7a3daa120ce5d091efc585debcd5cb996ca690cc9cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9675abe596f95e69d3adc7a3daa120ce5d091efc585debcd5cb996ca690cc9cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9675abe596f95e69d3adc7a3daa120ce5d091efc585debcd5cb996ca690cc9cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:41:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9675abe596f95e69d3adc7a3daa120ce5d091efc585debcd5cb996ca690cc9cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:41:01 compute-0 podman[470760]: 2025-10-03 10:41:01.327128348 +0000 UTC m=+0.183135447 container init 956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:41:01 compute-0 podman[470760]: 2025-10-03 10:41:01.340645331 +0000 UTC m=+0.196652410 container start 956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:41:01 compute-0 podman[470760]: 2025-10-03 10:41:01.345665741 +0000 UTC m=+0.201672820 container attach 956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:41:01 compute-0 openstack_network_exporter[367524]: ERROR   10:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:41:01 compute-0 openstack_network_exporter[367524]: ERROR   10:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:41:01 compute-0 openstack_network_exporter[367524]: ERROR   10:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:41:01 compute-0 openstack_network_exporter[367524]: ERROR   10:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:41:01 compute-0 openstack_network_exporter[367524]: ERROR   10:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:41:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:02 compute-0 zen_ellis[470777]: {
Oct  3 10:41:02 compute-0 zen_ellis[470777]:    "0": [
Oct  3 10:41:02 compute-0 zen_ellis[470777]:        {
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "devices": [
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "/dev/loop3"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            ],
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_name": "ceph_lv0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_size": "21470642176",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "name": "ceph_lv0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "tags": {
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cluster_name": "ceph",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.crush_device_class": "",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.encrypted": "0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osd_id": "0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.type": "block",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.vdo": "0"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            },
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "type": "block",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "vg_name": "ceph_vg0"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:        }
Oct  3 10:41:02 compute-0 zen_ellis[470777]:    ],
Oct  3 10:41:02 compute-0 zen_ellis[470777]:    "1": [
Oct  3 10:41:02 compute-0 zen_ellis[470777]:        {
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "devices": [
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "/dev/loop4"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            ],
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_name": "ceph_lv1",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_size": "21470642176",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "name": "ceph_lv1",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "tags": {
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cluster_name": "ceph",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.crush_device_class": "",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.encrypted": "0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osd_id": "1",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.type": "block",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.vdo": "0"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            },
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "type": "block",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "vg_name": "ceph_vg1"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:        }
Oct  3 10:41:02 compute-0 zen_ellis[470777]:    ],
Oct  3 10:41:02 compute-0 zen_ellis[470777]:    "2": [
Oct  3 10:41:02 compute-0 zen_ellis[470777]:        {
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "devices": [
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "/dev/loop5"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            ],
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_name": "ceph_lv2",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_size": "21470642176",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "name": "ceph_lv2",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "tags": {
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.cluster_name": "ceph",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.crush_device_class": "",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.encrypted": "0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osd_id": "2",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.type": "block",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:                "ceph.vdo": "0"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            },
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "type": "block",
Oct  3 10:41:02 compute-0 zen_ellis[470777]:            "vg_name": "ceph_vg2"
Oct  3 10:41:02 compute-0 zen_ellis[470777]:        }
Oct  3 10:41:02 compute-0 zen_ellis[470777]:    ]
Oct  3 10:41:02 compute-0 zen_ellis[470777]: }
Oct  3 10:41:02 compute-0 systemd[1]: libpod-956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470.scope: Deactivated successfully.
Oct  3 10:41:02 compute-0 podman[470760]: 2025-10-03 10:41:02.207091783 +0000 UTC m=+1.063098862 container died 956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:41:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9675abe596f95e69d3adc7a3daa120ce5d091efc585debcd5cb996ca690cc9cf-merged.mount: Deactivated successfully.
Oct  3 10:41:02 compute-0 podman[470760]: 2025-10-03 10:41:02.280811994 +0000 UTC m=+1.136819073 container remove 956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 10:41:02 compute-0 systemd[1]: libpod-conmon-956e10febe52a52681325adf674fc4d62cf37c8cb0464dd9f8e0024443453470.scope: Deactivated successfully.
Oct  3 10:41:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:02 compute-0 nova_compute[351685]: 2025-10-03 10:41:02.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:41:03 compute-0 nova_compute[351685]: 2025-10-03 10:41:03.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:41:03 compute-0 podman[470934]: 2025-10-03 10:41:03.299155692 +0000 UTC m=+0.079702063 container create 8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ride, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:41:03 compute-0 systemd[1]: Started libpod-conmon-8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f.scope.
Oct  3 10:41:03 compute-0 podman[470934]: 2025-10-03 10:41:03.275063501 +0000 UTC m=+0.055609882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:41:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:41:03 compute-0 podman[470934]: 2025-10-03 10:41:03.412685099 +0000 UTC m=+0.193231500 container init 8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 10:41:03 compute-0 podman[470934]: 2025-10-03 10:41:03.425790329 +0000 UTC m=+0.206336680 container start 8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ride, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:41:03 compute-0 podman[470934]: 2025-10-03 10:41:03.43051256 +0000 UTC m=+0.211058921 container attach 8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ride, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:41:03 compute-0 awesome_ride[470950]: 167 167
Oct  3 10:41:03 compute-0 systemd[1]: libpod-8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f.scope: Deactivated successfully.
Oct  3 10:41:03 compute-0 podman[470934]: 2025-10-03 10:41:03.433864497 +0000 UTC m=+0.214410848 container died 8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:41:03 compute-0 podman[470947]: 2025-10-03 10:41:03.447219215 +0000 UTC m=+0.084675483 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:41:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-86ed9e626ad6e5c021c6b4399cd4d60b9618fe34a07ccab228df2f81b2050a73-merged.mount: Deactivated successfully.
Oct  3 10:41:03 compute-0 podman[470934]: 2025-10-03 10:41:03.510614426 +0000 UTC m=+0.291160787 container remove 8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:41:03 compute-0 systemd[1]: libpod-conmon-8815ac73b7c3a5817e36a0000125921c232f4039244d9cb8d4b2ed88eecb5b4f.scope: Deactivated successfully.
Oct  3 10:41:03 compute-0 podman[470988]: 2025-10-03 10:41:03.761045177 +0000 UTC m=+0.077739101 container create 9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_darwin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:41:03 compute-0 podman[470988]: 2025-10-03 10:41:03.730054504 +0000 UTC m=+0.046748508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:41:03 compute-0 systemd[1]: Started libpod-conmon-9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677.scope.
Oct  3 10:41:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3bc3246666f6e85bcbe9c90818df24efeb722b5bdf25fcb3cd25f31f9b62f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3bc3246666f6e85bcbe9c90818df24efeb722b5bdf25fcb3cd25f31f9b62f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3bc3246666f6e85bcbe9c90818df24efeb722b5bdf25fcb3cd25f31f9b62f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca3bc3246666f6e85bcbe9c90818df24efeb722b5bdf25fcb3cd25f31f9b62f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:41:03 compute-0 podman[470988]: 2025-10-03 10:41:03.90943532 +0000 UTC m=+0.226129234 container init 9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 10:41:03 compute-0 podman[470988]: 2025-10-03 10:41:03.919582825 +0000 UTC m=+0.236276739 container start 9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:41:03 compute-0 podman[470988]: 2025-10-03 10:41:03.923689106 +0000 UTC m=+0.240383050 container attach 9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_darwin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:41:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:05 compute-0 nice_darwin[471004]: {
Oct  3 10:41:05 compute-0 nice_darwin[471004]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "osd_id": 1,
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "type": "bluestore"
Oct  3 10:41:05 compute-0 nice_darwin[471004]:    },
Oct  3 10:41:05 compute-0 nice_darwin[471004]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "osd_id": 2,
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "type": "bluestore"
Oct  3 10:41:05 compute-0 nice_darwin[471004]:    },
Oct  3 10:41:05 compute-0 nice_darwin[471004]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "osd_id": 0,
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:41:05 compute-0 nice_darwin[471004]:        "type": "bluestore"
Oct  3 10:41:05 compute-0 nice_darwin[471004]:    }
Oct  3 10:41:05 compute-0 nice_darwin[471004]: }
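The JSON block above, emitted by the short-lived nice_darwin container, maps each OSD uuid to its backing LVM device; the layout matches what ceph-volume raw list prints, though the log does not show the container's argv, so that is an assumption. A minimal Python sketch for parsing such a capture (the filename osd_inventory.json is hypothetical):

    import json

    # Parse a captured copy of the OSD inventory JSON logged above and print
    # one summary line per OSD. Only fields present in the log are accessed.
    with open("osd_inventory.json") as f:
        inventory = json.load(f)

    for _osd_fsid, osd in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}  type={osd['type']}  "
              f"device={osd['device']}  ceph_fsid={osd['ceph_fsid']}")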
Oct  3 10:41:05 compute-0 systemd[1]: libpod-9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677.scope: Deactivated successfully.
Oct  3 10:41:05 compute-0 systemd[1]: libpod-9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677.scope: Consumed 1.193s CPU time.
Oct  3 10:41:05 compute-0 podman[471037]: 2025-10-03 10:41:05.204853312 +0000 UTC m=+0.055651233 container died 9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_darwin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3bc3246666f6e85bcbe9c90818df24efeb722b5bdf25fcb3cd25f31f9b62f8-merged.mount: Deactivated successfully.
Oct  3 10:41:05 compute-0 podman[471037]: 2025-10-03 10:41:05.293658507 +0000 UTC m=+0.144456368 container remove 9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:41:05 compute-0 systemd[1]: libpod-conmon-9a1d709a26c1f0f7b4e73274b7c46c87bf0fe6112fbb9ad09bcf035b0e405677.scope: Deactivated successfully.
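The init, start, attach, died, remove sequence for container 9a1d709a26c1 is the footprint of a one-shot run (podman run --rm): cephadm routinely launches transient containers from the ceph image to gather device information. A sketch of an equivalent invocation, where the image digest is copied from the log but the --privileged flag and the ceph-volume command are assumptions, since the log records the lifecycle and not the argv:

    import json
    import subprocess

    # One-shot container run mirroring the lifecycle events logged above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", "--privileged", IMAGE,
         "ceph-volume", "raw", "list"],   # assumed argv; raw list prints JSON
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out))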
Oct  3 10:41:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:41:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:41:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:41:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:41:05 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5436dded-d51f-41bb-a6a2-47ee1f4e61d4 does not exist
Oct  3 10:41:05 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 040b8886-09fa-43b0-a6c6-6fe4c2a9e2f4 does not exist
Oct  3 10:41:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:41:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:41:05 compute-0 nova_compute[351685]: 2025-10-03 10:41:05.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:05 compute-0 nova_compute[351685]: 2025-10-03 10:41:05.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:41:05 compute-0 nova_compute[351685]: 2025-10-03 10:41:05.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:41:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:06 compute-0 nova_compute[351685]: 2025-10-03 10:41:06.530 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:41:06 compute-0 nova_compute[351685]: 2025-10-03 10:41:06.531 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:41:06 compute-0 nova_compute[351685]: 2025-10-03 10:41:06.532 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:41:06 compute-0 nova_compute[351685]: 2025-10-03 10:41:06.532 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:41:06 compute-0 podman[471101]: 2025-10-03 10:41:06.86411331 +0000 UTC m=+0.094658223 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:41:06 compute-0 podman[471102]: 2025-10-03 10:41:06.869900145 +0000 UTC m=+0.101876464 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:41:06 compute-0 podman[471103]: 2025-10-03 10:41:06.893329675 +0000 UTC m=+0.124286081 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
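The three health_status events above come from podman's healthcheck machinery: each container's config_data carries a healthcheck test plus a mount, and a systemd timer periodically reruns the test and reports health_status and health_failing_streak. A sketch that reads the same state back out of podman; the container names are taken from the log, and the .State.Health field name is as exposed by podman 4.x (older releases expose it as .State.Healthcheck):

    import json
    import subprocess

    # Read the current health state of the containers whose health_status
    # events appear above.
    for name in ("node_exporter", "multipathd", "iscsid"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        health = json.loads(out)
        print(name, health["Status"], "failing_streak:", health["FailingStreak"])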
Oct  3 10:41:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:07 compute-0 nova_compute[351685]: 2025-10-03 10:41:07.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:08 compute-0 nova_compute[351685]: 2025-10-03 10:41:08.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:08 compute-0 nova_compute[351685]: 2025-10-03 10:41:08.537 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:41:08 compute-0 nova_compute[351685]: 2025-10-03 10:41:08.554 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:41:08 compute-0 nova_compute[351685]: 2025-10-03 10:41:08.555 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
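The network_info blob written to the cache above is a list of VIF entries, each nesting network, subnets, ips, and floating_ips. A sketch that walks that structure, using a trimmed copy of the logged entry (fields not needed for address extraction are elided):

    # Trimmed copy of the network_info entry logged above.
    vif = {
        "address": "fa:16:3e:a9:40:5c",
        "devname": "tapa8897fbc-9f",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.158",
                     "floating_ips": [{"address": "192.168.122.250"}]}],
        }]},
    }

    print("mac:", vif["address"], "devname:", vif["devname"])
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])
            for fip in ip.get("floating_ips", []):
                print("  floating:", fip["address"])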
Oct  3 10:41:09 compute-0 nova_compute[351685]: 2025-10-03 10:41:09.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:09 compute-0 nova_compute[351685]: 2025-10-03 10:41:09.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:41:09 compute-0 nova_compute[351685]: 2025-10-03 10:41:09.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
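The "Running periodic task ..." lines are emitted by oslo.service's periodic task machinery, and _reclaim_queued_deletes shows the usual config gate: a non-positive reclaim_instance_interval turns the task into a no-op. A minimal sketch of that pattern; the Manager class and the option registration are illustrative stand-ins, not nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        """Illustrative stand-in for the manager emitting the lines above."""

        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            # Mirrors the gate logged above: interval <= 0 disables the task.
            if CONF.reclaim_instance_interval <= 0:
                print("CONF.reclaim_instance_interval <= 0, skipping...")
                return

    Manager().run_periodic_tasks(context=None)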
Oct  3 10:41:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:11 compute-0 nova_compute[351685]: 2025-10-03 10:41:11.741 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:12 compute-0 nova_compute[351685]: 2025-10-03 10:41:12.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:12 compute-0 nova_compute[351685]: 2025-10-03 10:41:12.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:12 compute-0 nova_compute[351685]: 2025-10-03 10:41:12.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:41:12 compute-0 nova_compute[351685]: 2025-10-03 10:41:12.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:41:12 compute-0 nova_compute[351685]: 2025-10-03 10:41:12.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:41:12 compute-0 nova_compute[351685]: 2025-10-03 10:41:12.756 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:41:12 compute-0 nova_compute[351685]: 2025-10-03 10:41:12.757 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:41:13 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151664945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.260 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.340 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.341 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.341 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.761 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.763 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3815MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.846 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.847 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.847 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:41:13 compute-0 podman[471182]: 2025-10-03 10:41:13.857925448 +0000 UTC m=+0.109381664 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:41:13 compute-0 nova_compute[351685]: 2025-10-03 10:41:13.877 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:41:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:41:14 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62927675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
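The df dispatch above is the mon-side view of the ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf subprocess that nova_compute logs around it; the resource tracker uses the result when auditing disk capacity. A sketch of the same probe, where the command line is copied verbatim from the log and total_bytes/total_avail_bytes are standard keys in ceph df JSON output:

    import json
    import subprocess

    # Run the same capacity probe nova_compute logs above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                   check=True).stdout)
    stats = df["stats"]
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")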
Oct  3 10:41:14 compute-0 nova_compute[351685]: 2025-10-03 10:41:14.341 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:41:14 compute-0 nova_compute[351685]: 2025-10-03 10:41:14.351 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:41:14 compute-0 nova_compute[351685]: 2025-10-03 10:41:14.367 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:41:14 compute-0 nova_compute[351685]: 2025-10-03 10:41:14.369 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:41:14 compute-0 nova_compute[351685]: 2025-10-03 10:41:14.369 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:41:15 compute-0 nova_compute[351685]: 2025-10-03 10:41:15.370 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:15 compute-0 nova_compute[351685]: 2025-10-03 10:41:15.371 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:15 compute-0 nova_compute[351685]: 2025-10-03 10:41:15.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
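The inventory record reported to placement above determines the effective capacity the scheduler sees; placement computes it per resource class as (total - reserved) * allocation_ratio. A worked check with the logged numbers (the formula is placement's standard capacity calculation; the printout is illustrative):

    # Effective scheduling capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable units")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2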
Oct  3 10:41:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:41:16 compute-0 podman[471226]: 2025-10-03 10:41:16.817903043 +0000 UTC m=+0.074355602 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 10:41:16 compute-0 podman[471225]: 2025-10-03 10:41:16.845517518 +0000 UTC m=+0.109955953 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:41:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:17 compute-0 nova_compute[351685]: 2025-10-03 10:41:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:17 compute-0 nova_compute[351685]: 2025-10-03 10:41:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:17 compute-0 nova_compute[351685]: 2025-10-03 10:41:17.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:18 compute-0 nova_compute[351685]: 2025-10-03 10:41:18.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:22 compute-0 nova_compute[351685]: 2025-10-03 10:41:22.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:23 compute-0 nova_compute[351685]: 2025-10-03 10:41:23.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:24 compute-0 nova_compute[351685]: 2025-10-03 10:41:24.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:24 compute-0 nova_compute[351685]: 2025-10-03 10:41:24.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 10:41:24 compute-0 nova_compute[351685]: 2025-10-03 10:41:24.753 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 10:41:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:27 compute-0 nova_compute[351685]: 2025-10-03 10:41:27.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:28 compute-0 nova_compute[351685]: 2025-10-03 10:41:28.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:28 compute-0 nova_compute[351685]: 2025-10-03 10:41:28.749 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:29 compute-0 podman[157165]: time="2025-10-03T10:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:41:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:41:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9083 "" "Go-http-client/1.1"
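The two GETs above are libpod REST calls arriving over podman's Unix socket, consistent with the prometheus-podman-exporter configured earlier (CONTAINER_HOST=unix:///run/podman/podman.sock); the API version v4.9.3 is taken from the request path. A stdlib-only sketch of the same containers/json call; the socket path comes from the exporter config, everything else is plain HTTP plumbing:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix socket, enough for the libpod call below."""

        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(f"{len(containers)} containers")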
Oct  3 10:41:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:30 compute-0 podman[471266]: 2025-10-03 10:41:30.897172435 +0000 UTC m=+0.139249941 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal)
Oct  3 10:41:30 compute-0 podman[471267]: 2025-10-03 10:41:30.916739162 +0000 UTC m=+0.150666837 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, config_id=edpm, container_name=ceilometer_agent_compute)
Oct  3 10:41:30 compute-0 podman[471268]: 2025-10-03 10:41:30.955444271 +0000 UTC m=+0.183984904 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:41:31 compute-0 openstack_network_exporter[367524]: ERROR   10:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:41:31 compute-0 openstack_network_exporter[367524]: ERROR   10:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:41:31 compute-0 openstack_network_exporter[367524]: ERROR   10:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:41:31 compute-0 openstack_network_exporter[367524]: ERROR   10:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:41:31 compute-0 openstack_network_exporter[367524]: ERROR   10:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
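The exporter errors above are expected on this host: it probes for daemon control sockets (ovn-northd, ovsdb-server) that only exist where those daemons run, and the dpif-netdev calls require a userspace (netdev) datapath, which a kernel-datapath OVS does not provide. A sketch of the socket probe; the glob patterns are the conventional default paths, not taken from the log:

    import glob

    # Probe for the control sockets the exporter above fails to find.
    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", ", ".join(matches) or "no control socket found")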
Oct  3 10:41:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:32 compute-0 nova_compute[351685]: 2025-10-03 10:41:32.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:33 compute-0 nova_compute[351685]: 2025-10-03 10:41:33.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:33 compute-0 podman[471327]: 2025-10-03 10:41:33.829671715 +0000 UTC m=+0.090856442 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 10:41:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:37 compute-0 nova_compute[351685]: 2025-10-03 10:41:37.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:37 compute-0 podman[471344]: 2025-10-03 10:41:37.824430289 +0000 UTC m=+0.082552606 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:41:37 compute-0 podman[471346]: 2025-10-03 10:41:37.831842236 +0000 UTC m=+0.086301665 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:41:37 compute-0 podman[471345]: 2025-10-03 10:41:37.858369716 +0000 UTC m=+0.111259545 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct  3 10:41:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:38 compute-0 nova_compute[351685]: 2025-10-03 10:41:38.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:38 compute-0 nova_compute[351685]: 2025-10-03 10:41:38.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:38 compute-0 nova_compute[351685]: 2025-10-03 10:41:38.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 10:41:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.893 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.893 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
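
The warning above fires because the [pollsters] source defines more pollsters than the single worker thread executing them, so the tasks queue and the cycle length approaches the sum of the individual polling times. A rough sketch of that dispatch pattern using plain concurrent.futures (not ceilometer's actual AgentManager code):

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, workers=1):
        # With fewer workers than pollsters, submitted tasks queue up and the
        # cycle takes roughly the sum of the individual polling durations.
        with ThreadPoolExecutor(max_workers=workers) as executor:
            futures = [executor.submit(p) for p in pollsters]
            return [f.result() for f in futures]

    # e.g. run_polling_cycle([lambda: "sample-a", lambda: "sample-b"], workers=1)
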
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.903 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
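
The discovery payload above is logged as a plain Python dict repr, so it can be recovered directly from the journal text. A small sketch (assuming the "instance data: " prefix and the trailing function name and source path are laid out exactly as in this line):

    import ast

    def extract_instance_data(line: str) -> dict:
        # Slice out the {...} repr that follows "instance data: " and parse it;
        # ast.literal_eval safely evaluates the single-quoted dict shown above.
        start = line.index("instance data: ") + len("instance data: ")
        end = line.rindex("}") + 1
        return ast.literal_eval(line[start:end])

    # extract_instance_data(log_line)["flavor"]["disk"] -> 1 (GiB), which lines
    # up with the 1073741824-byte device capacities sampled further below.
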
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.904 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:41:40.904362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.910 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.911 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.912 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:41:40.912517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.914 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:41:40.914708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.948 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.948 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.949 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
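
The three disk.device.capacity samples are bytes per block device: the two 1073741824-byte values match the flavor's 1 GiB root and 1 GiB ephemeral disks from the discovery payload above, while the small 485376-byte device is presumably the config drive (an assumption; the log does not name the devices). The conversion is simple arithmetic:

    def to_gib(n_bytes: int) -> float:
        # 1 GiB = 1024**3 bytes
        return n_bytes / 1024 ** 3

    # to_gib(1073741824) == 1.0       root and ephemeral disks
    # to_gib(485376)     ~= 0.00045   small third device
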
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.950 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.951 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:40.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:41:40.951650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:41:41.019045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
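
disk.device.read.latency and disk.device.read.requests are cumulative counters taken from libvirt block statistics, with latency reported in nanoseconds, so dividing the paired samples above gives an approximate mean service time per read for each of the three devices:

    latencies_ns = [1351272306, 240576853, 113683071]  # disk.device.read.latency
    requests = [840, 173, 109]                         # disk.device.read.requests

    for lat, req in zip(latencies_ns, requests):
        # cumulative nanoseconds / cumulative requests -> mean ms per read
        print(f"{lat / req / 1e6:.2f} ms/read")
    # prints roughly 1.61, 1.39 and 1.04 ms for the three devices
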
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:41:41.022944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:41:41.027526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.032 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:41:41.031956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.033 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.034 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:41:41.035459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.038 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.038 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:41:41.038615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
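
The power.state volume of 1 is the libvirt domain state number, and 1 corresponds to a running domain, consistent with the 'OS-EXT-STS:vm_state': 'running' field in the discovery payload above. A lookup table as commonly documented for libvirt's virDomainState (reproduced from memory; verify against your libvirt headers):

    # virDomainState numbering; 1 == running matches the sample above.
    VIR_DOMAIN_STATE = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }

    assert VIR_DOMAIN_STATE[1] == "running"
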
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.066 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.066 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:41:41.066724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:41:41.068521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.069 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.070 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:41:41.070170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
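
A .delta meter reports the difference between the current cumulative reading and the value cached from the previous cycle, which is why a quiet interface yields volume 0 here. Sketched in plain Python (cache handling simplified relative to ceilometer's own implementation):

    _previous: dict[str, int] = {}

    def delta(resource_id: str, cumulative: int) -> int:
        # The first observation has nothing to diff against, so report 0,
        # exactly as a quiet interval would.
        last = _previous.get(resource_id, cumulative)
        _previous[resource_id] = cumulative
        return cumulative - last
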
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.071 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.072 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.072 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:41:41.071620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:41:41.072771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:41:41.073958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
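Note the PID column: worker 14 emits the discovery/poll/heartbeat lines, while PID 12 logs the matching "Updated heartbeat ... _update_status" entries a moment later. One plausible reading (an assumption; the log itself does not say how the hand-off works) is that the polling worker passes heartbeats to a separate status keeper, e.g. over a queue:

    # Assumed shape of the heartbeat hand-off between the two PIDs seen
    # above; the queue and process split are illustrative only.
    import multiprocessing as mp
    from datetime import datetime, timezone

    def status_keeper(q):
        for name, ts in iter(q.get, None):
            print(f"Updated heartbeat for {name} ({ts})")

    if __name__ == "__main__":
        q = mp.Queue()
        keeper = mp.Process(target=status_keeper, args=(q,))
        keeper.start()
        q.put(("disk.root.size", datetime.now(timezone.utc).isoformat()))
        q.put(None)   # sentinel: stop the keeper
        keeper.join()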
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.074 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.075 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:41:41.074964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.075 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 64810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
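The cpu meter is cumulative guest CPU time in nanoseconds, so the 64810000000 above is a counter, not a load figure; utilisation comes from differencing two readings. Illustrative arithmetic (only the first reading is from the log; the second reading, interval, and vCPU count are made-up assumptions):

    # CPU utilisation from two cumulative 'cpu' samples (nanoseconds).
    t0, cpu0 = 0.0, 64_810_000_000      # reading from this cycle
    t1, cpu1 = 300.0, 64_990_000_000    # hypothetical reading 5 minutes later
    vcpus = 1                           # hypothetical flavor size
    util_pct = (cpu1 - cpu0) / ((t1 - t0) * 1e9 * vcpus) * 100
    print(f"{util_pct:.3f}% CPU")       # ~0.060% for these numbers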
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.076 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.077 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.078 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.079 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:41:41.076146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.079 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
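These cycles exercise all three variants of the network byte meters: network.outgoing.bytes is the cumulative interface counter (2482 here), the .delta flavour is the change since the previous poll (0 in this cycle), and the .rate flavour divides that change by the elapsed time. A worked example; the 2482 is from the log, the second reading and interval are hypothetical:

    # Cumulative counter vs. its .delta and .rate derivatives.
    prev_bytes, prev_ts = 2482, 0.0     # this cycle (from the log)
    cur_bytes, cur_ts = 4964, 300.0     # hypothetical next cycle
    delta = cur_bytes - prev_bytes      # network.outgoing.bytes.delta
    rate = delta / (cur_ts - prev_ts)   # network.outgoing.bytes.rate, B/s
    print(delta, rate)                  # 2482 8.27...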
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.080 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.080 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:41:41.077330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
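The fractional memory.usage value is what a KiB figure looks like after conversion to MiB: 48.81640625 corresponds to exactly 49988 KiB, consistent with a libvirt-style KiB reading divided by 1024 (an inference from the arithmetic, not stated in the log):

    # memory.usage is reported in MiB; the logged fraction is a whole
    # number of KiB divided by 1024 (assumed KiB input).
    print(49988 / 1024)   # 48.81640625, the volume logged above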
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:41:41.078554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.081 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.082 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:41:41.079707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.083 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:41:41.080830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
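Every cycle above logs that these pollsters need no coordination because their polling source defines no coordination group (hashrings [None]). When a group is configured, agents share the resource set over a tooz hash ring so each resource is polled by exactly one agent. A hedged sketch of that coordinated mode, assuming a memcached tooz backend and an illustrative group name (none of this is active in the log above):

    # Coordinated polling via a tooz hash ring (sketch; backend URL,
    # member id and group name are assumptions for illustration).
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        "memcached://127.0.0.1:11211", b"compute-0")
    coordinator.start(start_heart=True)
    partitioner = coordinator.join_partitioned_group("central-pollsters")
    if partitioner.belongs_to_self(b"b43db93c-a4fe-46e9-8418-eedf4f5c135a"):
        pass  # this agent owns the instance; other agents skip it
    coordinator.stop()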
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:41:41.082200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:41:41.083390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:41:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:41:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:41:41.637 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:41:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:41:41.638 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:41:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:41:41.639 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
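The Acquiring/acquired/released triple above is oslo.concurrency's lockutils instrumentation: its inner wrapper (lockutils.py:404/409/423) logs the "waited"/"held" timings. Neutron's ProcessMonitor gets this from the synchronized decorator; the equivalent pattern looks like:

    # Equivalent lock usage; calling the decorated function emits the same
    # Acquiring / acquired (waited Ns) / released (held Ns) debug lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # inspect the monitored external processes (haproxy, etc.) and
        # respawn any that have died
        pass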
Oct  3 10:41:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:42 compute-0 nova_compute[351685]: 2025-10-03 10:41:42.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:43 compute-0 nova_compute[351685]: 2025-10-03 10:41:43.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:44 compute-0 podman[471406]: 2025-10-03 10:41:44.852428614 +0000 UTC m=+0.143933750 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:41:46
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', '.mgr', 'images', 'backups', 'volumes']
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:41:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:41:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:47 compute-0 nova_compute[351685]: 2025-10-03 10:41:47.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:47 compute-0 podman[471425]: 2025-10-03 10:41:47.888473347 +0000 UTC m=+0.137408962 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:41:47 compute-0 podman[471426]: 2025-10-03 10:41:47.926404362 +0000 UTC m=+0.170230684 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.tags=base rhel9, container_name=kepler)
Oct  3 10:41:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:48 compute-0 nova_compute[351685]: 2025-10-03 10:41:48.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:52 compute-0 nova_compute[351685]: 2025-10-03 10:41:52.070 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:41:52 compute-0 nova_compute[351685]: 2025-10-03 10:41:52.091 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 10:41:52 compute-0 nova_compute[351685]: 2025-10-03 10:41:52.091 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:41:52 compute-0 nova_compute[351685]: 2025-10-03 10:41:52.091 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:41:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:52 compute-0 nova_compute[351685]: 2025-10-03 10:41:52.117 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
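_sync_power_states is one of nova-compute's oslo.service periodic tasks: it lists the instances this host should own (a single one here, b43db93c-...), takes a per-instance lock, and reconciles the hypervisor's power state with the database record, holding the lock for 0.025s in this run. A hedged sketch of that shape; the spacing value and helper names are illustrative assumptions, not nova's real code:

    # Periodic power-state sync in the oslo style (sketch).
    from oslo_concurrency import lockutils
    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=600)   # interval is config-driven
        def _sync_power_states(self, context):
            for uuid in self._instance_uuids(context):  # hypothetical helper
                with lockutils.lock(uuid):
                    # compare driver power state with the DB record and
                    # reconcile (stop/start) if they disagree
                    self._query_driver_power_state_and_sync(context, uuid)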
Oct  3 10:41:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:52 compute-0 nova_compute[351685]: 2025-10-03 10:41:52.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:53 compute-0 nova_compute[351685]: 2025-10-03 10:41:53.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:41:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/370924723' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:41:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:41:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/370924723' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
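The audited "df" and "osd pool get-quota" commands are what an OpenStack storage client (entity client.openstack at 192.168.122.10) issues to track pool capacity. librados can send the same mon commands directly; a sketch assuming a local ceph.conf and the client.openstack keyring are readable:

    # Issuing the same mon commands via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        ret, out, err = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        print(json.loads(out))
        ret, out, err = cluster.mon_command(
            json.dumps({"prefix": "osd pool get-quota",
                        "pool": "volumes", "format": "json"}), b"")
        print(json.loads(out))
    finally:
        cluster.shutdown()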
Oct  3 10:41:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:41:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
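The pg_autoscaler lines all follow one formula: a pool's "pg target" is its share of raw space times its bias times a cluster-wide PG budget, then quantized to a power of two (and left alone when the change is small, as with cephfs.cephfs.meta staying at 32). The budget itself is not printed, but a value of ~300 (plausibly 3 OSDs times the default 100 PGs per OSD, though the log does not say) reproduces every line above:

    # Reproducing the logged 'pg target' values; 300 is inferred from the
    # log output, not read from configuration.
    pools = [
        ("vms",                0.000551649390343166,  1.0),
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, usage_ratio, bias in pools:
        print(name, usage_ratio * bias * 300)
    # vms 0.1654948171029498, .mgr 0.0021557249951162337,
    # cephfs.cephfs.meta 0.0006104707950771635 -- matching the log exactly.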
Oct  3 10:41:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:41:57 compute-0 nova_compute[351685]: 2025-10-03 10:41:57.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:41:58 compute-0 nova_compute[351685]: 2025-10-03 10:41:58.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:41:59 compute-0 podman[157165]: time="2025-10-03T10:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:41:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:41:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9084 "" "Go-http-client/1.1"
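The two GETs above are the podman service answering its REST API (libpod v4.9.3) over the unix socket, which is how the podman_exporter container scrapes container lists and stats (the socket is mounted into that container per its config_data). A minimal stdlib client for the same endpoint; the socket path matches this host's mounts but may differ elsewhere:

    # Raw HTTP over the podman unix socket using only the stdlib.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read()[:200])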
Oct  3 10:42:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:01 compute-0 openstack_network_exporter[367524]: ERROR   10:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:42:01 compute-0 openstack_network_exporter[367524]: ERROR   10:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:42:01 compute-0 openstack_network_exporter[367524]: ERROR   10:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:42:01 compute-0 openstack_network_exporter[367524]: ERROR   10:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:42:01 compute-0 openstack_network_exporter[367524]: ERROR   10:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
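These exporter errors recur on every scrape: openstack_network_exporter probes for ovn-northd and ovsdb-server control sockets it cannot find at its configured paths, and the dpif-netdev calls fail because no userspace (DPDK) datapath exists. On a compute node running only ovn-controller and kernel-datapath OVS this is expected noise rather than a fault (an interpretation; the log only shows the failures). A quick way to see which control sockets actually exist, using the run directories the container mounts:

    # List the OVS/OVN control sockets present on the host; paths are the
    # usual run directories and may differ per deployment.
    import glob
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, glob.glob(pattern))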
Oct  3 10:42:01 compute-0 podman[471467]: 2025-10-03 10:42:01.854339202 +0000 UTC m=+0.102645419 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64)
Oct  3 10:42:01 compute-0 podman[471468]: 2025-10-03 10:42:01.862981149 +0000 UTC m=+0.116215653 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:42:01 compute-0 podman[471469]: 2025-10-03 10:42:01.919397556 +0000 UTC m=+0.161780733 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:42:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:02 compute-0 nova_compute[351685]: 2025-10-03 10:42:02.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:03 compute-0 nova_compute[351685]: 2025-10-03 10:42:03.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:04 compute-0 podman[471528]: 2025-10-03 10:42:04.85983347 +0000 UTC m=+0.107823285 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  3 10:42:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:06 compute-0 nova_compute[351685]: 2025-10-03 10:42:06.750 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:06 compute-0 nova_compute[351685]: 2025-10-03 10:42:06.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:42:06 compute-0 nova_compute[351685]: 2025-10-03 10:42:06.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:42:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:42:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:42:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:42:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:42:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:42:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:42:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e4814338-5835-499d-bc6f-8ffd82e4110f does not exist
Oct  3 10:42:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f907b7eb-f12f-4af3-8661-4b78645da6ab does not exist
Oct  3 10:42:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 634667af-8b51-4e34-8587-93004062523c does not exist
Oct  3 10:42:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:42:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:42:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:42:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:42:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:42:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:42:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:42:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:42:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:42:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:07 compute-0 nova_compute[351685]: 2025-10-03 10:42:07.550 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:42:07 compute-0 nova_compute[351685]: 2025-10-03 10:42:07.551 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:42:07 compute-0 nova_compute[351685]: 2025-10-03 10:42:07.551 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:42:07 compute-0 nova_compute[351685]: 2025-10-03 10:42:07.551 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:42:07 compute-0 nova_compute[351685]: 2025-10-03 10:42:07.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:07 compute-0 podman[471814]: 2025-10-03 10:42:07.873844061 +0000 UTC m=+0.069035812 container create 76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:42:07 compute-0 podman[471814]: 2025-10-03 10:42:07.849403038 +0000 UTC m=+0.044594819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:42:07 compute-0 systemd[1]: Started libpod-conmon-76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6.scope.
Oct  3 10:42:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:42:08 compute-0 podman[471814]: 2025-10-03 10:42:08.027943937 +0000 UTC m=+0.223135728 container init 76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:42:08 compute-0 podman[471814]: 2025-10-03 10:42:08.040096166 +0000 UTC m=+0.235287927 container start 76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chebyshev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:42:08 compute-0 podman[471814]: 2025-10-03 10:42:08.045284402 +0000 UTC m=+0.240476143 container attach 76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:42:08 compute-0 cool_chebyshev[471849]: 167 167
Oct  3 10:42:08 compute-0 systemd[1]: libpod-76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6.scope: Deactivated successfully.
Oct  3 10:42:08 compute-0 conmon[471849]: conmon 76097983ea884e1b4171 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6.scope/container/memory.events
Oct  3 10:42:08 compute-0 podman[471831]: 2025-10-03 10:42:08.068047561 +0000 UTC m=+0.099302062 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid)
Oct  3 10:42:08 compute-0 podman[471827]: 2025-10-03 10:42:08.07799904 +0000 UTC m=+0.129432206 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:42:08 compute-0 podman[471830]: 2025-10-03 10:42:08.085599494 +0000 UTC m=+0.120830421 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 10:42:08 compute-0 podman[471893]: 2025-10-03 10:42:08.106741581 +0000 UTC m=+0.034744014 container died 76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chebyshev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:42:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7968196bc8adc77cb518a26bbcfb375a7e34889b17730b98ccfe15b37b2ec0f8-merged.mount: Deactivated successfully.
Oct  3 10:42:08 compute-0 podman[471893]: 2025-10-03 10:42:08.157536108 +0000 UTC m=+0.085538571 container remove 76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_chebyshev, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:42:08 compute-0 systemd[1]: libpod-conmon-76097983ea884e1b4171df6460d3cbb3c4f1366a7898a91ea009cddab70777a6.scope: Deactivated successfully.
Oct  3 10:42:08 compute-0 nova_compute[351685]: 2025-10-03 10:42:08.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:08 compute-0 podman[471916]: 2025-10-03 10:42:08.464878131 +0000 UTC m=+0.091868213 container create 331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 10:42:08 compute-0 podman[471916]: 2025-10-03 10:42:08.429614642 +0000 UTC m=+0.056604774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:42:08 compute-0 systemd[1]: Started libpod-conmon-331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5.scope.
Oct  3 10:42:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da6ddb40eb35605237a3c485101d72ab52bce9691a34e03f45316ee6ececdd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da6ddb40eb35605237a3c485101d72ab52bce9691a34e03f45316ee6ececdd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da6ddb40eb35605237a3c485101d72ab52bce9691a34e03f45316ee6ececdd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da6ddb40eb35605237a3c485101d72ab52bce9691a34e03f45316ee6ececdd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da6ddb40eb35605237a3c485101d72ab52bce9691a34e03f45316ee6ececdd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:08 compute-0 podman[471916]: 2025-10-03 10:42:08.606518339 +0000 UTC m=+0.233508431 container init 331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:42:08 compute-0 podman[471916]: 2025-10-03 10:42:08.633595446 +0000 UTC m=+0.260585508 container start 331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 10:42:08 compute-0 podman[471916]: 2025-10-03 10:42:08.642054167 +0000 UTC m=+0.269044309 container attach 331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:42:09 compute-0 nova_compute[351685]: 2025-10-03 10:42:09.586 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:42:09 compute-0 nova_compute[351685]: 2025-10-03 10:42:09.602 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:42:09 compute-0 nova_compute[351685]: 2025-10-03 10:42:09.602 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:42:09 compute-0 nova_compute[351685]: 2025-10-03 10:42:09.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:09 compute-0 nova_compute[351685]: 2025-10-03 10:42:09.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:42:09 compute-0 lucid_newton[471931]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:42:09 compute-0 lucid_newton[471931]: --> relative data size: 1.0
Oct  3 10:42:09 compute-0 lucid_newton[471931]: --> All data devices are unavailable
Oct  3 10:42:09 compute-0 systemd[1]: libpod-331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5.scope: Deactivated successfully.
Oct  3 10:42:09 compute-0 systemd[1]: libpod-331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5.scope: Consumed 1.277s CPU time.
Oct  3 10:42:09 compute-0 podman[471916]: 2025-10-03 10:42:09.969097092 +0000 UTC m=+1.596087184 container died 331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 10:42:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5da6ddb40eb35605237a3c485101d72ab52bce9691a34e03f45316ee6ececdd9-merged.mount: Deactivated successfully.
Oct  3 10:42:10 compute-0 podman[471916]: 2025-10-03 10:42:10.057353278 +0000 UTC m=+1.684343370 container remove 331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:42:10 compute-0 systemd[1]: libpod-conmon-331c4edd69b87fbca2d25f50f06281ffa30e23bc6b459e36e49374d1beb8c1c5.scope: Deactivated successfully.
Oct  3 10:42:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:11 compute-0 podman[472112]: 2025-10-03 10:42:11.161954339 +0000 UTC m=+0.083731383 container create 2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 10:42:11 compute-0 podman[472112]: 2025-10-03 10:42:11.116657458 +0000 UTC m=+0.038434592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:42:11 compute-0 systemd[1]: Started libpod-conmon-2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601.scope.
Oct  3 10:42:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:42:11 compute-0 podman[472112]: 2025-10-03 10:42:11.279779073 +0000 UTC m=+0.201556147 container init 2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 10:42:11 compute-0 podman[472112]: 2025-10-03 10:42:11.298210113 +0000 UTC m=+0.219987167 container start 2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_davinci, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:42:11 compute-0 podman[472112]: 2025-10-03 10:42:11.304442123 +0000 UTC m=+0.226219217 container attach 2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:42:11 compute-0 stoic_davinci[472128]: 167 167
Oct  3 10:42:11 compute-0 systemd[1]: libpod-2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601.scope: Deactivated successfully.
Oct  3 10:42:11 compute-0 podman[472112]: 2025-10-03 10:42:11.311593471 +0000 UTC m=+0.233370575 container died 2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_davinci, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:42:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a866841ae9422d9c3a9af59ee2023f9e482186f2a43de9e91f9d713223c6471-merged.mount: Deactivated successfully.
Oct  3 10:42:11 compute-0 podman[472112]: 2025-10-03 10:42:11.385300523 +0000 UTC m=+0.307077567 container remove 2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 10:42:11 compute-0 systemd[1]: libpod-conmon-2faa296e10201bcb09edef4780c80adc9181b6f0ac8fbccfbeb7d8bd8dd3c601.scope: Deactivated successfully.
Oct  3 10:42:11 compute-0 podman[472150]: 2025-10-03 10:42:11.665342182 +0000 UTC m=+0.078165084 container create 731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:42:11 compute-0 podman[472150]: 2025-10-03 10:42:11.635419104 +0000 UTC m=+0.048242026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:42:11 compute-0 systemd[1]: Started libpod-conmon-731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437.scope.
Oct  3 10:42:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb943bc98b92d6e7bfab16716b41948407662666d0247df3491b8afebe7f6d70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb943bc98b92d6e7bfab16716b41948407662666d0247df3491b8afebe7f6d70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb943bc98b92d6e7bfab16716b41948407662666d0247df3491b8afebe7f6d70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb943bc98b92d6e7bfab16716b41948407662666d0247df3491b8afebe7f6d70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:11 compute-0 podman[472150]: 2025-10-03 10:42:11.865935937 +0000 UTC m=+0.278758869 container init 731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:42:11 compute-0 podman[472150]: 2025-10-03 10:42:11.879992037 +0000 UTC m=+0.292814959 container start 731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 10:42:11 compute-0 podman[472150]: 2025-10-03 10:42:11.888082686 +0000 UTC m=+0.300905658 container attach 731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:42:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]: {
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:    "0": [
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:        {
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "devices": [
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "/dev/loop3"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            ],
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_name": "ceph_lv0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_size": "21470642176",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "name": "ceph_lv0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "tags": {
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cluster_name": "ceph",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.crush_device_class": "",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.encrypted": "0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osd_id": "0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.type": "block",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.vdo": "0"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            },
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "type": "block",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "vg_name": "ceph_vg0"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:        }
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:    ],
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:    "1": [
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:        {
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "devices": [
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "/dev/loop4"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            ],
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_name": "ceph_lv1",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_size": "21470642176",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "name": "ceph_lv1",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "tags": {
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cluster_name": "ceph",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.crush_device_class": "",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.encrypted": "0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osd_id": "1",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.type": "block",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.vdo": "0"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            },
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "type": "block",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "vg_name": "ceph_vg1"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:        }
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:    ],
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:    "2": [
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:        {
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "devices": [
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "/dev/loop5"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            ],
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_name": "ceph_lv2",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_size": "21470642176",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "name": "ceph_lv2",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "tags": {
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.cluster_name": "ceph",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.crush_device_class": "",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.encrypted": "0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osd_id": "2",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.type": "block",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:                "ceph.vdo": "0"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            },
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "type": "block",
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:            "vg_name": "ceph_vg2"
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:        }
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]:    ]
Oct  3 10:42:12 compute-0 compassionate_swartz[472166]: }
Oct  3 10:42:12 compute-0 systemd[1]: libpod-731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437.scope: Deactivated successfully.
Oct  3 10:42:12 compute-0 podman[472150]: 2025-10-03 10:42:12.763657052 +0000 UTC m=+1.176479984 container died 731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:42:12 compute-0 nova_compute[351685]: 2025-10-03 10:42:12.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb943bc98b92d6e7bfab16716b41948407662666d0247df3491b8afebe7f6d70-merged.mount: Deactivated successfully.
Oct  3 10:42:12 compute-0 podman[472150]: 2025-10-03 10:42:12.883423588 +0000 UTC m=+1.296246490 container remove 731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:42:12 compute-0 systemd[1]: libpod-conmon-731e277ba0759cf30a8b924d05fbd2ce407ba43c5d187c47bd7538c25fec5437.scope: Deactivated successfully.
Oct  3 10:42:13 compute-0 nova_compute[351685]: 2025-10-03 10:42:13.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:13 compute-0 nova_compute[351685]: 2025-10-03 10:42:13.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:13 compute-0 nova_compute[351685]: 2025-10-03 10:42:13.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:13 compute-0 nova_compute[351685]: 2025-10-03 10:42:13.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:42:13 compute-0 nova_compute[351685]: 2025-10-03 10:42:13.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:42:13 compute-0 nova_compute[351685]: 2025-10-03 10:42:13.758 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:42:13 compute-0 nova_compute[351685]: 2025-10-03 10:42:13.759 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:42:13 compute-0 nova_compute[351685]: 2025-10-03 10:42:13.762 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:42:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:14 compute-0 podman[472348]: 2025-10-03 10:42:14.137418604 +0000 UTC m=+0.084887920 container create c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:42:14 compute-0 podman[472348]: 2025-10-03 10:42:14.100071918 +0000 UTC m=+0.047541264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:42:14 compute-0 systemd[1]: Started libpod-conmon-c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16.scope.
Oct  3 10:42:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:42:14 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021433016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:42:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:42:14 compute-0 podman[472348]: 2025-10-03 10:42:14.274207195 +0000 UTC m=+0.221676501 container init c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:42:14 compute-0 nova_compute[351685]: 2025-10-03 10:42:14.276 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:42:14 compute-0 podman[472348]: 2025-10-03 10:42:14.293560725 +0000 UTC m=+0.241030041 container start c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:42:14 compute-0 podman[472348]: 2025-10-03 10:42:14.29933291 +0000 UTC m=+0.246802216 container attach c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:42:14 compute-0 charming_mccarthy[472363]: 167 167
Oct  3 10:42:14 compute-0 systemd[1]: libpod-c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16.scope: Deactivated successfully.
Oct  3 10:42:14 compute-0 podman[472348]: 2025-10-03 10:42:14.308522225 +0000 UTC m=+0.255991511 container died c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:42:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0189076ef88b5ce7a59d2b73a220b9a09963d64c308209bf66603a4188621211-merged.mount: Deactivated successfully.
Oct  3 10:42:14 compute-0 nova_compute[351685]: 2025-10-03 10:42:14.366 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:42:14 compute-0 nova_compute[351685]: 2025-10-03 10:42:14.367 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:42:14 compute-0 nova_compute[351685]: 2025-10-03 10:42:14.368 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:42:14 compute-0 podman[472348]: 2025-10-03 10:42:14.395349216 +0000 UTC m=+0.342818512 container remove c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:42:14 compute-0 systemd[1]: libpod-conmon-c56111141bd84b5af8d109abbb48161c5219bb864f8e9f349ede12d977449a16.scope: Deactivated successfully.
Oct  3 10:42:14 compute-0 podman[472389]: 2025-10-03 10:42:14.682619397 +0000 UTC m=+0.088653910 container create 364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:42:14 compute-0 podman[472389]: 2025-10-03 10:42:14.651441128 +0000 UTC m=+0.057475641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:42:14 compute-0 systemd[1]: Started libpod-conmon-364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47.scope.
Oct  3 10:42:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a7d4e7fbf3677a7f398240dfe812bad92476fbd6db866c8b8911501a9dfc2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a7d4e7fbf3677a7f398240dfe812bad92476fbd6db866c8b8911501a9dfc2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a7d4e7fbf3677a7f398240dfe812bad92476fbd6db866c8b8911501a9dfc2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a7d4e7fbf3677a7f398240dfe812bad92476fbd6db866c8b8911501a9dfc2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:42:14 compute-0 podman[472389]: 2025-10-03 10:42:14.814956516 +0000 UTC m=+0.220991019 container init 364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:42:14 compute-0 podman[472389]: 2025-10-03 10:42:14.838668315 +0000 UTC m=+0.244702778 container start 364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:42:14 compute-0 nova_compute[351685]: 2025-10-03 10:42:14.841 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:42:14 compute-0 podman[472389]: 2025-10-03 10:42:14.843100688 +0000 UTC m=+0.249135431 container attach 364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_booth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:42:14 compute-0 nova_compute[351685]: 2025-10-03 10:42:14.843 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3815MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:42:14 compute-0 nova_compute[351685]: 2025-10-03 10:42:14.843 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:42:14 compute-0 nova_compute[351685]: 2025-10-03 10:42:14.843 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.002 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.003 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.003 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.174 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:42:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:42:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2537010694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.688 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.701 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.727 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.730 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:42:15 compute-0 nova_compute[351685]: 2025-10-03 10:42:15.731 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:42:15 compute-0 podman[472446]: 2025-10-03 10:42:15.827706225 +0000 UTC m=+0.086587965 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 10:42:15 compute-0 brave_booth[472405]: {
Oct  3 10:42:15 compute-0 brave_booth[472405]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "osd_id": 1,
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "type": "bluestore"
Oct  3 10:42:15 compute-0 brave_booth[472405]:    },
Oct  3 10:42:15 compute-0 brave_booth[472405]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "osd_id": 2,
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "type": "bluestore"
Oct  3 10:42:15 compute-0 brave_booth[472405]:    },
Oct  3 10:42:15 compute-0 brave_booth[472405]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "osd_id": 0,
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:42:15 compute-0 brave_booth[472405]:        "type": "bluestore"
Oct  3 10:42:15 compute-0 brave_booth[472405]:    }
Oct  3 10:42:15 compute-0 brave_booth[472405]: }
Oct  3 10:42:15 compute-0 systemd[1]: libpod-364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47.scope: Deactivated successfully.
Oct  3 10:42:15 compute-0 systemd[1]: libpod-364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47.scope: Consumed 1.092s CPU time.
Oct  3 10:42:15 compute-0 podman[472389]: 2025-10-03 10:42:15.94961421 +0000 UTC m=+1.355648693 container died 364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:42:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6a7d4e7fbf3677a7f398240dfe812bad92476fbd6db866c8b8911501a9dfc2c-merged.mount: Deactivated successfully.
Oct  3 10:42:16 compute-0 podman[472389]: 2025-10-03 10:42:16.030701947 +0000 UTC m=+1.436736420 container remove 364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:42:16 compute-0 systemd[1]: libpod-conmon-364f6ca87423030922648ef9ed283ca6eccb8b715a3d50b6a70c6d0b9ecd4f47.scope: Deactivated successfully.
Oct  3 10:42:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:42:16 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:42:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:42:16 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 425b091d-3aac-4adf-a896-8438cae13629 does not exist
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d80b0989-f118-4da2-974f-cc92eddc5b5a does not exist
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:42:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:42:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:42:16 compute-0 nova_compute[351685]: 2025-10-03 10:42:16.733 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:16 compute-0 nova_compute[351685]: 2025-10-03 10:42:16.734 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:17 compute-0 nova_compute[351685]: 2025-10-03 10:42:17.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:17 compute-0 nova_compute[351685]: 2025-10-03 10:42:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:17 compute-0 nova_compute[351685]: 2025-10-03 10:42:17.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:18 compute-0 nova_compute[351685]: 2025-10-03 10:42:18.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:18 compute-0 podman[472539]: 2025-10-03 10:42:18.85003756 +0000 UTC m=+0.098394903 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:42:18 compute-0 podman[472540]: 2025-10-03 10:42:18.861714083 +0000 UTC m=+0.105419967 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9)
Oct  3 10:42:19 compute-0 nova_compute[351685]: 2025-10-03 10:42:19.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:42:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:22 compute-0 nova_compute[351685]: 2025-10-03 10:42:22.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:23 compute-0 nova_compute[351685]: 2025-10-03 10:42:23.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:27 compute-0 nova_compute[351685]: 2025-10-03 10:42:27.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:28 compute-0 nova_compute[351685]: 2025-10-03 10:42:28.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:29 compute-0 podman[157165]: time="2025-10-03T10:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:42:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:42:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9080 "" "Go-http-client/1.1"
Oct  3 10:42:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:31 compute-0 openstack_network_exporter[367524]: ERROR   10:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:42:31 compute-0 openstack_network_exporter[367524]: ERROR   10:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:42:31 compute-0 openstack_network_exporter[367524]: ERROR   10:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:42:31 compute-0 openstack_network_exporter[367524]: ERROR   10:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:42:31 compute-0 openstack_network_exporter[367524]: ERROR   10:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:42:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:32 compute-0 nova_compute[351685]: 2025-10-03 10:42:32.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:32 compute-0 podman[472582]: 2025-10-03 10:42:32.883446239 +0000 UTC m=+0.118400093 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4)
Oct  3 10:42:32 compute-0 podman[472581]: 2025-10-03 10:42:32.912412518 +0000 UTC m=+0.154474679 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 10:42:32 compute-0 podman[472583]: 2025-10-03 10:42:32.927471389 +0000 UTC m=+0.155947615 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller)
Oct  3 10:42:33 compute-0 nova_compute[351685]: 2025-10-03 10:42:33.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:35 compute-0 podman[472644]: 2025-10-03 10:42:35.884697729 +0000 UTC m=+0.130475521 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:42:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:37 compute-0 nova_compute[351685]: 2025-10-03 10:42:37.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:38 compute-0 nova_compute[351685]: 2025-10-03 10:42:38.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:38 compute-0 podman[472662]: 2025-10-03 10:42:38.874726969 +0000 UTC m=+0.125946975 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:42:38 compute-0 podman[472663]: 2025-10-03 10:42:38.882739316 +0000 UTC m=+0.127885357 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:42:38 compute-0 podman[472664]: 2025-10-03 10:42:38.896039651 +0000 UTC m=+0.147613509 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
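The three health_status=healthy events above come from podman's built-in healthcheck timers for the edpm-managed containers; health_failing_streak=0 means no consecutive failures. A minimal sketch for spot-checking the same status by hand (container names taken from the log; the State.Health vs. State.Healthcheck field name varies by podman release, so both are tried):

    import json
    import subprocess

    # Containers whose periodic health checks appear in the surrounding log.
    for name in ("node_exporter", "multipathd", "iscsid"):
        # `podman inspect` prints a JSON array with one object per container.
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0]["State"]
        # Newer podman exposes the Docker-compatible "Health" key; older
        # releases used "Healthcheck" (assumption: one of the two is present).
        health = state.get("Health") or state.get("Healthcheck") or {}
        print(name, health.get("Status"),
              "failing streak:", health.get("FailingStreak"))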
Oct  3 10:42:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:42:41.639 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:42:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:42:41.640 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:42:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:42:41.641 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
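The Acquiring/acquired/released triplet above, with its waited/held timings, is the standard oslo.concurrency debug trace. A minimal sketch of the pattern that emits it (lock name copied from the log; the body is illustrative):

    from oslo_concurrency import lockutils

    # Entering the decorated function logs 'Acquiring lock ...' and
    # 'Lock ... acquired ... :: waited Ns'; returning logs
    # 'Lock ... "released" ... :: held Ns', the three lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # inspect monitored child processes here

    _check_child_processes()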
Oct  3 10:42:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:42 compute-0 nova_compute[351685]: 2025-10-03 10:42:42.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:43 compute-0 nova_compute[351685]: 2025-10-03 10:42:43.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:42:46
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'backups', 'vms', 'images', 'volumes', 'default.rgw.meta']
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
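prepared 0/10 changes above means the upmap optimizer was allowed up to 10 moves but found the PG distribution already balanced within the 0.05 max-misplaced ratio. A quick way to confirm the balancer state from the CLI (a sketch; assumes an admin keyring is available on the host):

    import json
    import subprocess

    # `ceph balancer status` reports the active flag, mode and last
    # optimization outcome; -f json makes it machine-readable.
    out = subprocess.run(["ceph", "balancer", "status", "-f", "json"],
                         capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=2))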
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:42:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:42:46 compute-0 podman[472725]: 2025-10-03 10:42:46.865896116 +0000 UTC m=+0.109744835 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:42:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:47 compute-0 nova_compute[351685]: 2025-10-03 10:42:47.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:48 compute-0 nova_compute[351685]: 2025-10-03 10:42:48.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:49 compute-0 podman[472745]: 2025-10-03 10:42:49.879642197 +0000 UTC m=+0.126185543 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:42:49 compute-0 podman[472746]: 2025-10-03 10:42:49.898083578 +0000 UTC m=+0.134602773 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release=1214.1726694543, architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible)
Oct  3 10:42:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:42:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4200.2 total, 600.0 interval
    Cumulative writes: 8494 writes, 32K keys, 8494 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 8494 writes, 2120 syncs, 4.01 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
    Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:42:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:52 compute-0 nova_compute[351685]: 2025-10-03 10:42:52.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:53 compute-0 nova_compute[351685]: 2025-10-03 10:42:53.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:42:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3031290452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:42:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:42:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3031290452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:42:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:42:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
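Each pg_autoscaler line above follows one rule: pg target = capacity ratio x bias x total target PGs, where the total works out to 300 here (consistent with, e.g., 3 OSDs at the default mon_target_pg_per_osd=100, an assumption), and the result is then quantized to a power of two with a floor and shrink hysteresis, which is why near-zero targets still show 32. A worked check against two pools from the log:

    def pg_target(capacity_ratio, bias, total_target_pgs=300):
        # Raw (unquantized) PG target as reported by the autoscaler lines.
        return capacity_ratio * bias * total_target_pgs

    # Pool 'vms': 0.000551649... of space, bias 1.0 -> 0.1654948... (matches).
    print(pg_target(0.000551649390343166, 1.0))
    # Pool 'cephfs.cephfs.meta': bias 4.0 -> 0.00061047... (matches).
    print(pg_target(5.087256625643029e-07, 4.0))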
Oct  3 10:42:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:42:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:42:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4200.1 total, 600.0 interval
    Cumulative writes: 9344 writes, 34K keys, 9344 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 9344 writes, 2368 syncs, 3.95 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 220 writes, 330 keys, 220 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
    Interval WAL: 220 writes, 110 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:42:57 compute-0 nova_compute[351685]: 2025-10-03 10:42:57.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:42:58 compute-0 nova_compute[351685]: 2025-10-03 10:42:58.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:42:59 compute-0 podman[157165]: time="2025-10-03T10:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:42:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:42:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9083 "" "Go-http-client/1.1"
Oct  3 10:43:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:00.439531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488180439572, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2041, "num_deletes": 251, "total_data_size": 3406651, "memory_usage": 3461896, "flush_reason": "Manual Compaction"}
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488180632033, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 3340499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44320, "largest_seqno": 46360, "table_properties": {"data_size": 3331185, "index_size": 5872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18470, "raw_average_key_size": 20, "raw_value_size": 3312746, "raw_average_value_size": 3593, "num_data_blocks": 261, "num_entries": 922, "num_filter_entries": 922, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759487952, "oldest_key_time": 1759487952, "file_creation_time": 1759488180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 192601 microseconds, and 14983 cpu microseconds.
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:00.632127) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 3340499 bytes OK
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:00.632153) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:00.784363) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:00.784408) EVENT_LOG_v1 {"time_micros": 1759488180784397, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:00.784440) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3398137, prev total WAL file size 3398137, number of live WAL files 2.
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:00.787449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(3262KB)], [107(6288KB)]
Oct  3 10:43:00 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488180787584, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9780084, "oldest_snapshot_seqno": -1}
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6109 keys, 8006897 bytes, temperature: kUnknown
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488181278475, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 8006897, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7968774, "index_size": 21757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 158998, "raw_average_key_size": 26, "raw_value_size": 7860691, "raw_average_value_size": 1286, "num_data_blocks": 861, "num_entries": 6109, "num_filter_entries": 6109, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759488180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:43:01 compute-0 openstack_network_exporter[367524]: ERROR   10:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:43:01 compute-0 openstack_network_exporter[367524]: ERROR   10:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:43:01 compute-0 openstack_network_exporter[367524]: ERROR   10:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:43:01 compute-0 openstack_network_exporter[367524]: ERROR   10:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:43:01 compute-0 openstack_network_exporter[367524]: ERROR   10:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
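These errors are expected on a compute node: openstack_network_exporter also probes ovn-northd and the OVS DB server through their unix control sockets, but only ovn-controller and ovs-vswitchd run here, so no ovn-northd socket exists. A sketch of the same probe (socket paths follow the usual <rundir>/<daemon>.<pid>.ctl convention and match the exporter container's volume mounts shown earlier; treat both as assumptions):

    import glob

    # ovs-appctl targets a daemon via its .ctl control socket; with no
    # matching socket the exporter logs 'no control socket files found'.
    for daemon, pattern in (
        ("ovn-northd", "/var/lib/openvswitch/ovn/ovn-northd.*.ctl"),
        ("ovs-vswitchd", "/var/run/openvswitch/ovs-vswitchd.*.ctl"),
    ):
        found = glob.glob(pattern)
        print(daemon, "->", found if found else "no control socket files found")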
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:01.278771) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 8006897 bytes
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:01.508503) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 19.9 rd, 16.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.1 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 6623, records dropped: 514 output_compression: NoCompression
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:01.508553) EVENT_LOG_v1 {"time_micros": 1759488181508534, "job": 64, "event": "compaction_finished", "compaction_time_micros": 490970, "compaction_time_cpu_micros": 43274, "output_level": 6, "num_output_files": 1, "total_output_size": 8006897, "num_input_records": 6623, "num_output_records": 6109, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
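The JOB 64 summary above is internally consistent and easy to verify: 3.2 MB came in from L0, 6.1 MB from L6, and 7.6 MB was written out to the new L6 table, so:

    l0_in, l6_in, out = 3.2, 6.1, 7.6  # MB, from the compaction summary above

    # write-amplify(2.4): bytes written per byte of new (L0) data.
    print(round(out / l0_in, 1))                       # 2.4
    # read-write-amplify(5.3): all bytes read and written per byte of L0 data.
    print(round((l0_in + l6_in + out) / l0_in, 1))     # 5.3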
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488181510355, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488181513215, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:00.786971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:01.513503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:01.513511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:01.513514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:01.513517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:43:01 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:43:01.513520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:43:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:02 compute-0 nova_compute[351685]: 2025-10-03 10:43:02.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:03 compute-0 nova_compute[351685]: 2025-10-03 10:43:03.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:43:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4200.1 total, 600.0 interval
    Cumulative writes: 7525 writes, 28K keys, 7525 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
    Cumulative WAL: 7525 writes, 1721 syncs, 4.37 writes per sync, written: 0.02 GB, 0.01 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 216 writes, 324 keys, 216 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
    Interval WAL: 216 writes, 108 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:43:03 compute-0 podman[472787]: 2025-10-03 10:43:03.864088769 +0000 UTC m=+0.102017729 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  3 10:43:03 compute-0 podman[472786]: 2025-10-03 10:43:03.891481696 +0000 UTC m=+0.136271955 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible)
Oct  3 10:43:03 compute-0 podman[472788]: 2025-10-03 10:43:03.914612467 +0000 UTC m=+0.150836892 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:43:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 10:43:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:06 compute-0 nova_compute[351685]: 2025-10-03 10:43:06.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:06 compute-0 nova_compute[351685]: 2025-10-03 10:43:06.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:43:06 compute-0 nova_compute[351685]: 2025-10-03 10:43:06.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:43:06 compute-0 podman[472850]: 2025-10-03 10:43:06.892897802 +0000 UTC m=+0.145713879 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  3 10:43:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:07 compute-0 nova_compute[351685]: 2025-10-03 10:43:07.566 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:43:07 compute-0 nova_compute[351685]: 2025-10-03 10:43:07.566 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:43:07 compute-0 nova_compute[351685]: 2025-10-03 10:43:07.567 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:43:07 compute-0 nova_compute[351685]: 2025-10-03 10:43:07.567 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:43:07 compute-0 nova_compute[351685]: 2025-10-03 10:43:07.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:08 compute-0 nova_compute[351685]: 2025-10-03 10:43:08.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:09 compute-0 nova_compute[351685]: 2025-10-03 10:43:09.611 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:43:09 compute-0 nova_compute[351685]: 2025-10-03 10:43:09.645 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:43:09 compute-0 nova_compute[351685]: 2025-10-03 10:43:09.645 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:43:09 compute-0 podman[472870]: 2025-10-03 10:43:09.875350713 +0000 UTC m=+0.114582592 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:43:09 compute-0 podman[472871]: 2025-10-03 10:43:09.886121078 +0000 UTC m=+0.118540458 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:43:09 compute-0 podman[472869]: 2025-10-03 10:43:09.897914855 +0000 UTC m=+0.142769474 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:43:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:10 compute-0 nova_compute[351685]: 2025-10-03 10:43:10.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:10 compute-0 nova_compute[351685]: 2025-10-03 10:43:10.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
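CONF.reclaim_instance_interval <= 0 is the default: deleted instances are purged immediately rather than parked as SOFT_DELETED. Deferred reclaim is a single nova.conf setting (value illustrative):

    [DEFAULT]
    # Seconds to keep soft-deleted instances before the
    # _reclaim_queued_deletes periodic task (seen above) purges them.
    reclaim_instance_interval = 3600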
Oct  3 10:43:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:12 compute-0 nova_compute[351685]: 2025-10-03 10:43:12.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:13 compute-0 nova_compute[351685]: 2025-10-03 10:43:13.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:13 compute-0 nova_compute[351685]: 2025-10-03 10:43:13.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:14 compute-0 nova_compute[351685]: 2025-10-03 10:43:14.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:14 compute-0 nova_compute[351685]: 2025-10-03 10:43:14.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:14 compute-0 nova_compute[351685]: 2025-10-03 10:43:14.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:43:14 compute-0 nova_compute[351685]: 2025-10-03 10:43:14.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:43:14 compute-0 nova_compute[351685]: 2025-10-03 10:43:14.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:43:14 compute-0 nova_compute[351685]: 2025-10-03 10:43:14.764 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:43:14 compute-0 nova_compute[351685]: 2025-10-03 10:43:14.765 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:43:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:43:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617477391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.232 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.342 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.343 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.344 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.776 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.777 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3838MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.777 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.777 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.860 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.861 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.861 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.875 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.888 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.889 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.902 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.925 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 10:43:15 compute-0 nova_compute[351685]: 2025-10-03 10:43:15.964 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:43:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:43:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:43:16 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870018218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:43:16 compute-0 nova_compute[351685]: 2025-10-03 10:43:16.440 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:43:16 compute-0 nova_compute[351685]: 2025-10-03 10:43:16.449 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:43:16 compute-0 nova_compute[351685]: 2025-10-03 10:43:16.466 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:43:16 compute-0 nova_compute[351685]: 2025-10-03 10:43:16.468 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:43:16 compute-0 nova_compute[351685]: 2025-10-03 10:43:16.469 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:43:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:43:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:43:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:43:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:43:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:43:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:43:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0f37a681-6664-41b1-99eb-96a48af58929 does not exist
Oct  3 10:43:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6b7ca790-a98a-4983-bd8c-3d53888ec763 does not exist
Oct  3 10:43:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 06594224-6ea5-4f86-bbbb-d87b00bf362b does not exist
Oct  3 10:43:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:43:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:43:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:43:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:43:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:43:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:43:17 compute-0 podman[473128]: 2025-10-03 10:43:17.745863957 +0000 UTC m=+0.129289273 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 10:43:17 compute-0 nova_compute[351685]: 2025-10-03 10:43:17.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:43:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:43:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:43:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:18 compute-0 nova_compute[351685]: 2025-10-03 10:43:18.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:18 compute-0 podman[473261]: 2025-10-03 10:43:18.302853387 +0000 UTC m=+0.047742700 container create 43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:43:18 compute-0 systemd[1]: Started libpod-conmon-43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52.scope.
Oct  3 10:43:18 compute-0 podman[473261]: 2025-10-03 10:43:18.285325336 +0000 UTC m=+0.030214669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:43:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:43:18 compute-0 podman[473261]: 2025-10-03 10:43:18.403621915 +0000 UTC m=+0.148511248 container init 43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:43:18 compute-0 podman[473261]: 2025-10-03 10:43:18.412612593 +0000 UTC m=+0.157501906 container start 43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:43:18 compute-0 podman[473261]: 2025-10-03 10:43:18.416405985 +0000 UTC m=+0.161295328 container attach 43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:43:18 compute-0 infallible_grothendieck[473276]: 167 167
Oct  3 10:43:18 compute-0 systemd[1]: libpod-43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52.scope: Deactivated successfully.
Oct  3 10:43:18 compute-0 podman[473261]: 2025-10-03 10:43:18.419035419 +0000 UTC m=+0.163924732 container died 43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f135d229f92d2cffc7cd0b787fb92b87cc5cdcf5e0b8d3f3dfaec889079caeb-merged.mount: Deactivated successfully.
Oct  3 10:43:18 compute-0 podman[473261]: 2025-10-03 10:43:18.465458826 +0000 UTC m=+0.210348139 container remove 43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:43:18 compute-0 systemd[1]: libpod-conmon-43ca10581947c84a9752697287190cd9abce9a221eb8e6b1de576b01fe4cef52.scope: Deactivated successfully.
Oct  3 10:43:18 compute-0 podman[473300]: 2025-10-03 10:43:18.657787156 +0000 UTC m=+0.048271587 container create 76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:43:18 compute-0 systemd[1]: Started libpod-conmon-76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343.scope.
Oct  3 10:43:18 compute-0 podman[473300]: 2025-10-03 10:43:18.63887373 +0000 UTC m=+0.029358161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:43:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598524a3fac29ac931f340a6bdcbc9b7218aeb44e4fbacaab192b565f11255c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598524a3fac29ac931f340a6bdcbc9b7218aeb44e4fbacaab192b565f11255c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598524a3fac29ac931f340a6bdcbc9b7218aeb44e4fbacaab192b565f11255c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598524a3fac29ac931f340a6bdcbc9b7218aeb44e4fbacaab192b565f11255c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598524a3fac29ac931f340a6bdcbc9b7218aeb44e4fbacaab192b565f11255c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:18 compute-0 podman[473300]: 2025-10-03 10:43:18.772898133 +0000 UTC m=+0.163382574 container init 76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:43:18 compute-0 podman[473300]: 2025-10-03 10:43:18.797889793 +0000 UTC m=+0.188374214 container start 76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 10:43:18 compute-0 podman[473300]: 2025-10-03 10:43:18.810045793 +0000 UTC m=+0.200530214 container attach 76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:43:19 compute-0 frosty_engelbart[473316]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:43:19 compute-0 frosty_engelbart[473316]: --> relative data size: 1.0
Oct  3 10:43:19 compute-0 frosty_engelbart[473316]: --> All data devices are unavailable
Oct  3 10:43:19 compute-0 systemd[1]: libpod-76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343.scope: Deactivated successfully.
Oct  3 10:43:19 compute-0 podman[473300]: 2025-10-03 10:43:19.937221217 +0000 UTC m=+1.327705638 container died 76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:43:19 compute-0 systemd[1]: libpod-76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343.scope: Consumed 1.077s CPU time.
Oct  3 10:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2598524a3fac29ac931f340a6bdcbc9b7218aeb44e4fbacaab192b565f11255c-merged.mount: Deactivated successfully.
Oct  3 10:43:20 compute-0 podman[473300]: 2025-10-03 10:43:20.005886826 +0000 UTC m=+1.396371257 container remove 76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:43:20 compute-0 systemd[1]: libpod-conmon-76d4f0134d01bfd8b8828405b6e48225333769728d08095f763dcabdd5a61343.scope: Deactivated successfully.
Oct  3 10:43:20 compute-0 podman[473347]: 2025-10-03 10:43:20.06281371 +0000 UTC m=+0.083720143 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:43:20 compute-0 podman[473350]: 2025-10-03 10:43:20.071706544 +0000 UTC m=+0.094784807 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, version=9.4, io.buildah.version=1.29.0, vcs-type=git, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:43:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:20 compute-0 nova_compute[351685]: 2025-10-03 10:43:20.469 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:20 compute-0 nova_compute[351685]: 2025-10-03 10:43:20.470 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:20 compute-0 nova_compute[351685]: 2025-10-03 10:43:20.470 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:20 compute-0 nova_compute[351685]: 2025-10-03 10:43:20.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:20 compute-0 podman[473537]: 2025-10-03 10:43:20.876642827 +0000 UTC m=+0.062249265 container create 6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:43:20 compute-0 systemd[1]: Started libpod-conmon-6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77.scope.
Oct  3 10:43:20 compute-0 podman[473537]: 2025-10-03 10:43:20.856484761 +0000 UTC m=+0.042091199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:43:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:43:20 compute-0 podman[473537]: 2025-10-03 10:43:20.985042429 +0000 UTC m=+0.170648907 container init 6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:43:20 compute-0 podman[473537]: 2025-10-03 10:43:20.993826881 +0000 UTC m=+0.179433309 container start 6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:43:20 compute-0 podman[473537]: 2025-10-03 10:43:20.998423907 +0000 UTC m=+0.184030425 container attach 6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:43:21 compute-0 boring_poitras[473553]: 167 167
Oct  3 10:43:21 compute-0 systemd[1]: libpod-6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77.scope: Deactivated successfully.
Oct  3 10:43:21 compute-0 podman[473537]: 2025-10-03 10:43:21.004376688 +0000 UTC m=+0.189983116 container died 6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c67d8ed1cd839be5952b94a8e94262d8d492151bda035c24f7388c629aa3452-merged.mount: Deactivated successfully.
Oct  3 10:43:21 compute-0 podman[473537]: 2025-10-03 10:43:21.060228177 +0000 UTC m=+0.245834625 container remove 6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:43:21 compute-0 systemd[1]: libpod-conmon-6358cb1fe886e267cf4ae75694ee29de965a4416ed24c723fe198a19eaa6cf77.scope: Deactivated successfully.
Oct  3 10:43:21 compute-0 podman[473576]: 2025-10-03 10:43:21.325962719 +0000 UTC m=+0.080806819 container create ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mirzakhani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:43:21 compute-0 systemd[1]: Started libpod-conmon-ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8.scope.
Oct  3 10:43:21 compute-0 podman[473576]: 2025-10-03 10:43:21.294517442 +0000 UTC m=+0.049361582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:43:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/debdc4890b14b17973b2628fcf22a014a4fbd078a079f1946f9757540a1471a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/debdc4890b14b17973b2628fcf22a014a4fbd078a079f1946f9757540a1471a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/debdc4890b14b17973b2628fcf22a014a4fbd078a079f1946f9757540a1471a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/debdc4890b14b17973b2628fcf22a014a4fbd078a079f1946f9757540a1471a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:21 compute-0 podman[473576]: 2025-10-03 10:43:21.445308832 +0000 UTC m=+0.200152902 container init ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mirzakhani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:43:21 compute-0 podman[473576]: 2025-10-03 10:43:21.465667863 +0000 UTC m=+0.220511923 container start ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:43:21 compute-0 podman[473576]: 2025-10-03 10:43:21.470008553 +0000 UTC m=+0.224852623 container attach ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 10:43:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]: {
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:    "0": [
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:        {
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "devices": [
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "/dev/loop3"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            ],
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_name": "ceph_lv0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_size": "21470642176",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "name": "ceph_lv0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "tags": {
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cluster_name": "ceph",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.crush_device_class": "",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.encrypted": "0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osd_id": "0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.type": "block",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.vdo": "0"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            },
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "type": "block",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "vg_name": "ceph_vg0"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:        }
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:    ],
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:    "1": [
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:        {
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "devices": [
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "/dev/loop4"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            ],
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_name": "ceph_lv1",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_size": "21470642176",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "name": "ceph_lv1",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "tags": {
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cluster_name": "ceph",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.crush_device_class": "",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.encrypted": "0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osd_id": "1",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.type": "block",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.vdo": "0"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            },
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "type": "block",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "vg_name": "ceph_vg1"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:        }
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:    ],
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:    "2": [
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:        {
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "devices": [
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "/dev/loop5"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            ],
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_name": "ceph_lv2",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_size": "21470642176",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "name": "ceph_lv2",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "tags": {
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.cluster_name": "ceph",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.crush_device_class": "",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.encrypted": "0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osd_id": "2",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.type": "block",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:                "ceph.vdo": "0"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            },
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "type": "block",
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:            "vg_name": "ceph_vg2"
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:        }
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]:    ]
Oct  3 10:43:22 compute-0 hungry_mirzakhani[473592]: }
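The JSON emitted by the hungry_mirzakhani container above is keyed by OSD id and resembles the shape of `ceph-volume lvm list --format json` output. A minimal sketch, assuming the payload has been captured to a hypothetical file lvm_list.json, that maps each OSD id to its backing device, LV path, and OSD fsid:

    import json

    # Parse a ceph-volume "lvm list"-style payload (assumed saved as
    # lvm_list.json) and print each OSD's device, LV path, and fsid.
    with open("lvm_list.json") as f:   # hypothetical capture of the JSON above
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: devices={lv['devices']} "
                  f"lv_path={lv['lv_path']} "
                  f"osd_fsid={tags.get('ceph.osd_fsid')}")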
Oct  3 10:43:22 compute-0 systemd[1]: libpod-ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8.scope: Deactivated successfully.
Oct  3 10:43:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:22 compute-0 podman[473602]: 2025-10-03 10:43:22.364146253 +0000 UTC m=+0.048650970 container died ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mirzakhani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:43:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-debdc4890b14b17973b2628fcf22a014a4fbd078a079f1946f9757540a1471a8-merged.mount: Deactivated successfully.
Oct  3 10:43:22 compute-0 podman[473602]: 2025-10-03 10:43:22.429213296 +0000 UTC m=+0.113717963 container remove ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mirzakhani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:43:22 compute-0 systemd[1]: libpod-conmon-ec8e29ce394a93e0c35d4d6387b1417715d1f1a9d4ad48716a879c66c544c9c8.scope: Deactivated successfully.
Oct  3 10:43:22 compute-0 nova_compute[351685]: 2025-10-03 10:43:22.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:23 compute-0 nova_compute[351685]: 2025-10-03 10:43:23.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:23 compute-0 podman[473752]: 2025-10-03 10:43:23.360877329 +0000 UTC m=+0.066520693 container create 9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:43:23 compute-0 systemd[1]: Started libpod-conmon-9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8.scope.
Oct  3 10:43:23 compute-0 podman[473752]: 2025-10-03 10:43:23.33408458 +0000 UTC m=+0.039727984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:43:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:43:23 compute-0 podman[473752]: 2025-10-03 10:43:23.45490211 +0000 UTC m=+0.160545574 container init 9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 10:43:23 compute-0 podman[473752]: 2025-10-03 10:43:23.467005958 +0000 UTC m=+0.172649342 container start 9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:43:23 compute-0 podman[473752]: 2025-10-03 10:43:23.472759492 +0000 UTC m=+0.178402966 container attach 9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:43:23 compute-0 gifted_knuth[473768]: 167 167
Oct  3 10:43:23 compute-0 systemd[1]: libpod-9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8.scope: Deactivated successfully.
Oct  3 10:43:23 compute-0 conmon[473768]: conmon 9abcac18bf0e8e8f4055 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8.scope/container/memory.events
Oct  3 10:43:23 compute-0 podman[473752]: 2025-10-03 10:43:23.477672509 +0000 UTC m=+0.183315903 container died 9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:43:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-89c1c099a80b137e03a858d301491ee49d13b2b8fef2b20e4472866ede830aa4-merged.mount: Deactivated successfully.
Oct  3 10:43:23 compute-0 podman[473752]: 2025-10-03 10:43:23.529457088 +0000 UTC m=+0.235100462 container remove 9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:43:23 compute-0 systemd[1]: libpod-conmon-9abcac18bf0e8e8f40559b9a32e9008693655bce500d7c13789f830f9b4ccef8.scope: Deactivated successfully.
Oct  3 10:43:23 compute-0 podman[473791]: 2025-10-03 10:43:23.767673398 +0000 UTC m=+0.074587460 container create 1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 10:43:23 compute-0 systemd[1]: Started libpod-conmon-1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d.scope.
Oct  3 10:43:23 compute-0 podman[473791]: 2025-10-03 10:43:23.73622649 +0000 UTC m=+0.043140582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:43:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfe179d5957d37dd1fa96afb6fd98c2b4116a5bbad959e18f76645563c10ec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfe179d5957d37dd1fa96afb6fd98c2b4116a5bbad959e18f76645563c10ec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfe179d5957d37dd1fa96afb6fd98c2b4116a5bbad959e18f76645563c10ec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bfe179d5957d37dd1fa96afb6fd98c2b4116a5bbad959e18f76645563c10ec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:43:23 compute-0 podman[473791]: 2025-10-03 10:43:23.900018467 +0000 UTC m=+0.206932529 container init 1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:43:23 compute-0 podman[473791]: 2025-10-03 10:43:23.915870565 +0000 UTC m=+0.222784627 container start 1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:43:23 compute-0 podman[473791]: 2025-10-03 10:43:23.920837934 +0000 UTC m=+0.227752006 container attach 1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:43:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]: {
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "osd_id": 1,
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "type": "bluestore"
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:    },
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "osd_id": 2,
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "type": "bluestore"
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:    },
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "osd_id": 0,
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:        "type": "bluestore"
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]:    }
Oct  3 10:43:24 compute-0 nostalgic_swanson[473808]: }
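The nostalgic_swanson container prints a second listing, keyed by OSD uuid with device-mapper paths and type bluestore (the shape of `ceph-volume raw list`-style output). A minimal sketch, under the same hypothetical-capture assumption, that cross-checks the two listings for id/uuid consistency:

    import json

    # Cross-check the id-keyed LVM listing against the uuid-keyed bluestore
    # listing (both filenames are hypothetical captures of the JSON above).
    with open("lvm_list.json") as f:
        by_id = json.load(f)
    with open("raw_list.json") as f:
        by_uuid = json.load(f)

    for osd_id, lvs in by_id.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        entry = by_uuid.get(fsid)
        ok = entry is not None and str(entry["osd_id"]) == osd_id
        print(f"osd.{osd_id} uuid={fsid} consistent={ok}")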
Oct  3 10:43:24 compute-0 systemd[1]: libpod-1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d.scope: Deactivated successfully.
Oct  3 10:43:24 compute-0 systemd[1]: libpod-1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d.scope: Consumed 1.039s CPU time.
Oct  3 10:43:25 compute-0 podman[473841]: 2025-10-03 10:43:25.014168194 +0000 UTC m=+0.039986172 container died 1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bfe179d5957d37dd1fa96afb6fd98c2b4116a5bbad959e18f76645563c10ec6-merged.mount: Deactivated successfully.
Oct  3 10:43:25 compute-0 podman[473841]: 2025-10-03 10:43:25.123562218 +0000 UTC m=+0.149380206 container remove 1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:43:25 compute-0 systemd[1]: libpod-conmon-1bd351a16aa8fd2d3ea10afe3c5e3b93f3dfc524a89d49207bfe40874190f64d.scope: Deactivated successfully.
Oct  3 10:43:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:43:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:43:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:43:25 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:43:25 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 13a9b5e8-0260-43f2-a9d3-d2397de298a0 does not exist
Oct  3 10:43:25 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4c535cd3-5672-4768-b91a-071641dad87e does not exist
Oct  3 10:43:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:43:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:43:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:27 compute-0 nova_compute[351685]: 2025-10-03 10:43:27.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:28 compute-0 nova_compute[351685]: 2025-10-03 10:43:28.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:29 compute-0 podman[157165]: time="2025-10-03T10:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:43:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:43:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9088 "" "Go-http-client/1.1"
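The two GET lines above show the libpod REST API being polled over the podman service socket. A minimal sketch, assuming the default root socket path /run/podman/podman.sock, that issues the same containers/json query using only the Python standard library:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for c in json.loads(resp.read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))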
Oct  3 10:43:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:31 compute-0 openstack_network_exporter[367524]: ERROR   10:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:43:31 compute-0 openstack_network_exporter[367524]: ERROR   10:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:43:31 compute-0 openstack_network_exporter[367524]: ERROR   10:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:43:31 compute-0 openstack_network_exporter[367524]: ERROR   10:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:43:31 compute-0 openstack_network_exporter[367524]: ERROR   10:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:43:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:32 compute-0 nova_compute[351685]: 2025-10-03 10:43:32.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:33 compute-0 nova_compute[351685]: 2025-10-03 10:43:33.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:33 compute-0 nova_compute[351685]: 2025-10-03 10:43:33.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:43:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:34 compute-0 podman[473907]: 2025-10-03 10:43:34.87238548 +0000 UTC m=+0.110572663 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:43:34 compute-0 podman[473906]: 2025-10-03 10:43:34.896072458 +0000 UTC m=+0.142182485 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Oct  3 10:43:34 compute-0 podman[473908]: 2025-10-03 10:43:34.937849346 +0000 UTC m=+0.183119416 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 10:43:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:37 compute-0 nova_compute[351685]: 2025-10-03 10:43:37.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:37 compute-0 podman[473966]: 2025-10-03 10:43:37.849843119 +0000 UTC m=+0.104634432 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:43:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:38 compute-0 nova_compute[351685]: 2025-10-03 10:43:38.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:40 compute-0 podman[473983]: 2025-10-03 10:43:40.866317666 +0000 UTC m=+0.108140185 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:43:40 compute-0 podman[473984]: 2025-10-03 10:43:40.871087079 +0000 UTC m=+0.115613085 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.893 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.894 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.894 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.900 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
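Every registration line above carries the same executor, the same empty per-cycle cache, and the same discovery cache ({'local_instances': [<NovaLikeServer: test_0>]}): libvirt discovery ran once, and each pollster reuses the result for the rest of the cycle. A sketch of that pattern; run_cycle, get_samples and discover_local_instances are illustrative names, not ceilometer's API:

from concurrent.futures import ThreadPoolExecutor

def run_cycle(pollsters, discover_local_instances):
    """One polling cycle: discover resources once, fan pollsters out to a pool."""
    discovery_cache = {}  # shared by every pollster registered this cycle

    def resources(method):
        # The first pollster needing a method pays for discovery; the rest
        # hit the cache, which is what the identical cache reprs above show.
        if method not in discovery_cache:
            discovery_cache[method] = discover_local_instances()
        return discovery_cache[method]

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(p.get_samples, resources("local_instances"))
                   for p in pollsters]
        return [f.result() for f in futures]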
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.901 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 podman[473985]: 2025-10-03 10:43:40.905712808 +0000 UTC m=+0.135724079 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
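The check/answer pair above is the manager deciding whether a coordination hashring should gate this pollster; with a group name of None it polls everything locally. A toy version of that decision under stated assumptions (the consistent-hash stand-in below is ours; ceilometer delegates this to its coordination layer):

import hashlib

def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def owns_resource(node_id, all_nodes, resource_id):
    """Successor-node rule on a hash ring: does this agent own the resource?"""
    owner = min(all_nodes, key=lambda n: (_h(n) - _h(resource_id)) % 2**128)
    return owner == node_id

def should_poll(group_name, node_id, all_nodes, resource_id):
    if group_name is None:
        return True  # matches the log: no coordination group, poll locally
    return owns_resource(node_id, all_nodes, resource_id)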
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.906 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:43:40.905973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.912 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.914 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.915 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.915 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.916 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:43:40.914934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:43:40.917011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.940 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.940 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.941 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
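The three capacity samples line up with the flavor in the discovery payload: disk=1 and ephemeral=1 are in GB, and both 1073741824-byte devices are exactly 1 GiB; the log does not identify the third, 485376-byte device. A quick check of the arithmetic:

GIB = 1024 ** 3
assert GIB == 1_073_741_824  # the two 1 GiB devices above

flavor = {"disk": 1, "ephemeral": 1}  # from the discovery payload
print([flavor["disk"] * GIB, flavor["ephemeral"] * GIB])
# [1073741824, 1073741824]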
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.941 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.941 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.941 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.942 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:43:40.941916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.985 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.985 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.985 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
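These latency volumes are cumulative counters rather than per-request figures; libvirt's block-stat timing counters are reported in nanoseconds, and the magnitudes here are consistent with that (an assumption, since the excerpt does not state units). Converting, and deriving a mean per-request latency with the read.requests counters polled just below:

NS_PER_S = 1_000_000_000

for ns in (1_351_272_306, 240_576_853, 113_683_071):  # from the log
    print(f"{ns} ns = {ns / NS_PER_S:.3f} s cumulative read time")

# Mean per-request latency for the first device, using its 840 read
# requests: 1_351_272_306 / 840 ≈ 1.61e6 ns, i.e. about 1.6 ms per read.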
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.989 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.990 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.990 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.990 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.990 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.991 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.991 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.991 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.991 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.992 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.992 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.992 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:43:40.985806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:43:40.987634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:43:40.988724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:43:40.990007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:43:40.991169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:40.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:43:40.992420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
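The burst of heartbeat updates above comes from worker 12 while worker 14 runs the pollsters, and the two interleave slightly out of order: the disk.device.read.latency heartbeat stamped 10:43:40.987 is written right after worker 14's .992 message. When correlating events, sort on the oslo.log timestamp inside the message rather than the syslog prefix. A small sketch, assuming the line layout shown here:

from datetime import datetime
import re

# The "2025-10-03 10:43:40.987" timestamp oslo.log puts inside each message.
_TS = re.compile(r"\]: (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) ")

def message_time(line):
    return datetime.strptime(_TS.search(line).group(1), "%Y-%m-%d %H:%M:%S.%f")

def in_emission_order(lines):
    oslo = [l for l in lines if _TS.search(l)]  # skips e.g. the podman lines
    return sorted(oslo, key=message_time)       # stable: ties keep journal order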
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
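A power.state volume of 1 is libvirt's VIR_DOMAIN_RUNNING, which agrees with 'OS-EXT-STS:vm_state': 'running' in the discovery payload. The standard libvirt domain-state values, for reference when reading these samples:

LIBVIRT_DOMAIN_STATE = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}
assert LIBVIRT_DOMAIN_STATE[1] == "running"  # the volume polled above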
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.026 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.030 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.030 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:43:41.026677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.031 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
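The *.delta meters are derived by differencing the cumulative interface counters between cycles (a volume of 0 just means no new traffic since the last poll), while the *.rate pollster above is skipped outright because discovery found no resources it had not already covered this cycle. A hedged sketch of the delta derivation; the cache layout is illustrative, not ceilometer's:

_previous = {}  # (instance_id, meter) -> last cumulative value

def delta_sample(instance_id, meter, cumulative):
    """Bytes since the previous poll, or None on the very first sample."""
    key = (instance_id, meter)
    last = _previous.get(key)
    _previous[key] = cumulative
    if last is None:
        return None
    return max(cumulative - last, 0)  # guard against counter resets

# Two cycles with an unchanged counter give 0, matching the
# network.incoming.bytes.delta volume above.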
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.031 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.032 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.032 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.032 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.032 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:43:41.028403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.033 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.033 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.034 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.034 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.034 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.034 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.035 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:43:41.030466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.035 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.036 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 66690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:43:41.031750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
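The cpu meter is cumulative guest CPU time in nanoseconds: 66690000000 ns is roughly 66.7 s consumed since the instance started. A utilization percentage is conventionally derived by differencing two polls against wall-clock time and vCPU count (vcpus=1 per the flavor; the polling interval is not shown in this excerpt, so it is a parameter below):

def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
    """Percent of available CPU used between two cumulative cpu samples."""
    used_s = (curr_ns - prev_ns) / 1e9
    return 100.0 * used_s / (interval_s * vcpus)

# Accruing 1e9 ns (1 s) of CPU over a 10 s interval on 1 vCPU:
# cpu_util_percent(p, p + 10**9, 10, 1) == 10.0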
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.037 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:43:41.032800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.038 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.039 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.040 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:43:41.034037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:43:41.035013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:43:41.036301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.043 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.043 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:43:41.037664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:43:41.039001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:43:41.040631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.045 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:43:41.042072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:43:41.043972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:43:41.045298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.046 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.047 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:43:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:43:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
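Each pollster above runs the same five-step cycle: discovery via local_instances, a coordination check against the (empty) hashrings, a heartbeat update, sample emission from _stats_to_sample, and a "Finished polling" marker. A minimal Python sketch of that control flow, using invented stand-in names rather than the actual ceilometer classes:

    # Sketch only: Pollster/AgentManager here are illustrative stand-ins,
    # not ceilometer's real classes.
    import datetime

    class Pollster:
        def __init__(self, name, get_samples):
            self.name = name
            self.get_samples = get_samples

    class AgentManager:
        def __init__(self):
            self.heartbeats = {}          # pollster name -> last update time

        def discover(self):
            # "Executing discovery process ... [local_instances]"
            return ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"]  # instance UUIDs

        def poll_one(self, pollster, hashrings=None):
            resources = self.discover()
            if not resources:
                # "Skip pollster ..., no new resources found this cycle"
                return []
            # "The pollster ... is not configured in a source for polling
            #  that requires coordination. The current hashrings are [None]."
            if hashrings is None:
                pass  # no coordination group: poll everything locally
            # "Pollster heartbeat update: <name>"
            self.heartbeats[pollster.name] = datetime.datetime.utcnow()
            samples = [pollster.get_samples(r) for r in resources]
            # "Finished polling pollster <name> in the context of pollsters"
            return samples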
Oct  3 10:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:43:41.641 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:43:41.642 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:43:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:43:41.642 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:43:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:42 compute-0 nova_compute[351685]: 2025-10-03 10:43:42.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:43 compute-0 nova_compute[351685]: 2025-10-03 10:43:43.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:43:46
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log', '.mgr']
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
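The balancer pass above runs in upmap mode with a 5% misplaced ceiling and a budget of 10 changes per plan; "prepared 0/10 changes" means no further PG remapping was needed. An illustrative outline of that gating logic (all names invented; the real logic lives in ceph-mgr's balancer module):

    # Sketch of the upmap pass logged above, under assumed semantics.
    MAX_MISPLACED = 0.05     # "max misplaced 0.050000"
    MAX_CHANGES = 10         # "prepared 0/10 changes"

    def do_upmap(pools, misplaced_ratio, propose_change):
        if misplaced_ratio >= MAX_MISPLACED:
            return []                      # too much data already moving
        changes = []
        for pool in pools:
            if len(changes) >= MAX_CHANGES:
                break
            change = propose_change(pool)  # a pg-upmap-items entry, or None
            if change:
                changes.append(change)
        return changes                     # empty == "prepared 0/10 changes"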
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:43:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
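Both rbd_support handlers reload per-pool schedules for the vms, volumes, backups and images pools. Assuming a recent Ceph CLI, the same schedule data can be inspected from Python via subprocess (a sketch; exact flags may vary by release):

    import json
    import subprocess

    # Pool names taken from the load_schedules lines above.
    for pool in ("vms", "volumes", "backups", "images"):
        out = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--format", "json"],
            capture_output=True, text=True)
        if out.returncode == 0 and out.stdout.strip():
            print(pool, json.loads(out.stdout))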
Oct  3 10:43:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:47 compute-0 nova_compute[351685]: 2025-10-03 10:43:47.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:48 compute-0 nova_compute[351685]: 2025-10-03 10:43:48.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:48 compute-0 podman[474044]: 2025-10-03 10:43:48.847672126 +0000 UTC m=+0.106563364 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Oct  3 10:43:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:50 compute-0 podman[474065]: 2025-10-03 10:43:50.874640299 +0000 UTC m=+0.126227495 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, config_id=edpm, vendor=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler)
Oct  3 10:43:50 compute-0 podman[474064]: 2025-10-03 10:43:50.87719305 +0000 UTC m=+0.125254583 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:43:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:52 compute-0 nova_compute[351685]: 2025-10-03 10:43:52.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:53 compute-0 nova_compute[351685]: 2025-10-03 10:43:53.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:43:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2460096998' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:43:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:43:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2460096998' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:43:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:43:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
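Each pg_autoscaler line computes pg target = usage_ratio * bias * K, then quantizes to a power of two subject to per-pool minimums. The numbers above imply K = 300 for every pool, consistent with mon_target_pg_per_osd=100 across 3 OSDs, though that decomposition is inferred from the arithmetic rather than stated in the log. A quick check against the logged values:

    # Values copied verbatim from the pg_autoscaler lines above:
    # pool -> (usage_ratio, bias, logged pg target)
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "vms":                (0.000551649390343166,  1.0, 0.1654948171029498),
        "images":             (0.00025334537995702286, 1.0, 0.07600361398710685),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
    }
    K = 300  # inferred factor; plausibly 100 target PGs/OSD * 3 OSDs
    for name, (usage, bias, target) in pools.items():
        assert abs(usage * bias * K - target) < 1e-12, name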
Oct  3 10:43:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:43:57 compute-0 nova_compute[351685]: 2025-10-03 10:43:57.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:43:58 compute-0 nova_compute[351685]: 2025-10-03 10:43:58.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:43:59 compute-0 podman[157165]: time="2025-10-03T10:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:43:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:43:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9083 "" "Go-http-client/1.1"
Oct  3 10:44:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:01 compute-0 openstack_network_exporter[367524]: ERROR   10:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:44:01 compute-0 openstack_network_exporter[367524]: ERROR   10:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:44:01 compute-0 openstack_network_exporter[367524]: ERROR   10:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:44:01 compute-0 openstack_network_exporter[367524]: ERROR   10:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:44:01 compute-0 openstack_network_exporter[367524]: ERROR   10:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
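The exporter errors above mean no control sockets were found for ovs-vswitchd or ovn-northd; the ovn-northd failures are expected on a compute node, which runs ovn-controller rather than ovn-northd. ovs-appctl resolves its targets through sockets such as /var/run/openvswitch/ovs-vswitchd.<pid>.ctl, so a quick existence check (conventional default paths, not taken from this log):

    import glob

    # Empty results here reproduce the "no control socket files found" errors.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")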
Oct  3 10:44:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:02 compute-0 nova_compute[351685]: 2025-10-03 10:44:02.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:03 compute-0 nova_compute[351685]: 2025-10-03 10:44:03.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:05 compute-0 podman[474107]: 2025-10-03 10:44:05.869140799 +0000 UTC m=+0.115318714 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41)
Oct  3 10:44:05 compute-0 podman[474108]: 2025-10-03 10:44:05.904379468 +0000 UTC m=+0.144450808 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:44:05 compute-0 podman[474109]: 2025-10-03 10:44:05.939047578 +0000 UTC m=+0.185739340 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:44:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:07 compute-0 nova_compute[351685]: 2025-10-03 10:44:07.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:07 compute-0 nova_compute[351685]: 2025-10-03 10:44:07.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:44:07 compute-0 nova_compute[351685]: 2025-10-03 10:44:07.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:44:07 compute-0 nova_compute[351685]: 2025-10-03 10:44:07.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:08 compute-0 nova_compute[351685]: 2025-10-03 10:44:08.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:08 compute-0 nova_compute[351685]: 2025-10-03 10:44:08.604 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:44:08 compute-0 nova_compute[351685]: 2025-10-03 10:44:08.605 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:44:08 compute-0 nova_compute[351685]: 2025-10-03 10:44:08.605 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:44:08 compute-0 nova_compute[351685]: 2025-10-03 10:44:08.606 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:44:08 compute-0 podman[474169]: 2025-10-03 10:44:08.866757124 +0000 UTC m=+0.120595404 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 10:44:10 compute-0 nova_compute[351685]: 2025-10-03 10:44:10.112 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:44:10 compute-0 nova_compute[351685]: 2025-10-03 10:44:10.133 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:44:10 compute-0 nova_compute[351685]: 2025-10-03 10:44:10.134 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
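The heal task ends by writing the network_info payload logged at 10:44:10.112 back into the instance's info cache. Pulling the fixed and floating addresses out of that structure (fields copied from the log line; only the keys actually used are shown):

    # Trimmed copy of the network_info payload from the log above.
    network_info = [{
        "address": "fa:16:3e:a9:40:5c",
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.158",
                     "floating_ips": [{"address": "192.168.122.250"}]}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], fips)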
Oct  3 10:44:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:10 compute-0 nova_compute[351685]: 2025-10-03 10:44:10.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:10 compute-0 nova_compute[351685]: 2025-10-03 10:44:10.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
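Both "Running periodic task" lines come from oslo.service's periodic task machinery: run_periodic_tasks dispatches every method decorated with @periodic_task.periodic_task, and _reclaim_queued_deletes returns immediately here because reclaim_instance_interval is unset. A pared-down sketch of the pattern (requires oslo.service; the manager class is illustrative, not nova's ComputeManager):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # refresh one instance's network info cache per run

        @periodic_task.periodic_task(spacing=300)
        def _reclaim_queued_deletes(self, context):
            reclaim_instance_interval = 0  # stands in for CONF.reclaim_instance_interval
            if reclaim_instance_interval <= 0:
                return  # "CONF.reclaim_instance_interval <= 0, skipping..."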
Oct  3 10:44:11 compute-0 podman[474187]: 2025-10-03 10:44:11.809007105 +0000 UTC m=+0.076316095 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:44:11 compute-0 podman[474189]: 2025-10-03 10:44:11.829937314 +0000 UTC m=+0.086025975 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:44:11 compute-0 podman[474188]: 2025-10-03 10:44:11.840140132 +0000 UTC m=+0.105417117 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:44:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:12 compute-0 nova_compute[351685]: 2025-10-03 10:44:12.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:13 compute-0 nova_compute[351685]: 2025-10-03 10:44:13.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:14 compute-0 nova_compute[351685]: 2025-10-03 10:44:14.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:15 compute-0 nova_compute[351685]: 2025-10-03 10:44:15.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:44:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:44:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:16 compute-0 nova_compute[351685]: 2025-10-03 10:44:16.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:16 compute-0 nova_compute[351685]: 2025-10-03 10:44:16.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:44:16 compute-0 nova_compute[351685]: 2025-10-03 10:44:16.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:44:16 compute-0 nova_compute[351685]: 2025-10-03 10:44:16.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:44:16 compute-0 nova_compute[351685]: 2025-10-03 10:44:16.764 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:44:16 compute-0 nova_compute[351685]: 2025-10-03 10:44:16.765 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:44:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:44:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272929219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.259 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.339 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.340 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.340 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:44:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.866 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.867 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3843MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.867 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.868 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:44:17 compute-0 nova_compute[351685]: 2025-10-03 10:44:17.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.122 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.123 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.125 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.158 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:44:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:44:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/662789981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.677 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.693 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.722 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.726 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:44:18 compute-0 nova_compute[351685]: 2025-10-03 10:44:18.726 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:44:19 compute-0 podman[474294]: 2025-10-03 10:44:19.883395738 +0000 UTC m=+0.143432206 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:44:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:20 compute-0 nova_compute[351685]: 2025-10-03 10:44:20.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:20 compute-0 nova_compute[351685]: 2025-10-03 10:44:20.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:20 compute-0 nova_compute[351685]: 2025-10-03 10:44:20.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:20 compute-0 nova_compute[351685]: 2025-10-03 10:44:20.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:44:21 compute-0 podman[474313]: 2025-10-03 10:44:21.862566309 +0000 UTC m=+0.111926677 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:44:21 compute-0 podman[474314]: 2025-10-03 10:44:21.877995582 +0000 UTC m=+0.121799842 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, distribution-scope=public, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, version=9.4, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, maintainer=Red Hat, Inc.)
Oct  3 10:44:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:22 compute-0 nova_compute[351685]: 2025-10-03 10:44:22.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:23 compute-0 nova_compute[351685]: 2025-10-03 10:44:23.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:44:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:44:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:44:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:44:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:44:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:44:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b9785e60-1ff6-49a6-b9d4-3ba8c99c2b5e does not exist
Oct  3 10:44:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 56b232c9-b1f1-4003-9827-2b015e631a83 does not exist
Oct  3 10:44:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a5782a24-6693-4da8-94f0-a9c3ad6521b5 does not exist
Oct  3 10:44:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:44:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:44:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:44:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:44:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:44:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:44:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:27 compute-0 podman[474624]: 2025-10-03 10:44:27.617059324 +0000 UTC m=+0.100049165 container create 7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 10:44:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:44:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:44:27 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:44:27 compute-0 podman[474624]: 2025-10-03 10:44:27.564701497 +0000 UTC m=+0.047691388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:44:27 compute-0 systemd[1]: Started libpod-conmon-7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd.scope.
Oct  3 10:44:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:44:27 compute-0 podman[474624]: 2025-10-03 10:44:27.754758305 +0000 UTC m=+0.237748126 container init 7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:44:27 compute-0 podman[474624]: 2025-10-03 10:44:27.774404944 +0000 UTC m=+0.257394785 container start 7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:44:27 compute-0 podman[474624]: 2025-10-03 10:44:27.782195824 +0000 UTC m=+0.265185695 container attach 7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:44:27 compute-0 clever_bell[474639]: 167 167
Oct  3 10:44:27 compute-0 systemd[1]: libpod-7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd.scope: Deactivated successfully.
Oct  3 10:44:27 compute-0 conmon[474639]: conmon 7124641df3f9bfc57347 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd.scope/container/memory.events
Oct  3 10:44:27 compute-0 podman[474644]: 2025-10-03 10:44:27.885051799 +0000 UTC m=+0.056197711 container died 7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 10:44:27 compute-0 nova_compute[351685]: 2025-10-03 10:44:27.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-641fbecafab247a7dd888b331e8d4277335833c82b9123c02d91fe2b8f455f36-merged.mount: Deactivated successfully.
Oct  3 10:44:27 compute-0 podman[474644]: 2025-10-03 10:44:27.946157236 +0000 UTC m=+0.117303108 container remove 7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:44:27 compute-0 systemd[1]: libpod-conmon-7124641df3f9bfc573474defc7eaf6a0211b69f9114642745fc1b7429c2a6efd.scope: Deactivated successfully.
Oct  3 10:44:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:28 compute-0 podman[474666]: 2025-10-03 10:44:28.232068224 +0000 UTC m=+0.068221807 container create 61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ride, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:44:28 compute-0 systemd[1]: Started libpod-conmon-61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc.scope.
Oct  3 10:44:28 compute-0 podman[474666]: 2025-10-03 10:44:28.204920934 +0000 UTC m=+0.041074557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:44:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aba666f282fd4aac284aee57a44934f4765fbe414f67ca63458ed9685af616b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aba666f282fd4aac284aee57a44934f4765fbe414f67ca63458ed9685af616b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:28 compute-0 nova_compute[351685]: 2025-10-03 10:44:28.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aba666f282fd4aac284aee57a44934f4765fbe414f67ca63458ed9685af616b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aba666f282fd4aac284aee57a44934f4765fbe414f67ca63458ed9685af616b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aba666f282fd4aac284aee57a44934f4765fbe414f67ca63458ed9685af616b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:28 compute-0 podman[474666]: 2025-10-03 10:44:28.348697739 +0000 UTC m=+0.184851332 container init 61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ride, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 10:44:28 compute-0 podman[474666]: 2025-10-03 10:44:28.365479367 +0000 UTC m=+0.201632920 container start 61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 10:44:28 compute-0 podman[474666]: 2025-10-03 10:44:28.37151944 +0000 UTC m=+0.207672993 container attach 61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:44:29 compute-0 silly_ride[474681]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:44:29 compute-0 silly_ride[474681]: --> relative data size: 1.0
Oct  3 10:44:29 compute-0 silly_ride[474681]: --> All data devices are unavailable
Oct  3 10:44:29 compute-0 systemd[1]: libpod-61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc.scope: Deactivated successfully.
Oct  3 10:44:29 compute-0 systemd[1]: libpod-61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc.scope: Consumed 1.207s CPU time.
Oct  3 10:44:29 compute-0 podman[474710]: 2025-10-03 10:44:29.719354492 +0000 UTC m=+0.053722651 container died 61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ride, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:44:29 compute-0 podman[157165]: time="2025-10-03T10:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:44:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aba666f282fd4aac284aee57a44934f4765fbe414f67ca63458ed9685af616b-merged.mount: Deactivated successfully.
Oct  3 10:44:29 compute-0 podman[474710]: 2025-10-03 10:44:29.834542091 +0000 UTC m=+0.168910240 container remove 61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 10:44:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47960 "" "Go-http-client/1.1"
Oct  3 10:44:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9080 "" "Go-http-client/1.1"
Oct  3 10:44:29 compute-0 systemd[1]: libpod-conmon-61a46b6e0adf3ab688d566b22775c307776535f43e5491b3b753ed3d17d20dbc.scope: Deactivated successfully.
Oct  3 10:44:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:31 compute-0 podman[474862]: 2025-10-03 10:44:31.034847198 +0000 UTC m=+0.095056565 container create 5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilbur, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:44:31 compute-0 podman[474862]: 2025-10-03 10:44:30.995219339 +0000 UTC m=+0.055428796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:44:31 compute-0 systemd[1]: Started libpod-conmon-5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253.scope.
Oct  3 10:44:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:44:31 compute-0 podman[474862]: 2025-10-03 10:44:31.187928621 +0000 UTC m=+0.248138068 container init 5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 10:44:31 compute-0 podman[474862]: 2025-10-03 10:44:31.204997628 +0000 UTC m=+0.265207015 container start 5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilbur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:44:31 compute-0 podman[474862]: 2025-10-03 10:44:31.211876418 +0000 UTC m=+0.272085795 container attach 5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilbur, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:44:31 compute-0 inspiring_wilbur[474878]: 167 167
Oct  3 10:44:31 compute-0 podman[474862]: 2025-10-03 10:44:31.216009421 +0000 UTC m=+0.276218818 container died 5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilbur, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:44:31 compute-0 systemd[1]: libpod-5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253.scope: Deactivated successfully.
Oct  3 10:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a47f8934fa847a76a1e361bea9df9fe77847a41459ed3a98c628080f25f4486-merged.mount: Deactivated successfully.
Oct  3 10:44:31 compute-0 podman[474862]: 2025-10-03 10:44:31.279320158 +0000 UTC m=+0.339529515 container remove 5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wilbur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:44:31 compute-0 systemd[1]: libpod-conmon-5632517dad9c4cd9c6dff2a1cd25849d62aa9a67c880ff9511a8acc47b6c5253.scope: Deactivated successfully.
Oct  3 10:44:31 compute-0 openstack_network_exporter[367524]: ERROR   10:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:44:31 compute-0 openstack_network_exporter[367524]: ERROR   10:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:44:31 compute-0 openstack_network_exporter[367524]: ERROR   10:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:44:31 compute-0 openstack_network_exporter[367524]: ERROR   10:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:44:31 compute-0 openstack_network_exporter[367524]: ERROR   10:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:44:31 compute-0 podman[474902]: 2025-10-03 10:44:31.544772531 +0000 UTC m=+0.073088002 container create 2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:44:31 compute-0 systemd[1]: Started libpod-conmon-2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186.scope.
Oct  3 10:44:31 compute-0 podman[474902]: 2025-10-03 10:44:31.521527017 +0000 UTC m=+0.049842498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:44:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4923925fa91314166ab1210b9497bde44cdeaf248937e4c68502ae61c38e9c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4923925fa91314166ab1210b9497bde44cdeaf248937e4c68502ae61c38e9c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4923925fa91314166ab1210b9497bde44cdeaf248937e4c68502ae61c38e9c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4923925fa91314166ab1210b9497bde44cdeaf248937e4c68502ae61c38e9c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:31 compute-0 podman[474902]: 2025-10-03 10:44:31.717047279 +0000 UTC m=+0.245362800 container init 2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:44:31 compute-0 podman[474902]: 2025-10-03 10:44:31.737904217 +0000 UTC m=+0.266219698 container start 2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:44:31 compute-0 podman[474902]: 2025-10-03 10:44:31.743158716 +0000 UTC m=+0.271474237 container attach 2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 10:44:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]: {
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:    "0": [
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:        {
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "devices": [
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "/dev/loop3"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            ],
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_name": "ceph_lv0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_size": "21470642176",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "name": "ceph_lv0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "tags": {
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cluster_name": "ceph",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.crush_device_class": "",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.encrypted": "0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osd_id": "0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.type": "block",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.vdo": "0"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            },
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "type": "block",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "vg_name": "ceph_vg0"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:        }
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:    ],
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:    "1": [
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:        {
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "devices": [
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "/dev/loop4"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            ],
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_name": "ceph_lv1",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_size": "21470642176",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "name": "ceph_lv1",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "tags": {
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cluster_name": "ceph",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.crush_device_class": "",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.encrypted": "0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osd_id": "1",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.type": "block",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.vdo": "0"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            },
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "type": "block",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "vg_name": "ceph_vg1"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:        }
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:    ],
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:    "2": [
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:        {
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "devices": [
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "/dev/loop5"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            ],
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_name": "ceph_lv2",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_size": "21470642176",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "name": "ceph_lv2",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "tags": {
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.cluster_name": "ceph",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.crush_device_class": "",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.encrypted": "0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osd_id": "2",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.type": "block",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:                "ceph.vdo": "0"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            },
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "type": "block",
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:            "vg_name": "ceph_vg2"
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:        }
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]:    ]
Oct  3 10:44:32 compute-0 inspiring_montalcini[474918]: }
Oct  3 10:44:32 compute-0 systemd[1]: libpod-2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186.scope: Deactivated successfully.
Oct  3 10:44:32 compute-0 podman[474902]: 2025-10-03 10:44:32.597562752 +0000 UTC m=+1.125878293 container died 2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4923925fa91314166ab1210b9497bde44cdeaf248937e4c68502ae61c38e9c6-merged.mount: Deactivated successfully.
Oct  3 10:44:32 compute-0 podman[474902]: 2025-10-03 10:44:32.685051445 +0000 UTC m=+1.213366946 container remove 2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:44:32 compute-0 systemd[1]: libpod-conmon-2a6e75673a00ecc177627e7355e6e76d8c55c68c56adc61b2bac82ff4b9c3186.scope: Deactivated successfully.
Oct  3 10:44:32 compute-0 nova_compute[351685]: 2025-10-03 10:44:32.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:33 compute-0 nova_compute[351685]: 2025-10-03 10:44:33.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:33 compute-0 podman[475077]: 2025-10-03 10:44:33.813114677 +0000 UTC m=+0.068585297 container create 14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:44:33 compute-0 podman[475077]: 2025-10-03 10:44:33.782004921 +0000 UTC m=+0.037475621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:44:33 compute-0 systemd[1]: Started libpod-conmon-14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d.scope.
Oct  3 10:44:33 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:44:33 compute-0 podman[475077]: 2025-10-03 10:44:33.965645143 +0000 UTC m=+0.221115803 container init 14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:44:33 compute-0 podman[475077]: 2025-10-03 10:44:33.983635819 +0000 UTC m=+0.239106459 container start 14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:44:33 compute-0 podman[475077]: 2025-10-03 10:44:33.989359752 +0000 UTC m=+0.244830362 container attach 14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:44:33 compute-0 busy_merkle[475093]: 167 167
Oct  3 10:44:33 compute-0 systemd[1]: libpod-14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d.scope: Deactivated successfully.
Oct  3 10:44:33 compute-0 podman[475077]: 2025-10-03 10:44:33.994191537 +0000 UTC m=+0.249662177 container died 14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f2f4191e765ce6c9afbdd90938b51402b33f59acdf4adc0a2d8bccb0b45e4b8-merged.mount: Deactivated successfully.
Oct  3 10:44:34 compute-0 podman[475077]: 2025-10-03 10:44:34.084574272 +0000 UTC m=+0.340044892 container remove 14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:44:34 compute-0 systemd[1]: libpod-conmon-14a4e795b7c0a6b87d63bfbf454e84609a6dd3b7f43fb57bbdcba48ded52d75d.scope: Deactivated successfully.
Oct  3 10:44:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 78 MiB data, 260 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:34 compute-0 podman[475116]: 2025-10-03 10:44:34.389996265 +0000 UTC m=+0.089134616 container create a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cori, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:44:34 compute-0 podman[475116]: 2025-10-03 10:44:34.356550444 +0000 UTC m=+0.055688805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:44:34 compute-0 systemd[1]: Started libpod-conmon-a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc.scope.
Oct  3 10:44:34 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b904c09808e61f2c90ae2c8dc06940a6dda8ffb52a943eef85172d143bf23af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b904c09808e61f2c90ae2c8dc06940a6dda8ffb52a943eef85172d143bf23af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b904c09808e61f2c90ae2c8dc06940a6dda8ffb52a943eef85172d143bf23af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b904c09808e61f2c90ae2c8dc06940a6dda8ffb52a943eef85172d143bf23af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:44:34 compute-0 podman[475116]: 2025-10-03 10:44:34.567102688 +0000 UTC m=+0.266241069 container init a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cori, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:44:34 compute-0 podman[475116]: 2025-10-03 10:44:34.58591778 +0000 UTC m=+0.285056091 container start a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:44:34 compute-0 podman[475116]: 2025-10-03 10:44:34.59214157 +0000 UTC m=+0.291279901 container attach a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cori, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]: {
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "osd_id": 1,
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "type": "bluestore"
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:    },
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "osd_id": 2,
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "type": "bluestore"
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:    },
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "osd_id": 0,
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:        "type": "bluestore"
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]:    }
Oct  3 10:44:35 compute-0 nostalgic_cori[475132]: }
Oct  3 10:44:35 compute-0 systemd[1]: libpod-a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc.scope: Deactivated successfully.
Oct  3 10:44:35 compute-0 systemd[1]: libpod-a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc.scope: Consumed 1.263s CPU time.
Oct  3 10:44:35 compute-0 podman[475165]: 2025-10-03 10:44:35.930390135 +0000 UTC m=+0.044288100 container died a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cori, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 10:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b904c09808e61f2c90ae2c8dc06940a6dda8ffb52a943eef85172d143bf23af-merged.mount: Deactivated successfully.
Oct  3 10:44:36 compute-0 podman[475165]: 2025-10-03 10:44:36.0114192 +0000 UTC m=+0.125317115 container remove a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:44:36 compute-0 systemd[1]: libpod-conmon-a8a96e814f78b274f602d39eecfe49d871ad5df0e8c07df52b7c3337bce810dc.scope: Deactivated successfully.
Oct  3 10:44:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:44:36 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:44:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:44:36 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:44:36 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a6d34921-744e-4e33-a70b-9d13a4fe2906 does not exist
Oct  3 10:44:36 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0a3690f3-3935-4f1f-a1bb-49b424d57314 does not exist
Oct  3 10:44:36 compute-0 podman[475179]: 2025-10-03 10:44:36.119636956 +0000 UTC m=+0.118588509 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, distribution-scope=public)
Oct  3 10:44:36 compute-0 podman[475180]: 2025-10-03 10:44:36.123428718 +0000 UTC m=+0.124633864 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct  3 10:44:36 compute-0 podman[475181]: 2025-10-03 10:44:36.163573754 +0000 UTC m=+0.164672907 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct  3 10:44:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 0 B/s wr, 9 op/s
Oct  3 10:44:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:44:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:44:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:37 compute-0 nova_compute[351685]: 2025-10-03 10:44:37.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 32 op/s
Oct  3 10:44:38 compute-0 nova_compute[351685]: 2025-10-03 10:44:38.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:39 compute-0 podman[475293]: 2025-10-03 10:44:39.886672526 +0000 UTC m=+0.134447258 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 10:44:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct  3 10:44:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:44:41.643 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:44:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:44:41.643 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:44:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:44:41.644 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:44:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 10:44:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:42 compute-0 podman[475313]: 2025-10-03 10:44:42.843317419 +0000 UTC m=+0.085420588 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  3 10:44:42 compute-0 podman[475314]: 2025-10-03 10:44:42.864194107 +0000 UTC m=+0.093408312 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  3 10:44:42 compute-0 podman[475312]: 2025-10-03 10:44:42.88770999 +0000 UTC m=+0.130916414 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:44:42 compute-0 nova_compute[351685]: 2025-10-03 10:44:42.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:43 compute-0 nova_compute[351685]: 2025-10-03 10:44:43.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:44:46
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:44:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:44:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:47 compute-0 nova_compute[351685]: 2025-10-03 10:44:47.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Oct  3 10:44:48 compute-0 nova_compute[351685]: 2025-10-03 10:44:48.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Oct  3 10:44:50 compute-0 podman[475375]: 2025-10-03 10:44:50.87773807 +0000 UTC m=+0.126320207 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  3 10:44:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct  3 10:44:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:52 compute-0 podman[475394]: 2025-10-03 10:44:52.521546671 +0000 UTC m=+0.110851742 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:44:52 compute-0 podman[475395]: 2025-10-03 10:44:52.550596001 +0000 UTC m=+0.130266943 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, container_name=kepler, release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Oct  3 10:44:52 compute-0 nova_compute[351685]: 2025-10-03 10:44:52.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:53 compute-0 nova_compute[351685]: 2025-10-03 10:44:53.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:44:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2618675436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:44:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:44:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2618675436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:44:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:44:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:44:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:44:57 compute-0 nova_compute[351685]: 2025-10-03 10:44:57.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:44:58 compute-0 nova_compute[351685]: 2025-10-03 10:44:58.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:44:59 compute-0 podman[157165]: time="2025-10-03T10:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:44:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:44:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9081 "" "Go-http-client/1.1"
Oct  3 10:45:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:01 compute-0 openstack_network_exporter[367524]: ERROR   10:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:45:01 compute-0 openstack_network_exporter[367524]: ERROR   10:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:45:01 compute-0 openstack_network_exporter[367524]: ERROR   10:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:45:01 compute-0 openstack_network_exporter[367524]: ERROR   10:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:45:01 compute-0 openstack_network_exporter[367524]: ERROR   10:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:45:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:02 compute-0 nova_compute[351685]: 2025-10-03 10:45:02.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:03 compute-0 nova_compute[351685]: 2025-10-03 10:45:03.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:06 compute-0 podman[475437]: 2025-10-03 10:45:06.873795583 +0000 UTC m=+0.112203415 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true)
Oct  3 10:45:06 compute-0 podman[475436]: 2025-10-03 10:45:06.915804739 +0000 UTC m=+0.153787777 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, release=1755695350, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Oct  3 10:45:06 compute-0 podman[475438]: 2025-10-03 10:45:06.939504068 +0000 UTC m=+0.161347789 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct  3 10:45:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:07 compute-0 nova_compute[351685]: 2025-10-03 10:45:07.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:07 compute-0 nova_compute[351685]: 2025-10-03 10:45:07.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:45:07 compute-0 nova_compute[351685]: 2025-10-03 10:45:07.733 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:45:07 compute-0 nova_compute[351685]: 2025-10-03 10:45:07.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:08 compute-0 nova_compute[351685]: 2025-10-03 10:45:08.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:08 compute-0 nova_compute[351685]: 2025-10-03 10:45:08.605 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:45:08 compute-0 nova_compute[351685]: 2025-10-03 10:45:08.606 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:45:08 compute-0 nova_compute[351685]: 2025-10-03 10:45:08.607 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:45:08 compute-0 nova_compute[351685]: 2025-10-03 10:45:08.608 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:45:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:10 compute-0 nova_compute[351685]: 2025-10-03 10:45:10.660 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:45:10 compute-0 nova_compute[351685]: 2025-10-03 10:45:10.676 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:45:10 compute-0 nova_compute[351685]: 2025-10-03 10:45:10.677 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
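The heal cycle above ends with nova writing the network_info blob back into instance_info_cache; its shape is visible in the log entry itself. A small sketch of walking that structure (data abridged from the entry above):

    # Walk the network_info structure nova logged above and print each
    # fixed IP with its floating IPs (blob abridged from the log entry).
    network_info = [{
        "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.158",
            "floating_ips": [{"address": "192.168.122.250"}],
        }]}]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # a8897fbc-... 192.168.0.158 -> ['192.168.122.250']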
Oct  3 10:45:10 compute-0 podman[475502]: 2025-10-03 10:45:10.823487422 +0000 UTC m=+0.082085211 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:45:11 compute-0 nova_compute[351685]: 2025-10-03 10:45:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:11 compute-0 nova_compute[351685]: 2025-10-03 10:45:11.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:45:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:12 compute-0 nova_compute[351685]: 2025-10-03 10:45:12.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:13 compute-0 nova_compute[351685]: 2025-10-03 10:45:13.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:13 compute-0 podman[475519]: 2025-10-03 10:45:13.806734695 +0000 UTC m=+0.063982151 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:45:13 compute-0 podman[475520]: 2025-10-03 10:45:13.837396417 +0000 UTC m=+0.085627654 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 10:45:13 compute-0 podman[475521]: 2025-10-03 10:45:13.838304095 +0000 UTC m=+0.085289122 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:45:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:15 compute-0 nova_compute[351685]: 2025-10-03 10:45:15.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:15 compute-0 nova_compute[351685]: 2025-10-03 10:45:15.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:45:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:45:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:16 compute-0 nova_compute[351685]: 2025-10-03 10:45:16.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:16 compute-0 nova_compute[351685]: 2025-10-03 10:45:16.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:45:16 compute-0 nova_compute[351685]: 2025-10-03 10:45:16.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:45:16 compute-0 nova_compute[351685]: 2025-10-03 10:45:16.765 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:45:16 compute-0 nova_compute[351685]: 2025-10-03 10:45:16.765 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:45:16 compute-0 nova_compute[351685]: 2025-10-03 10:45:16.765 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:45:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:45:17 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3662182047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.310 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
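For the resource audit, nova shells out to `ceph df` with the client.openstack keyring rather than talking to the cluster directly, which is why each audit produces a matching mon audit entry above. Standalone, the same probe looks roughly like this (command, client id, and conf path copied from the log):

    import json
    import subprocess

    # The same capacity probe nova runs during update_available_resource.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])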
Oct  3 10:45:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.424 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.425 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.425 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.927 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.929 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3846MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.929 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:45:17 compute-0 nova_compute[351685]: 2025-10-03 10:45:17.930 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.037 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.038 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.039 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.092 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:45:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:45:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583522468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.586 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.596 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.617 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.619 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:45:18 compute-0 nova_compute[351685]: 2025-10-03 10:45:18.619 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
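The inventory dict logged above is what placement schedules against: for each resource class the usable capacity is (total - reserved) × allocation_ratio. Plugging in the logged values:

    # Capacity placement will schedule against, from the inventory above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2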
Oct  3 10:45:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:21 compute-0 nova_compute[351685]: 2025-10-03 10:45:21.619 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:21 compute-0 nova_compute[351685]: 2025-10-03 10:45:21.620 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:21 compute-0 nova_compute[351685]: 2025-10-03 10:45:21.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:21 compute-0 podman[475622]: 2025-10-03 10:45:21.840075862 +0000 UTC m=+0.094666363 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 10:45:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:22 compute-0 nova_compute[351685]: 2025-10-03 10:45:22.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:22 compute-0 podman[475641]: 2025-10-03 10:45:22.85685109 +0000 UTC m=+0.104443677 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, version=9.4, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64)
Oct  3 10:45:22 compute-0 podman[475640]: 2025-10-03 10:45:22.900739285 +0000 UTC m=+0.146281336 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:45:22 compute-0 nova_compute[351685]: 2025-10-03 10:45:22.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:23 compute-0 nova_compute[351685]: 2025-10-03 10:45:23.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:27 compute-0 nova_compute[351685]: 2025-10-03 10:45:27.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:28 compute-0 nova_compute[351685]: 2025-10-03 10:45:28.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:29 compute-0 podman[157165]: time="2025-10-03T10:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:45:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:45:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9072 "" "Go-http-client/1.1"
Oct  3 10:45:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:31 compute-0 openstack_network_exporter[367524]: ERROR   10:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:45:31 compute-0 openstack_network_exporter[367524]: ERROR   10:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:45:31 compute-0 openstack_network_exporter[367524]: ERROR   10:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:45:31 compute-0 openstack_network_exporter[367524]: ERROR   10:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:45:31 compute-0 openstack_network_exporter[367524]: ERROR   10:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:45:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:32 compute-0 nova_compute[351685]: 2025-10-03 10:45:32.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:33 compute-0 nova_compute[351685]: 2025-10-03 10:45:33.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:34 compute-0 nova_compute[351685]: 2025-10-03 10:45:34.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:45:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:45:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:45:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:45:37 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:45:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:45:37 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:45:37 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d76d7dfe-772c-4bd2-a99b-c799ada28046 does not exist
Oct  3 10:45:37 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ecc8cc09-3ee1-4a7d-ba70-c362434a5980 does not exist
Oct  3 10:45:37 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev de00ff9d-6f76-4904-9009-66296333e95d does not exist
Oct  3 10:45:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:45:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:45:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:45:37 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:45:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:45:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:45:37 compute-0 podman[475839]: 2025-10-03 10:45:37.736000086 +0000 UTC m=+0.116912836 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:45:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:45:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:45:37 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:45:37 compute-0 podman[475838]: 2025-10-03 10:45:37.745229602 +0000 UTC m=+0.125429679 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, vcs-type=git, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Oct  3 10:45:37 compute-0 podman[475840]: 2025-10-03 10:45:37.777395691 +0000 UTC m=+0.145175671 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:45:37 compute-0 nova_compute[351685]: 2025-10-03 10:45:37.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:38 compute-0 nova_compute[351685]: 2025-10-03 10:45:38.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:38 compute-0 podman[476016]: 2025-10-03 10:45:38.585950069 +0000 UTC m=+0.095382315 container create bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:45:38 compute-0 podman[476016]: 2025-10-03 10:45:38.547504278 +0000 UTC m=+0.056936584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:45:38 compute-0 systemd[1]: Started libpod-conmon-bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6.scope.
Oct  3 10:45:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:45:38 compute-0 podman[476016]: 2025-10-03 10:45:38.731026287 +0000 UTC m=+0.240458583 container init bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackwell, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:45:38 compute-0 podman[476016]: 2025-10-03 10:45:38.743736364 +0000 UTC m=+0.253168620 container start bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackwell, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:45:38 compute-0 thirsty_blackwell[476031]: 167 167
Oct  3 10:45:38 compute-0 podman[476016]: 2025-10-03 10:45:38.752321969 +0000 UTC m=+0.261754215 container attach bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:45:38 compute-0 systemd[1]: libpod-bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6.scope: Deactivated successfully.
Oct  3 10:45:38 compute-0 podman[476016]: 2025-10-03 10:45:38.756136781 +0000 UTC m=+0.265569047 container died bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:45:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-594e8df57a6d1a1f30a9367be0193b69c3b53a750cf87ece0ce63eda7c1daee6-merged.mount: Deactivated successfully.
Oct  3 10:45:38 compute-0 podman[476016]: 2025-10-03 10:45:38.820835694 +0000 UTC m=+0.330267900 container remove bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct  3 10:45:38 compute-0 systemd[1]: libpod-conmon-bb4d180d8242253711bfe3e349df5ec54ecf29a144d3a3dd4c8843ea8ab8d0c6.scope: Deactivated successfully.
Oct  3 10:45:39 compute-0 podman[476054]: 2025-10-03 10:45:39.064523959 +0000 UTC m=+0.089112726 container create f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:45:39 compute-0 podman[476054]: 2025-10-03 10:45:39.031614345 +0000 UTC m=+0.056203192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:45:39 compute-0 systemd[1]: Started libpod-conmon-f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a.scope.
Oct  3 10:45:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1466621ac2a8979eab1406c3b651a974d063cf52652230bb58153fe26f72e0b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1466621ac2a8979eab1406c3b651a974d063cf52652230bb58153fe26f72e0b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1466621ac2a8979eab1406c3b651a974d063cf52652230bb58153fe26f72e0b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1466621ac2a8979eab1406c3b651a974d063cf52652230bb58153fe26f72e0b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1466621ac2a8979eab1406c3b651a974d063cf52652230bb58153fe26f72e0b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:39 compute-0 podman[476054]: 2025-10-03 10:45:39.242391726 +0000 UTC m=+0.266980563 container init f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:45:39 compute-0 podman[476054]: 2025-10-03 10:45:39.271006483 +0000 UTC m=+0.295595250 container start f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:45:39 compute-0 podman[476054]: 2025-10-03 10:45:39.277437948 +0000 UTC m=+0.302026795 container attach f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 10:45:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:40 compute-0 zen_brahmagupta[476071]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:45:40 compute-0 zen_brahmagupta[476071]: --> relative data size: 1.0
Oct  3 10:45:40 compute-0 zen_brahmagupta[476071]: --> All data devices are unavailable
Oct  3 10:45:40 compute-0 systemd[1]: libpod-f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a.scope: Deactivated successfully.
Oct  3 10:45:40 compute-0 systemd[1]: libpod-f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a.scope: Consumed 1.213s CPU time.
Oct  3 10:45:40 compute-0 conmon[476071]: conmon f7f3095ae5b502de5a08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a.scope/container/memory.events
Oct  3 10:45:40 compute-0 podman[476054]: 2025-10-03 10:45:40.544776332 +0000 UTC m=+1.569365129 container died f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  3 10:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1466621ac2a8979eab1406c3b651a974d063cf52652230bb58153fe26f72e0b6-merged.mount: Deactivated successfully.
Oct  3 10:45:40 compute-0 podman[476054]: 2025-10-03 10:45:40.643757942 +0000 UTC m=+1.668346739 container remove f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:45:40 compute-0 systemd[1]: libpod-conmon-f7f3095ae5b502de5a0820cdc3a91b3988958ceebbecd040a8e4f2d47fcf950a.scope: Deactivated successfully.
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.894 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.895 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.895 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b774a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.906 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.908 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:45:40.908014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.915 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.917 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.917 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:45:40.917471) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:45:40.918585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.943 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.944 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.945 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.946 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.946 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.946 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.946 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.947 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:45:40.946877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.998 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.998 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.000 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:45:40.998748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.001 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.001 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.001 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:45:41.001509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.003 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.004 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:45:41.004178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.006 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:45:41.006859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
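The repeated value 1073741824 in the usage and allocation samples above is exactly 2^30 bytes, i.e. a 1 GiB virtual disk, with 485376 bytes for a small third device. Per-device sizes like these most plausibly come from libvirt's block info; a hedged sketch using the real libvirt-python blockInfo() call (the domain UUID is the one in the log, the device names are assumptions):

    # Hedged sketch: reading per-device sizes the way a compute inspector
    # can. dom.blockInfo(dev) is a real libvirt-python call returning
    # [capacity, allocation, physical] in bytes. Device names below are
    # assumptions, not read from this log.
    import libvirt  # needs libvirt-python and a reachable libvirtd

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    for dev in ("vda", "vdb", "sda"):  # assumed device names
        try:
            capacity, allocation, physical = dom.blockInfo(dev)
        except libvirt.libvirtError:
            continue  # device not attached to this domain
        print(dev, capacity, allocation, physical)
    conn.close()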
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:45:41.009164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.011 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:45:41.011593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
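The power.state volume of 1 most plausibly maps to libvirt's VIR_DOMAIN_RUNNING, the first field of a domain's info tuple. A short sketch decoding it with the real dom.info() call:

    # Sketch: decoding a power.state-style value. dom.info() is a real
    # libvirt-python call returning [state, maxMem, memory, nrVirtCpu,
    # cpuTime]; the names below are libvirt's virDomainState constants.
    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_NOSTATE: "nostate",
        libvirt.VIR_DOMAIN_RUNNING: "running",   # == 1, the value logged here
        libvirt.VIR_DOMAIN_BLOCKED: "blocked",
        libvirt.VIR_DOMAIN_PAUSED: "paused",
        libvirt.VIR_DOMAIN_SHUTDOWN: "shutdown",
        libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",
        libvirt.VIR_DOMAIN_CRASHED: "crashed",
        libvirt.VIR_DOMAIN_PMSUSPENDED: "pmsuspended",
    }

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    print(STATE_NAMES.get(dom.info()[0], "unknown"))
    conn.close()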
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.038 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.038 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:45:41.038180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.039 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.040 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.042 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
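Unlike the pollsters above, network.incoming.bytes.rate is skipped outright: its discovery step produced no resources this cycle, so there is nothing to sample. The gate is trivial; a minimal sketch with hypothetical names:

    # Sketch of the skip gate: a pollster only runs when discovery
    # yielded resources this cycle. Names are hypothetical.
    def maybe_poll(name, discovered_resources, poll_fn):
        if not discovered_resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        return poll_fn(discovered_resources)

    maybe_poll("network.incoming.bytes.rate", [], poll_fn=lambda resources: resources)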
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.043 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:45:41.040101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:45:41.041992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:45:41.043605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:45:41.044722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.046 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:45:41.045926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 68630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:45:41.047055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
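The cpu metric is cumulative guest CPU time in nanoseconds, so the volume 68630000000 above is roughly 68.6 seconds of CPU time accumulated so far. Utilization over an interval is the delta divided by wall-clock time and vCPU count; a worked example (the second sample, the interval, and the vCPU count are made-up values):

    # Worked example: two cumulative cpu samples (ns) -> utilization %.
    cpu_ns_t0 = 68_630_000_000   # from the log line above
    cpu_ns_t1 = 68_930_000_000   # assumed next sample
    interval_s = 120.0           # assumed polling interval
    vcpus = 1                    # assumed flavor size

    util_pct = (cpu_ns_t1 - cpu_ns_t0) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util_pct:.2f}% CPU")  # -> 0.25% CPU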
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:45:41.048056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:45:41.048962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:45:41.049868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.051 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:45:41.050849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
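memory.usage is reported in MB, and the fraction above is telling: 48.81640625 MB is exactly 49988 KiB, i.e. a KiB counter divided by 1024. libvirt's memoryStats() returns such KiB counters; which combination of them the agent subtracts is an assumption in the sketch below, and only the KiB-to-MB scaling is taken as certain:

    # Hedged sketch: deriving a memory.usage-style MB figure from
    # dom.memoryStats(), a real libvirt-python call returning KiB
    # counters ('available', 'unused', 'rss', ...). The exact formula
    # is an assumption; only the KiB -> MB division is certain here.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    stats = dom.memoryStats()  # most keys need the guest balloon driver
    if "available" in stats and "unused" in stats:
        used_mb = (stats["available"] - stats["unused"]) / 1024.0
    else:
        used_mb = stats.get("rss", 0) / 1024.0  # fallback: host-side RSS
    print(f"memory.usage ~ {used_mb} MB")
    conn.close()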
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:45:41.052039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.054 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:45:41.053394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:45:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:45:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:45:41.054460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
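[annotation] The block above is one complete ceilometer polling cycle: each DEBUG line marks a pollster finishing, and the heartbeat line closes the cycle. A minimal Python sketch for tallying pollster completions out of such a capture; the file name "messages.log" is a placeholder, not from the source:

    import re
    from collections import Counter

    # Count how often each ceilometer pollster completes in a saved log.
    # The regex matches the "Finished processing pollster [...]" lines above.
    POLLSTER_RE = re.compile(r"Finished processing pollster \[([^\]]+)\]")

    counts = Counter()
    with open("messages.log") as fh:
        for line in fh:
            m = POLLSTER_RE.search(line)
            if m:
                counts[m.group(1)] += 1

    for meter, n in counts.most_common():
        print(f"{n:6d}  {meter}")

In a steady deployment every meter should complete once per polling interval, so uneven counts point at pollsters that are erroring out or being skipped.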
Oct  3 10:45:41 compute-0 podman[476162]: 2025-10-03 10:45:41.107831637 +0000 UTC m=+0.118747105 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  3 10:45:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:45:41.644 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:45:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:45:41.645 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:45:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:45:41.645 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
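[annotation] The acquire/acquired/released triple above is the standard logging that oslo.concurrency emits around a named lock. A minimal sketch of the decorator pattern that produces it; the class below is illustrative, not neutron's actual code:

    from oslo_concurrency import lockutils

    class ProcessMonitorSketch:
        # lockutils.synchronized serializes callers on the named lock and
        # logs the "Acquiring" / "acquired" / "released" lines seen above.
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # body elided; the lock keeps concurrent health checks from
            # inspecting and respawning child processes at the same time
            pass

The sub-millisecond wait/hold times logged here indicate the lock is uncontended.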
Oct  3 10:45:41 compute-0 podman[476270]: 2025-10-03 10:45:41.725563923 +0000 UTC m=+0.086769040 container create 8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 10:45:41 compute-0 podman[476270]: 2025-10-03 10:45:41.691575744 +0000 UTC m=+0.052780921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:45:41 compute-0 systemd[1]: Started libpod-conmon-8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf.scope.
Oct  3 10:45:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:45:41 compute-0 podman[476270]: 2025-10-03 10:45:41.882616774 +0000 UTC m=+0.243821931 container init 8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:45:41 compute-0 podman[476270]: 2025-10-03 10:45:41.89404121 +0000 UTC m=+0.255246297 container start 8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 10:45:41 compute-0 podman[476270]: 2025-10-03 10:45:41.900092754 +0000 UTC m=+0.261297841 container attach 8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 10:45:41 compute-0 fervent_lamarr[476286]: 167 167
Oct  3 10:45:41 compute-0 systemd[1]: libpod-8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf.scope: Deactivated successfully.
Oct  3 10:45:41 compute-0 podman[476270]: 2025-10-03 10:45:41.906682754 +0000 UTC m=+0.267887871 container died 8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-82abc432effb86a8de1f1ce51d986e9e496e8a3eb69a005b8f9eb549df9602c4-merged.mount: Deactivated successfully.
Oct  3 10:45:41 compute-0 podman[476270]: 2025-10-03 10:45:41.9764722 +0000 UTC m=+0.337677297 container remove 8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 10:45:42 compute-0 systemd[1]: libpod-conmon-8bdd503e64aaf30b5438acc6d7e3869070b4d7035cb7a821c541121667f84dcf.scope: Deactivated successfully.
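[annotation] Lines L-above trace one complete short-lived container run (create → init → start → attach → died → remove) for an auto-named ceph image container; this is consistent with cephadm probing the host via throwaway containers. The bare "167 167" it printed matches the uid/gid of the ceph user on CentOS-based ceph images (an inference; the log does not show the command run). A hedged sketch that pairs podman create/remove events to measure such lifetimes; "messages.log" is a placeholder:

    import re
    from datetime import datetime

    # Pair podman "container create" / "container remove" events by container
    # id and print each container's lifetime. Fractional seconds are trimmed
    # to microseconds so datetime.fromisoformat can parse them.
    EVENT_RE = re.compile(
        r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\d* \+0000 UTC "
        r".* container (create|remove) ([0-9a-f]{64})"
    )

    created = {}
    with open("messages.log") as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if not m:
                continue
            ts = datetime.fromisoformat(m.group(1))
            if m.group(2) == "create":
                created[m.group(3)] = ts
            elif m.group(3) in created:
                print(m.group(3)[:12], ts - created.pop(m.group(3)))

For the 8bdd503e... container above, this yields a lifetime of roughly a quarter of a second.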
Oct  3 10:45:42 compute-0 podman[476309]: 2025-10-03 10:45:42.252086968 +0000 UTC m=+0.080399236 container create 6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:45:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:42 compute-0 podman[476309]: 2025-10-03 10:45:42.218719679 +0000 UTC m=+0.047031987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:45:42 compute-0 systemd[1]: Started libpod-conmon-6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7.scope.
Oct  3 10:45:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bddff64d31d31b3e8db8e2196c37758dd1d32f6c247960f1ab3c7361525866/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bddff64d31d31b3e8db8e2196c37758dd1d32f6c247960f1ab3c7361525866/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bddff64d31d31b3e8db8e2196c37758dd1d32f6c247960f1ab3c7361525866/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8bddff64d31d31b3e8db8e2196c37758dd1d32f6c247960f1ab3c7361525866/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:42 compute-0 podman[476309]: 2025-10-03 10:45:42.39294097 +0000 UTC m=+0.221253248 container init 6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:45:42 compute-0 podman[476309]: 2025-10-03 10:45:42.41729766 +0000 UTC m=+0.245609928 container start 6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:45:42 compute-0 podman[476309]: 2025-10-03 10:45:42.423140007 +0000 UTC m=+0.251452275 container attach 6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:45:42 compute-0 nova_compute[351685]: 2025-10-03 10:45:42.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]: {
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:    "0": [
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:        {
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "devices": [
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "/dev/loop3"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            ],
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_name": "ceph_lv0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_size": "21470642176",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "name": "ceph_lv0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "tags": {
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cluster_name": "ceph",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.crush_device_class": "",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.encrypted": "0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osd_id": "0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.type": "block",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.vdo": "0"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            },
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "type": "block",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "vg_name": "ceph_vg0"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:        }
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:    ],
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:    "1": [
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:        {
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "devices": [
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "/dev/loop4"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            ],
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_name": "ceph_lv1",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_size": "21470642176",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "name": "ceph_lv1",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "tags": {
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cluster_name": "ceph",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.crush_device_class": "",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.encrypted": "0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osd_id": "1",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.type": "block",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.vdo": "0"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            },
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "type": "block",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "vg_name": "ceph_vg1"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:        }
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:    ],
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:    "2": [
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:        {
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "devices": [
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "/dev/loop5"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            ],
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_name": "ceph_lv2",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_size": "21470642176",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "name": "ceph_lv2",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "tags": {
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.cluster_name": "ceph",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.crush_device_class": "",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.encrypted": "0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osd_id": "2",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.type": "block",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:                "ceph.vdo": "0"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            },
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "type": "block",
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:            "vg_name": "ceph_vg2"
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:        }
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]:    ]
Oct  3 10:45:43 compute-0 interesting_khayyam[476323]: }
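[annotation] The JSON that interesting_khayyam just emitted has the shape of `ceph-volume lvm list --format json` output (an inference from the fields; the log does not show the command): OSD ids "0"–"2" map to one LV each (ceph_lv0–ceph_lv2 on loop devices loop3–loop5), all tagged with the same cluster fsid. A minimal sketch summarizing it, assuming the payload was saved to "lvm_list.json" (placeholder name):

    import json

    # Reduce the ceph-volume-style JSON above to an osd -> device table.
    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} ({size_gib:.1f} GiB)")

For the data above this prints three ~20.0 GiB OSDs, one per loop-backed volume group.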
Oct  3 10:45:43 compute-0 systemd[1]: libpod-6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7.scope: Deactivated successfully.
Oct  3 10:45:43 compute-0 podman[476309]: 2025-10-03 10:45:43.231044384 +0000 UTC m=+1.059356662 container died 6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8bddff64d31d31b3e8db8e2196c37758dd1d32f6c247960f1ab3c7361525866-merged.mount: Deactivated successfully.
Oct  3 10:45:43 compute-0 podman[476309]: 2025-10-03 10:45:43.332869276 +0000 UTC m=+1.161181564 container remove 6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 10:45:43 compute-0 systemd[1]: libpod-conmon-6be37fbf96d733e0e37f0df1a5e26c2bc904668a50b4392750e76af57fd745b7.scope: Deactivated successfully.
Oct  3 10:45:43 compute-0 nova_compute[351685]: 2025-10-03 10:45:43.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:45:44 compute-0 podman[476444]: 2025-10-03 10:45:44.100194894 +0000 UTC m=+0.106728910 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:45:44 compute-0 podman[476445]: 2025-10-03 10:45:44.109915485 +0000 UTC m=+0.112280738 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible)
Oct  3 10:45:44 compute-0 podman[476446]: 2025-10-03 10:45:44.131467196 +0000 UTC m=+0.129257151 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 10:45:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:44 compute-0 podman[476544]: 2025-10-03 10:45:44.60367364 +0000 UTC m=+0.088290688 container create 6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:45:44 compute-0 podman[476544]: 2025-10-03 10:45:44.577480331 +0000 UTC m=+0.062097379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:45:44 compute-0 systemd[1]: Started libpod-conmon-6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5.scope.
Oct  3 10:45:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:45:44 compute-0 podman[476544]: 2025-10-03 10:45:44.786661422 +0000 UTC m=+0.271278440 container init 6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:45:44 compute-0 podman[476544]: 2025-10-03 10:45:44.806029202 +0000 UTC m=+0.290646220 container start 6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:45:44 compute-0 podman[476544]: 2025-10-03 10:45:44.811757056 +0000 UTC m=+0.296374074 container attach 6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:45:44 compute-0 vigorous_chandrasekhar[476560]: 167 167
Oct  3 10:45:44 compute-0 systemd[1]: libpod-6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5.scope: Deactivated successfully.
Oct  3 10:45:44 compute-0 podman[476544]: 2025-10-03 10:45:44.821109555 +0000 UTC m=+0.305726603 container died 6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a12a3e5af874b9bc8bfa376df058b24495fc7049fb4b7bbd69142c8e5c6bb866-merged.mount: Deactivated successfully.
Oct  3 10:45:44 compute-0 podman[476544]: 2025-10-03 10:45:44.902571324 +0000 UTC m=+0.387188342 container remove 6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:45:44 compute-0 systemd[1]: libpod-conmon-6ed87ed07ea5e3b29fec09173a25cbbb0062268083d70d3935d3369b026324d5.scope: Deactivated successfully.
Oct  3 10:45:45 compute-0 podman[476582]: 2025-10-03 10:45:45.202200182 +0000 UTC m=+0.086807662 container create eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:45:45 compute-0 podman[476582]: 2025-10-03 10:45:45.167052357 +0000 UTC m=+0.051659867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:45:45 compute-0 systemd[1]: Started libpod-conmon-eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72.scope.
Oct  3 10:45:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83e8ad9c76fe881816af4b623b2db6ab6e0af2d9101f4f2687e0d380014cd24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83e8ad9c76fe881816af4b623b2db6ab6e0af2d9101f4f2687e0d380014cd24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83e8ad9c76fe881816af4b623b2db6ab6e0af2d9101f4f2687e0d380014cd24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b83e8ad9c76fe881816af4b623b2db6ab6e0af2d9101f4f2687e0d380014cd24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:45:45 compute-0 podman[476582]: 2025-10-03 10:45:45.374326186 +0000 UTC m=+0.258933686 container init eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:45:45 compute-0 podman[476582]: 2025-10-03 10:45:45.402869589 +0000 UTC m=+0.287477039 container start eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:45:45 compute-0 podman[476582]: 2025-10-03 10:45:45.408286394 +0000 UTC m=+0.292893984 container attach eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_murdock, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:45:46
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms']
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
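[annotation] The balancer block above shows an upmap-mode optimization pass over all eleven pools that prepared 0/10 changes, i.e. PG placement is already balanced within the 5% misplaced threshold. A hedged sketch for checking the same module from the CLI, assuming the ceph client is available and that `ceph balancer status` emits JSON with --format json:

    import json
    import subprocess

    # Ask the mgr balancer module (source of the log lines above) for its
    # current state. check=True raises if the ceph CLI is unavailable.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    status = json.loads(out.stdout)
    print("active:", status.get("active"), "mode:", status.get("mode"))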
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:46 compute-0 priceless_murdock[476597]: {
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "osd_id": 1,
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "type": "bluestore"
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:    },
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "osd_id": 2,
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "type": "bluestore"
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:    },
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "osd_id": 0,
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:        "type": "bluestore"
Oct  3 10:45:46 compute-0 priceless_murdock[476597]:    }
Oct  3 10:45:46 compute-0 priceless_murdock[476597]: }
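[annotation] This second JSON dump keys on OSD fsid and reports the device-mapper path and bluestore type for each OSD, the shape of a `ceph-volume raw list`-style scan (again an inference; the command is not logged). Each osd_uuid here should match a ceph.osd_fsid tag from the lvm listing printed earlier by interesting_khayyam. A hedged cross-check sketch; both file names are placeholders:

    import json

    # Verify the two ceph-volume-style dumps above agree: every osd_uuid in
    # the fsid-keyed scan must appear as a ceph.osd_fsid tag in the lvm list.
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)
    with open("raw_list.json") as fh:
        raw = json.load(fh)

    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]: lv["lv_path"]
                 for lvs in lvm.values() for lv in lvs}

    for osd_uuid, info in raw.items():
        lv_path = lvm_fsids.get(osd_uuid, "<missing>")
        print(f"osd.{info['osd_id']} ({info['type']}): "
              f"{info['device']} <- {lv_path}")

For the data above all three OSDs line up: /dev/mapper/ceph_vgN-ceph_lvN is the mapper view of the /dev/ceph_vgN/ceph_lvN logical volumes.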
Oct  3 10:45:46 compute-0 systemd[1]: libpod-eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72.scope: Deactivated successfully.
Oct  3 10:45:46 compute-0 systemd[1]: libpod-eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72.scope: Consumed 1.190s CPU time.
Oct  3 10:45:46 compute-0 podman[476630]: 2025-10-03 10:45:46.671805384 +0000 UTC m=+0.044243538 container died eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_murdock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b83e8ad9c76fe881816af4b623b2db6ab6e0af2d9101f4f2687e0d380014cd24-merged.mount: Deactivated successfully.
Oct  3 10:45:46 compute-0 podman[476630]: 2025-10-03 10:45:46.75752007 +0000 UTC m=+0.129958244 container remove eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:45:46 compute-0 systemd[1]: libpod-conmon-eaf0a599de7d9193a24863ec6e8ffd97c34038141679728a4ea578f4695a2a72.scope: Deactivated successfully.
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:45:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:45:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:45:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:45:46 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 569a91de-55e4-4ca7-982f-5e4232b06701 does not exist
Oct  3 10:45:46 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e103f8d3-78b4-42a8-a55e-f1501ee09ed9 does not exist
Oct  3 10:45:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.383747) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488347383859, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1552, "num_deletes": 255, "total_data_size": 2518025, "memory_usage": 2562112, "flush_reason": "Manual Compaction"}
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488347403712, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2472504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46361, "largest_seqno": 47912, "table_properties": {"data_size": 2465163, "index_size": 4348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14608, "raw_average_key_size": 19, "raw_value_size": 2450645, "raw_average_value_size": 3289, "num_data_blocks": 195, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759488181, "oldest_key_time": 1759488181, "file_creation_time": 1759488347, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 20046 microseconds, and 11085 cpu microseconds.
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.403803) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2472504 bytes OK
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.403828) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.406178) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.406204) EVENT_LOG_v1 {"time_micros": 1759488347406195, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.406230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2511250, prev total WAL file size 2511250, number of live WAL files 2.
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.408189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373631' seq:72057594037927935, type:22 .. '6C6F676D0032303132' seq:0, type:0; will stop at (end)
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2414KB)], [110(7819KB)]
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488347408354, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 10479401, "oldest_snapshot_seqno": -1}
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6332 keys, 10380529 bytes, temperature: kUnknown
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488347471296, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 10380529, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10337792, "index_size": 25795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 164496, "raw_average_key_size": 25, "raw_value_size": 10222605, "raw_average_value_size": 1614, "num_data_blocks": 1034, "num_entries": 6332, "num_filter_entries": 6332, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759488347, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.471518) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 10380529 bytes
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.474008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.3 rd, 164.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 7.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 6854, records dropped: 522 output_compression: NoCompression
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.474040) EVENT_LOG_v1 {"time_micros": 1759488347474027, "job": 66, "event": "compaction_finished", "compaction_time_micros": 62997, "compaction_time_cpu_micros": 46884, "output_level": 6, "num_output_files": 1, "total_output_size": 10380529, "num_input_records": 6854, "num_output_records": 6332, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488347474795, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488347476561, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.407898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.476812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.476818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.476821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.476823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:45:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:45:47.476825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:45:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:45:47 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:45:47 compute-0 nova_compute[351685]: 2025-10-03 10:45:47.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:45:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:48 compute-0 nova_compute[351685]: 2025-10-03 10:45:48.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:45:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:52 compute-0 podman[476695]: 2025-10-03 10:45:52.893518592 +0000 UTC m=+0.136545625 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:45:52 compute-0 nova_compute[351685]: 2025-10-03 10:45:52.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:45:53 compute-0 podman[476713]: 2025-10-03 10:45:53.037387409 +0000 UTC m=+0.108981522 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 10:45:53 compute-0 podman[476732]: 2025-10-03 10:45:53.198683516 +0000 UTC m=+0.123745895 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:45:53 compute-0 nova_compute[351685]: 2025-10-03 10:45:53.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:45:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:45:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/684531743' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:45:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:45:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/684531743' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:45:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:45:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:45:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:45:57 compute-0 nova_compute[351685]: 2025-10-03 10:45:57.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:45:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:45:58 compute-0 nova_compute[351685]: 2025-10-03 10:45:58.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:45:59 compute-0 podman[157165]: time="2025-10-03T10:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:45:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:45:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9082 "" "Go-http-client/1.1"
Oct  3 10:46:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:01 compute-0 openstack_network_exporter[367524]: ERROR   10:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:46:01 compute-0 openstack_network_exporter[367524]: ERROR   10:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:46:01 compute-0 openstack_network_exporter[367524]: ERROR   10:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:46:01 compute-0 openstack_network_exporter[367524]: ERROR   10:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:46:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:46:01 compute-0 openstack_network_exporter[367524]: ERROR   10:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:46:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 10:46:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:02 compute-0 nova_compute[351685]: 2025-10-03 10:46:02.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:46:03 compute-0 nova_compute[351685]: 2025-10-03 10:46:03.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:46:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2362: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:07 compute-0 nova_compute[351685]: 2025-10-03 10:46:07.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:46:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:08 compute-0 nova_compute[351685]: 2025-10-03 10:46:08.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:46:08 compute-0 nova_compute[351685]: 2025-10-03 10:46:08.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:46:08 compute-0 nova_compute[351685]: 2025-10-03 10:46:08.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:46:08 compute-0 nova_compute[351685]: 2025-10-03 10:46:08.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:46:08 compute-0 podman[476756]: 2025-10-03 10:46:08.889846078 +0000 UTC m=+0.129362085 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4)
Oct  3 10:46:08 compute-0 podman[476755]: 2025-10-03 10:46:08.89865502 +0000 UTC m=+0.151012188 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6)
Oct  3 10:46:08 compute-0 podman[476757]: 2025-10-03 10:46:08.921452421 +0000 UTC m=+0.167813906 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  3 10:46:09 compute-0 nova_compute[351685]: 2025-10-03 10:46:09.629 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:46:09 compute-0 nova_compute[351685]: 2025-10-03 10:46:09.630 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:46:09 compute-0 nova_compute[351685]: 2025-10-03 10:46:09.630 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:46:09 compute-0 nova_compute[351685]: 2025-10-03 10:46:09.630 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:46:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:10 compute-0 nova_compute[351685]: 2025-10-03 10:46:10.790 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:46:10 compute-0 nova_compute[351685]: 2025-10-03 10:46:10.813 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:46:10 compute-0 nova_compute[351685]: 2025-10-03 10:46:10.814 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:46:11 compute-0 podman[476819]: 2025-10-03 10:46:11.871334666 +0000 UTC m=+0.119241270 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:46:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2366: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:12 compute-0 nova_compute[351685]: 2025-10-03 10:46:12.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:46:13 compute-0 nova_compute[351685]: 2025-10-03 10:46:13.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:46:13 compute-0 nova_compute[351685]: 2025-10-03 10:46:13.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:46:13 compute-0 nova_compute[351685]: 2025-10-03 10:46:13.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:46:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:14 compute-0 podman[476839]: 2025-10-03 10:46:14.811043554 +0000 UTC m=+0.101519023 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 10:46:14 compute-0 podman[476840]: 2025-10-03 10:46:14.811966234 +0000 UTC m=+0.097062241 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:46:14 compute-0 podman[476838]: 2025-10-03 10:46:14.813898796 +0000 UTC m=+0.106793552 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:46:15 compute-0 nova_compute[351685]: 2025-10-03 10:46:15.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:46:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:46:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:17 compute-0 nova_compute[351685]: 2025-10-03 10:46:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:46:17 compute-0 nova_compute[351685]: 2025-10-03 10:46:17.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:46:17 compute-0 nova_compute[351685]: 2025-10-03 10:46:17.771 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:46:17 compute-0 nova_compute[351685]: 2025-10-03 10:46:17.773 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:46:17 compute-0 nova_compute[351685]: 2025-10-03 10:46:17.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:46:17 compute-0 nova_compute[351685]: 2025-10-03 10:46:17.774 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:46:17 compute-0 nova_compute[351685]: 2025-10-03 10:46:17.775 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:46:17 compute-0 nova_compute[351685]: 2025-10-03 10:46:17.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:46:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:46:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2379260863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.273 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:46:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2369: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.897 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.898 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3827MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.899 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.899 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.984 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.984 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:46:18 compute-0 nova_compute[351685]: 2025-10-03 10:46:18.984 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:46:19 compute-0 nova_compute[351685]: 2025-10-03 10:46:19.022 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:46:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:46:19 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648947609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:46:19 compute-0 nova_compute[351685]: 2025-10-03 10:46:19.666 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:46:19 compute-0 nova_compute[351685]: 2025-10-03 10:46:19.679 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:46:19 compute-0 nova_compute[351685]: 2025-10-03 10:46:19.941 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:46:19 compute-0 nova_compute[351685]: 2025-10-03 10:46:19.944 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:46:19 compute-0 nova_compute[351685]: 2025-10-03 10:46:19.944 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:46:19 compute-0 nova_compute[351685]: 2025-10-03 10:46:19.945 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:46:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:22 compute-0 nova_compute[351685]: 2025-10-03 10:46:22.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:23 compute-0 nova_compute[351685]: 2025-10-03 10:46:23.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:23 compute-0 podman[476943]: 2025-10-03 10:46:23.834789262 +0000 UTC m=+0.094497338 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:46:23 compute-0 podman[476944]: 2025-10-03 10:46:23.86282542 +0000 UTC m=+0.108886398 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, version=9.4, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 10:46:23 compute-0 podman[476945]: 2025-10-03 10:46:23.871047183 +0000 UTC m=+0.124389005 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:46:23 compute-0 nova_compute[351685]: 2025-10-03 10:46:23.956 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:46:23 compute-0 nova_compute[351685]: 2025-10-03 10:46:23.957 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:46:23 compute-0 nova_compute[351685]: 2025-10-03 10:46:23.958 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:46:23 compute-0 nova_compute[351685]: 2025-10-03 10:46:23.958 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:46:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:27 compute-0 nova_compute[351685]: 2025-10-03 10:46:27.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:28 compute-0 nova_compute[351685]: 2025-10-03 10:46:28.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:29 compute-0 podman[157165]: time="2025-10-03T10:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:46:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:46:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9081 "" "Go-http-client/1.1"
Oct  3 10:46:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:31 compute-0 openstack_network_exporter[367524]: ERROR   10:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:46:31 compute-0 openstack_network_exporter[367524]: ERROR   10:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:46:31 compute-0 openstack_network_exporter[367524]: ERROR   10:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:46:31 compute-0 openstack_network_exporter[367524]: ERROR   10:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:46:31 compute-0 openstack_network_exporter[367524]: ERROR   10:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:46:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:32 compute-0 nova_compute[351685]: 2025-10-03 10:46:32.733 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:46:32 compute-0 nova_compute[351685]: 2025-10-03 10:46:32.734 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 10:46:32 compute-0 nova_compute[351685]: 2025-10-03 10:46:32.752 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 10:46:32 compute-0 nova_compute[351685]: 2025-10-03 10:46:32.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:33 compute-0 nova_compute[351685]: 2025-10-03 10:46:33.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:37 compute-0 nova_compute[351685]: 2025-10-03 10:46:37.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:38 compute-0 nova_compute[351685]: 2025-10-03 10:46:38.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:39 compute-0 podman[477006]: 2025-10-03 10:46:39.868064051 +0000 UTC m=+0.110936565 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Oct  3 10:46:39 compute-0 podman[477005]: 2025-10-03 10:46:39.876183661 +0000 UTC m=+0.132600439 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc.)
Oct  3 10:46:39 compute-0 podman[477007]: 2025-10-03 10:46:39.936441611 +0000 UTC m=+0.169953655 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 10:46:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:46:41.645 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:46:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:46:41.647 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:46:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:46:41.648 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:46:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:42 compute-0 podman[477070]: 2025-10-03 10:46:42.868176876 +0000 UTC m=+0.112057711 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  3 10:46:43 compute-0 nova_compute[351685]: 2025-10-03 10:46:43.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:43 compute-0 nova_compute[351685]: 2025-10-03 10:46:43.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:45 compute-0 podman[477089]: 2025-10-03 10:46:45.866456332 +0000 UTC m=+0.112928888 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:46:45 compute-0 podman[477091]: 2025-10-03 10:46:45.887810536 +0000 UTC m=+0.119380925 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 10:46:45 compute-0 podman[477090]: 2025-10-03 10:46:45.895592436 +0000 UTC m=+0.134052385 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:46:46
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'images']
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:46 compute-0 nova_compute[351685]: 2025-10-03 10:46:46.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:46:46 compute-0 nova_compute[351685]: 2025-10-03 10:46:46.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:46:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:46:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:46:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:46:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:48 compute-0 nova_compute[351685]: 2025-10-03 10:46:48.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:48 compute-0 nova_compute[351685]: 2025-10-03 10:46:48.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:46:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:46:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:46:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:46:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:46:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:49 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bdceb2f7-7dd2-4955-9e81-9b7aac9cece5 does not exist
Oct  3 10:46:49 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3b429ccd-f695-4a24-a418-ca50bdca02b0 does not exist
Oct  3 10:46:49 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 21db24ad-482c-4089-b4e7-ae1a2d154b88 does not exist
Oct  3 10:46:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:46:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:46:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:46:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:46:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:46:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:46:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:46:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:50 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.033097) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488410033172, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 750, "num_deletes": 251, "total_data_size": 934395, "memory_usage": 949072, "flush_reason": "Manual Compaction"}
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488410045050, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 925460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47913, "largest_seqno": 48662, "table_properties": {"data_size": 921569, "index_size": 1671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8708, "raw_average_key_size": 19, "raw_value_size": 913813, "raw_average_value_size": 2044, "num_data_blocks": 74, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759488347, "oldest_key_time": 1759488347, "file_creation_time": 1759488410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 12072 microseconds, and 6530 cpu microseconds.
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.045175) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 925460 bytes OK
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.045205) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.048484) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.048511) EVENT_LOG_v1 {"time_micros": 1759488410048503, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.048536) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 930574, prev total WAL file size 930574, number of live WAL files 2.
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.050395) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(903KB)], [113(10137KB)]
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488410050441, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11305989, "oldest_snapshot_seqno": -1}
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6266 keys, 9578872 bytes, temperature: kUnknown
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488410113570, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9578872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9537353, "index_size": 24769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 163801, "raw_average_key_size": 26, "raw_value_size": 9424094, "raw_average_value_size": 1504, "num_data_blocks": 984, "num_entries": 6266, "num_filter_entries": 6266, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759488410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.114557) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9578872 bytes
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.119396) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.8 rd, 151.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.9 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(22.6) write-amplify(10.4) OK, records in: 6779, records dropped: 513 output_compression: NoCompression
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.119455) EVENT_LOG_v1 {"time_micros": 1759488410119431, "job": 68, "event": "compaction_finished", "compaction_time_micros": 63244, "compaction_time_cpu_micros": 38007, "output_level": 6, "num_output_files": 1, "total_output_size": 9578872, "num_input_records": 6779, "num_output_records": 6266, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488410120055, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488410123199, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.050023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.123445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.123456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.123460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.123463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:46:50 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:46:50.123466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:46:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:50 compute-0 podman[477541]: 2025-10-03 10:46:50.444374483 +0000 UTC m=+0.082342428 container create 064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:46:50 compute-0 podman[477541]: 2025-10-03 10:46:50.412792402 +0000 UTC m=+0.050760397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:46:50 compute-0 systemd[1]: Started libpod-conmon-064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956.scope.
Oct  3 10:46:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:46:50 compute-0 podman[477541]: 2025-10-03 10:46:50.620800124 +0000 UTC m=+0.258768059 container init 064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brattain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct  3 10:46:50 compute-0 podman[477541]: 2025-10-03 10:46:50.635471345 +0000 UTC m=+0.273439290 container start 064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brattain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 10:46:50 compute-0 podman[477541]: 2025-10-03 10:46:50.642882392 +0000 UTC m=+0.280850347 container attach 064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  3 10:46:50 compute-0 crazy_brattain[477556]: 167 167
Oct  3 10:46:50 compute-0 systemd[1]: libpod-064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956.scope: Deactivated successfully.
Oct  3 10:46:50 compute-0 conmon[477556]: conmon 064ae59679e2740cbb25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956.scope/container/memory.events
Oct  3 10:46:50 compute-0 podman[477563]: 2025-10-03 10:46:50.711780208 +0000 UTC m=+0.036924023 container died 064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brattain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c9f4637d9d57921f2031e1884888f38eea5a0d3f0f1de063184621ea618288-merged.mount: Deactivated successfully.
Oct  3 10:46:50 compute-0 podman[477563]: 2025-10-03 10:46:50.770293042 +0000 UTC m=+0.095436827 container remove 064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 10:46:50 compute-0 systemd[1]: libpod-conmon-064ae59679e2740cbb25e8d431de5e03ab03c1abf078912aee26f60966e9c956.scope: Deactivated successfully.
Oct  3 10:46:51 compute-0 podman[477582]: 2025-10-03 10:46:51.063108482 +0000 UTC m=+0.090601714 container create a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:46:51 compute-0 podman[477582]: 2025-10-03 10:46:51.024218196 +0000 UTC m=+0.051711478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:46:51 compute-0 systemd[1]: Started libpod-conmon-a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc.scope.
Oct  3 10:46:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b3e7cb6c9b86b569d833ded0af0d80c32055fbd1c5c5a61c139e3bde9d1182/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b3e7cb6c9b86b569d833ded0af0d80c32055fbd1c5c5a61c139e3bde9d1182/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b3e7cb6c9b86b569d833ded0af0d80c32055fbd1c5c5a61c139e3bde9d1182/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b3e7cb6c9b86b569d833ded0af0d80c32055fbd1c5c5a61c139e3bde9d1182/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30b3e7cb6c9b86b569d833ded0af0d80c32055fbd1c5c5a61c139e3bde9d1182/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
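[annotation] The repeated kernel messages above mean the backing xfs filesystem lacks the bigtime feature, so inode timestamps top out at 0x7fffffff seconds (2038-01-19). A minimal check, assuming an xfsprogs new enough to report the flag (the mount point argument is an example, not read from this host's layout):

import subprocess

def has_bigtime(mountpoint: str) -> bool:
    """Return True if the xfs filesystem at mountpoint supports post-2038 timestamps."""
    out = subprocess.run(["xfs_info", mountpoint],
                         capture_output=True, text=True, check=True).stdout
    return "bigtime=1" in out

print(has_bigtime("/var/lib/containers/storage"))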
Oct  3 10:46:51 compute-0 podman[477582]: 2025-10-03 10:46:51.203440727 +0000 UTC m=+0.230933989 container init a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:46:51 compute-0 podman[477582]: 2025-10-03 10:46:51.224568033 +0000 UTC m=+0.252061305 container start a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:46:51 compute-0 podman[477582]: 2025-10-03 10:46:51.231596029 +0000 UTC m=+0.259089321 container attach a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:46:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:52 compute-0 romantic_kalam[477597]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:46:52 compute-0 romantic_kalam[477597]: --> relative data size: 1.0
Oct  3 10:46:52 compute-0 romantic_kalam[477597]: --> All data devices are unavailable
Oct  3 10:46:52 compute-0 systemd[1]: libpod-a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc.scope: Deactivated successfully.
Oct  3 10:46:52 compute-0 podman[477582]: 2025-10-03 10:46:52.573655455 +0000 UTC m=+1.601148747 container died a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:46:52 compute-0 systemd[1]: libpod-a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc.scope: Consumed 1.273s CPU time.
Oct  3 10:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-30b3e7cb6c9b86b569d833ded0af0d80c32055fbd1c5c5a61c139e3bde9d1182-merged.mount: Deactivated successfully.
Oct  3 10:46:52 compute-0 podman[477582]: 2025-10-03 10:46:52.676389476 +0000 UTC m=+1.703882738 container remove a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:46:52 compute-0 systemd[1]: libpod-conmon-a468a3d2a22bc0d86d616a3c5e707ca7d94a28412e4a134fa311aec12d55c7dc.scope: Deactivated successfully.
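[annotation] The block above is one full cephadm probe cycle: a throwaway container (romantic_kalam) is created, started, attached, emits what looks like a ceph-volume batch report ("passed data devices: 0 physical, 3 LVM" / "All data devices are unavailable", i.e. the LVs are already consumed by existing OSDs), dies, and is removed, all in roughly 1.6 s (create 10:46:51.063 to remove 10:46:52.676). A minimal sketch that pairs these podman create/remove journal lines by container ID and prints lifetimes; the regex is tailored to this log format and is an assumption, not a podman API:

import re
import sys
from datetime import datetime

EVENT = re.compile(
    r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+[\d.]+ container (?P<event>create|remove) (?P<cid>[0-9a-f]{64})"
)

created = {}
for line in sys.stdin:  # pipe the journal through stdin
    m = EVENT.search(line)
    if not m:
        continue
    ts = datetime.strptime(m["ts"][:26], "%Y-%m-%d %H:%M:%S.%f")  # %f takes 6 digits
    if m["event"] == "create":
        created[m["cid"]] = ts
    elif m["cid"] in created:
        life = (ts - created.pop(m["cid"])).total_seconds()
        print(f"{m['cid'][:12]} lived {life:.3f}s")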
Oct  3 10:46:53 compute-0 nova_compute[351685]: 2025-10-03 10:46:53.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:53 compute-0 nova_compute[351685]: 2025-10-03 10:46:53.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:53 compute-0 podman[477776]: 2025-10-03 10:46:53.725781549 +0000 UTC m=+0.086482761 container create 3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  3 10:46:53 compute-0 podman[477776]: 2025-10-03 10:46:53.685505518 +0000 UTC m=+0.046206810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:46:53 compute-0 systemd[1]: Started libpod-conmon-3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622.scope.
Oct  3 10:46:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:46:53 compute-0 podman[477776]: 2025-10-03 10:46:53.857830328 +0000 UTC m=+0.218531560 container init 3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:46:53 compute-0 podman[477776]: 2025-10-03 10:46:53.876384112 +0000 UTC m=+0.237085324 container start 3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:46:53 compute-0 nifty_fermat[477793]: 167 167
Oct  3 10:46:53 compute-0 podman[477776]: 2025-10-03 10:46:53.883324474 +0000 UTC m=+0.244025706 container attach 3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:46:53 compute-0 systemd[1]: libpod-3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622.scope: Deactivated successfully.
Oct  3 10:46:53 compute-0 podman[477776]: 2025-10-03 10:46:53.885446903 +0000 UTC m=+0.246148125 container died 3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-34371698ae6e77fdfdbb664b067f7d369de3855de3afbdb32bdc75a99ce23060-merged.mount: Deactivated successfully.
Oct  3 10:46:53 compute-0 podman[477776]: 2025-10-03 10:46:53.955323961 +0000 UTC m=+0.316025153 container remove 3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_fermat, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:46:53 compute-0 systemd[1]: libpod-conmon-3adeb7863e2302c06dd518367c6bf503c34e7a2aad0316b9fff7314445e8b622.scope: Deactivated successfully.
Oct  3 10:46:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:46:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/20202694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:46:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:46:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/20202694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:46:54 compute-0 podman[477806]: 2025-10-03 10:46:54.019826177 +0000 UTC m=+0.082958799 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 10:46:54 compute-0 podman[477799]: 2025-10-03 10:46:54.044440285 +0000 UTC m=+0.099227710 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:46:54 compute-0 podman[477805]: 2025-10-03 10:46:54.048871107 +0000 UTC m=+0.113429894 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
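[annotation] The three health_status=healthy events above come from podman's periodic healthchecks of ceilometer_agent_ipmi, podman_exporter, and kepler (each wrapping the /openstack/healthcheck script named in its config_data). The same checks can be run by hand; `podman healthcheck run` exits 0 when healthy. A minimal sketch, container names taken from the events above:

import subprocess

for name in ("ceilometer_agent_ipmi", "podman_exporter", "kepler"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(f"{name}: {'healthy' if rc == 0 else f'unhealthy (rc={rc})'}")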
Oct  3 10:46:54 compute-0 podman[477876]: 2025-10-03 10:46:54.197126935 +0000 UTC m=+0.069749845 container create 57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_saha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:46:54 compute-0 podman[477876]: 2025-10-03 10:46:54.16699447 +0000 UTC m=+0.039617370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:46:54 compute-0 systemd[1]: Started libpod-conmon-57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e.scope.
Oct  3 10:46:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f92b97528f075506f49ef6f2b1145677f5aa792eb7f33f97855d5cc61f62d6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f92b97528f075506f49ef6f2b1145677f5aa792eb7f33f97855d5cc61f62d6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f92b97528f075506f49ef6f2b1145677f5aa792eb7f33f97855d5cc61f62d6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f92b97528f075506f49ef6f2b1145677f5aa792eb7f33f97855d5cc61f62d6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:54 compute-0 podman[477876]: 2025-10-03 10:46:54.392408201 +0000 UTC m=+0.265031171 container init 57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:46:54 compute-0 podman[477876]: 2025-10-03 10:46:54.409757336 +0000 UTC m=+0.282380246 container start 57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_saha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:46:54 compute-0 podman[477876]: 2025-10-03 10:46:54.416636697 +0000 UTC m=+0.289259627 container attach 57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 10:46:55 compute-0 inspiring_saha[477892]: {
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:    "0": [
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:        {
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "devices": [
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "/dev/loop3"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            ],
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_name": "ceph_lv0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_size": "21470642176",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "name": "ceph_lv0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "tags": {
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cluster_name": "ceph",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.crush_device_class": "",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.encrypted": "0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osd_id": "0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.type": "block",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.vdo": "0"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            },
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "type": "block",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "vg_name": "ceph_vg0"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:        }
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:    ],
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:    "1": [
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:        {
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "devices": [
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "/dev/loop4"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            ],
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_name": "ceph_lv1",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_size": "21470642176",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "name": "ceph_lv1",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "tags": {
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cluster_name": "ceph",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.crush_device_class": "",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.encrypted": "0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osd_id": "1",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.type": "block",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.vdo": "0"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            },
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "type": "block",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "vg_name": "ceph_vg1"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:        }
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:    ],
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:    "2": [
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:        {
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "devices": [
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "/dev/loop5"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            ],
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_name": "ceph_lv2",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_size": "21470642176",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "name": "ceph_lv2",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "tags": {
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.cluster_name": "ceph",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.crush_device_class": "",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.encrypted": "0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osd_id": "2",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.type": "block",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:                "ceph.vdo": "0"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            },
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "type": "block",
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:            "vg_name": "ceph_vg2"
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:        }
Oct  3 10:46:55 compute-0 inspiring_saha[477892]:    ]
Oct  3 10:46:55 compute-0 inspiring_saha[477892]: }
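[annotation] The inspiring_saha JSON above is consistent with a ceph-volume lvm list report: a map from OSD id to its logical volumes, with the OSD identity carried in the ceph.* LV tags. A minimal sketch flattening it into one row per OSD (feed it the JSON block, without the log prefixes, on stdin):

import json
import sys

report = json.load(sys.stdin)  # the JSON document logged above
for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        # e.g. 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 25b10821-47d4-4e0b-9b6d-d16a0463c4d0
        print(osd_id, lv["lv_path"], lv["devices"][0], tags["ceph.osd_fsid"])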
Oct  3 10:46:55 compute-0 systemd[1]: libpod-57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e.scope: Deactivated successfully.
Oct  3 10:46:55 compute-0 podman[477901]: 2025-10-03 10:46:55.409889921 +0000 UTC m=+0.066099028 container died 57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_saha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f92b97528f075506f49ef6f2b1145677f5aa792eb7f33f97855d5cc61f62d6a-merged.mount: Deactivated successfully.
Oct  3 10:46:55 compute-0 podman[477901]: 2025-10-03 10:46:55.510900226 +0000 UTC m=+0.167109243 container remove 57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:46:55 compute-0 systemd[1]: libpod-conmon-57b45431d99f6cbcaae69cc8a39455103ee3878a9bdd7e319e4f473f10f4629e.scope: Deactivated successfully.
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:46:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
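[annotation] The pg_autoscaler lines above follow a simple rule: pg target = capacity_ratio x bias x (target PGs per OSD x number of OSDs), which is then quantized to a power of two subject to per-pool minimums. The factor 100 x 3 = 300 is an inference (the default mon_target_pg_per_osd of 100 across this host's 3 OSDs), but it reproduces every logged value exactly; a minimal check:

cases = {  # pool: (capacity_ratio, bias, logged pg target), copied from the lines above
    "vms":                (0.000551649390343166,    1.0, 0.1654948171029498),
    "images":             (0.00025334537995702286,  1.0, 0.07600361398710685),
    "cephfs.cephfs.meta": (5.087256625643029e-07,   4.0, 0.0006104707950771635),
    "default.rgw.meta":   (1.2718141564107572e-07,  4.0, 0.00015261769876929088),
}
for pool, (ratio, bias, logged) in cases.items():
    target = ratio * bias * 100 * 3  # assumed mon_target_pg_per_osd=100, 3 OSDs
    assert abs(target - logged) < 1e-15, pool
    print(f"{pool}: {target:.10g} == logged {logged:.10g}")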
Oct  3 10:46:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:56 compute-0 podman[478055]: 2025-10-03 10:46:56.704693435 +0000 UTC m=+0.079068894 container create f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 10:46:56 compute-0 podman[478055]: 2025-10-03 10:46:56.672200243 +0000 UTC m=+0.046575812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:46:56 compute-0 systemd[1]: Started libpod-conmon-f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb.scope.
Oct  3 10:46:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:46:56 compute-0 podman[478055]: 2025-10-03 10:46:56.86315704 +0000 UTC m=+0.237532509 container init f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:46:56 compute-0 podman[478055]: 2025-10-03 10:46:56.875402922 +0000 UTC m=+0.249778381 container start f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 10:46:56 compute-0 podman[478055]: 2025-10-03 10:46:56.879901146 +0000 UTC m=+0.254276615 container attach f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:46:56 compute-0 trusting_jang[478071]: 167 167
Oct  3 10:46:56 compute-0 systemd[1]: libpod-f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb.scope: Deactivated successfully.
Oct  3 10:46:56 compute-0 podman[478055]: 2025-10-03 10:46:56.884103691 +0000 UTC m=+0.258479150 container died f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-89dc213d57e69019730af75df0c00ad25a1241f5768cc7f9a1d8bdaa4d8cb75e-merged.mount: Deactivated successfully.
Oct  3 10:46:56 compute-0 podman[478055]: 2025-10-03 10:46:56.936642984 +0000 UTC m=+0.311018443 container remove f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 10:46:56 compute-0 systemd[1]: libpod-conmon-f50951c4d6a0d9686ae1dcee52d0893f6aa5f5f80b95de5e5805a0ea146ce5bb.scope: Deactivated successfully.
Oct  3 10:46:57 compute-0 podman[478095]: 2025-10-03 10:46:57.182519879 +0000 UTC m=+0.081781480 container create df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:46:57 compute-0 podman[478095]: 2025-10-03 10:46:57.14321026 +0000 UTC m=+0.042471921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:46:57 compute-0 systemd[1]: Started libpod-conmon-df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4.scope.
Oct  3 10:46:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066edda81b0c4762cd74cb72a0f713219a87e1ab00c3fd9eaebc819cfa3c87ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066edda81b0c4762cd74cb72a0f713219a87e1ab00c3fd9eaebc819cfa3c87ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066edda81b0c4762cd74cb72a0f713219a87e1ab00c3fd9eaebc819cfa3c87ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/066edda81b0c4762cd74cb72a0f713219a87e1ab00c3fd9eaebc819cfa3c87ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:46:57 compute-0 podman[478095]: 2025-10-03 10:46:57.36298351 +0000 UTC m=+0.262245161 container init df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:46:57 compute-0 podman[478095]: 2025-10-03 10:46:57.380826941 +0000 UTC m=+0.280088552 container start df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 10:46:57 compute-0 podman[478095]: 2025-10-03 10:46:57.387462374 +0000 UTC m=+0.286723985 container attach df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:46:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:46:58 compute-0 nova_compute[351685]: 2025-10-03 10:46:58.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:46:58 compute-0 nova_compute[351685]: 2025-10-03 10:46:58.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:46:58 compute-0 clever_edison[478111]: {
Oct  3 10:46:58 compute-0 clever_edison[478111]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "osd_id": 1,
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "type": "bluestore"
Oct  3 10:46:58 compute-0 clever_edison[478111]:    },
Oct  3 10:46:58 compute-0 clever_edison[478111]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "osd_id": 2,
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "type": "bluestore"
Oct  3 10:46:58 compute-0 clever_edison[478111]:    },
Oct  3 10:46:58 compute-0 clever_edison[478111]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "osd_id": 0,
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:46:58 compute-0 clever_edison[478111]:        "type": "bluestore"
Oct  3 10:46:58 compute-0 clever_edison[478111]:    }
Oct  3 10:46:58 compute-0 clever_edison[478111]: }
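[annotation] The clever_edison JSON above lists the same three bluestore OSDs keyed by osd_uuid, which should line up with the ceph.osd_fsid tags in the earlier LVM report, and its /dev/mapper names are just vg-lv joined by a dash. A minimal cross-check of the two documents (feed both JSON blocks on stdin, separated by a NUL byte; the framing is an assumption of this sketch):

import json
import sys

lvm_report, raw_report = (json.loads(part) for part in sys.stdin.read().split("\x00"))
by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
           for lvs in lvm_report.values() for lv in lvs}
for osd_uuid, entry in raw_report.items():
    lv = by_fsid[osd_uuid]
    expected = "/dev/mapper/" + lv["vg_name"] + "-" + lv["lv_name"]
    assert entry["device"] == expected
    assert entry["osd_id"] == int(lv["tags"]["ceph.osd_id"])
    print(osd_uuid[:8], entry["osd_id"], entry["device"], entry["type"])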
Oct  3 10:46:58 compute-0 systemd[1]: libpod-df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4.scope: Deactivated successfully.
Oct  3 10:46:58 compute-0 podman[478095]: 2025-10-03 10:46:58.55654988 +0000 UTC m=+1.455811521 container died df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:46:58 compute-0 systemd[1]: libpod-df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4.scope: Consumed 1.176s CPU time.
Oct  3 10:46:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-066edda81b0c4762cd74cb72a0f713219a87e1ab00c3fd9eaebc819cfa3c87ba-merged.mount: Deactivated successfully.
Oct  3 10:46:58 compute-0 podman[478095]: 2025-10-03 10:46:58.658717233 +0000 UTC m=+1.557978844 container remove df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:46:58 compute-0 systemd[1]: libpod-conmon-df4d98915753ad891db3405e9bb17fbcf4535954bf2a14d0fa576da78418efe4.scope: Deactivated successfully.
Oct  3 10:46:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:46:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:46:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev eab4ccb7-66b5-4865-86cb-0777833d3ef6 does not exist
Oct  3 10:46:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a1f5935d-6a21-4b1d-8d6a-3123b6783b06 does not exist
Oct  3 10:46:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:46:59 compute-0 podman[157165]: time="2025-10-03T10:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:46:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:46:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9080 "" "Go-http-client/1.1"
Oct  3 10:47:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:01 compute-0 openstack_network_exporter[367524]: ERROR   10:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:47:01 compute-0 openstack_network_exporter[367524]: ERROR   10:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:47:01 compute-0 openstack_network_exporter[367524]: ERROR   10:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:47:01 compute-0 openstack_network_exporter[367524]: ERROR   10:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:47:01 compute-0 openstack_network_exporter[367524]: ERROR   10:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:47:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:03 compute-0 nova_compute[351685]: 2025-10-03 10:47:03.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:03 compute-0 nova_compute[351685]: 2025-10-03 10:47:03.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:08 compute-0 nova_compute[351685]: 2025-10-03 10:47:08.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2394: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:08 compute-0 nova_compute[351685]: 2025-10-03 10:47:08.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:08 compute-0 nova_compute[351685]: 2025-10-03 10:47:08.750 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:08 compute-0 nova_compute[351685]: 2025-10-03 10:47:08.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:47:08 compute-0 nova_compute[351685]: 2025-10-03 10:47:08.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:47:09 compute-0 nova_compute[351685]: 2025-10-03 10:47:09.662 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:47:09 compute-0 nova_compute[351685]: 2025-10-03 10:47:09.663 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:47:09 compute-0 nova_compute[351685]: 2025-10-03 10:47:09.663 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:47:09 compute-0 nova_compute[351685]: 2025-10-03 10:47:09.664 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:47:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:10 compute-0 podman[478206]: 2025-10-03 10:47:10.896144456 +0000 UTC m=+0.131837264 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:47:10 compute-0 podman[478205]: 2025-10-03 10:47:10.915603789 +0000 UTC m=+0.155756541 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:47:10 compute-0 podman[478207]: 2025-10-03 10:47:10.957600904 +0000 UTC m=+0.188833000 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:47:11 compute-0 nova_compute[351685]: 2025-10-03 10:47:11.148 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:47:11 compute-0 nova_compute[351685]: 2025-10-03 10:47:11.167 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:47:11 compute-0 nova_compute[351685]: 2025-10-03 10:47:11.167 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:47:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:13 compute-0 nova_compute[351685]: 2025-10-03 10:47:13.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:13 compute-0 nova_compute[351685]: 2025-10-03 10:47:13.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:13 compute-0 nova_compute[351685]: 2025-10-03 10:47:13.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:13 compute-0 nova_compute[351685]: 2025-10-03 10:47:13.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:47:13 compute-0 podman[478269]: 2025-10-03 10:47:13.893547725 +0000 UTC m=+0.142961220 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:47:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:47:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:47:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:16 compute-0 nova_compute[351685]: 2025-10-03 10:47:16.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:16 compute-0 podman[478290]: 2025-10-03 10:47:16.861915483 +0000 UTC m=+0.110047706 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 10:47:16 compute-0 podman[478288]: 2025-10-03 10:47:16.867366908 +0000 UTC m=+0.119239601 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:47:16 compute-0 podman[478289]: 2025-10-03 10:47:16.8755464 +0000 UTC m=+0.121140132 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 10:47:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:17 compute-0 nova_compute[351685]: 2025-10-03 10:47:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:18 compute-0 nova_compute[351685]: 2025-10-03 10:47:18.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:18 compute-0 nova_compute[351685]: 2025-10-03 10:47:18.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:19 compute-0 nova_compute[351685]: 2025-10-03 10:47:19.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:19 compute-0 nova_compute[351685]: 2025-10-03 10:47:19.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:47:19 compute-0 nova_compute[351685]: 2025-10-03 10:47:19.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:47:19 compute-0 nova_compute[351685]: 2025-10-03 10:47:19.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:47:19 compute-0 nova_compute[351685]: 2025-10-03 10:47:19.776 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:47:19 compute-0 nova_compute[351685]: 2025-10-03 10:47:19.777 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:47:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:47:20 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261490595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:47:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2400: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:20 compute-0 nova_compute[351685]: 2025-10-03 10:47:20.322 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:47:20 compute-0 nova_compute[351685]: 2025-10-03 10:47:20.412 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:47:20 compute-0 nova_compute[351685]: 2025-10-03 10:47:20.414 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:47:20 compute-0 nova_compute[351685]: 2025-10-03 10:47:20.415 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:47:20 compute-0 nova_compute[351685]: 2025-10-03 10:47:20.889 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:47:20 compute-0 nova_compute[351685]: 2025-10-03 10:47:20.891 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3812MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:47:20 compute-0 nova_compute[351685]: 2025-10-03 10:47:20.891 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:47:20 compute-0 nova_compute[351685]: 2025-10-03 10:47:20.892 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.128 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.129 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.129 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.321 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:47:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:47:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412975490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.836 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.849 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.877 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.880 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:47:21 compute-0 nova_compute[351685]: 2025-10-03 10:47:21.881 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:47:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:23 compute-0 nova_compute[351685]: 2025-10-03 10:47:23.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:23 compute-0 nova_compute[351685]: 2025-10-03 10:47:23.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:47:23 compute-0 nova_compute[351685]: 2025-10-03 10:47:23.880 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:23 compute-0 nova_compute[351685]: 2025-10-03 10:47:23.881 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:23 compute-0 nova_compute[351685]: 2025-10-03 10:47:23.882 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2402: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:24 compute-0 nova_compute[351685]: 2025-10-03 10:47:24.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:47:24 compute-0 podman[478397]: 2025-10-03 10:47:24.860786724 +0000 UTC m=+0.105801720 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, container_name=kepler, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release-0.7.12=)
Oct  3 10:47:24 compute-0 podman[478396]: 2025-10-03 10:47:24.862996135 +0000 UTC m=+0.104856640 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:47:24 compute-0 podman[478398]: 2025-10-03 10:47:24.867368625 +0000 UTC m=+0.096275785 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 10:47:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2403: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.413850) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488447413913, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 250, "total_data_size": 593744, "memory_usage": 604336, "flush_reason": "Manual Compaction"}
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488447420423, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 417618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48663, "largest_seqno": 49204, "table_properties": {"data_size": 414889, "index_size": 765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7257, "raw_average_key_size": 20, "raw_value_size": 409293, "raw_average_value_size": 1156, "num_data_blocks": 34, "num_entries": 354, "num_filter_entries": 354, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759488411, "oldest_key_time": 1759488411, "file_creation_time": 1759488447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 6626 microseconds, and 2619 cpu microseconds.
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.420477) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 417618 bytes OK
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.420492) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.423882) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.423904) EVENT_LOG_v1 {"time_micros": 1759488447423896, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.423923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 590684, prev total WAL file size 590684, number of live WAL files 2.
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.425071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303034' seq:72057594037927935, type:22 .. '6D6772737461740032323535' seq:0, type:0; will stop at (end)
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(407KB)], [116(9354KB)]
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488447425157, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 9996490, "oldest_snapshot_seqno": -1}
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 6122 keys, 6868463 bytes, temperature: kUnknown
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488447478647, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 6868463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6832219, "index_size": 19814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 160976, "raw_average_key_size": 26, "raw_value_size": 6725708, "raw_average_value_size": 1098, "num_data_blocks": 779, "num_entries": 6122, "num_filter_entries": 6122, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759488447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.478982) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 6868463 bytes
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.481817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.6 rd, 128.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(40.4) write-amplify(16.4) OK, records in: 6620, records dropped: 498 output_compression: NoCompression
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.481850) EVENT_LOG_v1 {"time_micros": 1759488447481834, "job": 70, "event": "compaction_finished", "compaction_time_micros": 53584, "compaction_time_cpu_micros": 38575, "output_level": 6, "num_output_files": 1, "total_output_size": 6868463, "num_input_records": 6620, "num_output_records": 6122, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488447482177, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488447486074, "job": 70, "event": "table_file_deletion", "file_number": 116}
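The job-70 cluster above, from compaction_started through the two table_file_deletion events, is machine-readable: everything after the EVENT_LOG_v1 token is a JSON object, and the summary line's write-amplify(16.4) is just output bytes over L0 input bytes (6.6 MB / 0.4 MB). A minimal parsing sketch in Python, assuming the capture has been saved to a file (the path below is a placeholder):

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def iter_rocksdb_events(path):
        # Yield the JSON payload of every EVENT_LOG_v1 line in the log file.
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if m:
                    yield json.loads(m.group(1))

    # "messages.log" is a placeholder for wherever this capture is stored.
    for ev in iter_rocksdb_events("messages.log"):
        if ev.get("event") == "compaction_finished":
            print(ev["job"], ev["output_level"], ev["total_output_size"],
                  ev["compaction_time_micros"])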
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.424824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.486369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.486375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.486377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.486379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:47:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:47:27.486381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:47:28 compute-0 nova_compute[351685]: 2025-10-03 10:47:28.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:28 compute-0 nova_compute[351685]: 2025-10-03 10:47:28.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:29 compute-0 podman[157165]: time="2025-10-03T10:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:47:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:47:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9094 "" "Go-http-client/1.1"
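The two podman lines above are the API service's access log: a Go client (Go-http-client/1.1) listing containers and their stats through the libpod REST API. A minimal sketch of the same containers/json call from Python over the service's Unix socket; the socket path below is the conventional rootful default and is an assumption, since the log does not show it:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTP over a Unix socket, which is how the podman API service listens.
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    for c in json.loads(resp.read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))
    conn.close()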
Oct  3 10:47:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2405: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:31 compute-0 openstack_network_exporter[367524]: ERROR   10:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:47:31 compute-0 openstack_network_exporter[367524]: ERROR   10:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:47:31 compute-0 openstack_network_exporter[367524]: ERROR   10:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:47:31 compute-0 openstack_network_exporter[367524]: ERROR   10:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:47:31 compute-0 openstack_network_exporter[367524]: ERROR   10:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
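All four exporter errors share one root cause: appctl-style calls are addressed through a control socket named <daemon>.<pid>.ctl in the daemon's run directory, and no such socket exists on this host for ovsdb-server or ovn-northd (and no userspace datapath exists for the pmd queries). A pre-flight check sketch, assuming the stock run directory; the exporter's actual search path may differ:

    import glob
    import os

    RUNDIR = "/var/run/openvswitch"  # assumption: default OVS run directory

    def find_ctl(daemon):
        # appctl targets a socket named <daemon>.<pid>.ctl; glob for it.
        hits = glob.glob(os.path.join(RUNDIR, f"{daemon}.*.ctl"))
        return hits[0] if hits else None

    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        print(daemon, "->", find_ctl(daemon) or "no control socket found")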
Oct  3 10:47:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
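The monitor's periodic _set_new_cache_sizes line reports its cache autotuning in bytes. A worked conversion of the figures from the line itself (field meanings are inferred from their names; kv_alloc is the rocksdb key-value cache share):

    sizes = {"cache_size": 1020054731, "inc_alloc": 348127232,
             "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, v in sizes.items():
        print(f"{name}: {v / 2**20:.0f} MiB")
    # cache_size ~973 MiB total; kv_alloc is exactly 304 MiB,
    # inc_alloc/full_alloc exactly 332 MiB each.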
Oct  3 10:47:33 compute-0 nova_compute[351685]: 2025-10-03 10:47:33.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:33 compute-0 nova_compute[351685]: 2025-10-03 10:47:33.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2408: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:38 compute-0 nova_compute[351685]: 2025-10-03 10:47:38.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:38 compute-0 nova_compute[351685]: 2025-10-03 10:47:38.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:39 compute-0 nova_compute[351685]: 2025-10-03 10:47:39.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:47:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
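The recurring pgmap DBG lines are the mgr's per-epoch cluster digest: PG count, PG states, and capacity. A regex sketch for trending them, with the field layout taken from the lines above:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v2410: 321 pgs: 321 active+clean; "
            "78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail")
    print(PGMAP_RE.search(line).groupdict())
    # {'ver': '2410', 'pgs': '321', 'data': '78 MiB', 'used': '264 MiB', ...}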
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.895 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.896 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
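These two manager lines describe the scheduling model for everything that follows: the [pollsters] source gets a ThreadPoolExecutor with a single worker thread, so its pollsters queue and run strictly one after another, and a cycle with many pollsters simply takes longer. A generic sketch of that behavior (nothing ceilometer-specific is assumed):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        time.sleep(0.1)   # stand-in for one pollster's libvirt query
        return meter

    meters = ["disk.device.capacity", "disk.device.read.bytes",
              "power.state", "network.incoming.bytes.delta"]

    # With max_workers=1 the four tasks serialize (~0.4 s total); more
    # pollsters than workers stretches the cycle, as the log warns.
    with ThreadPoolExecutor(max_workers=1) as ex:
        for meter in ex.map(poll, meters):
            print("finished", meter)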
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.896 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.906 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
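The discovery line dumps the agent's libvirt-derived view of the single local instance as a Python dict; every sample below is keyed on it. A trimmed copy showing the fields the later lines use:

    instance = {
        "id": "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
        "name": "test_0",
        "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512,
                   "disk": 1, "ephemeral": 1, "swap": 0},
        "OS-EXT-STS:vm_state": "running",
        "status": "active",
    }
    # Samples are reported as <instance id>/<meter>, e.g. the
    # "b43db93c-.../disk.device.capacity" lines further down.
    print(instance["id"], instance["flavor"]["name"], instance["status"])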
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:47:40.908872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.914 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.915 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.915 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:47:40.916641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.916 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.917 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.917 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.918 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:47:40.918163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.943 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.944 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.944 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
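Three disk.device.capacity samples for one instance means three block devices. The values match the m1.small flavor in the discovery record: two devices of 1073741824 bytes (exactly 1 GiB, consistent with the 1 GB root disk and 1 GB ephemeral disk) plus one 485376-byte device, which is plausibly a config drive (an inference; the log does not name the devices):

    for vol in (1073741824, 1073741824, 485376):
        print(f"{vol} bytes = {vol / 2**30:.6f} GiB")
    # 1073741824 bytes = 1.000000 GiB  (root and ephemeral disks)
    # 485376 bytes     = 0.000452 GiB  (likely the config drive)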
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.944 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.944 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.945 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:40.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:47:40.945081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:47:41.011836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.014 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:47:41.015472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
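disk.device.read.latency is a cumulative nanosecond counter (libvirt's total read time) and disk.device.read.requests the matching operation count, so their ratio gives the mean read latency per device since the counters started. For the first device above:

    read_time_ns = 1351272306   # first disk.device.read.latency sample
    read_reqs    = 840          # first disk.device.read.requests sample
    print(f"mean read latency: {read_time_ns / read_reqs / 1e6:.2f} ms")
    # ~1.61 ms per read request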
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:47:41.018883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:47:41.022502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:47:41.025858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.026 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:47:41.029474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
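The power.state volume of 1 is libvirt's virDomainState enum value for a running domain, consistent with 'OS-EXT-STS:vm_state': 'running' in the discovery dump. The full mapping for reference:

    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])   # "running", matching the sample volume of 1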
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:47:41.067909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.072 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.074 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:47:41.075805) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.077 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.081 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:47:41.081542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.082 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
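Here the rate pollster takes the other branch of that cycle and is skipped outright. One plausible reading, consistent with the message text: within a single polling interval, resources already claimed by an earlier pollster of the same family are no longer "new", so the .rate variant finds nothing left to do. Illustrative sketch only; ceilometer's actual bookkeeping may differ:

    # Hypothetical per-cycle "new resource" filter behind the skip message.
    seen = set()

    def fresh(resources):
        new = [r for r in resources if r not in seen]
        seen.update(resources)
        return new

    print(fresh(["b43db93c"]))   # earlier pollster: ['b43db93c']
    print(fresh(["b43db93c"]))   # .rate pollster: [] -> skip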
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.085 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.087 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:47:41.086962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.091 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.091 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:47:41.091014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
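Note the worker id column: samples are produced under id 14 while every "Updated heartbeat" line is emitted under id 12, so heartbeat persistence appears to be handed off to a dedicated worker. A generic sketch of that pattern with a queue; the real wiring inside ceilometer may differ:

    # Hand-off of heartbeat updates to a separate worker, mirroring the
    # 14 -> 12 split in the log. Generic pattern, not ceilometer's code.
    import queue, threading
    from datetime import datetime, timezone

    beats = queue.Queue()

    def updater():
        while (meter := beats.get()) is not None:
            ts = datetime.now(timezone.utc).isoformat()
            print(f"Updated heartbeat for {meter} ({ts})")

    t = threading.Thread(target=updater, daemon=True)
    t.start()
    beats.put("network.incoming.packets")
    beats.put(None)   # sentinel to stop the worker
    t.join()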
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.093 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.094 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:47:41.094505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.095 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.097 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:47:41.096789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.097 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.098 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:47:41.099786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.100 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 70640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
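The cpu meter is cumulative guest CPU time in nanoseconds, so the 70640000000 above is about 70.64 s of CPU time consumed since the instance started. Utilization is derived by differencing two consecutive samples; the previous value, polling interval and vCPU count below are assumptions for illustration:

    # Deriving CPU utilization from two cumulative cpu samples.
    prev_ns, curr_ns = 70_000_000_000, 70_640_000_000   # prev value assumed
    interval_s, vcpus = 300.0, 1                        # assumed interval/flavor

    util_pct = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util_pct:.2f}% CPU")   # 0.21% -> a mostly idle guest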
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.101 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.102 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:47:41.102548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.102 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.105 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.105 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:47:41.105071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.106 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.107 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:47:41.106842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.107 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.108 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.108 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:47:41.108538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
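memory.usage is reported in MB, and the odd-looking 48.81640625 is simply a KiB figure divided by 1024: libvirt's memory stats are in KiB, and 49988 KiB / 1024 = 48.81640625. A one-liner to confirm the arithmetic (the KiB value is inferred from the logged volume, not read from this host):

    rss_kib = 49988          # implied libvirt memory stat for this guest
    print(rss_kib / 1024)    # 48.81640625, the volume logged above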
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.110 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.110 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.110 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:47:41.110642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.111 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.112 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.112 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:47:41.112275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.113 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.114 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.119 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:47:41.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:47:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:47:41.646 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:47:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:47:41.647 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:47:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:47:41.648 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
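This acquire/wait/release triple is oslo.concurrency's lock decorator doing its bookkeeping around ProcessMonitor._check_child_processes (waited 0.001 s, held 0.001 s). Minimal equivalent usage of the same API; pass external=True when the lock must also exclude other processes:

    # The decorator that produces the "Acquiring lock"/"acquired"/"released"
    # DEBUG lines above (visible once oslo logging is configured at DEBUG).
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass   # body runs with the named in-process lock held

    check_child_processes()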
Oct  3 10:47:41 compute-0 podman[478457]: 2025-10-03 10:47:41.876712 +0000 UTC m=+0.124387685 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 10:47:41 compute-0 podman[478458]: 2025-10-03 10:47:41.885669287 +0000 UTC m=+0.131952648 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 10:47:41 compute-0 podman[478459]: 2025-10-03 10:47:41.927704804 +0000 UTC m=+0.163197259 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
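These podman events fire each time a container's configured healthcheck ('test': '/openstack/healthcheck ...') runs and reports healthy, with health_failing_streak counting consecutive failures. The same state can be read back on demand; a small wrapper (container name taken from the log; older podman releases expose the field as .State.Healthcheck.Status instead):

    import subprocess

    def health(container: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", container],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(health("ovn_controller"))   # e.g. "healthy"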
Oct  3 10:47:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2411: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:43 compute-0 nova_compute[351685]: 2025-10-03 10:47:43.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:43 compute-0 nova_compute[351685]: 2025-10-03 10:47:43.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
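The recurring "[POLLIN] on fd 25" lines are ovsdbapp's OVSDB IDL loop waking because its database socket became readable; the primitive underneath is ovs.poller, the module named in the log path. A self-contained sketch using a pipe in place of the OVSDB socket (requires the ovs Python bindings):

    import os, select
    import ovs.poller

    r, w = os.pipe()
    os.write(w, b"x")               # make the read end readable

    p = ovs.poller.Poller()
    p.fd_wait(r, select.POLLIN)     # the IDL registers fd 25 the same way
    p.block()                       # returns once the fd is readable
    print("woke up on fd", r)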
Oct  3 10:47:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:44 compute-0 podman[478517]: 2025-10-03 10:47:44.830665896 +0000 UTC m=+0.131801472 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:47:46
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
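One balancer pass in full: scan the pools, run the upmap optimizer against a 5% max-misplaced budget, and prepare no changes out of a budget of 10, since the cluster is already flat (321 PGs, all active+clean). The same summary is available from the CLI; a thin wrapper over a standard mgr command:

    import json, subprocess

    def ceph(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    print(ceph("balancer", "status"))   # mode, active flag, last run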
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:47:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
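The rbd_support module reloads its per-pool mirror-snapshot and trash-purge schedules here, one load_schedules pass per handler, which is why each pool appears twice. The configured schedules (apparently none in this log) can be listed with the standard rbd commands, wrapped in Python here to stay in one language:

    import subprocess

    for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "-p", "vms"],
                ["rbd", "trash", "purge", "schedule", "ls", "-p", "vms"]):
        subprocess.run(cmd, check=False)   # prints any configured schedules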
Oct  3 10:47:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:47 compute-0 podman[478536]: 2025-10-03 10:47:47.849874742 +0000 UTC m=+0.101644377 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:47:47 compute-0 podman[478537]: 2025-10-03 10:47:47.869840292 +0000 UTC m=+0.114912712 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 10:47:47 compute-0 podman[478538]: 2025-10-03 10:47:47.888492529 +0000 UTC m=+0.129967835 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:47:48 compute-0 nova_compute[351685]: 2025-10-03 10:47:48.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:48 compute-0 nova_compute[351685]: 2025-10-03 10:47:48.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:53 compute-0 nova_compute[351685]: 2025-10-03 10:47:53.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:53 compute-0 nova_compute[351685]: 2025-10-03 10:47:53.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:47:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1579148616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:47:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:47:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1579148616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:47:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:47:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:47:55 compute-0 podman[478601]: 2025-10-03 10:47:55.847470579 +0000 UTC m=+0.083440029 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:47:55 compute-0 podman[478600]: 2025-10-03 10:47:55.858612156 +0000 UTC m=+0.103166382 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=)
Oct  3 10:47:55 compute-0 podman[478599]: 2025-10-03 10:47:55.882670548 +0000 UTC m=+0.124773995 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:47:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:47:58 compute-0 nova_compute[351685]: 2025-10-03 10:47:58.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:47:58 compute-0 nova_compute[351685]: 2025-10-03 10:47:58.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:47:59 compute-0 podman[157165]: time="2025-10-03T10:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:47:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:47:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9083 "" "Go-http-client/1.1"
Oct  3 10:48:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 10:48:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:48:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:48:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:48:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:48:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:48:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:48:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:48:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev abe35d67-c168-4a7c-ae77-a90fa696508d does not exist
Oct  3 10:48:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ea8befaf-1403-4cab-b7d6-300fe22ba97e does not exist
Oct  3 10:48:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 72da3b1f-e5c6-46fe-8f2d-6c2951f2d9de does not exist
Oct  3 10:48:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:48:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:48:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:48:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:48:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:48:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:48:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:48:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:48:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:48:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:48:01 compute-0 openstack_network_exporter[367524]: ERROR   10:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:48:01 compute-0 openstack_network_exporter[367524]: ERROR   10:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:48:01 compute-0 openstack_network_exporter[367524]: ERROR   10:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:48:01 compute-0 openstack_network_exporter[367524]: ERROR   10:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:48:01 compute-0 openstack_network_exporter[367524]: ERROR   10:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:48:01 compute-0 podman[478928]: 2025-10-03 10:48:01.460127438 +0000 UTC m=+0.082435307 container create 4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_driscoll, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 10:48:01 compute-0 podman[478928]: 2025-10-03 10:48:01.430323852 +0000 UTC m=+0.052631751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:48:01 compute-0 systemd[1]: Started libpod-conmon-4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f.scope.
Oct  3 10:48:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:48:01 compute-0 podman[478928]: 2025-10-03 10:48:01.607711375 +0000 UTC m=+0.230019304 container init 4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_driscoll, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:48:01 compute-0 podman[478928]: 2025-10-03 10:48:01.623662928 +0000 UTC m=+0.245970817 container start 4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_driscoll, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:48:01 compute-0 podman[478928]: 2025-10-03 10:48:01.63092073 +0000 UTC m=+0.253228589 container attach 4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_driscoll, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:48:01 compute-0 bold_driscoll[478944]: 167 167
Oct  3 10:48:01 compute-0 systemd[1]: libpod-4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f.scope: Deactivated successfully.
Oct  3 10:48:01 compute-0 podman[478928]: 2025-10-03 10:48:01.636416066 +0000 UTC m=+0.258723955 container died 4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_driscoll, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-81f4c32a26402439e375ccea9af4881c4883876f1a9d84b9ea6760711c86ef6d-merged.mount: Deactivated successfully.
Oct  3 10:48:01 compute-0 podman[478928]: 2025-10-03 10:48:01.715899888 +0000 UTC m=+0.338207747 container remove 4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:48:01 compute-0 systemd[1]: libpod-conmon-4b45e666d9f2eafba9d5e577c965d926189d798bd65ee4486a337221cd103e6f.scope: Deactivated successfully.
Oct  3 10:48:02 compute-0 podman[478966]: 2025-10-03 10:48:02.029169114 +0000 UTC m=+0.103399960 container create e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:48:02 compute-0 podman[478966]: 2025-10-03 10:48:01.992693443 +0000 UTC m=+0.066924339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:48:02 compute-0 systemd[1]: Started libpod-conmon-e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77.scope.
Oct  3 10:48:02 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e69c3399f6bcc6d33097d65a7fb502261026e38ad943453abe9911ab533cb20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e69c3399f6bcc6d33097d65a7fb502261026e38ad943453abe9911ab533cb20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e69c3399f6bcc6d33097d65a7fb502261026e38ad943453abe9911ab533cb20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e69c3399f6bcc6d33097d65a7fb502261026e38ad943453abe9911ab533cb20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e69c3399f6bcc6d33097d65a7fb502261026e38ad943453abe9911ab533cb20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:02 compute-0 podman[478966]: 2025-10-03 10:48:02.17735313 +0000 UTC m=+0.251583956 container init e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:48:02 compute-0 podman[478966]: 2025-10-03 10:48:02.204211762 +0000 UTC m=+0.278442608 container start e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:48:02 compute-0 podman[478966]: 2025-10-03 10:48:02.217686805 +0000 UTC m=+0.291917631 container attach e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:48:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:03 compute-0 nova_compute[351685]: 2025-10-03 10:48:03.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:03 compute-0 nova_compute[351685]: 2025-10-03 10:48:03.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:03 compute-0 busy_hawking[478982]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:48:03 compute-0 busy_hawking[478982]: --> relative data size: 1.0
Oct  3 10:48:03 compute-0 busy_hawking[478982]: --> All data devices are unavailable
Oct  3 10:48:03 compute-0 systemd[1]: libpod-e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77.scope: Deactivated successfully.
Oct  3 10:48:03 compute-0 systemd[1]: libpod-e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77.scope: Consumed 1.268s CPU time.
Oct  3 10:48:03 compute-0 podman[478966]: 2025-10-03 10:48:03.536765075 +0000 UTC m=+1.610995921 container died e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e69c3399f6bcc6d33097d65a7fb502261026e38ad943453abe9911ab533cb20-merged.mount: Deactivated successfully.
Oct  3 10:48:03 compute-0 podman[478966]: 2025-10-03 10:48:03.637103736 +0000 UTC m=+1.711334572 container remove e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 10:48:03 compute-0 systemd[1]: libpod-conmon-e55cbb5a4376f7c937976e9dadf5a0d51936abcab1d67b1b6275637409397a77.scope: Deactivated successfully.
Oct  3 10:48:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:04 compute-0 podman[479166]: 2025-10-03 10:48:04.828531979 +0000 UTC m=+0.085862137 container create 01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:48:04 compute-0 podman[479166]: 2025-10-03 10:48:04.793838165 +0000 UTC m=+0.051168363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:48:04 compute-0 systemd[1]: Started libpod-conmon-01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b.scope.
Oct  3 10:48:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:48:04 compute-0 podman[479166]: 2025-10-03 10:48:04.987534832 +0000 UTC m=+0.244865070 container init 01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:48:05 compute-0 podman[479166]: 2025-10-03 10:48:05.005615683 +0000 UTC m=+0.262945861 container start 01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:48:05 compute-0 podman[479166]: 2025-10-03 10:48:05.013158835 +0000 UTC m=+0.270489003 container attach 01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:48:05 compute-0 dreamy_matsumoto[479182]: 167 167
Oct  3 10:48:05 compute-0 systemd[1]: libpod-01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b.scope: Deactivated successfully.
Oct  3 10:48:05 compute-0 podman[479166]: 2025-10-03 10:48:05.021435831 +0000 UTC m=+0.278765959 container died 01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:48:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba9401041683995c41d72d0c43573de50165229d31f9a825b107c2638360b9dd-merged.mount: Deactivated successfully.
Oct  3 10:48:05 compute-0 podman[479166]: 2025-10-03 10:48:05.080879739 +0000 UTC m=+0.338209867 container remove 01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_matsumoto, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:48:05 compute-0 systemd[1]: libpod-conmon-01d0c25474617ba56cd7574d8f7de75773521a77941863c57c7baff6ce188f1b.scope: Deactivated successfully.
Oct  3 10:48:05 compute-0 podman[479204]: 2025-10-03 10:48:05.345012368 +0000 UTC m=+0.097143100 container create dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:48:05 compute-0 podman[479204]: 2025-10-03 10:48:05.303576257 +0000 UTC m=+0.055707059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:48:05 compute-0 systemd[1]: Started libpod-conmon-dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88.scope.
Oct  3 10:48:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3037fa1509bde7f0526280a0d59bbabfbbad5ccfda2dd4096dd98c26f01971/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3037fa1509bde7f0526280a0d59bbabfbbad5ccfda2dd4096dd98c26f01971/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3037fa1509bde7f0526280a0d59bbabfbbad5ccfda2dd4096dd98c26f01971/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c3037fa1509bde7f0526280a0d59bbabfbbad5ccfda2dd4096dd98c26f01971/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:05 compute-0 podman[479204]: 2025-10-03 10:48:05.521288106 +0000 UTC m=+0.273418878 container init dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pascal, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:48:05 compute-0 podman[479204]: 2025-10-03 10:48:05.540364148 +0000 UTC m=+0.292494850 container start dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:48:05 compute-0 podman[479204]: 2025-10-03 10:48:05.546834785 +0000 UTC m=+0.298965577 container attach dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pascal, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:48:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:06 compute-0 jolly_pascal[479218]: {
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:    "0": [
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:        {
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "devices": [
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "/dev/loop3"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            ],
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_name": "ceph_lv0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_size": "21470642176",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "name": "ceph_lv0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "tags": {
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cluster_name": "ceph",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.crush_device_class": "",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.encrypted": "0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osd_id": "0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.type": "block",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.vdo": "0"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            },
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "type": "block",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "vg_name": "ceph_vg0"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:        }
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:    ],
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:    "1": [
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:        {
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "devices": [
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "/dev/loop4"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            ],
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_name": "ceph_lv1",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_size": "21470642176",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "name": "ceph_lv1",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "tags": {
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cluster_name": "ceph",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.crush_device_class": "",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.encrypted": "0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osd_id": "1",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.type": "block",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.vdo": "0"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            },
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "type": "block",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "vg_name": "ceph_vg1"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:        }
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:    ],
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:    "2": [
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:        {
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "devices": [
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "/dev/loop5"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            ],
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_name": "ceph_lv2",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_size": "21470642176",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "name": "ceph_lv2",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "tags": {
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.cluster_name": "ceph",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.crush_device_class": "",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.encrypted": "0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osd_id": "2",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.type": "block",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:                "ceph.vdo": "0"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            },
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "type": "block",
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:            "vg_name": "ceph_vg2"
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:        }
Oct  3 10:48:06 compute-0 jolly_pascal[479218]:    ]
Oct  3 10:48:06 compute-0 jolly_pascal[479218]: }
Oct  3 10:48:06 compute-0 systemd[1]: libpod-dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88.scope: Deactivated successfully.
Oct  3 10:48:06 compute-0 podman[479204]: 2025-10-03 10:48:06.516131568 +0000 UTC m=+1.268262290 container died dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:48:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c3037fa1509bde7f0526280a0d59bbabfbbad5ccfda2dd4096dd98c26f01971-merged.mount: Deactivated successfully.
Oct  3 10:48:06 compute-0 podman[479204]: 2025-10-03 10:48:06.601897092 +0000 UTC m=+1.354027824 container remove dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:48:06 compute-0 systemd[1]: libpod-conmon-dcc34fee37823d59a811ca7ca449e2547b2ef1b629922fdfcbcc039a2cc60d88.scope: Deactivated successfully.
Oct  3 10:48:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:07 compute-0 podman[479375]: 2025-10-03 10:48:07.786215227 +0000 UTC m=+0.089239727 container create 736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 10:48:07 compute-0 podman[479375]: 2025-10-03 10:48:07.751858233 +0000 UTC m=+0.054882763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:48:07 compute-0 systemd[1]: Started libpod-conmon-736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a.scope.
Oct  3 10:48:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:48:07 compute-0 podman[479375]: 2025-10-03 10:48:07.937571834 +0000 UTC m=+0.240596384 container init 736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:48:07 compute-0 podman[479375]: 2025-10-03 10:48:07.958497556 +0000 UTC m=+0.261522046 container start 736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:48:07 compute-0 podman[479375]: 2025-10-03 10:48:07.965377257 +0000 UTC m=+0.268401807 container attach 736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 10:48:07 compute-0 vibrant_darwin[479390]: 167 167
Oct  3 10:48:07 compute-0 systemd[1]: libpod-736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a.scope: Deactivated successfully.
Oct  3 10:48:07 compute-0 podman[479375]: 2025-10-03 10:48:07.975691718 +0000 UTC m=+0.278716218 container died 736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:48:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c83c1f778e5f2ae9e4f4290b0aaf0f8cab19a4b9e8184da3d6ae53acd92ee6a0-merged.mount: Deactivated successfully.
Oct  3 10:48:08 compute-0 podman[479375]: 2025-10-03 10:48:08.052608097 +0000 UTC m=+0.355632587 container remove 736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:48:08 compute-0 nova_compute[351685]: 2025-10-03 10:48:08.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:48:08 compute-0 systemd[1]: libpod-conmon-736550783b59be1c1f669ebeb76224825d330815e2486cc83fc3e0cf8d943d9a.scope: Deactivated successfully.
Oct  3 10:48:08 compute-0 podman[479414]: 2025-10-03 10:48:08.321642422 +0000 UTC m=+0.070682759 container create 2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shannon, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:48:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:08 compute-0 podman[479414]: 2025-10-03 10:48:08.296063121 +0000 UTC m=+0.045103508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:48:08 compute-0 systemd[1]: Started libpod-conmon-2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6.scope.
Oct  3 10:48:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:48:08 compute-0 nova_compute[351685]: 2025-10-03 10:48:08.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97966f4fc6d742d98e4924e2f21cdaa36080cdaef04513d1f9f628a28c913f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97966f4fc6d742d98e4924e2f21cdaa36080cdaef04513d1f9f628a28c913f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97966f4fc6d742d98e4924e2f21cdaa36080cdaef04513d1f9f628a28c913f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb97966f4fc6d742d98e4924e2f21cdaa36080cdaef04513d1f9f628a28c913f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
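The four kernel messages above flag that these overlay bind targets sit on XFS with 32-bit second timestamps, which run out at 0x7fffffff seconds after the Unix epoch. A quick check (not from the log) shows why the kernel says 2038:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit quoted in the kernel messages above.
    limit = 0x7FFFFFFF  # 2147483647 seconds since 1970-01-01T00:00:00Z
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00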
Oct  3 10:48:08 compute-0 podman[479414]: 2025-10-03 10:48:08.495201843 +0000 UTC m=+0.244242250 container init 2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:48:08 compute-0 podman[479414]: 2025-10-03 10:48:08.517660714 +0000 UTC m=+0.266701091 container start 2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:48:08 compute-0 podman[479414]: 2025-10-03 10:48:08.524506434 +0000 UTC m=+0.273546871 container attach 2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  3 10:48:09 compute-0 nova_compute[351685]: 2025-10-03 10:48:09.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:48:09 compute-0 nova_compute[351685]: 2025-10-03 10:48:09.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:48:09 compute-0 nova_compute[351685]: 2025-10-03 10:48:09.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:48:09 compute-0 strange_shannon[479431]: {
Oct  3 10:48:09 compute-0 strange_shannon[479431]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "osd_id": 1,
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "type": "bluestore"
Oct  3 10:48:09 compute-0 strange_shannon[479431]:    },
Oct  3 10:48:09 compute-0 strange_shannon[479431]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "osd_id": 2,
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "type": "bluestore"
Oct  3 10:48:09 compute-0 strange_shannon[479431]:    },
Oct  3 10:48:09 compute-0 strange_shannon[479431]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "osd_id": 0,
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:48:09 compute-0 strange_shannon[479431]:        "type": "bluestore"
Oct  3 10:48:09 compute-0 strange_shannon[479431]:    }
Oct  3 10:48:09 compute-0 strange_shannon[479431]: }
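The JSON block printed by the strange_shannon container above maps OSD FSIDs to their LVM devices; it looks like ceph-volume list output gathered during cephadm's device scan, though the log does not name the exact command. A minimal parsing sketch, assuming the block has been saved to osds.json (the filename is hypothetical):

    import json

    # osds.json is assumed to hold the JSON block printed above.
    with open("osds.json") as f:
        osds = json.load(f)

    # One entry per OSD, keyed by osd_uuid; print id -> device mappings.
    for meta in sorted(osds.values(), key=lambda m: m["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")
    # osd.0: /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)
    # osd.1: /dev/mapper/ceph_vg1-ceph_lv1 (bluestore)
    # osd.2: /dev/mapper/ceph_vg2-ceph_lv2 (bluestore)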
Oct  3 10:48:09 compute-0 systemd[1]: libpod-2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6.scope: Deactivated successfully.
Oct  3 10:48:09 compute-0 podman[479414]: 2025-10-03 10:48:09.835190605 +0000 UTC m=+1.584230972 container died 2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:48:09 compute-0 systemd[1]: libpod-2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6.scope: Consumed 1.299s CPU time.
Oct  3 10:48:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb97966f4fc6d742d98e4924e2f21cdaa36080cdaef04513d1f9f628a28c913f-merged.mount: Deactivated successfully.
Oct  3 10:48:09 compute-0 podman[479414]: 2025-10-03 10:48:09.94250765 +0000 UTC m=+1.691548017 container remove 2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:48:09 compute-0 systemd[1]: libpod-conmon-2b7a887c08de94f7824de9e993ec82b914ab1ea262193cf0c3b9051324d9a6e6.scope: Deactivated successfully.
Oct  3 10:48:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:48:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:48:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:48:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:48:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 13495651-587a-400b-a421-92dccc762279 does not exist
Oct  3 10:48:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e0ca0745-69b0-47a8-a6d4-442b9287a3f4 does not exist
Oct  3 10:48:10 compute-0 nova_compute[351685]: 2025-10-03 10:48:10.294 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:48:10 compute-0 nova_compute[351685]: 2025-10-03 10:48:10.296 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:48:10 compute-0 nova_compute[351685]: 2025-10-03 10:48:10.297 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:48:10 compute-0 nova_compute[351685]: 2025-10-03 10:48:10.297 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:48:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:48:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:48:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:12 compute-0 nova_compute[351685]: 2025-10-03 10:48:12.405 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
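The instance_info_cache entry nova logs above nests addresses several levels deep (VIF -> network -> subnets -> ips -> floating_ips). A small walker over that shape; the data below is a trimmed reconstruction of the logged entry and the helper name is my own:

    # Minimal reconstruction of the cached entry above (irrelevant keys dropped).
    network_info = [{
        "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.158",
            "floating_ips": [{"address": "192.168.122.250"}],
        }]}]},
    }]

    def iter_addresses(vifs):
        # Walk VIF -> network -> subnets -> ips -> floating_ips.
        for vif in vifs:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], "fixed"
                    for fip in ip.get("floating_ips", []):
                        yield fip["address"], "floating"

    print(list(iter_addresses(network_info)))
    # [('192.168.0.158', 'fixed'), ('192.168.122.250', 'floating')]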
Oct  3 10:48:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:12 compute-0 nova_compute[351685]: 2025-10-03 10:48:12.423 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:48:12 compute-0 nova_compute[351685]: 2025-10-03 10:48:12.424 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:48:12 compute-0 podman[479526]: 2025-10-03 10:48:12.901751088 +0000 UTC m=+0.136247654 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 10:48:12 compute-0 podman[479528]: 2025-10-03 10:48:12.936986589 +0000 UTC m=+0.164029226 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:48:12 compute-0 podman[479527]: 2025-10-03 10:48:12.938616571 +0000 UTC m=+0.172909391 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Oct  3 10:48:13 compute-0 nova_compute[351685]: 2025-10-03 10:48:13.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:48:13 compute-0 nova_compute[351685]: 2025-10-03 10:48:13.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:48:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:14 compute-0 nova_compute[351685]: 2025-10-03 10:48:14.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:48:14 compute-0 nova_compute[351685]: 2025-10-03 10:48:14.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:48:15 compute-0 podman[479594]: 2025-10-03 10:48:15.870906054 +0000 UTC m=+0.126308625 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Oct  3 10:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:48:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:48:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:18 compute-0 nova_compute[351685]: 2025-10-03 10:48:18.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:48:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2429: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:18 compute-0 nova_compute[351685]: 2025-10-03 10:48:18.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:48:18 compute-0 nova_compute[351685]: 2025-10-03 10:48:18.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:48:18 compute-0 podman[479613]: 2025-10-03 10:48:18.872505611 +0000 UTC m=+0.127019659 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:48:18 compute-0 podman[479615]: 2025-10-03 10:48:18.880606631 +0000 UTC m=+0.121468031 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct  3 10:48:18 compute-0 podman[479614]: 2025-10-03 10:48:18.886149759 +0000 UTC m=+0.131568395 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  3 10:48:19 compute-0 nova_compute[351685]: 2025-10-03 10:48:19.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:48:19 compute-0 nova_compute[351685]: 2025-10-03 10:48:19.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:48:19 compute-0 nova_compute[351685]: 2025-10-03 10:48:19.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:48:19 compute-0 nova_compute[351685]: 2025-10-03 10:48:19.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:48:19 compute-0 nova_compute[351685]: 2025-10-03 10:48:19.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:48:19 compute-0 nova_compute[351685]: 2025-10-03 10:48:19.762 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:48:19 compute-0 nova_compute[351685]: 2025-10-03 10:48:19.762 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:48:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:48:20 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956186175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:48:20 compute-0 nova_compute[351685]: 2025-10-03 10:48:20.230 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
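The lines above show nova's resource tracker shelling out to ceph df for pool capacity: the mon logs the dispatch, and the round trip takes about half a second. A standalone equivalent using subprocess rather than oslo_concurrency.processutils; the command, client id, and conf path are copied from the log, while the stats keys follow ceph's usual df JSON and should be treated as illustrative:

    import json
    import subprocess

    # Same command nova logged above, run directly.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    df = json.loads(out)
    # Top-level "stats" holds cluster-wide byte counters in ceph's df JSON.
    stats = df["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])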
Oct  3 10:48:20 compute-0 nova_compute[351685]: 2025-10-03 10:48:20.327 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:48:20 compute-0 nova_compute[351685]: 2025-10-03 10:48:20.328 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:48:20 compute-0 nova_compute[351685]: 2025-10-03 10:48:20.328 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:48:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:20 compute-0 nova_compute[351685]: 2025-10-03 10:48:20.887 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:48:20 compute-0 nova_compute[351685]: 2025-10-03 10:48:20.889 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3810MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
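The hypervisor resource view above lists eleven PCI functions. Counting them by vendor (pairs copied from the log; the snippet is illustrative only) shows the usual KVM guest split between Intel chipset functions and virtio devices:

    from collections import Counter

    # (address, vendor_id) pairs from nova's resource view above.
    pci = [("0000:00:01.3", "8086"), ("0000:00:05.0", "1af4"),
           ("0000:00:06.0", "1af4"), ("0000:00:04.0", "1af4"),
           ("0000:00:03.0", "1af4"), ("0000:00:01.0", "8086"),
           ("0000:00:02.0", "1af4"), ("0000:00:00.0", "8086"),
           ("0000:00:07.0", "1af4"), ("0000:00:01.1", "8086"),
           ("0000:00:01.2", "8086")]
    print(Counter(vendor for _, vendor in pci))
    # Counter({'1af4': 6, '8086': 5})  # 1af4 = virtio, 8086 = Intel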
Oct  3 10:48:20 compute-0 nova_compute[351685]: 2025-10-03 10:48:20.889 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:48:20 compute-0 nova_compute[351685]: 2025-10-03 10:48:20.889 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.000 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.000 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.001 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.022 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.045 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.046 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
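The inventory nova pushes to placement above determines schedulable capacity per resource class as (total - reserved) * allocation_ratio, which is how placement computes usable capacity. A sketch with the logged values:

    # Inventory values copied from the ProviderTree update above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2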
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.061 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.079 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.111 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:48:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:48:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2849981476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.600 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.609 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.629 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.632 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:48:21 compute-0 nova_compute[351685]: 2025-10-03 10:48:21.632 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:48:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:23 compute-0 nova_compute[351685]: 2025-10-03 10:48:23.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:48:23 compute-0 nova_compute[351685]: 2025-10-03 10:48:23.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:48:23 compute-0 nova_compute[351685]: 2025-10-03 10:48:23.632 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:48:23 compute-0 nova_compute[351685]: 2025-10-03 10:48:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:48:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:24 compute-0 nova_compute[351685]: 2025-10-03 10:48:24.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:48:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2433: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:26 compute-0 nova_compute[351685]: 2025-10-03 10:48:26.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:48:26 compute-0 podman[479718]: 2025-10-03 10:48:26.856536853 +0000 UTC m=+0.100727934 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler)
Oct  3 10:48:26 compute-0 podman[479719]: 2025-10-03 10:48:26.861352578 +0000 UTC m=+0.091476687 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 10:48:26 compute-0 podman[479717]: 2025-10-03 10:48:26.891011299 +0000 UTC m=+0.129949691 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:48:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:28 compute-0 nova_compute[351685]: 2025-10-03 10:48:28.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:28 compute-0 nova_compute[351685]: 2025-10-03 10:48:28.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:29 compute-0 podman[157165]: time="2025-10-03T10:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:48:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:48:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9085 "" "Go-http-client/1.1"
Oct  3 10:48:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:31 compute-0 openstack_network_exporter[367524]: ERROR   10:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:48:31 compute-0 openstack_network_exporter[367524]: ERROR   10:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:48:31 compute-0 openstack_network_exporter[367524]: ERROR   10:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:48:31 compute-0 openstack_network_exporter[367524]: ERROR   10:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:48:31 compute-0 openstack_network_exporter[367524]: ERROR   10:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:48:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:33 compute-0 nova_compute[351685]: 2025-10-03 10:48:33.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:33 compute-0 nova_compute[351685]: 2025-10-03 10:48:33.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:38 compute-0 nova_compute[351685]: 2025-10-03 10:48:38.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:38 compute-0 nova_compute[351685]: 2025-10-03 10:48:38.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:48:41.652 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:48:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:48:41.652 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:48:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:48:41.653 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:48:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:43 compute-0 nova_compute[351685]: 2025-10-03 10:48:43.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:43 compute-0 nova_compute[351685]: 2025-10-03 10:48:43.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:43 compute-0 podman[479774]: 2025-10-03 10:48:43.880062827 +0000 UTC m=+0.126461571 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 10:48:43 compute-0 podman[479775]: 2025-10-03 10:48:43.881076949 +0000 UTC m=+0.117531203 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Oct  3 10:48:43 compute-0 podman[479776]: 2025-10-03 10:48:43.927766028 +0000 UTC m=+0.173092077 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 10:48:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:48:46
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'images', 'vms', 'volumes', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:48:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:48:46 compute-0 podman[479833]: 2025-10-03 10:48:46.886901263 +0000 UTC m=+0.140397318 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 10:48:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:48 compute-0 nova_compute[351685]: 2025-10-03 10:48:48.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:48 compute-0 nova_compute[351685]: 2025-10-03 10:48:48.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:49 compute-0 podman[479851]: 2025-10-03 10:48:49.867627261 +0000 UTC m=+0.107481581 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:48:49 compute-0 podman[479852]: 2025-10-03 10:48:49.894056809 +0000 UTC m=+0.131924756 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:48:49 compute-0 podman[479853]: 2025-10-03 10:48:49.906379994 +0000 UTC m=+0.126023086 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 10:48:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:53 compute-0 nova_compute[351685]: 2025-10-03 10:48:53.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:53 compute-0 nova_compute[351685]: 2025-10-03 10:48:53.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:48:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701021722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:48:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:48:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/701021722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:48:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:48:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:48:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:48:57 compute-0 podman[479911]: 2025-10-03 10:48:57.8560616 +0000 UTC m=+0.103693169 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct  3 10:48:57 compute-0 podman[479909]: 2025-10-03 10:48:57.856174194 +0000 UTC m=+0.107785082 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:48:57 compute-0 podman[479910]: 2025-10-03 10:48:57.877191408 +0000 UTC m=+0.126065658 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, release=1214.1726694543, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64)
Oct  3 10:48:58 compute-0 nova_compute[351685]: 2025-10-03 10:48:58.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:48:58 compute-0 nova_compute[351685]: 2025-10-03 10:48:58.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:48:59 compute-0 podman[157165]: time="2025-10-03T10:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:48:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:48:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9083 "" "Go-http-client/1.1"
Oct  3 10:49:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:01 compute-0 openstack_network_exporter[367524]: ERROR   10:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:49:01 compute-0 openstack_network_exporter[367524]: ERROR   10:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:49:01 compute-0 openstack_network_exporter[367524]: ERROR   10:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:49:01 compute-0 openstack_network_exporter[367524]: ERROR   10:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:49:01 compute-0 openstack_network_exporter[367524]: ERROR   10:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:49:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:03 compute-0 nova_compute[351685]: 2025-10-03 10:49:03.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:03 compute-0 nova_compute[351685]: 2025-10-03 10:49:03.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:08 compute-0 nova_compute[351685]: 2025-10-03 10:49:08.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:08 compute-0 nova_compute[351685]: 2025-10-03 10:49:08.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:09 compute-0 nova_compute[351685]: 2025-10-03 10:49:09.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:09 compute-0 nova_compute[351685]: 2025-10-03 10:49:09.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:49:09 compute-0 nova_compute[351685]: 2025-10-03 10:49:09.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:49:10 compute-0 nova_compute[351685]: 2025-10-03 10:49:10.302 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:49:10 compute-0 nova_compute[351685]: 2025-10-03 10:49:10.302 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:49:10 compute-0 nova_compute[351685]: 2025-10-03 10:49:10.303 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:49:10 compute-0 nova_compute[351685]: 2025-10-03 10:49:10.303 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:49:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:49:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:49:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:49:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:49:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:49:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:49:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f25a16fe-6941-4ba1-b537-c8db4d63492c does not exist
Oct  3 10:49:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bf4b1e4c-99db-43b1-8d8a-88849257b54c does not exist
Oct  3 10:49:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8b8c46c4-2fe9-497c-b3b5-a3acbe542e7c does not exist
Oct  3 10:49:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:49:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:49:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:49:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:49:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:49:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:49:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:49:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:49:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:49:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2456: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:12 compute-0 nova_compute[351685]: 2025-10-03 10:49:12.463 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:49:12 compute-0 nova_compute[351685]: 2025-10-03 10:49:12.486 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:49:12 compute-0 nova_compute[351685]: 2025-10-03 10:49:12.486 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:49:12 compute-0 podman[480234]: 2025-10-03 10:49:12.534067304 +0000 UTC m=+0.052614499 container create 0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_darwin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 10:49:12 compute-0 systemd[1]: Started libpod-conmon-0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83.scope.
Oct  3 10:49:12 compute-0 podman[480234]: 2025-10-03 10:49:12.51430857 +0000 UTC m=+0.032855795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:49:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:49:12 compute-0 podman[480234]: 2025-10-03 10:49:12.655817712 +0000 UTC m=+0.174364937 container init 0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_darwin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:49:12 compute-0 podman[480234]: 2025-10-03 10:49:12.676811396 +0000 UTC m=+0.195358581 container start 0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_darwin, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:49:12 compute-0 podman[480234]: 2025-10-03 10:49:12.681072073 +0000 UTC m=+0.199619288 container attach 0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:49:12 compute-0 blissful_darwin[480250]: 167 167
Oct  3 10:49:12 compute-0 systemd[1]: libpod-0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83.scope: Deactivated successfully.
Oct  3 10:49:12 compute-0 podman[480234]: 2025-10-03 10:49:12.686816077 +0000 UTC m=+0.205363282 container died 0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_darwin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:49:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-36b3633f244406db679f8995a05881e71e266a72b4a567c7b11c700c1f3d510f-merged.mount: Deactivated successfully.
Oct  3 10:49:12 compute-0 podman[480234]: 2025-10-03 10:49:12.7523335 +0000 UTC m=+0.270880695 container remove 0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_darwin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:49:12 compute-0 systemd[1]: libpod-conmon-0c0833644451646743eccfd16ebd7015c11d20adb0354098b154c4f6abb62a83.scope: Deactivated successfully.
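The six podman events above (create, init, start, attach, died, remove), bracketed by the matching systemd scope messages, trace the full lifecycle of one short-lived ceph container that printed `167 167` (most likely a probe of the ceph UID/GID inside the image) and exited immediately. A minimal sketch that reconstructs the same per-container sequence from `podman events`; the 30-second window and the JSON field names are assumptions based on podman 4.x:

```python
import json
import subprocess

# Replay recent podman events (one JSON object per line) and group the
# container events by container ID. --stream=false exits after printing
# the backlog instead of following new events.
proc = subprocess.run(
    ["podman", "events", "--since", "30s", "--stream=false", "--format", "json"],
    capture_output=True, text=True, check=True,
)

lifecycle = {}
for line in proc.stdout.splitlines():
    ev = json.loads(line)
    if ev.get("Type") == "container":
        lifecycle.setdefault(ev.get("ID", ""), []).append(ev.get("Status", ""))

for cid, states in lifecycle.items():
    # Expected for the container above: create init start attach died remove
    print(cid[:12], "->", " ".join(states))
```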
Oct  3 10:49:13 compute-0 podman[480274]: 2025-10-03 10:49:13.040224341 +0000 UTC m=+0.103340188 container create 519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_black, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 10:49:13 compute-0 podman[480274]: 2025-10-03 10:49:12.998157881 +0000 UTC m=+0.061273788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:49:13 compute-0 nova_compute[351685]: 2025-10-03 10:49:13.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:13 compute-0 systemd[1]: Started libpod-conmon-519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44.scope.
Oct  3 10:49:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f33d3bccb629a5f001ee13c899370555fdaef19a4e855dd33fdc5cd04baa81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f33d3bccb629a5f001ee13c899370555fdaef19a4e855dd33fdc5cd04baa81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f33d3bccb629a5f001ee13c899370555fdaef19a4e855dd33fdc5cd04baa81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f33d3bccb629a5f001ee13c899370555fdaef19a4e855dd33fdc5cd04baa81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81f33d3bccb629a5f001ee13c899370555fdaef19a4e855dd33fdc5cd04baa81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
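The repeated xfs messages above are informational: these overlay mounts use 32-bit inode timestamps, so the latest representable time is epoch second 0x7fffffff, exactly the Y2038 boundary. Decoding the constant the kernel reports confirms the cutoff:

```python
from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit epoch second, the limit
# reported for these xfs mounts.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```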
Oct  3 10:49:13 compute-0 podman[480274]: 2025-10-03 10:49:13.215957092 +0000 UTC m=+0.279072979 container init 519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 10:49:13 compute-0 podman[480274]: 2025-10-03 10:49:13.239763076 +0000 UTC m=+0.302878903 container start 519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_black, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:49:13 compute-0 podman[480274]: 2025-10-03 10:49:13.245467749 +0000 UTC m=+0.308583606 container attach 519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_black, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:49:13 compute-0 nova_compute[351685]: 2025-10-03 10:49:13.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:14 compute-0 xenodochial_black[480290]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:49:14 compute-0 xenodochial_black[480290]: --> relative data size: 1.0
Oct  3 10:49:14 compute-0 xenodochial_black[480290]: --> All data devices are unavailable
Oct  3 10:49:14 compute-0 systemd[1]: libpod-519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44.scope: Deactivated successfully.
Oct  3 10:49:14 compute-0 systemd[1]: libpod-519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44.scope: Consumed 1.350s CPU time.
Oct  3 10:49:14 compute-0 podman[480274]: 2025-10-03 10:49:14.64772065 +0000 UTC m=+1.710836507 container died 519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 10:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-81f33d3bccb629a5f001ee13c899370555fdaef19a4e855dd33fdc5cd04baa81-merged.mount: Deactivated successfully.
Oct  3 10:49:14 compute-0 podman[480274]: 2025-10-03 10:49:14.774283312 +0000 UTC m=+1.837399119 container remove 519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_black, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:49:14 compute-0 systemd[1]: libpod-conmon-519dc4de3b89ee87c8a2ab05140660899a08072efaa639d34eda53853886bb44.scope: Deactivated successfully.
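The `-->` lines from xenodochial_black read like `ceph-volume lvm batch` report output: three LVM data devices were passed in, all already consumed by existing OSDs, hence "All data devices are unavailable" and a clean exit. A hedged, non-destructive way to reproduce that report (the `--report`/`--format json` combination is an assumption about the installed ceph-volume; LV paths are taken from the inventory printed later in this log):

```python
import subprocess

# `ceph-volume lvm batch --report` only plans; it does not create OSDs.
cmd = [
    "ceph-volume", "lvm", "batch", "--report", "--format", "json",
    "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
]
result = subprocess.run(cmd, capture_output=True, text=True)
# Already-prepared LVs should report as unavailable, matching the log above.
print(result.stdout or result.stderr)
```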
Oct  3 10:49:14 compute-0 podman[480321]: 2025-10-03 10:49:14.814903366 +0000 UTC m=+0.115103406 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:49:14 compute-0 podman[480320]: 2025-10-03 10:49:14.821318472 +0000 UTC m=+0.129255760 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, name=ubi9-minimal)
Oct  3 10:49:14 compute-0 podman[480323]: 2025-10-03 10:49:14.89415713 +0000 UTC m=+0.192664145 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
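The three `health_status=healthy` events above are emitted by podman's periodic healthcheck timers; each container's `config_data` carries the `healthcheck.test` command that gets executed. A one-off equivalent for a single container, using podman's `healthcheck run` subcommand with a container name taken from the log:

```python
import subprocess

# Exit code 0 means the container's configured healthcheck command passed;
# a nonzero code marks it unhealthy (or the check failed to run).
result = subprocess.run(
    ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
    capture_output=True, text=True,
)
status = "healthy" if result.returncode == 0 else "unhealthy"
print(status, (result.stdout or result.stderr).strip())
```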
Oct  3 10:49:15 compute-0 nova_compute[351685]: 2025-10-03 10:49:15.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:15 compute-0 nova_compute[351685]: 2025-10-03 10:49:15.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:49:15 compute-0 podman[480530]: 2025-10-03 10:49:15.877282867 +0000 UTC m=+0.079673189 container create db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:49:15 compute-0 podman[480530]: 2025-10-03 10:49:15.843091739 +0000 UTC m=+0.045482141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:49:15 compute-0 systemd[1]: Started libpod-conmon-db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025.scope.
Oct  3 10:49:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:49:16 compute-0 podman[480530]: 2025-10-03 10:49:16.010991678 +0000 UTC m=+0.213382030 container init db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:49:16 compute-0 podman[480530]: 2025-10-03 10:49:16.02352757 +0000 UTC m=+0.225917922 container start db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:49:16 compute-0 podman[480530]: 2025-10-03 10:49:16.030909698 +0000 UTC m=+0.233300010 container attach db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 10:49:16 compute-0 eager_keldysh[480546]: 167 167
Oct  3 10:49:16 compute-0 systemd[1]: libpod-db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025.scope: Deactivated successfully.
Oct  3 10:49:16 compute-0 podman[480530]: 2025-10-03 10:49:16.034162562 +0000 UTC m=+0.236552874 container died db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:49:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4458ac042828ef8b0ab1b0f3d3b5b8502d85fcdfdcdea01d9aa8d2f3d9f44068-merged.mount: Deactivated successfully.
Oct  3 10:49:16 compute-0 podman[480530]: 2025-10-03 10:49:16.093362492 +0000 UTC m=+0.295752804 container remove db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:49:16 compute-0 systemd[1]: libpod-conmon-db548fdc465f066ebb17b709c4fab1561b8d1f6fee03b171d974a26ea9faf025.scope: Deactivated successfully.
Oct  3 10:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:49:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:49:16 compute-0 podman[480569]: 2025-10-03 10:49:16.326194635 +0000 UTC m=+0.071520016 container create 4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:49:16 compute-0 podman[480569]: 2025-10-03 10:49:16.297505234 +0000 UTC m=+0.042830625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:49:16 compute-0 systemd[1]: Started libpod-conmon-4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966.scope.
Oct  3 10:49:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130f3875576ee9c50601a0c7d841a9a196e6bd1c018f69c4fbbac04c4a9b1c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130f3875576ee9c50601a0c7d841a9a196e6bd1c018f69c4fbbac04c4a9b1c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130f3875576ee9c50601a0c7d841a9a196e6bd1c018f69c4fbbac04c4a9b1c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130f3875576ee9c50601a0c7d841a9a196e6bd1c018f69c4fbbac04c4a9b1c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:16 compute-0 podman[480569]: 2025-10-03 10:49:16.491400469 +0000 UTC m=+0.236725850 container init 4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:49:16 compute-0 podman[480569]: 2025-10-03 10:49:16.51479475 +0000 UTC m=+0.260120101 container start 4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:49:16 compute-0 podman[480569]: 2025-10-03 10:49:16.519897173 +0000 UTC m=+0.265222564 container attach 4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:49:17 compute-0 suspicious_pare[480585]: {
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:    "0": [
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:        {
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "devices": [
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "/dev/loop3"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            ],
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_name": "ceph_lv0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_size": "21470642176",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "name": "ceph_lv0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "tags": {
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cluster_name": "ceph",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.crush_device_class": "",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.encrypted": "0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osd_id": "0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.type": "block",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.vdo": "0"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            },
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "type": "block",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "vg_name": "ceph_vg0"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:        }
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:    ],
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:    "1": [
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:        {
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "devices": [
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "/dev/loop4"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            ],
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_name": "ceph_lv1",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_size": "21470642176",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "name": "ceph_lv1",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "tags": {
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cluster_name": "ceph",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.crush_device_class": "",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.encrypted": "0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osd_id": "1",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.type": "block",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.vdo": "0"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            },
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "type": "block",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "vg_name": "ceph_vg1"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:        }
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:    ],
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:    "2": [
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:        {
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "devices": [
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "/dev/loop5"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            ],
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_name": "ceph_lv2",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_size": "21470642176",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "name": "ceph_lv2",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "tags": {
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.cluster_name": "ceph",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.crush_device_class": "",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.encrypted": "0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osd_id": "2",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.type": "block",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:                "ceph.vdo": "0"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            },
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "type": "block",
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:            "vg_name": "ceph_vg2"
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:        }
Oct  3 10:49:17 compute-0 suspicious_pare[480585]:    ]
Oct  3 10:49:17 compute-0 suspicious_pare[480585]: }
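The JSON block printed by suspicious_pare is a `ceph-volume lvm list --format json`-style report: top-level keys are OSD ids, each mapping to the logical volumes backing that OSD, with the `ceph.*` LV tags repeated in parsed form under `tags`. A minimal sketch that summarizes it, reading from a hypothetical saved capture of the container's stdout:

```python
import json

# Summarize the per-OSD report shown above.
with open("ceph_volume_lvm_list.json") as f:  # hypothetical capture file
    report = json.load(f)

for osd_id in sorted(report, key=int):
    for lv in report[osd_id]:
        tags = lv["tags"]
        size_gib = int(lv["lv_size"]) / 2**30  # lv_size is in bytes
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"({','.join(lv['devices'])}, {size_gib:.0f} GiB, "
              f"fsid {tags['ceph.osd_fsid']})")
# e.g. osd.0: /dev/ceph_vg0/ceph_lv0 (/dev/loop3, 20 GiB, fsid 25b10821-...)
```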
Oct  3 10:49:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:17 compute-0 systemd[1]: libpod-4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966.scope: Deactivated successfully.
Oct  3 10:49:17 compute-0 podman[480569]: 2025-10-03 10:49:17.470372003 +0000 UTC m=+1.215697404 container died 4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 10:49:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-130f3875576ee9c50601a0c7d841a9a196e6bd1c018f69c4fbbac04c4a9b1c46-merged.mount: Deactivated successfully.
Oct  3 10:49:17 compute-0 podman[480569]: 2025-10-03 10:49:17.569510214 +0000 UTC m=+1.314835605 container remove 4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:49:17 compute-0 systemd[1]: libpod-conmon-4b718644be413adb9cd4a1d928a8617437edbdbee80928831d52e675df813966.scope: Deactivated successfully.
Oct  3 10:49:17 compute-0 podman[480595]: 2025-10-03 10:49:17.600962704 +0000 UTC m=+0.087558652 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  3 10:49:18 compute-0 nova_compute[351685]: 2025-10-03 10:49:18.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:18 compute-0 nova_compute[351685]: 2025-10-03 10:49:18.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:18 compute-0 podman[480764]: 2025-10-03 10:49:18.631571675 +0000 UTC m=+0.087733427 container create 17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 10:49:18 compute-0 podman[480764]: 2025-10-03 10:49:18.597862563 +0000 UTC m=+0.054024405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:49:18 compute-0 systemd[1]: Started libpod-conmon-17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44.scope.
Oct  3 10:49:18 compute-0 nova_compute[351685]: 2025-10-03 10:49:18.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:49:18 compute-0 podman[480764]: 2025-10-03 10:49:18.777067455 +0000 UTC m=+0.233229277 container init 17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 10:49:18 compute-0 podman[480764]: 2025-10-03 10:49:18.797950515 +0000 UTC m=+0.254112337 container start 17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:49:18 compute-0 amazing_poincare[480780]: 167 167
Oct  3 10:49:18 compute-0 systemd[1]: libpod-17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44.scope: Deactivated successfully.
Oct  3 10:49:18 compute-0 podman[480764]: 2025-10-03 10:49:18.805800868 +0000 UTC m=+0.261962660 container attach 17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:49:18 compute-0 podman[480764]: 2025-10-03 10:49:18.807546313 +0000 UTC m=+0.263708095 container died 17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:49:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-150d3e89f160b0add108d2f385d90d58d44d2ffc465ea84d479a9edb46072f0b-merged.mount: Deactivated successfully.
Oct  3 10:49:18 compute-0 podman[480764]: 2025-10-03 10:49:18.898398249 +0000 UTC m=+0.354560001 container remove 17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 10:49:18 compute-0 systemd[1]: libpod-conmon-17d44bedcfda766b52742b11f27f6d5895064cd30ed327960475f8d9d23ffa44.scope: Deactivated successfully.
Oct  3 10:49:19 compute-0 podman[480802]: 2025-10-03 10:49:19.179504143 +0000 UTC m=+0.097087777 container create 986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:49:19 compute-0 podman[480802]: 2025-10-03 10:49:19.146232105 +0000 UTC m=+0.063815789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:49:19 compute-0 systemd[1]: Started libpod-conmon-986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78.scope.
Oct  3 10:49:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767762eb04ab0910214ca6fce6aae79bf4603eeba63c9e3003f7838e342f5d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767762eb04ab0910214ca6fce6aae79bf4603eeba63c9e3003f7838e342f5d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767762eb04ab0910214ca6fce6aae79bf4603eeba63c9e3003f7838e342f5d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767762eb04ab0910214ca6fce6aae79bf4603eeba63c9e3003f7838e342f5d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:49:19 compute-0 podman[480802]: 2025-10-03 10:49:19.353171817 +0000 UTC m=+0.270755491 container init 986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elbakyan, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:49:19 compute-0 podman[480802]: 2025-10-03 10:49:19.383732628 +0000 UTC m=+0.301316232 container start 986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:49:19 compute-0 podman[480802]: 2025-10-03 10:49:19.389126611 +0000 UTC m=+0.306710295 container attach 986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elbakyan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:49:19 compute-0 nova_compute[351685]: 2025-10-03 10:49:19.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:19 compute-0 nova_compute[351685]: 2025-10-03 10:49:19.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:49:19 compute-0 nova_compute[351685]: 2025-10-03 10:49:19.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:49:19 compute-0 nova_compute[351685]: 2025-10-03 10:49:19.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
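[editor's note] The Acquiring/acquired/released trio above is oslo.concurrency's synchronized decorator logging from its inner() wrapper. A minimal sketch of the pattern, assuming nova's semaphore name and fair locking (illustrative, not nova's exact code):

    from oslo_concurrency import lockutils

    COMPUTE_RESOURCE_SEMAPHORE = "compute_resources"  # name taken from the log

    @lockutils.synchronized(COMPUTE_RESOURCE_SEMAPHORE, fair=True)
    def clean_compute_node_cache():
        # Runs only while the in-process lock is held; the DEBUG lines above
        # ("Acquiring" / "acquired" / "released") are emitted by the
        # decorator's inner() wrapper in lockutils.py.
        pass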
Oct  3 10:49:19 compute-0 nova_compute[351685]: 2025-10-03 10:49:19.768 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:49:19 compute-0 nova_compute[351685]: 2025-10-03 10:49:19.768 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:49:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:49:20 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145009138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:49:20 compute-0 nova_compute[351685]: 2025-10-03 10:49:20.265 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
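[editor's note] For reference, a minimal standalone sketch of the probe nova just ran, using the same CLI flags as the logged command (nova itself goes through oslo_concurrency.processutils and its RBD utilities, so this is illustrative only):

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        stats = json.loads(out)["stats"]
        # total_avail_bytes is what feeds the hypervisor's free-disk view.
        return stats["total_bytes"], stats["total_avail_bytes"]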
Oct  3 10:49:20 compute-0 nova_compute[351685]: 2025-10-03 10:49:20.360 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:49:20 compute-0 nova_compute[351685]: 2025-10-03 10:49:20.361 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:49:20 compute-0 nova_compute[351685]: 2025-10-03 10:49:20.361 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:49:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]: {
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "osd_id": 1,
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "type": "bluestore"
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:    },
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "osd_id": 2,
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "type": "bluestore"
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:    },
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "osd_id": 0,
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:        "type": "bluestore"
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]:    }
Oct  3 10:49:20 compute-0 suspicious_elbakyan[480818]: }
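[editor's note] The JSON the suspicious_elbakyan container just printed maps each OSD's uuid to its backing device; the shape matches what ceph-volume raw list emits, which cephadm runs in a throwaway container like this one. A short sketch of consuming it (report abridged to one OSD from the log):

    import json

    report = json.loads("""
    {
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
        "type": "bluestore"
      }
    }
    """)
    # Keys are OSD uuids; sort by osd_id for a stable device listing.
    for entry in sorted(report.values(), key=lambda e: e["osd_id"]):
        print(f"osd.{entry['osd_id']} -> {entry['device']} ({entry['type']})")
    # osd.0 -> /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)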
Oct  3 10:49:20 compute-0 systemd[1]: libpod-986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78.scope: Deactivated successfully.
Oct  3 10:49:20 compute-0 systemd[1]: libpod-986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78.scope: Consumed 1.194s CPU time.
Oct  3 10:49:20 compute-0 podman[480802]: 2025-10-03 10:49:20.609909907 +0000 UTC m=+1.527493541 container died 986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elbakyan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b767762eb04ab0910214ca6fce6aae79bf4603eeba63c9e3003f7838e342f5d6-merged.mount: Deactivated successfully.
Oct  3 10:49:20 compute-0 podman[480802]: 2025-10-03 10:49:20.688646175 +0000 UTC m=+1.606229769 container remove 986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:49:20 compute-0 systemd[1]: libpod-conmon-986bbbf74d84fb65d408a3d00932512127f39e1648411a8fa56876e58323ee78.scope: Deactivated successfully.
Oct  3 10:49:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:49:20 compute-0 podman[480875]: 2025-10-03 10:49:20.744177758 +0000 UTC m=+0.100091534 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:49:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:49:20 compute-0 podman[480877]: 2025-10-03 10:49:20.749030243 +0000 UTC m=+0.096974384 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3)
Oct  3 10:49:20 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:49:20 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:49:20 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 090546ee-3fd7-4f5c-9f53-af4d96e68fd7 does not exist
Oct  3 10:49:20 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3eabe02e-d8ce-468c-8766-131d7edf1171 does not exist
Oct  3 10:49:20 compute-0 podman[480876]: 2025-10-03 10:49:20.768390566 +0000 UTC m=+0.123449685 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
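[editor's note] The health_status=healthy fields in the three container entries above come from podman's scheduled healthcheck runs against each container's configured 'test' command. The same check can be driven by hand; a sketch (container name taken from the first entry):

    import subprocess

    # Exit code 0 means the container's configured healthcheck passed.
    rc = subprocess.run(["podman", "healthcheck", "run", "node_exporter"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")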
Oct  3 10:49:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:49:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:49:20 compute-0 nova_compute[351685]: 2025-10-03 10:49:20.849 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:49:20 compute-0 nova_compute[351685]: 2025-10-03 10:49:20.850 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3755MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:49:20 compute-0 nova_compute[351685]: 2025-10-03 10:49:20.851 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:49:20 compute-0 nova_compute[351685]: 2025-10-03 10:49:20.851 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:49:21 compute-0 nova_compute[351685]: 2025-10-03 10:49:21.950 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:49:21 compute-0 nova_compute[351685]: 2025-10-03 10:49:21.950 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:49:21 compute-0 nova_compute[351685]: 2025-10-03 10:49:21.951 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:49:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:22 compute-0 nova_compute[351685]: 2025-10-03 10:49:22.719 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:49:23 compute-0 nova_compute[351685]: 2025-10-03 10:49:23.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:49:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1436778213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:49:23 compute-0 nova_compute[351685]: 2025-10-03 10:49:23.228 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:49:23 compute-0 nova_compute[351685]: 2025-10-03 10:49:23.241 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:49:23 compute-0 nova_compute[351685]: 2025-10-03 10:49:23.265 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
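[editor's note] Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio; a quick check against the numbers just logged:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for resource_class, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(resource_class, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2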
Oct  3 10:49:23 compute-0 nova_compute[351685]: 2025-10-03 10:49:23.266 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:49:23 compute-0 nova_compute[351685]: 2025-10-03 10:49:23.266 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:49:23 compute-0 nova_compute[351685]: 2025-10-03 10:49:23.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:24 compute-0 nova_compute[351685]: 2025-10-03 10:49:24.267 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:24 compute-0 nova_compute[351685]: 2025-10-03 10:49:24.267 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:24 compute-0 nova_compute[351685]: 2025-10-03 10:49:24.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:25 compute-0 nova_compute[351685]: 2025-10-03 10:49:25.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:26 compute-0 nova_compute[351685]: 2025-10-03 10:49:26.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
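[editor's note] The _poll_* and _instance_usage_audit entries here and above are driven by oslo.service's periodic task machinery: decorated methods on the compute manager dispatched by a PeriodicTasks runner. A minimal sketch (the spacing value is illustrative; nova reads its intervals from config):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            # Invoked by run_periodic_tasks() on its timer, which is what
            # produces the "Running periodic task ..." DEBUG lines.
            pass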
Oct  3 10:49:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:28 compute-0 nova_compute[351685]: 2025-10-03 10:49:28.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:28 compute-0 nova_compute[351685]: 2025-10-03 10:49:28.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:28 compute-0 podman[481015]: 2025-10-03 10:49:28.868172324 +0000 UTC m=+0.109672602 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:49:28 compute-0 podman[481017]: 2025-10-03 10:49:28.879445505 +0000 UTC m=+0.123255567 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 10:49:28 compute-0 podman[481016]: 2025-10-03 10:49:28.890878103 +0000 UTC m=+0.129421266 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, container_name=kepler, io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 10:49:29 compute-0 podman[157165]: time="2025-10-03T10:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:49:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:49:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9080 "" "Go-http-client/1.1"
Oct  3 10:49:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:31 compute-0 openstack_network_exporter[367524]: ERROR   10:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:49:31 compute-0 openstack_network_exporter[367524]: ERROR   10:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:49:31 compute-0 openstack_network_exporter[367524]: ERROR   10:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:49:31 compute-0 openstack_network_exporter[367524]: ERROR   10:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:49:31 compute-0 openstack_network_exporter[367524]: ERROR   10:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
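[editor's note] The exporter errors above indicate ovs-appctl target resolution failing: it looks for <rundir>/<daemon>.pid plus a matching <daemon>.<pid>.ctl control socket. A quick existence check under the usual rundir (path assumed; on this node the exporter evidently sees none of these, hence "no control socket files found"):

    import glob

    RUNDIR = "/var/run/openvswitch"  # assumed default OVS rundir
    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        socks = glob.glob(f"{RUNDIR}/{daemon}.*.ctl")
        print(daemon, socks or "no control socket found")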
Oct  3 10:49:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:33 compute-0 nova_compute[351685]: 2025-10-03 10:49:33.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:33 compute-0 nova_compute[351685]: 2025-10-03 10:49:33.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:38 compute-0 nova_compute[351685]: 2025-10-03 10:49:38.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:38 compute-0 nova_compute[351685]: 2025-10-03 10:49:38.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:39 compute-0 nova_compute[351685]: 2025-10-03 10:49:39.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:49:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.896 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.897 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.907 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.916 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:49:40.916770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.927 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.929 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.929 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.930 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.931 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.931 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.931 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:49:40.929752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:49:40.931816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.968 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.969 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.970 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.971 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
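The three consecutive _stats_to_sample DEBUG lines above are one sample per disk device attached to instance b43db93c-a4fe-46e9-8418-eedf4f5c135a: two 1073741824-byte (1 GiB) devices and one 485376-byte device. Below is a minimal Python sketch of that per-device fan-out, assuming a simplified Sample type and invented device names; it is not ceilometer's actual PerDeviceCapacityPollster code, only an illustration of why each device produces its own log line.

    from collections import namedtuple

    # Simplified stand-in for ceilometer's sample object (assumption).
    Sample = namedtuple("Sample", ["name", "resource_id", "volume"])

    def stats_to_samples(instance_id, meter_name, per_device_bytes):
        """Yield one sample per device, mirroring the repeated DEBUG lines."""
        for device, capacity in per_device_bytes.items():
            # Each device gets its own resource id, hence one line per device.
            yield Sample(meter_name, "%s-%s" % (instance_id, device), capacity)

    # Volumes taken from the log above; the device names are assumptions.
    for s in stats_to_samples("b43db93c-a4fe-46e9-8418-eedf4f5c135a",
                              "disk.device.capacity",
                              {"vda": 1073741824, "vdb": 1073741824,
                               "hdd": 485376}):
        print("%s/%s volume: %d" % (s.resource_id, s.name, s.volume))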
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.971 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.971 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.971 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.972 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:40.972 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:49:40.972109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.033 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.033 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.034 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.034 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.034 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:49:41.035007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.038 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.038 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:49:41.038822) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:49:41.042010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.045 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:49:41.045129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.046 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.046 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:49:41.048192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.051 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:49:41.051335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.075 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.077 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:49:41.077332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.080 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.080 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:49:41.080614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.081 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.082 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.084 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:49:41.083548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
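The rate meters in this cycle (network.incoming.bytes.rate above, and network.outgoing.bytes.rate further down) take the early-exit branch logged at manager.py:321: when discovery returns no resources for a pollster, the manager skips the coordination check, heartbeat, and sampling for that meter. A rough, assumed sketch of that guard follows; the name and signature are invented for illustration and are not the real _internal_pollster_run.

    import logging

    log = logging.getLogger("ceilometer.polling.manager")

    def should_poll(pollster_name, discovered_resources):
        """Return False on the skip path seen at manager.py:321 (assumed)."""
        if not discovered_resources:
            log.debug("Skip pollster %s, no new resources found this cycle",
                      pollster_name)
            return False
        # Coordination check, heartbeat update, and sampling follow.
        return True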
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.085 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:49:41.086146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.087 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.088 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:49:41.087991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.089 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.090 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.091 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:49:41.090045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.091 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.092 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:49:41.091809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.093 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:49:41.094022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.094 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.094 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 72620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.095 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.095 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.096 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:49:41.095889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.097 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.098 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.098 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:49:41.098102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.098 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.099 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.099 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.100 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.100 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:49:41.100027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.101 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.102 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.102 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:49:41.102023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.104 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.105 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:49:41.104873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.107 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:49:41.106994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.107 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:49:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:49:41.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
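The pollster objects in the discovery and coordination messages above are stevedore extensions (manager.py:333 logs the raw stevedore.extension.Extension wrappers). A sketch of enumerating the same plugins, assuming the entry-point namespace ceilometer.poll.compute (the namespace name is an assumption here):

    from stevedore import extension

    # The polling manager loads compute pollsters as setuptools entry
    # points; listing the namespace shows the meter names seen above.
    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute",
                                     invoke_on_load=False)
    for name in sorted(mgr.names()):
        print(name)  # e.g. memory.usage, network.incoming.bytes, power.state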
Oct  3 10:49:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:49:41.653 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:49:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:49:41.654 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:49:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:49:41.654 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
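The Acquiring/acquired/released triple above is the standard oslo.concurrency pattern: ProcessMonitor._check_child_processes runs under a named lock so only one thread checks child processes at a time, and lockutils logs the waited/held timings. A minimal sketch of the same pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named lock held; lockutils emits the DEBUG
        # "Acquiring"/"acquired"/"released" lines seen in the journal.
        pass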
Oct  3 10:49:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:43 compute-0 nova_compute[351685]: 2025-10-03 10:49:43.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:43 compute-0 nova_compute[351685]: 2025-10-03 10:49:43.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:45 compute-0 podman[481077]: 2025-10-03 10:49:45.859351807 +0000 UTC m=+0.094009608 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 10:49:45 compute-0 podman[481076]: 2025-10-03 10:49:45.895303551 +0000 UTC m=+0.140410638 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:49:45 compute-0 podman[481078]: 2025-10-03 10:49:45.916539723 +0000 UTC m=+0.155779562 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
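The health_status=healthy records above are podman's healthcheck timer running each container's configured 'test' command and journaling the result. The same check can be triggered by hand; a sketch via the podman CLI from Python (container name taken from the log above):

    import subprocess

    # Exit status 0 means the container's healthcheck command passed.
    subprocess.run(["podman", "healthcheck", "run", "ceilometer_agent_compute"],
                   check=False)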
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:49:46
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'images', 'vms', '.mgr', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.meta']
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:49:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:49:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:47 compute-0 podman[481141]: 2025-10-03 10:49:47.880695339 +0000 UTC m=+0.135334456 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 10:49:48 compute-0 nova_compute[351685]: 2025-10-03 10:49:48.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:48 compute-0 nova_compute[351685]: 2025-10-03 10:49:48.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:51 compute-0 podman[481161]: 2025-10-03 10:49:51.85477413 +0000 UTC m=+0.106779459 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct  3 10:49:51 compute-0 podman[481159]: 2025-10-03 10:49:51.896041055 +0000 UTC m=+0.141610518 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:49:51 compute-0 podman[481160]: 2025-10-03 10:49:51.901949314 +0000 UTC m=+0.142581458 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:49:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:53 compute-0 nova_compute[351685]: 2025-10-03 10:49:53.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:53 compute-0 nova_compute[351685]: 2025-10-03 10:49:53.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:49:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3842806191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:49:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:49:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3842806191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
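The audit lines show client.openstack at 192.168.122.10 dispatching the JSON mon commands "df" and "osd pool get-quota" against the volumes pool, the usual capacity probe an OpenStack storage client performs. A sketch of sending the same commands with the python rados bindings, assuming a readable ceph.conf and a keyring for client.openstack (both the path and the client name are assumptions):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        # mon_command returns (return code, output buffer, error string).
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", json.loads(outbuf) if ret == 0 else errs)
    cluster.shutdown()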
Oct  3 10:49:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:49:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
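Each pg_autoscaler line above applies the same arithmetic: pg target = capacity_ratio x bias x N, where N works out to 300 in these logs (0.000551649 x 1.0 x 300 ≈ 0.1655 for 'vms'; 5.0873e-07 x 4.0 x 300 ≈ 0.00061 for 'cephfs.cephfs.meta'), after which the result is quantized to a power of two and only applied if it differs enough from the current pg_num. A sketch of the visible part of that computation; the constant 300 is read off these logs (plausibly mon_target_pg_per_osd times the OSD count) and the quantization rule is simplified:

    import math

    POOL_PG_BUDGET = 300  # inferred from these log lines

    def pg_target(capacity_ratio, bias):
        # "using R of space, bias B, pg target R*B*budget" as logged above.
        return capacity_ratio * bias * POOL_PG_BUDGET

    def quantize(target, floor=1):
        # Simplified: round to a power of two, never below `floor`. The
        # real autoscaler also refuses small changes relative to the
        # current pg_num, which is why near-zero targets stay at 32 here.
        if target <= 0:
            return floor
        return max(floor, 2 ** max(0, round(math.log2(target))))

    print(pg_target(0.000551649390343166, 1.0))   # ~0.16549, 'vms'
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061, 'cephfs.cephfs.meta'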
Oct  3 10:49:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:49:58 compute-0 nova_compute[351685]: 2025-10-03 10:49:58.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:49:58 compute-0 nova_compute[351685]: 2025-10-03 10:49:58.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:49:59 compute-0 podman[157165]: time="2025-10-03T10:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:49:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:49:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9068 "" "Go-http-client/1.1"
Oct  3 10:49:59 compute-0 podman[481217]: 2025-10-03 10:49:59.87470449 +0000 UTC m=+0.123009779 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:49:59 compute-0 podman[481219]: 2025-10-03 10:49:59.884448553 +0000 UTC m=+0.127948978 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Oct  3 10:49:59 compute-0 podman[481218]: 2025-10-03 10:49:59.906129199 +0000 UTC m=+0.126283174 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, container_name=kepler, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Oct  3 10:50:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:01 compute-0 openstack_network_exporter[367524]: ERROR   10:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:50:01 compute-0 openstack_network_exporter[367524]: ERROR   10:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:50:01 compute-0 openstack_network_exporter[367524]: ERROR   10:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:50:01 compute-0 openstack_network_exporter[367524]: ERROR   10:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:50:01 compute-0 openstack_network_exporter[367524]: ERROR   10:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:50:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:03 compute-0 nova_compute[351685]: 2025-10-03 10:50:03.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:03 compute-0 nova_compute[351685]: 2025-10-03 10:50:03.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:08 compute-0 nova_compute[351685]: 2025-10-03 10:50:08.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:08 compute-0 nova_compute[351685]: 2025-10-03 10:50:08.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:11 compute-0 nova_compute[351685]: 2025-10-03 10:50:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:11 compute-0 nova_compute[351685]: 2025-10-03 10:50:11.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:50:11 compute-0 nova_compute[351685]: 2025-10-03 10:50:11.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:50:12 compute-0 nova_compute[351685]: 2025-10-03 10:50:12.149 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:50:12 compute-0 nova_compute[351685]: 2025-10-03 10:50:12.150 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:50:12 compute-0 nova_compute[351685]: 2025-10-03 10:50:12.150 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:50:12 compute-0 nova_compute[351685]: 2025-10-03 10:50:12.150 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:50:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:13 compute-0 nova_compute[351685]: 2025-10-03 10:50:13.011 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:50:13 compute-0 nova_compute[351685]: 2025-10-03 10:50:13.032 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:50:13 compute-0 nova_compute[351685]: 2025-10-03 10:50:13.032 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
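The block above is one pass of nova-compute's _heal_instance_info_cache periodic task: rebuild the instance list, take the refresh_cache-<uuid> lock, refetch the port data from Neutron, write it back into instance_info_cache, release the lock. The reclaim task at 10:50:16 below is declared the same way and simply exits early when CONF.reclaim_instance_interval <= 0. A minimal sketch of how such a task is declared with oslo.service (the spacing value is an assumption):

    from oslo_service import periodic_task

    class ComputeManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Refresh one instance's network info cache from Neutron
            # under a per-instance "refresh_cache-<uuid>" lock, as in
            # the journal lines above.
            pass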
Oct  3 10:50:13 compute-0 nova_compute[351685]: 2025-10-03 10:50:13.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:13 compute-0 nova_compute[351685]: 2025-10-03 10:50:13.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:50:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:50:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:16 compute-0 nova_compute[351685]: 2025-10-03 10:50:16.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:16 compute-0 nova_compute[351685]: 2025-10-03 10:50:16.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:50:16 compute-0 podman[481277]: 2025-10-03 10:50:16.857059747 +0000 UTC m=+0.108695150 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=)
Oct  3 10:50:16 compute-0 podman[481278]: 2025-10-03 10:50:16.883831355 +0000 UTC m=+0.125942273 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Oct  3 10:50:16 compute-0 podman[481279]: 2025-10-03 10:50:16.89800035 +0000 UTC m=+0.134469457 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 10:50:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:18 compute-0 nova_compute[351685]: 2025-10-03 10:50:18.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:18 compute-0 nova_compute[351685]: 2025-10-03 10:50:18.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:18 compute-0 podman[481339]: 2025-10-03 10:50:18.888075129 +0000 UTC m=+0.149173719 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent)
Oct  3 10:50:19 compute-0 nova_compute[351685]: 2025-10-03 10:50:19.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:20 compute-0 nova_compute[351685]: 2025-10-03 10:50:20.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:20 compute-0 nova_compute[351685]: 2025-10-03 10:50:20.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:50:20 compute-0 nova_compute[351685]: 2025-10-03 10:50:20.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:50:20 compute-0 nova_compute[351685]: 2025-10-03 10:50:20.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:50:20 compute-0 nova_compute[351685]: 2025-10-03 10:50:20.760 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:50:20 compute-0 nova_compute[351685]: 2025-10-03 10:50:20.761 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:50:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:50:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1246574995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.271 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.375 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.375 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.376 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.732 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.733 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3811MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.733 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.733 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.821 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.822 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.822 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:50:21 compute-0 nova_compute[351685]: 2025-10-03 10:50:21.861 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:50:22 compute-0 podman[481546]: 2025-10-03 10:50:22.050549181 +0000 UTC m=+0.104923410 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 10:50:22 compute-0 podman[481546]: 2025-10-03 10:50:22.193957863 +0000 UTC m=+0.248332082 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:50:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:50:22 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251885444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:50:22 compute-0 nova_compute[351685]: 2025-10-03 10:50:22.358 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:50:22 compute-0 nova_compute[351685]: 2025-10-03 10:50:22.367 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:50:22 compute-0 nova_compute[351685]: 2025-10-03 10:50:22.431 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:50:22 compute-0 nova_compute[351685]: 2025-10-03 10:50:22.435 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:50:22 compute-0 nova_compute[351685]: 2025-10-03 10:50:22.435 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:50:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:22 compute-0 podman[481599]: 2025-10-03 10:50:22.626621302 +0000 UTC m=+0.099619800 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:50:22 compute-0 podman[481600]: 2025-10-03 10:50:22.637845642 +0000 UTC m=+0.098236915 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  3 10:50:22 compute-0 podman[481598]: 2025-10-03 10:50:22.648212925 +0000 UTC m=+0.115782898 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:50:23 compute-0 nova_compute[351685]: 2025-10-03 10:50:23.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:50:23 compute-0 nova_compute[351685]: 2025-10-03 10:50:23.436 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:50:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:23 compute-0 nova_compute[351685]: 2025-10-03 10:50:23.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:23 compute-0 nova_compute[351685]: 2025-10-03 10:50:23.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:23 compute-0 nova_compute[351685]: 2025-10-03 10:50:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:50:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:50:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:50:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:50:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:50:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 77903776-c08f-4dc1-9838-a74a3c3989fc does not exist
Oct  3 10:50:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1a16ba6f-90a0-47f6-b15c-a9370e8c49fa does not exist
Oct  3 10:50:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 64fd6321-d053-4933-9dfc-703a9b4b2fc8 does not exist
Oct  3 10:50:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:50:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:50:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:50:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:50:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:50:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:50:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:50:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:50:25 compute-0 podman[482042]: 2025-10-03 10:50:25.611562836 +0000 UTC m=+0.063134859 container create 45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:50:25 compute-0 systemd[1]: Started libpod-conmon-45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0.scope.
Oct  3 10:50:25 compute-0 podman[482042]: 2025-10-03 10:50:25.586713348 +0000 UTC m=+0.038285351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:50:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:50:25 compute-0 podman[482042]: 2025-10-03 10:50:25.756220578 +0000 UTC m=+0.207792581 container init 45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:50:25 compute-0 podman[482042]: 2025-10-03 10:50:25.778175553 +0000 UTC m=+0.229747576 container start 45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:50:25 compute-0 podman[482042]: 2025-10-03 10:50:25.785635793 +0000 UTC m=+0.237207796 container attach 45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:50:25 compute-0 naughty_turing[482058]: 167 167
Oct  3 10:50:25 compute-0 systemd[1]: libpod-45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0.scope: Deactivated successfully.
Oct  3 10:50:25 compute-0 conmon[482058]: conmon 45b8d046d4d880b7c26f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0.scope/container/memory.events
Oct  3 10:50:25 compute-0 podman[482042]: 2025-10-03 10:50:25.792338088 +0000 UTC m=+0.243910101 container died 45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:50:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4fb4d37c6f20da321b50999079de4453a708db019637b013343db49a05e1a80-merged.mount: Deactivated successfully.
Oct  3 10:50:25 compute-0 podman[482042]: 2025-10-03 10:50:25.86469067 +0000 UTC m=+0.316262653 container remove 45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 10:50:25 compute-0 systemd[1]: libpod-conmon-45b8d046d4d880b7c26f963fc07753f6995b3bbd84ad52d25e82aec018aa57e0.scope: Deactivated successfully.
Oct  3 10:50:26 compute-0 podman[482081]: 2025-10-03 10:50:26.162696046 +0000 UTC m=+0.110662494 container create f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lovelace, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:50:26 compute-0 podman[482081]: 2025-10-03 10:50:26.127222308 +0000 UTC m=+0.075188816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:50:26 compute-0 systemd[1]: Started libpod-conmon-f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20.scope.
Oct  3 10:50:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:50:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eea1275bbefe1cb51eb74b775563a2dc5f369858c4d41b6783e126bad48077/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eea1275bbefe1cb51eb74b775563a2dc5f369858c4d41b6783e126bad48077/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eea1275bbefe1cb51eb74b775563a2dc5f369858c4d41b6783e126bad48077/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eea1275bbefe1cb51eb74b775563a2dc5f369858c4d41b6783e126bad48077/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eea1275bbefe1cb51eb74b775563a2dc5f369858c4d41b6783e126bad48077/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:26 compute-0 podman[482081]: 2025-10-03 10:50:26.352480057 +0000 UTC m=+0.300446515 container init f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:50:26 compute-0 podman[482081]: 2025-10-03 10:50:26.387099929 +0000 UTC m=+0.335066377 container start f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lovelace, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:50:26 compute-0 podman[482081]: 2025-10-03 10:50:26.393837595 +0000 UTC m=+0.341804043 container attach f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lovelace, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:50:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:26 compute-0 nova_compute[351685]: 2025-10-03 10:50:26.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:26 compute-0 nova_compute[351685]: 2025-10-03 10:50:26.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:26 compute-0 nova_compute[351685]: 2025-10-03 10:50:26.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:50:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:27 compute-0 suspicious_lovelace[482097]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:50:27 compute-0 suspicious_lovelace[482097]: --> relative data size: 1.0
Oct  3 10:50:27 compute-0 suspicious_lovelace[482097]: --> All data devices are unavailable
Oct  3 10:50:27 compute-0 systemd[1]: libpod-f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20.scope: Deactivated successfully.
Oct  3 10:50:27 compute-0 systemd[1]: libpod-f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20.scope: Consumed 1.243s CPU time.
Oct  3 10:50:27 compute-0 podman[482126]: 2025-10-03 10:50:27.773213921 +0000 UTC m=+0.049739818 container died f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:50:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-36eea1275bbefe1cb51eb74b775563a2dc5f369858c4d41b6783e126bad48077-merged.mount: Deactivated successfully.
Oct  3 10:50:27 compute-0 podman[482126]: 2025-10-03 10:50:27.842800165 +0000 UTC m=+0.119326032 container remove f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:50:27 compute-0 systemd[1]: libpod-conmon-f5e7cca327d0e40616cef81c9a4d326a8968c8bca4947f6d9a4ef736b718ce20.scope: Deactivated successfully.
Oct  3 10:50:28 compute-0 nova_compute[351685]: 2025-10-03 10:50:28.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:28 compute-0 nova_compute[351685]: 2025-10-03 10:50:28.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:50:28 compute-0 podman[482279]: 2025-10-03 10:50:28.874598714 +0000 UTC m=+0.041218084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:50:28 compute-0 podman[482279]: 2025-10-03 10:50:28.991718993 +0000 UTC m=+0.158338283 container create a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ellis, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 10:50:29 compute-0 systemd[1]: Started libpod-conmon-a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77.scope.
Oct  3 10:50:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:50:29 compute-0 podman[482279]: 2025-10-03 10:50:29.195971939 +0000 UTC m=+0.362591319 container init a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ellis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:50:29 compute-0 podman[482279]: 2025-10-03 10:50:29.209536666 +0000 UTC m=+0.376155976 container start a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ellis, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 10:50:29 compute-0 podman[482279]: 2025-10-03 10:50:29.216112726 +0000 UTC m=+0.382732046 container attach a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 10:50:29 compute-0 funny_ellis[482295]: 167 167
Oct  3 10:50:29 compute-0 systemd[1]: libpod-a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77.scope: Deactivated successfully.
Oct  3 10:50:29 compute-0 conmon[482295]: conmon a5da4a3f7deabb29bce2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77.scope/container/memory.events
Oct  3 10:50:29 compute-0 podman[482279]: 2025-10-03 10:50:29.22775673 +0000 UTC m=+0.394376100 container died a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66f96a375624f6fe6852f7f44edf7245c695aeb8b83a388af3038c28f65921f-merged.mount: Deactivated successfully.
Oct  3 10:50:29 compute-0 podman[482279]: 2025-10-03 10:50:29.304305887 +0000 UTC m=+0.470925207 container remove a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 10:50:29 compute-0 systemd[1]: libpod-conmon-a5da4a3f7deabb29bce2151defa9fd5d37a2603ccef5fdcb6c53d4047f646c77.scope: Deactivated successfully.
Oct  3 10:50:29 compute-0 podman[482317]: 2025-10-03 10:50:29.564785958 +0000 UTC m=+0.067403364 container create 5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 10:50:29 compute-0 systemd[1]: Started libpod-conmon-5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b.scope.
Oct  3 10:50:29 compute-0 podman[482317]: 2025-10-03 10:50:29.544878599 +0000 UTC m=+0.047495995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:50:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63efe68344020cc94654c17e8e7fd50056b6e88abb075e3697fdb644daf89985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63efe68344020cc94654c17e8e7fd50056b6e88abb075e3697fdb644daf89985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63efe68344020cc94654c17e8e7fd50056b6e88abb075e3697fdb644daf89985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63efe68344020cc94654c17e8e7fd50056b6e88abb075e3697fdb644daf89985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:29 compute-0 podman[482317]: 2025-10-03 10:50:29.72250706 +0000 UTC m=+0.225124516 container init 5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:50:29 compute-0 podman[157165]: time="2025-10-03T10:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:50:29 compute-0 podman[482317]: 2025-10-03 10:50:29.754206338 +0000 UTC m=+0.256823744 container start 5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:50:29 compute-0 podman[482317]: 2025-10-03 10:50:29.771754722 +0000 UTC m=+0.274372108 container attach 5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:50:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47846 "" "Go-http-client/1.1"
Oct  3 10:50:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9500 "" "Go-http-client/1.1"
Oct  3 10:50:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]: {
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:    "0": [
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:        {
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "devices": [
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "/dev/loop3"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            ],
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_name": "ceph_lv0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_size": "21470642176",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "name": "ceph_lv0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "tags": {
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cluster_name": "ceph",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.crush_device_class": "",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.encrypted": "0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osd_id": "0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.type": "block",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.vdo": "0"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            },
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "type": "block",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "vg_name": "ceph_vg0"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:        }
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:    ],
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:    "1": [
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:        {
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "devices": [
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "/dev/loop4"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            ],
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_name": "ceph_lv1",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_size": "21470642176",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "name": "ceph_lv1",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "tags": {
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cluster_name": "ceph",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.crush_device_class": "",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.encrypted": "0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osd_id": "1",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.type": "block",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.vdo": "0"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            },
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "type": "block",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "vg_name": "ceph_vg1"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:        }
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:    ],
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:    "2": [
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:        {
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "devices": [
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "/dev/loop5"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            ],
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_name": "ceph_lv2",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_size": "21470642176",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "name": "ceph_lv2",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "tags": {
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.cluster_name": "ceph",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.crush_device_class": "",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.encrypted": "0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osd_id": "2",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.type": "block",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:                "ceph.vdo": "0"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            },
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "type": "block",
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:            "vg_name": "ceph_vg2"
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:        }
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]:    ]
Oct  3 10:50:30 compute-0 nervous_hofstadter[482332]: }
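The JSON block nervous_hofstadter just printed is a ceph-volume LVM inventory: one entry per OSD id (0-2), each a single logical volume on a loop device, tagged with the cluster fsid 9b4e8c9a-... and its own OSD fsid. The shape matches `ceph-volume lvm list --format json` run by cephadm inside the short-lived container, though the exact argv is not logged. A sketch that reduces the payload to an osd_id -> device/fsid map, assuming the block has been captured to a file:

    import json

    # hypothetical capture of the JSON block printed above
    with open("ceph-volume-lvm-list.json") as f:
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"][0],
                  lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 25b10821-47d4-4e0b-9b6d-d16a0463c4d0
    # 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4 16cef594-0067-4499-9298-5d83edf70190
    # 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5 19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0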
Oct  3 10:50:30 compute-0 systemd[1]: libpod-5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b.scope: Deactivated successfully.
Oct  3 10:50:30 compute-0 podman[482317]: 2025-10-03 10:50:30.630181565 +0000 UTC m=+1.132798981 container died 5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  3 10:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-63efe68344020cc94654c17e8e7fd50056b6e88abb075e3697fdb644daf89985-merged.mount: Deactivated successfully.
Oct  3 10:50:30 compute-0 podman[482317]: 2025-10-03 10:50:30.732637274 +0000 UTC m=+1.235254640 container remove 5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:50:30 compute-0 systemd[1]: libpod-conmon-5c4afbdbaba7c4e56bec8dd35f61848dbdf17126b76536eefa3e775addc3202b.scope: Deactivated successfully.
Oct  3 10:50:30 compute-0 podman[482342]: 2025-10-03 10:50:30.786171462 +0000 UTC m=+0.120475738 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:50:30 compute-0 podman[482350]: 2025-10-03 10:50:30.792038151 +0000 UTC m=+0.117840594 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:50:30 compute-0 podman[482348]: 2025-10-03 10:50:30.811992681 +0000 UTC m=+0.126557063 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0)
Oct  3 10:50:31 compute-0 openstack_network_exporter[367524]: ERROR   10:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:50:31 compute-0 openstack_network_exporter[367524]: ERROR   10:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:50:31 compute-0 openstack_network_exporter[367524]: ERROR   10:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:50:31 compute-0 openstack_network_exporter[367524]: ERROR   10:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:50:31 compute-0 openstack_network_exporter[367524]: ERROR   10:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
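openstack_network_exporter probes OVS and OVN through their appctl-style control sockets, and every probe fails on this node: it sees no ovsdb-server control socket, ovn-northd normally runs on controller nodes rather than computes, and the dpif-netdev queries need a userspace (netdev) datapath that this host does not have. A quick existence check for the kind of socket files it looks for; the glob patterns are typical defaults, not paths read from this host:

    import glob

    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "not found")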
Oct  3 10:50:31 compute-0 podman[482545]: 2025-10-03 10:50:31.731159565 +0000 UTC m=+0.091077585 container create cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:50:31 compute-0 podman[482545]: 2025-10-03 10:50:31.697549216 +0000 UTC m=+0.057467286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:50:31 compute-0 systemd[1]: Started libpod-conmon-cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18.scope.
Oct  3 10:50:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:50:31 compute-0 podman[482545]: 2025-10-03 10:50:31.878963709 +0000 UTC m=+0.238881799 container init cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:50:31 compute-0 podman[482545]: 2025-10-03 10:50:31.890571262 +0000 UTC m=+0.250489292 container start cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swartz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:50:31 compute-0 podman[482545]: 2025-10-03 10:50:31.897449012 +0000 UTC m=+0.257367102 container attach cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:50:31 compute-0 lucid_swartz[482561]: 167 167
Oct  3 10:50:31 compute-0 systemd[1]: libpod-cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18.scope: Deactivated successfully.
Oct  3 10:50:31 compute-0 podman[482545]: 2025-10-03 10:50:31.898803176 +0000 UTC m=+0.258721176 container died cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 10:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b722a067dd020e621f2723ac5ce2abd297e26cb5f9544154e38d57f31fea424f-merged.mount: Deactivated successfully.
Oct  3 10:50:31 compute-0 podman[482545]: 2025-10-03 10:50:31.973795923 +0000 UTC m=+0.333713923 container remove cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_swartz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  3 10:50:31 compute-0 systemd[1]: libpod-conmon-cc9c756dc40b4841856d40f6aa05af2c61c022ccc9d6a7801d543b46bdaeaa18.scope: Deactivated successfully.
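lucid_swartz runs for well under a second and prints only "167 167"; 167:167 is the uid:gid of the ceph user and group in Ceph container images, so this reads like cephadm probing ownership of a ceph-owned path in a throwaway container (the probed path and command are not logged). The surrounding create, init, start, attach, died, remove events, with systemd deactivating the libpod and libpod-conmon scopes, are what a single `podman run --rm` amounts to in the journal. A hedged reconstruction of such a probe; the image digest is the one from the log, the stat target is an assumption:

    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "stat", "-c", "%u %g", "/var/lib/ceph"],   # hypothetical target path
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # "167 167" for a ceph:ceph-owned path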
Oct  3 10:50:32 compute-0 podman[482583]: 2025-10-03 10:50:32.227346992 +0000 UTC m=+0.073995886 container create 841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:50:32 compute-0 podman[482583]: 2025-10-03 10:50:32.198740114 +0000 UTC m=+0.045389088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:50:32 compute-0 systemd[1]: Started libpod-conmon-841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b.scope.
Oct  3 10:50:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf6fce7900a818f70cd081da2597cc9a6868cf62e9d9da0d48374320a3eb9a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf6fce7900a818f70cd081da2597cc9a6868cf62e9d9da0d48374320a3eb9a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf6fce7900a818f70cd081da2597cc9a6868cf62e9d9da0d48374320a3eb9a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cf6fce7900a818f70cd081da2597cc9a6868cf62e9d9da0d48374320a3eb9a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:50:32 compute-0 podman[482583]: 2025-10-03 10:50:32.384150316 +0000 UTC m=+0.230799260 container init 841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:50:32 compute-0 podman[482583]: 2025-10-03 10:50:32.402530505 +0000 UTC m=+0.249179439 container start 841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:50:32 compute-0 podman[482583]: 2025-10-03 10:50:32.409667654 +0000 UTC m=+0.256316608 container attach 841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:50:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:33 compute-0 nova_compute[351685]: 2025-10-03 10:50:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:33 compute-0 determined_thompson[482599]: {
Oct  3 10:50:33 compute-0 determined_thompson[482599]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "osd_id": 1,
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "type": "bluestore"
Oct  3 10:50:33 compute-0 determined_thompson[482599]:    },
Oct  3 10:50:33 compute-0 determined_thompson[482599]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "osd_id": 2,
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "type": "bluestore"
Oct  3 10:50:33 compute-0 determined_thompson[482599]:    },
Oct  3 10:50:33 compute-0 determined_thompson[482599]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "osd_id": 0,
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:50:33 compute-0 determined_thompson[482599]:        "type": "bluestore"
Oct  3 10:50:33 compute-0 determined_thompson[482599]:    }
Oct  3 10:50:33 compute-0 determined_thompson[482599]: }
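determined_thompson's block is a second inventory of the same three OSDs, this time keyed by OSD fsid, with the device given as its device-mapper name (/dev/mapper/ceph_vgN-ceph_lvN is the dm alias of /dev/ceph_vgN/ceph_lvN) and the objectstore type, bluestore. The shape matches `ceph-volume raw list` output, though again the command line itself is not logged. The two listings cross-reference exactly; a consistency check under the same capture-to-file assumption used earlier:

    import json

    by_osd_id   = json.load(open("ceph-volume-lvm-list.json"))   # keyed "0","1","2"
    by_osd_fsid = json.load(open("ceph-volume-raw-list.json"))   # keyed by osd fsid

    for osd_id, lvs in by_osd_id.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        raw  = by_osd_fsid[fsid]
        assert raw["osd_id"] == int(osd_id) and raw["type"] == "bluestore"
        print(osd_id, fsid, raw["device"])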
Oct  3 10:50:33 compute-0 systemd[1]: libpod-841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b.scope: Deactivated successfully.
Oct  3 10:50:33 compute-0 systemd[1]: libpod-841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b.scope: Consumed 1.147s CPU time.
Oct  3 10:50:33 compute-0 podman[482583]: 2025-10-03 10:50:33.553054175 +0000 UTC m=+1.399703149 container died 841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 10:50:33 compute-0 nova_compute[351685]: 2025-10-03 10:50:33.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cf6fce7900a818f70cd081da2597cc9a6868cf62e9d9da0d48374320a3eb9a2-merged.mount: Deactivated successfully.
Oct  3 10:50:33 compute-0 podman[482583]: 2025-10-03 10:50:33.647420144 +0000 UTC m=+1.494069068 container remove 841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:50:33 compute-0 systemd[1]: libpod-conmon-841088a215118befd739f077491e2e815def2a5aa6ba329595888b9e73361f3b.scope: Deactivated successfully.
Oct  3 10:50:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:50:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:50:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d1c0fd1e-ad88-4be0-8901-deabe35246e3 does not exist
Oct  3 10:50:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 505e30c9-5084-474e-9b58-616df6068dc2 does not exist
Oct  3 10:50:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:50:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.046799) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488636046872, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1753, "num_deletes": 251, "total_data_size": 2850756, "memory_usage": 2897408, "flush_reason": "Manual Compaction"}
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488636078829, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2789936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49205, "largest_seqno": 50957, "table_properties": {"data_size": 2781900, "index_size": 4915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16343, "raw_average_key_size": 19, "raw_value_size": 2765854, "raw_average_value_size": 3381, "num_data_blocks": 219, "num_entries": 818, "num_filter_entries": 818, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759488447, "oldest_key_time": 1759488447, "file_creation_time": 1759488636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 32124 microseconds, and 14375 cpu microseconds.
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.078929) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2789936 bytes OK
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.078957) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.081372) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.081394) EVENT_LOG_v1 {"time_micros": 1759488636081386, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.081419) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2843274, prev total WAL file size 2843274, number of live WAL files 2.
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.083695) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2724KB)], [119(6707KB)]
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488636083734, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 9658399, "oldest_snapshot_seqno": -1}
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 6426 keys, 7936063 bytes, temperature: kUnknown
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488636159705, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 7936063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7896732, "index_size": 22133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 167892, "raw_average_key_size": 26, "raw_value_size": 7783716, "raw_average_value_size": 1211, "num_data_blocks": 874, "num_entries": 6426, "num_filter_entries": 6426, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759488636, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.160111) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 7936063 bytes
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.163144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.9 rd, 104.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 6.6 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.8) OK, records in: 6940, records dropped: 514 output_compression: NoCompression
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.163176) EVENT_LOG_v1 {"time_micros": 1759488636163161, "job": 72, "event": "compaction_finished", "compaction_time_micros": 76088, "compaction_time_cpu_micros": 39831, "output_level": 6, "num_output_files": 1, "total_output_size": 7936063, "num_input_records": 6940, "num_output_records": 6426, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
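The amplification figures in that compaction summary can be re-derived from the EVENT_LOG_v1 entries around it: write-amplify is bytes written over the freshly flushed L0 bytes, and read-write-amplify counts every byte read plus every byte written over the same base.

    l0_input    = 2789936   # table #121 "file_size" from the flush event (JOB 71)
    input_total = 9658399   # "input_data_size" from compaction_started (JOB 72)
    output      = 7936063   # "total_output_size" from compaction_finished (JOB 72)

    print(round(output / l0_input, 1))                  # 2.8 -> write-amplify(2.8)
    print(round((input_total + output) / l0_input, 1))  # 6.3 -> read-write-amplify(6.3)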
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488636164510, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488636167403, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.083455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.167651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.167661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.167664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.167668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:50:36 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:50:36.167671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:50:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:38 compute-0 nova_compute[351685]: 2025-10-03 10:50:38.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:38 compute-0 nova_compute[351685]: 2025-10-03 10:50:38.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:50:41.654 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:50:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:50:41.655 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:50:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:50:41.655 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:50:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:43 compute-0 nova_compute[351685]: 2025-10-03 10:50:43.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:43 compute-0 nova_compute[351685]: 2025-10-03 10:50:43.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:50:46
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'volumes']
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:50:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:50:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:47 compute-0 podman[482694]: 2025-10-03 10:50:47.858542353 +0000 UTC m=+0.106722626 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct  3 10:50:47 compute-0 podman[482693]: 2025-10-03 10:50:47.895452748 +0000 UTC m=+0.137354749 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 10:50:47 compute-0 podman[482695]: 2025-10-03 10:50:47.938933014 +0000 UTC m=+0.170564736 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
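Each health_status record above is podman executing the container's configured healthcheck (the 'healthcheck' test/mount pair inside config_data). The same check can be triggered by hand; a sketch using the container names from these lines, where exit status 0 means the check passed:

    import subprocess

    # Re-run the configured healthchecks for two of the containers logged above.
    # `podman healthcheck run` exits 0 when the check reports healthy.
    for name in ["ceilometer_agent_compute", "ovn_controller"]:
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")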
Oct  3 10:50:48 compute-0 nova_compute[351685]: 2025-10-03 10:50:48.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:48 compute-0 nova_compute[351685]: 2025-10-03 10:50:48.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:49 compute-0 podman[482758]: 2025-10-03 10:50:49.867089914 +0000 UTC m=+0.127337809 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct  3 10:50:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:52 compute-0 podman[482775]: 2025-10-03 10:50:52.85774545 +0000 UTC m=+0.103340799 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
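The node_exporter command line above narrows the systemd collector to a handful of units via --collector.systemd.unit-include. A quick check of what that pattern selects (the sample unit names are assumptions, and the exporter's exact regex anchoring may vary by version):

    import re

    # Regex copied from the --collector.systemd.unit-include flag above.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["ovsdb-server.service", "virtqemud.service",
                 "rsyslog.service", "edpm_nova.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # sshd.service is the only non-match in this sample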
Oct  3 10:50:52 compute-0 podman[482776]: 2025-10-03 10:50:52.873409643 +0000 UTC m=+0.111377096 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 10:50:52 compute-0 podman[482777]: 2025-10-03 10:50:52.879378655 +0000 UTC m=+0.111323086 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  3 10:50:53 compute-0 nova_compute[351685]: 2025-10-03 10:50:53.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:53 compute-0 nova_compute[351685]: 2025-10-03 10:50:53.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:50:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3217466728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:50:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:50:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3217466728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
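These audit entries show client.openstack dispatching the "df" and "osd pool get-quota" mon commands. The CLI equivalents look like this (the df invocation is exactly what nova_compute runs at 10:51:21 below; the --id/--conf values are taken from that line):

    import json, subprocess

    # Same mon commands as in the audit log, issued via the ceph CLI.
    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))
    print(sorted(df), quota)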
Oct  3 10:50:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:50:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 50K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1318 writes, 5967 keys, 1318 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s#012Interval WAL: 1318 writes, 1318 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     29.7      2.15              0.24        36    0.060       0      0       0.0       0.0#012  L6      1/0    7.57 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.3     84.9     70.4      3.87              0.99        35    0.111    195K    18K       0.0       0.0#012 Sum      1/0    7.57 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.3     54.6     55.8      6.02              1.24        71    0.085    195K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3     48.3     49.8      1.01              0.26        10    0.101     33K   2561       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     84.9     70.4      3.87              0.99        35    0.111    195K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     29.8      2.14              0.24        35    0.061       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.062, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.33 GB write, 0.07 MB/s write, 0.32 GB read, 0.07 MB/s read, 6.0 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 39.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000458 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2519,38.40 MB,12.6316%) FilterBlock(72,548.36 KB,0.176154%) IndexBlock(72,893.27 KB,0.286951%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
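The #012 sequences in the rocksdb record above are rsyslog's octal escape for embedded newlines; expanding them restores the stats dump in the multi-line layout rocksdb printed. A sketch (the input file is an assumption, i.e. the record saved off to its own file):

    # Expand rsyslog's "#012" newline escapes to read the DUMPING STATS
    # tables in their original multi-line layout.
    with open("rocksdb_stats.log") as fh:   # assumed: the record saved to a file
        print(fh.read().replace("#012", "\n"))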
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:50:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
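The autoscaler numbers above are internally consistent: each logged pg target equals capacity ratio x bias x 300, where the factor 300 is consistent with a 3-OSD cluster times the default mon_target_pg_per_osd of 100 (an assumption worth verifying with `ceph config get mon mon_target_pg_per_osd`). Reproducing two of the logged values:

    # pg_target = capacity_ratio * bias * (num_osds * mon_target_pg_per_osd)
    osds, target_per_osd = 3, 100   # assumed cluster size and default per-OSD target

    for pool, ratio, bias in [
        ("vms",                0.000551649390343166,  1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, ratio * bias * osds * target_per_osd)
    # matches the logged "pg target" values before quantization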
Oct  3 10:50:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:50:58 compute-0 nova_compute[351685]: 2025-10-03 10:50:58.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:50:58 compute-0 nova_compute[351685]: 2025-10-03 10:50:58.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:50:59 compute-0 podman[157165]: time="2025-10-03T10:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:50:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:50:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9082 "" "Go-http-client/1.1"
Oct  3 10:51:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:01 compute-0 openstack_network_exporter[367524]: ERROR   10:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:51:01 compute-0 openstack_network_exporter[367524]: ERROR   10:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:51:01 compute-0 openstack_network_exporter[367524]: ERROR   10:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:51:01 compute-0 openstack_network_exporter[367524]: ERROR   10:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:51:01 compute-0 openstack_network_exporter[367524]: ERROR   10:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:51:01 compute-0 podman[482835]: 2025-10-03 10:51:01.866734388 +0000 UTC m=+0.106332744 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:51:01 compute-0 podman[482836]: 2025-10-03 10:51:01.884171628 +0000 UTC m=+0.129540909 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, managed_by=edpm_ansible, release-0.7.12=, container_name=kepler, io.buildah.version=1.29.0, config_id=edpm, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public)
Oct  3 10:51:01 compute-0 podman[482837]: 2025-10-03 10:51:01.886685549 +0000 UTC m=+0.111253962 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  3 10:51:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:03 compute-0 nova_compute[351685]: 2025-10-03 10:51:03.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:03 compute-0 nova_compute[351685]: 2025-10-03 10:51:03.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:08 compute-0 nova_compute[351685]: 2025-10-03 10:51:08.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:08 compute-0 nova_compute[351685]: 2025-10-03 10:51:08.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:11 compute-0 nova_compute[351685]: 2025-10-03 10:51:11.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:11 compute-0 nova_compute[351685]: 2025-10-03 10:51:11.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:51:11 compute-0 nova_compute[351685]: 2025-10-03 10:51:11.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:51:12 compute-0 nova_compute[351685]: 2025-10-03 10:51:12.183 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:51:12 compute-0 nova_compute[351685]: 2025-10-03 10:51:12.184 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:51:12 compute-0 nova_compute[351685]: 2025-10-03 10:51:12.184 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:51:12 compute-0 nova_compute[351685]: 2025-10-03 10:51:12.184 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:51:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:13 compute-0 nova_compute[351685]: 2025-10-03 10:51:13.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:13 compute-0 nova_compute[351685]: 2025-10-03 10:51:13.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:13 compute-0 nova_compute[351685]: 2025-10-03 10:51:13.669 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:51:13 compute-0 nova_compute[351685]: 2025-10-03 10:51:13.691 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:51:13 compute-0 nova_compute[351685]: 2025-10-03 10:51:13.691 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
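The Acquiring/Acquired/Releasing trio in the heal cycle above is oslo.concurrency's standard lock logging around the per-instance cache refresh. The same pattern in miniature, with the lock name taken from the log:

    from oslo_concurrency import lockutils

    # Same pattern nova-compute logs above: take the per-instance cache lock,
    # do the refresh, release on exit.
    with lockutils.lock("refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a"):
        pass  # refresh the instance's network info cache here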
Oct  3 10:51:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:51:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:51:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:17 compute-0 nova_compute[351685]: 2025-10-03 10:51:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:17 compute-0 nova_compute[351685]: 2025-10-03 10:51:17.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
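The task bails out because reclaim_instance_interval is left at its default of 0; deferred (soft) deletes are only reclaimed when it is set to a positive number of seconds. A nova.conf sketch (600 is an arbitrary example value, not taken from this deployment):

    [DEFAULT]
    # reclaim soft-deleted instances after 10 minutes; 0 (the default)
    # disables the task, producing the "skipping..." debug line above
    reclaim_instance_interval = 600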
Oct  3 10:51:18 compute-0 nova_compute[351685]: 2025-10-03 10:51:18.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:18 compute-0 nova_compute[351685]: 2025-10-03 10:51:18.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:18 compute-0 podman[482897]: 2025-10-03 10:51:18.880137218 +0000 UTC m=+0.124943571 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct  3 10:51:18 compute-0 podman[482896]: 2025-10-03 10:51:18.882295538 +0000 UTC m=+0.129566130 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct  3 10:51:18 compute-0 podman[482898]: 2025-10-03 10:51:18.94280789 +0000 UTC m=+0.171748764 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:51:19 compute-0 nova_compute[351685]: 2025-10-03 10:51:19.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:20 compute-0 podman[482957]: 2025-10-03 10:51:20.86412143 +0000 UTC m=+0.111661646 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 10:51:21 compute-0 nova_compute[351685]: 2025-10-03 10:51:21.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:21 compute-0 nova_compute[351685]: 2025-10-03 10:51:21.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:51:21 compute-0 nova_compute[351685]: 2025-10-03 10:51:21.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:51:21 compute-0 nova_compute[351685]: 2025-10-03 10:51:21.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:51:21 compute-0 nova_compute[351685]: 2025-10-03 10:51:21.765 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:51:21 compute-0 nova_compute[351685]: 2025-10-03 10:51:21.765 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:51:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:51:22 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550273512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:51:22 compute-0 nova_compute[351685]: 2025-10-03 10:51:22.311 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:51:22 compute-0 nova_compute[351685]: 2025-10-03 10:51:22.420 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:51:22 compute-0 nova_compute[351685]: 2025-10-03 10:51:22.421 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:51:22 compute-0 nova_compute[351685]: 2025-10-03 10:51:22.422 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:51:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.054 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.057 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3796MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.058 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.059 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.192 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.193 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.194 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.231 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:51:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974517350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.697 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
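
Nova obtained the disk stats above by shelling out through oslo.concurrency's processutils rather than talking to librados directly. A standard-library sketch reproducing the same call, assuming the host has the /etc/ceph/ceph.conf and client.openstack keyring that nova uses:

```python
import json
import subprocess

# Same command the resource tracker ran above (returned 0 in 0.465s).
result = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
)
stats = json.loads(result.stdout)
# Cluster-wide totals live under the "stats" key of ceph df JSON output.
print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
```
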
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.709 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.725 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
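
Placement computes the capacity it schedules against as (total - reserved) * allocation_ratio per resource class. Plugging in the inventory figures from the line above:

```python
# Inventory as reported above for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```
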
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.728 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.728 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
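
The acquire/release pair around the update above (held 0.670s) is oslo.concurrency's lockutils at work; any code that takes the same lock name serializes with the resource tracker. A minimal sketch of both spellings of that API:

```python
from oslo_concurrency import lockutils

# Decorator form: the whole function runs under the "compute_resources" lock.
@lockutils.synchronized("compute_resources")
def update_available_resource():
    ...  # mutate shared resource-tracker state

# Context-manager form, equivalent to the acquire/release pair in the log.
with lockutils.lock("compute_resources"):
    pass  # critical section
```
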
Oct  3 10:51:23 compute-0 nova_compute[351685]: 2025-10-03 10:51:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:23 compute-0 podman[483018]: 2025-10-03 10:51:23.860624883 +0000 UTC m=+0.103200244 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:51:23 compute-0 podman[483019]: 2025-10-03 10:51:23.888804247 +0000 UTC m=+0.137824545 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:51:23 compute-0 podman[483020]: 2025-10-03 10:51:23.901210846 +0000 UTC m=+0.132860426 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:51:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:25 compute-0 nova_compute[351685]: 2025-10-03 10:51:25.740 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:25 compute-0 nova_compute[351685]: 2025-10-03 10:51:25.741 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:28 compute-0 nova_compute[351685]: 2025-10-03 10:51:28.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:28 compute-0 nova_compute[351685]: 2025-10-03 10:51:28.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:28 compute-0 nova_compute[351685]: 2025-10-03 10:51:28.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:28 compute-0 nova_compute[351685]: 2025-10-03 10:51:28.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:28 compute-0 nova_compute[351685]: 2025-10-03 10:51:28.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:29 compute-0 podman[157165]: time="2025-10-03T10:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:51:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:51:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9083 "" "Go-http-client/1.1"
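
The two GET lines above are the podman system service answering libpod REST calls over its unix socket (a metrics collector polling container state). The same endpoint can be queried with only the standard library; the socket path below is the default for a root podman service and may differ on other setups:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that connects to a unix socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

# Same endpoint as the first GET above.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```
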
Oct  3 10:51:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:31 compute-0 openstack_network_exporter[367524]: ERROR   10:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:51:31 compute-0 openstack_network_exporter[367524]: ERROR   10:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:51:31 compute-0 openstack_network_exporter[367524]: ERROR   10:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:51:31 compute-0 openstack_network_exporter[367524]: ERROR   10:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:51:31 compute-0 openstack_network_exporter[367524]: ERROR   10:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
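
The exporter errors above mean it found no control sockets for ovn-northd or ovsdb-server, which is expected on a compute node that runs neither daemon. A quick check for the sockets, using the usual default paths (an assumption; the exporter's search paths may be configured differently):

```python
import glob

# Empty results here reproduce the "no control socket files found" errors.
for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                "/var/run/openvswitch/ovsdb-server.*.ctl"):
    print(pattern, "->", glob.glob(pattern) or "no control socket")
```
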
Oct  3 10:51:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:32 compute-0 podman[483078]: 2025-10-03 10:51:32.849792537 +0000 UTC m=+0.101570211 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:51:32 compute-0 podman[483076]: 2025-10-03 10:51:32.85642599 +0000 UTC m=+0.103619567 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:51:32 compute-0 podman[483077]: 2025-10-03 10:51:32.886496075 +0000 UTC m=+0.129621951 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 10:51:33 compute-0 nova_compute[351685]: 2025-10-03 10:51:33.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:33 compute-0 nova_compute[351685]: 2025-10-03 10:51:33.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:51:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:51:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:51:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:51:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:51:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:51:35 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b8ca2ba6-e661-4e4f-9ce6-658624af0904 does not exist
Oct  3 10:51:35 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 46e36e79-8a2e-4fee-933a-7cf5118c24a3 does not exist
Oct  3 10:51:35 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8108eca2-5e00-4216-a2c2-03231c13a3bf does not exist
Oct  3 10:51:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:51:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:51:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:51:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:51:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:51:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
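
The audit lines above show the cephadm mgr module dispatching mon commands such as "config generate-minimal-conf" and "auth get". The minimal conf those messages refer to can be produced from the CLI as well, assuming admin credentials are available under /etc/ceph:

```python
import subprocess

# Same mon command the mgr dispatched above, issued via the ceph CLI.
minimal_conf = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    capture_output=True, text=True, check=True,
).stdout
print(minimal_conf)  # [global] section with fsid and mon_host
```
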
Oct  3 10:51:36 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:51:36 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:51:36 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:51:36 compute-0 podman[483406]: 2025-10-03 10:51:36.290908822 +0000 UTC m=+0.079232783 container create a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:51:36 compute-0 podman[483406]: 2025-10-03 10:51:36.258218173 +0000 UTC m=+0.046542204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:51:36 compute-0 systemd[1]: Started libpod-conmon-a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f.scope.
Oct  3 10:51:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:51:36 compute-0 podman[483406]: 2025-10-03 10:51:36.444876185 +0000 UTC m=+0.233200236 container init a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:51:36 compute-0 podman[483406]: 2025-10-03 10:51:36.461682634 +0000 UTC m=+0.250006625 container start a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:51:36 compute-0 podman[483406]: 2025-10-03 10:51:36.468653468 +0000 UTC m=+0.256977459 container attach a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct  3 10:51:36 compute-0 distracted_wilson[483421]: 167 167
Oct  3 10:51:36 compute-0 systemd[1]: libpod-a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f.scope: Deactivated successfully.
Oct  3 10:51:36 compute-0 podman[483406]: 2025-10-03 10:51:36.47462114 +0000 UTC m=+0.262945131 container died a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:51:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-685660b0c5d31e9db2e913a01b8a8e914a47f47d39421665e8c672d99ce21551-merged.mount: Deactivated successfully.
Oct  3 10:51:36 compute-0 podman[483406]: 2025-10-03 10:51:36.653026226 +0000 UTC m=+0.441350197 container remove a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:51:36 compute-0 systemd[1]: libpod-conmon-a08b46150dd879662852a523e0bf4025d9802c4d574ba2acbf89a27763f8403f.scope: Deactivated successfully.
Oct  3 10:51:36 compute-0 nova_compute[351685]: 2025-10-03 10:51:36.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:36 compute-0 nova_compute[351685]: 2025-10-03 10:51:36.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  3 10:51:36 compute-0 nova_compute[351685]: 2025-10-03 10:51:36.772 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  3 10:51:36 compute-0 podman[483445]: 2025-10-03 10:51:36.867950365 +0000 UTC m=+0.052003300 container create bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:51:36 compute-0 systemd[1]: Started libpod-conmon-bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd.scope.
Oct  3 10:51:36 compute-0 podman[483445]: 2025-10-03 10:51:36.850496184 +0000 UTC m=+0.034549139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:51:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8271e935667ad1e3e2aaadffa5b00a38fcab38bfa5a07e96b2c6d66f954242e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8271e935667ad1e3e2aaadffa5b00a38fcab38bfa5a07e96b2c6d66f954242e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8271e935667ad1e3e2aaadffa5b00a38fcab38bfa5a07e96b2c6d66f954242e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8271e935667ad1e3e2aaadffa5b00a38fcab38bfa5a07e96b2c6d66f954242e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8271e935667ad1e3e2aaadffa5b00a38fcab38bfa5a07e96b2c6d66f954242e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
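
The kernel notes that each xfs mount in the container's overlay supports timestamps only until 0x7fffffff seconds after the epoch, i.e. the classic signed 32-bit cutoff:

```python
from datetime import datetime, timezone

# 0x7fffffff is the limit printed in the kernel lines above.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```
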
Oct  3 10:51:37 compute-0 podman[483445]: 2025-10-03 10:51:37.011368939 +0000 UTC m=+0.195421954 container init bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:51:37 compute-0 podman[483445]: 2025-10-03 10:51:37.039921155 +0000 UTC m=+0.223974120 container start bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:51:37 compute-0 podman[483445]: 2025-10-03 10:51:37.047035813 +0000 UTC m=+0.231088848 container attach bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:51:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:38 compute-0 intelligent_cohen[483461]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:51:38 compute-0 intelligent_cohen[483461]: --> relative data size: 1.0
Oct  3 10:51:38 compute-0 intelligent_cohen[483461]: --> All data devices are unavailable
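
The container output above is ceph-volume concluding that none of the candidate devices can take a new OSD ("0 physical, 3 LVM", all unavailable). Per-device rejection reasons can be listed with ceph-volume's inventory command; a sketch, assuming it is run as root inside the ceph container where ceph-volume lives:

```python
import json
import subprocess

devices = json.loads(subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout)
for dev in devices:
    # "available" is False for devices like the three LVM ones above;
    # "rejected_reasons" says why (e.g. already in use by an OSD).
    print(dev["path"], dev["available"], dev.get("rejected_reasons", []))
```
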
Oct  3 10:51:38 compute-0 systemd[1]: libpod-bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd.scope: Deactivated successfully.
Oct  3 10:51:38 compute-0 podman[483445]: 2025-10-03 10:51:38.264362017 +0000 UTC m=+1.448414992 container died bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 10:51:38 compute-0 systemd[1]: libpod-bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd.scope: Consumed 1.152s CPU time.
Oct  3 10:51:38 compute-0 nova_compute[351685]: 2025-10-03 10:51:38.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8271e935667ad1e3e2aaadffa5b00a38fcab38bfa5a07e96b2c6d66f954242e-merged.mount: Deactivated successfully.
Oct  3 10:51:38 compute-0 podman[483445]: 2025-10-03 10:51:38.367922311 +0000 UTC m=+1.551975256 container remove bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:51:38 compute-0 systemd[1]: libpod-conmon-bd9f2fe30ac27c29c06c552fb73751beb682033a8b0a5274d378cf69c3e432cd.scope: Deactivated successfully.
Oct  3 10:51:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:38 compute-0 nova_compute[351685]: 2025-10-03 10:51:38.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:39 compute-0 podman[483644]: 2025-10-03 10:51:39.555938425 +0000 UTC m=+0.098087809 container create 3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:51:39 compute-0 podman[483644]: 2025-10-03 10:51:39.521026184 +0000 UTC m=+0.063175648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:51:39 compute-0 systemd[1]: Started libpod-conmon-3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74.scope.
Oct  3 10:51:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:51:39 compute-0 podman[483644]: 2025-10-03 10:51:39.69345937 +0000 UTC m=+0.235608824 container init 3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:51:39 compute-0 podman[483644]: 2025-10-03 10:51:39.709160513 +0000 UTC m=+0.251309917 container start 3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:51:39 compute-0 podman[483644]: 2025-10-03 10:51:39.71778883 +0000 UTC m=+0.259938234 container attach 3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:51:39 compute-0 elegant_pare[483660]: 167 167
Oct  3 10:51:39 compute-0 systemd[1]: libpod-3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74.scope: Deactivated successfully.
Oct  3 10:51:39 compute-0 podman[483644]: 2025-10-03 10:51:39.723339548 +0000 UTC m=+0.265488972 container died 3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pare, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e69097d30b528efc547e0abd4ec241678cd2d910d74d71636ecaf902e62ab4a-merged.mount: Deactivated successfully.
Oct  3 10:51:39 compute-0 podman[483644]: 2025-10-03 10:51:39.810534067 +0000 UTC m=+0.352683471 container remove 3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:51:39 compute-0 systemd[1]: libpod-conmon-3c46b2648e284e0ce93cb7a91020c6372e4df1c41e01d91f0a85590e3ea47d74.scope: Deactivated successfully.
Oct  3 10:51:40 compute-0 podman[483684]: 2025-10-03 10:51:40.074609174 +0000 UTC m=+0.069558963 container create 60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_margulis, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:51:40 compute-0 podman[483684]: 2025-10-03 10:51:40.047864825 +0000 UTC m=+0.042814594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:51:40 compute-0 systemd[1]: Started libpod-conmon-60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8.scope.
Oct  3 10:51:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d6b818f58ac304a45b3f96cbb6b0afbee662761f975ccf2f6c6839ab0408e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d6b818f58ac304a45b3f96cbb6b0afbee662761f975ccf2f6c6839ab0408e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d6b818f58ac304a45b3f96cbb6b0afbee662761f975ccf2f6c6839ab0408e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d6b818f58ac304a45b3f96cbb6b0afbee662761f975ccf2f6c6839ab0408e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:40 compute-0 podman[483684]: 2025-10-03 10:51:40.230111165 +0000 UTC m=+0.225061004 container init 60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:51:40 compute-0 podman[483684]: 2025-10-03 10:51:40.256702029 +0000 UTC m=+0.251651818 container start 60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:51:40 compute-0 podman[483684]: 2025-10-03 10:51:40.264169889 +0000 UTC m=+0.259119688 container attach 60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:51:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.896 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.897 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.906 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
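The discovery line above carries the full per-instance payload emitted by discover_libvirt_polling. A hedged sketch of consuming that payload, assuming only the field names visible in the log (summarize_instance is a hypothetical helper, not ceilometer's API):

    # Reduce a discovery payload to the fields the disk pollsters need.
    def summarize_instance(instance):
        # Only running instances are worth polling.
        if instance.get('OS-EXT-STS:vm_state') != 'running':
            return None
        flavor = instance['flavor']
        return {
            'uuid': instance['id'],                                     # b43db93c-...
            'libvirt_name': instance['OS-EXT-SRV-ATTR:instance_name'],  # instance-00000001
            'root_gb': flavor['disk'],                                  # 1
            'ephemeral_gb': flavor['ephemeral'],                        # 1
            'ram_mb': flavor['ram'],                                    # 512
        }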
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.907 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:51:40.907810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.915 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
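Note the thread hand-off in this first polling unit: thread 14 emits the "heartbeat update" line and thread 12 later logs "Updated heartbeat" with a timestamp. A minimal sketch of that producer/consumer shape, inferred from the log ordering rather than from ceilometer's source:

    # Pollster threads enqueue a meter name; a status thread timestamps it.
    import datetime
    import queue
    import threading

    heartbeats = {}
    updates = queue.Queue()

    def heartbeat(meter_name):
        updates.put(meter_name)      # called from the pollster thread (14 above)

    def status_worker():
        while True:                  # runs in its own thread (12 above)
            name = updates.get()
            heartbeats[name] = datetime.datetime.now(datetime.timezone.utc).isoformat()
            updates.task_done()

    threading.Thread(target=status_worker, daemon=True).start()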
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.918 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
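Each polling unit also checks whether its source names a coordination group; with group [None], as here, the agent keeps every locally discovered resource. When a group is configured, agents split resources over a hash ring (ceilometer relies on the tooz library for this). A generic consistent-hash membership check, shown only to illustrate the idea and not tooz's API, looks roughly like:

    # Illustrative hash-ring membership test, not tooz's actual interface.
    import hashlib

    def owns_resource(resource_id, members, me):
        # Deterministically assign each resource to exactly one member.
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return sorted(members)[digest % len(members)] == me

    # e.g. owns_resource('b43db93c-a4fe-46e9-8418-eedf4f5c135a',
    #                    ['agent-a', 'agent-b'], 'agent-a')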
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:51:40.917795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.919 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.920 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:51:40.920230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.950 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.951 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.952 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
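The three capacity samples line up with the m1.small flavor from the discovery payload: two devices of exactly 1 GiB (the 1 GB root disk and the 1 GB ephemeral disk) plus a small third device of 485376 bytes, exactly 474 KiB, whose role is not identified in the log. The arithmetic:

    GiB = 1024 ** 3
    assert 1073741824 == 1 * GiB     # root disk: flavor disk=1
    assert 1073741824 == 1 * GiB     # ephemeral disk: flavor ephemeral=1
    assert 485376 == 474 * 1024      # small third device, 474 KiB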
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.954 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:40.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:51:40.954980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
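disk.device.read.bytes is a cumulative counter, so a byte rate is derived from two successive samples divided by the interval between them. A sketch of that derivation (illustrative helper, not ceilometer code):

    def byte_rate(prev_value, prev_ts, cur_value, cur_ts):
        # e.g. byte_rate(23000000, t, 23308800, t + 300) -> ~1029 B/s
        dt = cur_ts - prev_ts
        return (cur_value - prev_value) / dt if dt > 0 else 0.0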
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:51:41.017726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:51:41.019485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
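The read.latency counters above are cumulative nanoseconds, and they arrive in the same per-device order as the read.requests counters here (the matching sample counts suggest identical device enumeration), so dividing pairwise gives the mean latency per read request. Assuming that ordering holds:

    # (cumulative ns, cumulative requests) per device, taken from the log:
    for ns, reqs in [(1351272306, 840), (240576853, 173), (113683071, 109)]:
        print(round(ns / reqs / 1e6, 2), 'ms mean per read')
    # -> 1.61 ms, 1.39 ms, 1.04 ms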
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:51:41.021110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:51:41.022583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:51:41.024021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.025 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:51:41.025364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 gifted_margulis[483701]: {
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:    "0": [
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:        {
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "devices": [
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "/dev/loop3"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            ],
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_name": "ceph_lv0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_size": "21470642176",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "name": "ceph_lv0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "tags": {
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cluster_name": "ceph",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.crush_device_class": "",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.encrypted": "0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osd_id": "0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.type": "block",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.vdo": "0"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            },
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "type": "block",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "vg_name": "ceph_vg0"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:        }
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:    ],
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:    "1": [
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:        {
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "devices": [
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "/dev/loop4"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            ],
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_name": "ceph_lv1",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_size": "21470642176",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "name": "ceph_lv1",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "tags": {
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cluster_name": "ceph",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.crush_device_class": "",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.encrypted": "0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osd_id": "1",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.type": "block",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.vdo": "0"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            },
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "type": "block",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "vg_name": "ceph_vg1"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:        }
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:    ],
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:    "2": [
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:        {
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "devices": [
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "/dev/loop5"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            ],
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_name": "ceph_lv2",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_size": "21470642176",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "name": "ceph_lv2",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "tags": {
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.cluster_name": "ceph",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.crush_device_class": "",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.encrypted": "0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osd_id": "2",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.type": "block",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:                "ceph.vdo": "0"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            },
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "type": "block",
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:            "vg_name": "ceph_vg2"
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:        }
Oct  3 10:51:41 compute-0 gifted_margulis[483701]:    ]
Oct  3 10:51:41 compute-0 gifted_margulis[483701]: }
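The interleaved gifted_margulis block (an auto-named container) is a JSON inventory in the shape of ceph-volume lvm list --format json output: top-level keys are OSD ids, each holding a list of LV records, and each lv_size of 21470642176 bytes is just under 20 GiB. A sketch of reducing it to an OSD-to-device map, assuming only the shape shown above:

    # Map OSD id -> block device info from the parsed report above.
    import json

    def osd_device_map(report):
        return {
            osd_id: {
                'lv_path': lv['lv_path'],
                'devices': lv['devices'],
                'osd_fsid': lv['tags']['ceph.osd_fsid'],
            }
            for osd_id, lvs in report.items()
            for lv in lvs
            if lv['type'] == 'block'
        }

    # e.g. osd_device_map(json.loads(raw))['0']['devices'] == ['/dev/loop3']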
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
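The power.state volume of 1 matches libvirt's virDomainState numbering, where 1 is VIR_DOMAIN_RUNNING, consistent with the instance's 'running' vm_state in the discovery payload above. For reference:

    # libvirt virDomainState values (the sample volume above is 1).
    LIBVIRT_POWER_STATE = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    assert LIBVIRT_POWER_STATE[1] == 'running'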
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.049 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.049 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:51:41.049535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.051 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.051 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:51:41.051273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:51:41.053845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.054 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:51:41.055076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:51:41.056182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.057 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:51:41.057207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.057 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:51:41.058123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 74510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:51:41.059099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:51:41.060333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:51:41.061327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:51:41.062397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:51:41.063519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.064 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:51:41.064675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.065 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:51:41.065741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:51:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:51:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
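The ceilometer burst above is one complete polling cycle: for each meter the agent runs discovery, checks whether coordination is needed, updates a heartbeat, and emits a sample line of the form <instance-uuid>/<meter> volume: <value>. A minimal Python sketch for pulling those samples out of a journal export, assuming only the _stats_to_sample line format shown above (this is log scraping, not a ceilometer API):

    import re

    # Matches the DEBUG lines emitted by ceilometer.compute.pollsters above,
    # e.g. "b43db93c-.../memory.usage volume: 48.81640625 _stats_to_sample".
    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>\S+) volume: "
        r"(?P<volume>\S+) _stats_to_sample"
    )

    def iter_samples(lines):
        """Yield (instance_uuid, meter, volume) for every sample line."""
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                yield m["instance"], m["meter"], float(m["volume"])

    # Usage: with open("/var/log/messages") as f: print(list(iter_samples(f)))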
Oct  3 10:51:41 compute-0 systemd[1]: libpod-60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8.scope: Deactivated successfully.
Oct  3 10:51:41 compute-0 podman[483711]: 2025-10-03 10:51:41.120470404 +0000 UTC m=+0.029369063 container died 60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_margulis, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d6b818f58ac304a45b3f96cbb6b0afbee662761f975ccf2f6c6839ab0408e0-merged.mount: Deactivated successfully.
Oct  3 10:51:41 compute-0 podman[483711]: 2025-10-03 10:51:41.221810257 +0000 UTC m=+0.130708906 container remove 60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:51:41 compute-0 systemd[1]: libpod-conmon-60ac304b24aa3fade895c6cb0629032aff445b3416c8662d0fb37e2afe939ad8.scope: Deactivated successfully.
Oct  3 10:51:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:51:41.655 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:51:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:51:41.656 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:51:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:51:41.658 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:51:42 compute-0 podman[483862]: 2025-10-03 10:51:42.263634558 +0000 UTC m=+0.100528897 container create c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jennings, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:51:42 compute-0 podman[483862]: 2025-10-03 10:51:42.214543492 +0000 UTC m=+0.051437851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:51:42 compute-0 systemd[1]: Started libpod-conmon-c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0.scope.
Oct  3 10:51:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:51:42 compute-0 podman[483862]: 2025-10-03 10:51:42.407785065 +0000 UTC m=+0.244679414 container init c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:51:42 compute-0 podman[483862]: 2025-10-03 10:51:42.424600865 +0000 UTC m=+0.261495164 container start c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jennings, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:51:42 compute-0 podman[483862]: 2025-10-03 10:51:42.428824381 +0000 UTC m=+0.265718740 container attach c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jennings, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:51:42 compute-0 charming_jennings[483878]: 167 167
Oct  3 10:51:42 compute-0 systemd[1]: libpod-c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0.scope: Deactivated successfully.
Oct  3 10:51:42 compute-0 podman[483862]: 2025-10-03 10:51:42.434541014 +0000 UTC m=+0.271435333 container died c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:51:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e8c592bbc892c8c74efe25e74eaa7a66bd764be004518897aafb559d75e985e-merged.mount: Deactivated successfully.
Oct  3 10:51:42 compute-0 podman[483862]: 2025-10-03 10:51:42.506749822 +0000 UTC m=+0.343644121 container remove c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jennings, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 10:51:42 compute-0 systemd[1]: libpod-conmon-c050f76f14bed5686fca513b452d75df8887153d36ef286848505ce12c0c11b0.scope: Deactivated successfully.
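The single line of output from charming_jennings ("167 167") is consistent with cephadm probing the image for the ceph uid/gid: it launches a throwaway container from the Ceph image and reads the owner of a ceph-owned path, and 167:167 is the ceph user and group in the official images. The exact command is not captured in this log, so the following is only a hypothetical reconstruction of such a probe:

    import subprocess

    # Hypothetical reconstruction of the uid/gid probe; the image digest is
    # taken from the podman lines above, the stat'd path is an assumption.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = map(int, out.stdout.split())
    print(uid, gid)  # expected: 167 167, the ceph user/group in the image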
Oct  3 10:51:42 compute-0 podman[483902]: 2025-10-03 10:51:42.760357502 +0000 UTC m=+0.097244902 container create e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:51:42 compute-0 nova_compute[351685]: 2025-10-03 10:51:42.766 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:51:42 compute-0 podman[483902]: 2025-10-03 10:51:42.736105614 +0000 UTC m=+0.072993004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:51:42 compute-0 systemd[1]: Started libpod-conmon-e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3.scope.
Oct  3 10:51:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0d52586829cf866c1deb04c2a71dcf27e74c82fd8b7438adc221a80a412306/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0d52586829cf866c1deb04c2a71dcf27e74c82fd8b7438adc221a80a412306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0d52586829cf866c1deb04c2a71dcf27e74c82fd8b7438adc221a80a412306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:51:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0d52586829cf866c1deb04c2a71dcf27e74c82fd8b7438adc221a80a412306/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
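The four xfs messages above are the kernel noting, for each bind-mount into the new container, that the filesystem's inode timestamps are 32-bit and therefore stop working in 2038; the hex value in the message is the largest 32-bit signed Unix timestamp. A quick check of what 0x7fffffff decodes to:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after the epoch, the limit the kernel
    # warns about for these xfs timestamps.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00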
Oct  3 10:51:42 compute-0 podman[483902]: 2025-10-03 10:51:42.889045053 +0000 UTC m=+0.225932463 container init e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:51:42 compute-0 podman[483902]: 2025-10-03 10:51:42.907168135 +0000 UTC m=+0.244055505 container start e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:51:42 compute-0 podman[483902]: 2025-10-03 10:51:42.913481328 +0000 UTC m=+0.250368788 container attach e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:51:43 compute-0 nova_compute[351685]: 2025-10-03 10:51:43.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:51:43 compute-0 nova_compute[351685]: 2025-10-03 10:51:43.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]: {
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "osd_id": 1,
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "type": "bluestore"
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:    },
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "osd_id": 2,
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "type": "bluestore"
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:    },
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "osd_id": 0,
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:        "type": "bluestore"
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]:    }
Oct  3 10:51:44 compute-0 interesting_dubinsky[483918]: }
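The JSON block printed by interesting_dubinsky maps each OSD uuid to its backing device: three bluestore OSDs (osd.0 through osd.2) on the ceph_vg0/1/2 logical volumes, all in the same cluster fsid. The shape matches ceph-volume's device-listing output, though the command itself is not in the log. A small sketch that reshapes it into a per-OSD device table, using one entry from above as sample input:

    import json

    # One entry reassembled from the log lines above; the real payload has
    # three OSDs with the same structure.
    payload = """
    {
      "16cef594-0067-4499-9298-5d83edf70190": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
        "type": "bluestore"
      }
    }
    """

    osds = json.loads(payload)
    for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}: {info['device']} ({info['type']})")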
Oct  3 10:51:44 compute-0 systemd[1]: libpod-e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3.scope: Deactivated successfully.
Oct  3 10:51:44 compute-0 podman[483902]: 2025-10-03 10:51:44.224695785 +0000 UTC m=+1.561583175 container died e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 10:51:44 compute-0 systemd[1]: libpod-e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3.scope: Consumed 1.303s CPU time.
Oct  3 10:51:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca0d52586829cf866c1deb04c2a71dcf27e74c82fd8b7438adc221a80a412306-merged.mount: Deactivated successfully.
Oct  3 10:51:44 compute-0 podman[483902]: 2025-10-03 10:51:44.314621052 +0000 UTC m=+1.651508432 container remove e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:51:44 compute-0 systemd[1]: libpod-conmon-e16f181f840c1f05d4200a058d6d4ed83e3487d12b0eae33cb4a97508e779fb3.scope: Deactivated successfully.
Oct  3 10:51:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:51:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:51:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:51:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:51:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 97670857-3b7e-4229-bbed-7f28999c4d71 does not exist
Oct  3 10:51:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7a8b0966-5ec3-4404-a97f-17d53b4eb13a does not exist
Oct  3 10:51:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:51:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:51:46
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'vms', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups']
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
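The balancer run above (upmap mode, max misplaced 5%) evaluated all eleven pools and prepared 0 of a maximum 10 changes, i.e. the PG distribution needed no optimization. A hedged sketch of checking this state from a script; `ceph balancer status` is a standard command, but JSON output formatting here is an assumption:

    import json, subprocess

    # Query the mgr balancer module; "--format json" is assumed to be honored
    # the way it is for other mon/mgr commands.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))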
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:51:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:51:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:48 compute-0 nova_compute[351685]: 2025-10-03 10:51:48.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:48 compute-0 nova_compute[351685]: 2025-10-03 10:51:48.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:49 compute-0 podman[484016]: 2025-10-03 10:51:49.86548818 +0000 UTC m=+0.108524375 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Oct  3 10:51:49 compute-0 podman[484015]: 2025-10-03 10:51:49.877696752 +0000 UTC m=+0.119194617 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Oct  3 10:51:49 compute-0 podman[484017]: 2025-10-03 10:51:49.900215625 +0000 UTC m=+0.136280855 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct  3 10:51:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:51 compute-0 podman[484082]: 2025-10-03 10:51:51.87583179 +0000 UTC m=+0.121844012 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:51:52 compute-0 nova_compute[351685]: 2025-10-03 10:51:52.057 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:52 compute-0 nova_compute[351685]: 2025-10-03 10:51:52.088 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  3 10:51:52 compute-0 nova_compute[351685]: 2025-10-03 10:51:52.089 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:51:52 compute-0 nova_compute[351685]: 2025-10-03 10:51:52.090 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:51:52 compute-0 nova_compute[351685]: 2025-10-03 10:51:52.125 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
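The four lockutils lines above show nova's _sync_power_states periodic task serializing per-instance work on a lock named after the instance UUID (acquired, held 0.036s, released). A minimal sketch of the same pattern with oslo.concurrency:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"

    # lockutils.lock() is a context manager; nova's inner
    # query_driver_power_state_and_sync() runs inside such a block.
    with lockutils.lock(INSTANCE_UUID):
        # Hypothetical stand-in for the per-instance sync work.
        print(f"syncing power state for {INSTANCE_UUID}")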
Oct  3 10:51:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:53 compute-0 nova_compute[351685]: 2025-10-03 10:51:53.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:53 compute-0 nova_compute[351685]: 2025-10-03 10:51:53.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:53 compute-0 nova_compute[351685]: 2025-10-03 10:51:53.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:53 compute-0 nova_compute[351685]: 2025-10-03 10:51:53.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  3 10:51:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:51:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825802414' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:51:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:51:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825802414' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:51:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:54 compute-0 podman[484101]: 2025-10-03 10:51:54.881831008 +0000 UTC m=+0.127781013 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:51:54 compute-0 podman[484102]: 2025-10-03 10:51:54.908513344 +0000 UTC m=+0.143295701 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:51:54 compute-0 podman[484103]: 2025-10-03 10:51:54.913793764 +0000 UTC m=+0.145504222 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
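The health_status=healthy events above come from podman running each container's configured healthcheck (the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/<name>, per the config_data shown). A sketch of triggering one by hand; exit status 0 corresponds to the healthy status logged:

    import subprocess

    # `podman healthcheck run NAME` executes the container's configured test;
    # a zero exit maps to the "healthy" health_status seen in these events.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else "unhealthy")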
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:51:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
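The pg_autoscaler figures above are internally consistent: each pg target equals usage ratio x bias x 300, where 300 is presumably the cluster's PG budget (3 OSDs x the default mon_target_pg_per_osd of 100 is an assumption), and the result is then quantized to a power of two subject to per-pool minimums. A short reproduction of the logged arithmetic:

    # Values copied from the autoscaler lines above: (usage ratio, bias).
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.000551649390343166, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    PG_BUDGET = 300  # inferred from the logged ratios; 3 OSDs x 100 assumed

    for name, (usage, bias) in pools.items():
        # Matches the "pg target" printed by the autoscaler for each pool.
        print(f"{name}: pg target {usage * bias * PG_BUDGET:.16g}")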
Oct  3 10:51:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:57 compute-0 nova_compute[351685]: 2025-10-03 10:51:57.102 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:51:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:51:58 compute-0 nova_compute[351685]: 2025-10-03 10:51:58.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:51:58 compute-0 nova_compute[351685]: 2025-10-03 10:51:58.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:51:59 compute-0 podman[157165]: time="2025-10-03T10:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:51:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:51:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9076 "" "Go-http-client/1.1"
Oct  3 10:52:00 compute-0 podman[157165]: time="2025-10-03T10:52:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:52:00 compute-0 podman[157165]: @ - - [03/Oct/2025:10:52:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 47016 "" "Go-http-client/1.1"
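The two GET requests above hit podman's libpod REST API served by the podman[157165] system service; judging by the CONTAINER_HOST setting in the podman_exporter config later in this log, the client talks to the unix socket /run/podman/podman.sock. A minimal sketch of issuing the same containers/json query over that socket with only the standard library:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket (host header is a dummy)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    print(conn.getresponse().status)  # 200, as in the access log above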
Oct  3 10:52:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:01 compute-0 openstack_network_exporter[367524]: ERROR   10:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:52:01 compute-0 openstack_network_exporter[367524]: ERROR   10:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:52:01 compute-0 openstack_network_exporter[367524]: ERROR   10:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:52:01 compute-0 openstack_network_exporter[367524]: ERROR   10:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:52:01 compute-0 openstack_network_exporter[367524]: ERROR   10:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:52:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:03 compute-0 nova_compute[351685]: 2025-10-03 10:52:03.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:52:03 compute-0 nova_compute[351685]: 2025-10-03 10:52:03.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:52:03 compute-0 podman[484164]: 2025-10-03 10:52:03.841098735 +0000 UTC m=+0.090105113 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:52:03 compute-0 podman[484162]: 2025-10-03 10:52:03.851130217 +0000 UTC m=+0.116626335 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:52:03 compute-0 podman[484163]: 2025-10-03 10:52:03.886766021 +0000 UTC m=+0.141905186 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 10:52:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:08 compute-0 nova_compute[351685]: 2025-10-03 10:52:08.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:52:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:08 compute-0 nova_compute[351685]: 2025-10-03 10:52:08.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:52:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:12 compute-0 nova_compute[351685]: 2025-10-03 10:52:12.751 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:52:12 compute-0 nova_compute[351685]: 2025-10-03 10:52:12.752 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:52:12 compute-0 nova_compute[351685]: 2025-10-03 10:52:12.752 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:52:13 compute-0 nova_compute[351685]: 2025-10-03 10:52:13.245 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:52:13 compute-0 nova_compute[351685]: 2025-10-03 10:52:13.246 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:52:13 compute-0 nova_compute[351685]: 2025-10-03 10:52:13.246 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:52:13 compute-0 nova_compute[351685]: 2025-10-03 10:52:13.246 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:52:13 compute-0 nova_compute[351685]: 2025-10-03 10:52:13.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:52:13 compute-0 nova_compute[351685]: 2025-10-03 10:52:13.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:52:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:52:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:52:16 compute-0 nova_compute[351685]: 2025-10-03 10:52:16.167 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:52:16 compute-0 nova_compute[351685]: 2025-10-03 10:52:16.187 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:52:16 compute-0 nova_compute[351685]: 2025-10-03 10:52:16.188 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:52:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:18 compute-0 nova_compute[351685]: 2025-10-03 10:52:18.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:52:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:18 compute-0 nova_compute[351685]: 2025-10-03 10:52:18.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:52:18 compute-0 nova_compute[351685]: 2025-10-03 10:52:18.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:52:18 compute-0 nova_compute[351685]: 2025-10-03 10:52:18.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
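The "CONF.reclaim_instance_interval <= 0, skipping" line means deferred delete is disabled, so _reclaim_queued_deletes has nothing to reclaim. For reference, a hypothetical nova.conf fragment enabling it (300 s is an arbitrary example value):

    [DEFAULT]
    # Default is 0, which disables deferred delete and produces the
    # "skipping" message above.
    reclaim_instance_interval = 300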
Oct  3 10:52:19 compute-0 nova_compute[351685]: 2025-10-03 10:52:19.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:52:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:20 compute-0 podman[484225]: 2025-10-03 10:52:20.871984433 +0000 UTC m=+0.113385571 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6)
Oct  3 10:52:20 compute-0 podman[484226]: 2025-10-03 10:52:20.894225917 +0000 UTC m=+0.131218382 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 10:52:20 compute-0 podman[484227]: 2025-10-03 10:52:20.91581621 +0000 UTC m=+0.155370248 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  3 10:52:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:22 compute-0 podman[484287]: 2025-10-03 10:52:22.66259043 +0000 UTC m=+0.123536605 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:52:22 compute-0 nova_compute[351685]: 2025-10-03 10:52:22.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:52:22 compute-0 nova_compute[351685]: 2025-10-03 10:52:22.772 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:52:22 compute-0 nova_compute[351685]: 2025-10-03 10:52:22.773 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:52:22 compute-0 nova_compute[351685]: 2025-10-03 10:52:22.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:52:22 compute-0 nova_compute[351685]: 2025-10-03 10:52:22.775 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:52:22 compute-0 nova_compute[351685]: 2025-10-03 10:52:22.776 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:52:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:52:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22412526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:52:23 compute-0 nova_compute[351685]: 2025-10-03 10:52:23.269 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
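Nova's periodic resource audit sizes shared Ceph storage by shelling out to `ceph df --format=json`, as the two lines above show (a 0.494s round trip through the local mon). A minimal sketch of the same probe, assuming the ceph CLI is installed and using the `openstack` client id and conf path seen in this log:

    import json
    import subprocess

    # Same command nova ran above; id/conf values are copied from the log.
    raw = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(raw)

    # Cluster-wide totals; per-pool figures live under df["pools"].
    total_gib = df["stats"]["total_bytes"] / 2**30
    avail_gib = df["stats"]["total_avail_bytes"] / 2**30
    print(f"{avail_gib:.1f} GiB free of {total_gib:.1f} GiB")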
Oct  3 10:52:23 compute-0 nova_compute[351685]: 2025-10-03 10:52:23.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:23 compute-0 nova_compute[351685]: 2025-10-03 10:52:23.382 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:52:23 compute-0 nova_compute[351685]: 2025-10-03 10:52:23.383 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:52:23 compute-0 nova_compute[351685]: 2025-10-03 10:52:23.384 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:52:23 compute-0 nova_compute[351685]: 2025-10-03 10:52:23.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:23 compute-0 nova_compute[351685]: 2025-10-03 10:52:23.997 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:23.999 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3810MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.000 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.001 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.102 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.103 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.104 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.159 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:52:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:52:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270368402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.694 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.707 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.731 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.734 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:52:24 compute-0 nova_compute[351685]: 2025-10-03 10:52:24.735 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
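The audit above ends with placement confirming an unchanged inventory. The schedulable capacity placement derives from those numbers follows its usual formula, effective = (total - reserved) * allocation_ratio; a worked check against the logged inventory:

    # Inventory values copied from the "Inventory has not changed" line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, effective)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2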
Oct  3 10:52:25 compute-0 podman[484350]: 2025-10-03 10:52:25.8639254 +0000 UTC m=+0.119886700 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:52:25 compute-0 podman[484349]: 2025-10-03 10:52:25.868113614 +0000 UTC m=+0.117516064 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:52:25 compute-0 podman[484351]: 2025-10-03 10:52:25.874058375 +0000 UTC m=+0.114561509 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
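The three health_status=healthy events above come from podman's per-container healthcheck timers running the mounted /openstack/healthcheck script. The same state can be read back on demand; a short sketch using the standard podman CLI with a container name taken from this log:

    import json
    import subprocess

    # .State.Health carries Status, FailingStreak and recent Log entries.
    out = subprocess.check_output(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "iscsid"])
    print(json.loads(out)["Status"])  # expected: "healthy"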
Oct  3 10:52:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:26 compute-0 nova_compute[351685]: 2025-10-03 10:52:26.736 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:52:26 compute-0 nova_compute[351685]: 2025-10-03 10:52:26.738 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:52:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:28 compute-0 nova_compute[351685]: 2025-10-03 10:52:28.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:28 compute-0 nova_compute[351685]: 2025-10-03 10:52:28.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:28 compute-0 nova_compute[351685]: 2025-10-03 10:52:28.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:52:28 compute-0 nova_compute[351685]: 2025-10-03 10:52:28.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:52:29 compute-0 nova_compute[351685]: 2025-10-03 10:52:29.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:52:29 compute-0 podman[157165]: time="2025-10-03T10:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:52:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:52:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9085 "" "Go-http-client/1.1"
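The two GET lines above are metrics collectors talking to podman's libpod REST API over its unix socket (the path appears later in this log as CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch of the same listing call; the UnixHTTPConnection helper is illustrative, not part of podman:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a unix-domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")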
Oct  3 10:52:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:31 compute-0 openstack_network_exporter[367524]: ERROR   10:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:52:31 compute-0 openstack_network_exporter[367524]: ERROR   10:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:52:31 compute-0 openstack_network_exporter[367524]: ERROR   10:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:52:31 compute-0 openstack_network_exporter[367524]: ERROR   10:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:52:31 compute-0 openstack_network_exporter[367524]: ERROR   10:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
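These exporter errors recur throughout the log and are expected on a compute node: openstack_network_exporter locates each daemon through its appctl control socket, and neither ovn-northd nor a local ovsdb-server control socket exists here. A sketch of the same lookup, with the conventional socket directories as an assumption:

    import glob

    # ovs/ovn daemons create <name>.<pid>.ctl sockets in their run directories.
    for name, pattern in {
        "ovn-northd":   "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }.items():
        hits = glob.glob(pattern)
        print(name, "->", hits or "no control socket found")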
Oct  3 10:52:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:33 compute-0 nova_compute[351685]: 2025-10-03 10:52:33.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:33 compute-0 nova_compute[351685]: 2025-10-03 10:52:33.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:34 compute-0 podman[484410]: 2025-10-03 10:52:34.830483642 +0000 UTC m=+0.077293262 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:52:34 compute-0 podman[484412]: 2025-10-03 10:52:34.838711136 +0000 UTC m=+0.085488405 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 10:52:34 compute-0 podman[484411]: 2025-10-03 10:52:34.860755223 +0000 UTC m=+0.109862218 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.expose-services=, config_id=edpm, vcs-type=git, version=9.4, name=ubi9, build-date=2024-09-18T21:23:30)
Oct  3 10:52:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:38 compute-0 nova_compute[351685]: 2025-10-03 10:52:38.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:38 compute-0 nova_compute[351685]: 2025-10-03 10:52:38.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:52:41.656 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:52:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:52:41.657 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:52:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:52:41.657 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
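The acquire/acquired/released triple above, with its waited/held timings, is the standard oslo.concurrency trace around a named lock. A minimal sketch of the pattern that produces it; the class is illustrative, while lockutils.synchronized is the real decorator:

    from oslo_concurrency import lockutils

    class ProcessMonitorSketch:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # The body runs under the named lock; oslo logs the
            # waited/held durations seen above at DEBUG level.
            pass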
Oct  3 10:52:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:43 compute-0 nova_compute[351685]: 2025-10-03 10:52:43.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:43 compute-0 nova_compute[351685]: 2025-10-03 10:52:43.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:52:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:52:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:52:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:52:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:52:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:52:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cd8f7ce7-fe37-4c57-878f-954d02ecc499 does not exist
Oct  3 10:52:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7046f282-953f-4302-988a-c5c9369031bc does not exist
Oct  3 10:52:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b787e3c6-a7b4-4e15-a0d3-2d8155e037d2 does not exist
Oct  3 10:52:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:52:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:52:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:52:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:52:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:52:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:52:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:52:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:52:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:52:46
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.mgr', 'backups', 'vms', 'volumes']
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
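"prepared 0/10 changes" means the mgr balancer, running in upmap mode with a 5% misplaced ceiling, evaluated the eleven pools listed above and found no PG remappings worth making. An operator-side equivalent of this check, via the standard ceph CLI (run with a suitable keyring):

    import json
    import subprocess

    # "ceph balancer status" reports the active flag and current mode.
    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status["active"], status["mode"])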
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:46 compute-0 podman[484740]: 2025-10-03 10:52:46.783975399 +0000 UTC m=+0.077250670 container create a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:52:46 compute-0 podman[484740]: 2025-10-03 10:52:46.75219694 +0000 UTC m=+0.045472291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:52:46 compute-0 systemd[1]: Started libpod-conmon-a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0.scope.
Oct  3 10:52:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:52:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:52:46 compute-0 podman[484740]: 2025-10-03 10:52:46.944482832 +0000 UTC m=+0.237758123 container init a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:52:46 compute-0 podman[484740]: 2025-10-03 10:52:46.962611843 +0000 UTC m=+0.255887114 container start a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:52:46 compute-0 podman[484740]: 2025-10-03 10:52:46.969141543 +0000 UTC m=+0.262416854 container attach a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 10:52:46 compute-0 brave_ritchie[484756]: 167 167
Oct  3 10:52:46 compute-0 systemd[1]: libpod-a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0.scope: Deactivated successfully.
Oct  3 10:52:46 compute-0 podman[484740]: 2025-10-03 10:52:46.974799074 +0000 UTC m=+0.268074405 container died a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 10:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c65705ecb5619ad99e18bccc41a570bb8ea7b4fc33c03ec6d699d5a3a6cbb724-merged.mount: Deactivated successfully.
Oct  3 10:52:47 compute-0 podman[484740]: 2025-10-03 10:52:47.046589608 +0000 UTC m=+0.339864879 container remove a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 10:52:47 compute-0 systemd[1]: libpod-conmon-a8cea8d255a035f7e499a59867478942af8efeca6829d0eadd57926c48572fb0.scope: Deactivated successfully.
Oct  3 10:52:47 compute-0 podman[484778]: 2025-10-03 10:52:47.33136353 +0000 UTC m=+0.081558919 container create 0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:52:47 compute-0 podman[484778]: 2025-10-03 10:52:47.298429533 +0000 UTC m=+0.048624952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:52:47 compute-0 systemd[1]: Started libpod-conmon-0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14.scope.
Oct  3 10:52:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f7f486b3a26de42b9f61da7214141f909f575f39472fe24d242effe9f8cac7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f7f486b3a26de42b9f61da7214141f909f575f39472fe24d242effe9f8cac7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f7f486b3a26de42b9f61da7214141f909f575f39472fe24d242effe9f8cac7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f7f486b3a26de42b9f61da7214141f909f575f39472fe24d242effe9f8cac7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f7f486b3a26de42b9f61da7214141f909f575f39472fe24d242effe9f8cac7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
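The kernel's "supports timestamps until 2038 (0x7fffffff)" notices flag xfs filesystems created without the bigtime feature; the cutoff is simply the signed 32-bit epoch limit:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the classic y2038 boundary.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00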
Oct  3 10:52:47 compute-0 podman[484778]: 2025-10-03 10:52:47.487864213 +0000 UTC m=+0.238059642 container init 0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 10:52:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:47 compute-0 podman[484778]: 2025-10-03 10:52:47.51269679 +0000 UTC m=+0.262892169 container start 0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:52:47 compute-0 podman[484778]: 2025-10-03 10:52:47.519588172 +0000 UTC m=+0.269783551 container attach 0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:52:48 compute-0 nova_compute[351685]: 2025-10-03 10:52:48.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:48 compute-0 nova_compute[351685]: 2025-10-03 10:52:48.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:48 compute-0 relaxed_franklin[484793]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:52:48 compute-0 relaxed_franklin[484793]: --> relative data size: 1.0
Oct  3 10:52:48 compute-0 relaxed_franklin[484793]: --> All data devices are unavailable
Oct  3 10:52:48 compute-0 systemd[1]: libpod-0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14.scope: Deactivated successfully.
Oct  3 10:52:48 compute-0 systemd[1]: libpod-0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14.scope: Consumed 1.200s CPU time.
Oct  3 10:52:48 compute-0 podman[484778]: 2025-10-03 10:52:48.771230507 +0000 UTC m=+1.521425886 container died 0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 10:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f7f486b3a26de42b9f61da7214141f909f575f39472fe24d242effe9f8cac7-merged.mount: Deactivated successfully.
Oct  3 10:52:48 compute-0 podman[484778]: 2025-10-03 10:52:48.889511854 +0000 UTC m=+1.639707223 container remove 0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_franklin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:52:48 compute-0 systemd[1]: libpod-conmon-0181f6b44c49eb0544b4c64b0833e6b15624db463c3b2f2565cc8cf48dbc0a14.scope: Deactivated successfully.
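The short-lived brave_ritchie and relaxed_franklin containers above are cephadm probing local disks with ceph-volume; "All data devices are unavailable" reports that the three LVM devices passed in are already consumed, so no new OSDs get prepared. A hedged reproduction of the same dry-run report (the device paths are placeholders, not from this log):

    import subprocess

    # cephadm forwards everything after "--" to ceph-volume inside the
    # ceph container; --report makes this a no-op planning pass.
    subprocess.run(
        ["cephadm", "ceph-volume", "--",
         "lvm", "batch", "--report", "/dev/vdb", "/dev/vdc", "/dev/vdd"],
        check=False)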
Oct  3 10:52:50 compute-0 podman[484973]: 2025-10-03 10:52:50.06356816 +0000 UTC m=+0.067672824 container create 1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 10:52:50 compute-0 podman[484973]: 2025-10-03 10:52:50.029391592 +0000 UTC m=+0.033496306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:52:50 compute-0 systemd[1]: Started libpod-conmon-1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c.scope.
Oct  3 10:52:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:52:50 compute-0 podman[484973]: 2025-10-03 10:52:50.212644995 +0000 UTC m=+0.216749639 container init 1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 10:52:50 compute-0 podman[484973]: 2025-10-03 10:52:50.228862605 +0000 UTC m=+0.232967269 container start 1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 10:52:50 compute-0 podman[484973]: 2025-10-03 10:52:50.236511871 +0000 UTC m=+0.240616525 container attach 1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:52:50 compute-0 intelligent_panini[484989]: 167 167
Oct  3 10:52:50 compute-0 systemd[1]: libpod-1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c.scope: Deactivated successfully.
Oct  3 10:52:50 compute-0 podman[484973]: 2025-10-03 10:52:50.23928279 +0000 UTC m=+0.243387424 container died 1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:52:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3618f6c13cd7e1869d7438b7d5b7ef7794d290a7b0c37ee46d8bb71986c9992b-merged.mount: Deactivated successfully.
Oct  3 10:52:50 compute-0 podman[484973]: 2025-10-03 10:52:50.319010809 +0000 UTC m=+0.323115473 container remove 1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_panini, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 10:52:50 compute-0 systemd[1]: libpod-conmon-1f195d7f8415cb7fd78fed85f20f05eb954a6502f810100d3213458e8738962c.scope: Deactivated successfully.
Oct  3 10:52:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:50 compute-0 podman[485012]: 2025-10-03 10:52:50.626549821 +0000 UTC m=+0.086513268 container create edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:52:50 compute-0 podman[485012]: 2025-10-03 10:52:50.591461154 +0000 UTC m=+0.051424651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:52:50 compute-0 systemd[1]: Started libpod-conmon-edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769.scope.
Oct  3 10:52:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf14980dde511f9c64376e03f2054e520fb6b5290349f85859503cdbaa01ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf14980dde511f9c64376e03f2054e520fb6b5290349f85859503cdbaa01ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf14980dde511f9c64376e03f2054e520fb6b5290349f85859503cdbaa01ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcf14980dde511f9c64376e03f2054e520fb6b5290349f85859503cdbaa01ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:50 compute-0 podman[485012]: 2025-10-03 10:52:50.790387719 +0000 UTC m=+0.250351206 container init edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:52:50 compute-0 podman[485012]: 2025-10-03 10:52:50.825344142 +0000 UTC m=+0.285307599 container start edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:52:50 compute-0 podman[485012]: 2025-10-03 10:52:50.83216604 +0000 UTC m=+0.292129487 container attach edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:52:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:52:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 8706 writes, 32K keys, 8706 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8706 writes, 2226 syncs, 3.91 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
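
The #012 and #033 sequences in entries like the rocksdb dump above are rsyslog/journald control-character escapes: a '#' followed by three octal digits, so #012 is a newline (0o12 = 10) and #033 is ESC (0o33 = 27, the start of an ANSI color code). A minimal Python sketch to restore the original multi-line message, assuming that escaping convention:

    import re

    def unescape_syslog(msg: str) -> str:
        # rsyslog's control-character escaping writes each control byte
        # as '#' plus three octal digits; map them back to characters.
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), msg)

    # e.g. the DB Stats payload above becomes a readable block:
    print(unescape_syslog("** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval"))
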
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]: {
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:    "0": [
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:        {
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "devices": [
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "/dev/loop3"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            ],
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_name": "ceph_lv0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_size": "21470642176",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "name": "ceph_lv0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "tags": {
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cluster_name": "ceph",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.crush_device_class": "",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.encrypted": "0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osd_id": "0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.type": "block",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.vdo": "0"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            },
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "type": "block",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "vg_name": "ceph_vg0"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:        }
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:    ],
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:    "1": [
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:        {
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "devices": [
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "/dev/loop4"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            ],
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_name": "ceph_lv1",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_size": "21470642176",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "name": "ceph_lv1",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "tags": {
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cluster_name": "ceph",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.crush_device_class": "",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.encrypted": "0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osd_id": "1",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.type": "block",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.vdo": "0"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            },
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "type": "block",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "vg_name": "ceph_vg1"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:        }
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:    ],
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:    "2": [
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:        {
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "devices": [
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "/dev/loop5"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            ],
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_name": "ceph_lv2",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_size": "21470642176",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "name": "ceph_lv2",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "tags": {
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.cluster_name": "ceph",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.crush_device_class": "",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.encrypted": "0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osd_id": "2",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.type": "block",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:                "ceph.vdo": "0"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            },
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "type": "block",
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:            "vg_name": "ceph_vg2"
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:        }
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]:    ]
Oct  3 10:52:51 compute-0 trusting_vaughan[485027]: }
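
The JSON printed by trusting_vaughan above has the shape of ceph-volume lvm list --format json output: a map from OSD id to its logical volumes, with the authoritative metadata carried in the LV tags. A minimal sketch for extracting the id-to-device mapping, assuming the block has been captured to a file (ceph_lvm_list.json is a placeholder name):

    import json

    with open("ceph_lvm_list.json") as f:  # hypothetical capture of the output above
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

    # From the log above this would print, e.g.:
    # osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0
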
Oct  3 10:52:51 compute-0 systemd[1]: libpod-edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769.scope: Deactivated successfully.
Oct  3 10:52:51 compute-0 podman[485012]: 2025-10-03 10:52:51.672405861 +0000 UTC m=+1.132369288 container died edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:52:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcf14980dde511f9c64376e03f2054e520fb6b5290349f85859503cdbaa01ebd-merged.mount: Deactivated successfully.
Oct  3 10:52:51 compute-0 podman[485012]: 2025-10-03 10:52:51.802448495 +0000 UTC m=+1.262411902 container remove edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:52:51 compute-0 systemd[1]: libpod-conmon-edda85844a1ad372d6085d2b70c1de8d325332fb53523e17ae91d3ac8ea27769.scope: Deactivated successfully.
Oct  3 10:52:51 compute-0 podman[485037]: 2025-10-03 10:52:51.849380581 +0000 UTC m=+0.133607449 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Oct  3 10:52:51 compute-0 podman[485043]: 2025-10-03 10:52:51.858387061 +0000 UTC m=+0.142326760 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 10:52:51 compute-0 podman[485044]: 2025-10-03 10:52:51.942542202 +0000 UTC m=+0.215141447 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
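
The health_status=healthy events above are podman's periodic healthchecks: each container's config_data carries a healthcheck entry whose test command runs against the healthcheck mount. One way to read the current status programmatically is to parse podman inspect output; a sketch, with the caveat that the exact key (Health vs Healthcheck) has varied across podman releases:

    import json
    import subprocess

    def health_status(container: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # Key name differs between podman releases.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("ovn_controller"))  # e.g. "healthy"
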
Oct  3 10:52:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:52 compute-0 podman[485246]: 2025-10-03 10:52:52.699789608 +0000 UTC m=+0.102652756 container create 3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:52:52 compute-0 podman[485246]: 2025-10-03 10:52:52.631520127 +0000 UTC m=+0.034383275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:52:52 compute-0 systemd[1]: Started libpod-conmon-3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0.scope.
Oct  3 10:52:52 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:52:52 compute-0 podman[485246]: 2025-10-03 10:52:52.841816086 +0000 UTC m=+0.244679234 container init 3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:52:52 compute-0 podman[485246]: 2025-10-03 10:52:52.855083273 +0000 UTC m=+0.257946401 container start 3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:52:52 compute-0 podman[485246]: 2025-10-03 10:52:52.861407235 +0000 UTC m=+0.264270383 container attach 3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 10:52:52 compute-0 gracious_robinson[485271]: 167 167
Oct  3 10:52:52 compute-0 systemd[1]: libpod-3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0.scope: Deactivated successfully.
Oct  3 10:52:52 compute-0 podman[485246]: 2025-10-03 10:52:52.868054259 +0000 UTC m=+0.270917417 container died 3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 10:52:52 compute-0 podman[485259]: 2025-10-03 10:52:52.877705159 +0000 UTC m=+0.129604951 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  3 10:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-86dd70557d088a391d47ca6c731711b4dc02ff635d5754f8a8f2bc5b8e6326c3-merged.mount: Deactivated successfully.
Oct  3 10:52:52 compute-0 podman[485246]: 2025-10-03 10:52:52.949875226 +0000 UTC m=+0.352738374 container remove 3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_robinson, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:52:52 compute-0 systemd[1]: libpod-conmon-3a1769086d41e9475862b9b9bd5c66e118d6dbe4f999a39211577606547bbca0.scope: Deactivated successfully.
Oct  3 10:52:53 compute-0 podman[485303]: 2025-10-03 10:52:53.230165873 +0000 UTC m=+0.099310830 container create efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:52:53 compute-0 podman[485303]: 2025-10-03 10:52:53.180363844 +0000 UTC m=+0.049508831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:52:53 compute-0 systemd[1]: Started libpod-conmon-efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362.scope.
Oct  3 10:52:53 compute-0 nova_compute[351685]: 2025-10-03 10:52:53.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad8256885bd7117141e23a6b752d8f0deb833986654ce84a75952d5064e3ad85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad8256885bd7117141e23a6b752d8f0deb833986654ce84a75952d5064e3ad85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad8256885bd7117141e23a6b752d8f0deb833986654ce84a75952d5064e3ad85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad8256885bd7117141e23a6b752d8f0deb833986654ce84a75952d5064e3ad85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:52:53 compute-0 podman[485303]: 2025-10-03 10:52:53.383568086 +0000 UTC m=+0.252713073 container init efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 10:52:53 compute-0 podman[485303]: 2025-10-03 10:52:53.404958773 +0000 UTC m=+0.274103730 container start efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:52:53 compute-0 podman[485303]: 2025-10-03 10:52:53.412134954 +0000 UTC m=+0.281279921 container attach efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:52:53 compute-0 nova_compute[351685]: 2025-10-03 10:52:53.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:52:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11343341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:52:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:52:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/11343341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]: {
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:52:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "osd_id": 1,
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "type": "bluestore"
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:    },
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "osd_id": 2,
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "type": "bluestore"
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:    },
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "osd_id": 0,
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:        "type": "bluestore"
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]:    }
Oct  3 10:52:54 compute-0 ecstatic_yalow[485319]: }
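
This second JSON block, keyed by OSD FSID and carrying the device-mapper path plus type bluestore, is consistent with ceph-volume raw list style output. It can be cross-checked against the LVM listing logged at 10:52:51; a sketch, again assuming both outputs were saved under placeholder file names:

    import json

    with open("ceph_lvm_list.json") as f:   # trusting_vaughan output (10:52:51)
        by_id = json.load(f)
    with open("ceph_raw_list.json") as f:   # ecstatic_yalow output (10:52:54)
        by_uuid = json.load(f)

    for uuid, info in by_uuid.items():
        osd_id = str(info["osd_id"])
        tags = by_id[osd_id][0]["tags"]
        # The osd_fsid tag on the LV should match the raw-list key.
        assert tags["ceph.osd_fsid"] == uuid, (osd_id, uuid)
        print(f"osd.{osd_id}: {info['device']} ({info['type']})")

    # osd.1: /dev/mapper/ceph_vg1-ceph_lv1 (bluestore)  ...and likewise for osd.0 and osd.2
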
Oct  3 10:52:54 compute-0 systemd[1]: libpod-efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362.scope: Deactivated successfully.
Oct  3 10:52:54 compute-0 systemd[1]: libpod-efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362.scope: Consumed 1.154s CPU time.
Oct  3 10:52:54 compute-0 podman[485303]: 2025-10-03 10:52:54.563723377 +0000 UTC m=+1.432868304 container died efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:52:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad8256885bd7117141e23a6b752d8f0deb833986654ce84a75952d5064e3ad85-merged.mount: Deactivated successfully.
Oct  3 10:52:54 compute-0 podman[485303]: 2025-10-03 10:52:54.655832034 +0000 UTC m=+1.524976991 container remove efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:52:54 compute-0 systemd[1]: libpod-conmon-efaaea0586fc4d7b439d7e10a51c2469782a13779ccbf108d3b55ee50539f362.scope: Deactivated successfully.
Oct  3 10:52:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:52:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:52:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:52:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:52:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1607d52a-8766-482d-8840-b7d4313f4fd4 does not exist
Oct  3 10:52:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cd30475a-6400-41eb-85aa-8f0fcf9cfb6c does not exist
Oct  3 10:52:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:52:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:52:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
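
The pg_autoscaler figures above are internally consistent with pg_target = usage_ratio * bias * (target PGs per OSD * OSD count), then quantization toward a power of two bounded by the current pg_num. Assuming the default mon_target_pg_per_osd of 100 (the value itself is not printed here) and the three OSDs inventoried earlier, the logged targets reproduce exactly:

    # Reproducing two pg_target values from the autoscaler log above.
    target_pg_per_osd = 100   # assumed mon_target_pg_per_osd default
    num_osds = 3              # osd.0, osd.1, osd.2 per the inventory above

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * target_pg_per_osd * num_osds

    print(pg_target(0.000551649390343166, 1.0))   # ~0.165494817, matches 'vms'
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.000610471, matches 'cephfs.cephfs.meta'
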
Oct  3 10:52:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:56 compute-0 podman[485414]: 2025-10-03 10:52:56.84457519 +0000 UTC m=+0.086971843 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:52:56 compute-0 podman[485416]: 2025-10-03 10:52:56.867227997 +0000 UTC m=+0.095385883 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3)
Oct  3 10:52:56 compute-0 podman[485415]: 2025-10-03 10:52:56.87292412 +0000 UTC m=+0.114000231 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:52:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:52:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:52:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 9524 writes, 34K keys, 9524 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 9524 writes, 2458 syncs, 3.87 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:52:58 compute-0 nova_compute[351685]: 2025-10-03 10:52:58.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:52:58 compute-0 nova_compute[351685]: 2025-10-03 10:52:58.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:52:59 compute-0 podman[157165]: time="2025-10-03T10:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:52:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:52:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9086 "" "Go-http-client/1.1"
Oct  3 10:53:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:01 compute-0 openstack_network_exporter[367524]: ERROR   10:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:53:01 compute-0 openstack_network_exporter[367524]: ERROR   10:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:53:01 compute-0 openstack_network_exporter[367524]: ERROR   10:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:53:01 compute-0 openstack_network_exporter[367524]: ERROR   10:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:53:01 compute-0 openstack_network_exporter[367524]: ERROR   10:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:53:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 10:53:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.1 total, 600.0 interval
Cumulative writes: 7705 writes, 29K keys, 7705 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 7705 writes, 1811 syncs, 4.25 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 10:53:03 compute-0 nova_compute[351685]: 2025-10-03 10:53:03.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:03 compute-0 nova_compute[351685]: 2025-10-03 10:53:03.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 10:53:05 compute-0 podman[485475]: 2025-10-03 10:53:05.853295514 +0000 UTC m=+0.097293624 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:53:05 compute-0 podman[485476]: 2025-10-03 10:53:05.869462782 +0000 UTC m=+0.105559619 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, distribution-scope=public, architecture=x86_64, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543)
Oct  3 10:53:05 compute-0 podman[485477]: 2025-10-03 10:53:05.8746713 +0000 UTC m=+0.106208931 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:53:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:08 compute-0 nova_compute[351685]: 2025-10-03 10:53:08.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:08 compute-0 nova_compute[351685]: 2025-10-03 10:53:08.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:12 compute-0 nova_compute[351685]: 2025-10-03 10:53:12.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:12 compute-0 nova_compute[351685]: 2025-10-03 10:53:12.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:53:12 compute-0 nova_compute[351685]: 2025-10-03 10:53:12.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:53:13 compute-0 nova_compute[351685]: 2025-10-03 10:53:13.052 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:53:13 compute-0 nova_compute[351685]: 2025-10-03 10:53:13.053 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:53:13 compute-0 nova_compute[351685]: 2025-10-03 10:53:13.054 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:53:13 compute-0 nova_compute[351685]: 2025-10-03 10:53:13.055 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:53:13 compute-0 nova_compute[351685]: 2025-10-03 10:53:13.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:13 compute-0 nova_compute[351685]: 2025-10-03 10:53:13.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:15 compute-0 nova_compute[351685]: 2025-10-03 10:53:15.112 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 10:53:15 compute-0 nova_compute[351685]: 2025-10-03 10:53:15.142 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 10:53:15 compute-0 nova_compute[351685]: 2025-10-03 10:53:15.143 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 10:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:53:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:53:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:18 compute-0 nova_compute[351685]: 2025-10-03 10:53:18.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:18 compute-0 nova_compute[351685]: 2025-10-03 10:53:18.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:18 compute-0 nova_compute[351685]: 2025-10-03 10:53:18.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:18 compute-0 nova_compute[351685]: 2025-10-03 10:53:18.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 10:53:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:21 compute-0 nova_compute[351685]: 2025-10-03 10:53:21.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:22 compute-0 nova_compute[351685]: 2025-10-03 10:53:22.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:22 compute-0 nova_compute[351685]: 2025-10-03 10:53:22.795 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:53:22 compute-0 nova_compute[351685]: 2025-10-03 10:53:22.796 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:53:22 compute-0 nova_compute[351685]: 2025-10-03 10:53:22.796 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:53:22 compute-0 nova_compute[351685]: 2025-10-03 10:53:22.796 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 10:53:22 compute-0 nova_compute[351685]: 2025-10-03 10:53:22.797 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:53:22 compute-0 podman[485537]: 2025-10-03 10:53:22.885761761 +0000 UTC m=+0.111162288 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 10:53:22 compute-0 podman[485536]: 2025-10-03 10:53:22.901674702 +0000 UTC m=+0.141035676 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=)
Oct  3 10:53:22 compute-0 podman[485538]: 2025-10-03 10:53:22.946947145 +0000 UTC m=+0.168474758 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:53:22 compute-0 podman[485595]: 2025-10-03 10:53:22.995535635 +0000 UTC m=+0.074075839 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:53:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:53:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2698199358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.292 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.406 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.406 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.406 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.841 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.842 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3816MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.842 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.843 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.933 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.933 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.934 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.951 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.969 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.970 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  3 10:53:23 compute-0 nova_compute[351685]: 2025-10-03 10:53:23.986 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 10:53:24 compute-0 nova_compute[351685]: 2025-10-03 10:53:24.021 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 10:53:24 compute-0 nova_compute[351685]: 2025-10-03 10:53:24.059 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:53:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:53:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4094680244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:53:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:24 compute-0 nova_compute[351685]: 2025-10-03 10:53:24.571 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 10:53:24 compute-0 nova_compute[351685]: 2025-10-03 10:53:24.584 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:53:24 compute-0 nova_compute[351685]: 2025-10-03 10:53:24.604 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:53:24 compute-0 nova_compute[351685]: 2025-10-03 10:53:24.606 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:53:24 compute-0 nova_compute[351685]: 2025-10-03 10:53:24.606 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:53:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:27 compute-0 nova_compute[351685]: 2025-10-03 10:53:27.608 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:27 compute-0 nova_compute[351685]: 2025-10-03 10:53:27.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:27 compute-0 podman[485660]: 2025-10-03 10:53:27.86196511 +0000 UTC m=+0.109586119 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:53:27 compute-0 podman[485661]: 2025-10-03 10:53:27.881114976 +0000 UTC m=+0.129433187 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Oct  3 10:53:27 compute-0 podman[485662]: 2025-10-03 10:53:27.893900846 +0000 UTC m=+0.125777079 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:53:28 compute-0 nova_compute[351685]: 2025-10-03 10:53:28.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:28 compute-0 nova_compute[351685]: 2025-10-03 10:53:28.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:28 compute-0 nova_compute[351685]: 2025-10-03 10:53:28.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:29 compute-0 nova_compute[351685]: 2025-10-03 10:53:29.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:29 compute-0 podman[157165]: time="2025-10-03T10:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:53:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:53:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9085 "" "Go-http-client/1.1"
Oct  3 10:53:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:31 compute-0 openstack_network_exporter[367524]: ERROR   10:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:53:31 compute-0 openstack_network_exporter[367524]: ERROR   10:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:53:31 compute-0 openstack_network_exporter[367524]: ERROR   10:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:53:31 compute-0 openstack_network_exporter[367524]: ERROR   10:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:53:31 compute-0 openstack_network_exporter[367524]: ERROR   10:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:53:31 compute-0 nova_compute[351685]: 2025-10-03 10:53:31.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:32 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct  3 10:53:32 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:32.560735) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:53:32 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct  3 10:53:32 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488812560814, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1597, "num_deletes": 256, "total_data_size": 2579301, "memory_usage": 2625296, "flush_reason": "Manual Compaction"}
Oct  3 10:53:32 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct  3 10:53:32 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488812996384, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 2543879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50958, "largest_seqno": 52554, "table_properties": {"data_size": 2536405, "index_size": 4481, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14928, "raw_average_key_size": 19, "raw_value_size": 2521538, "raw_average_value_size": 3313, "num_data_blocks": 200, "num_entries": 761, "num_filter_entries": 761, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759488637, "oldest_key_time": 1759488637, "file_creation_time": 1759488812, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:53:32 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 435714 microseconds, and 11993 cpu microseconds.
Oct  3 10:53:32 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:32.996445) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 2543879 bytes OK
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:32.996477) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.148573) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.148623) EVENT_LOG_v1 {"time_micros": 1759488813148610, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.148653) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2572378, prev total WAL file size 2598866, number of live WAL files 2.
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.150617) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303131' seq:72057594037927935, type:22 .. '6C6F676D0032323633' seq:0, type:0; will stop at (end)
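The key bounds in the manual-compaction line are hex-encoded store keys. A quick sketch for decoding them, assuming the "<prefix>\0<name>" layout that ceph-mon's MonitorDBStore uses when it combines prefix and key:

    def decode_mon_key(hexkey: str) -> tuple[str, str]:
        # Keys are printed hex-encoded; prefix and name are joined by a NUL byte.
        raw = bytes.fromhex(hexkey)
        prefix, _, name = raw.partition(b"\x00")
        return prefix.decode(), name.decode()

    print(decode_mon_key("6C6F676D0032303131"))  # ('logm', '2011')
    print(decode_mon_key("6C6F676D0032323633"))  # ('logm', '2263')

So this compaction covers the monitor's cluster-log ("logm") range from version 2011 to 2263.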
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(2484KB)], [122(7750KB)]
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488813150653, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 10479942, "oldest_snapshot_seqno": -1}
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 6663 keys, 10378427 bytes, temperature: kUnknown
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488813338597, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 10378427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10334402, "index_size": 26249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 173676, "raw_average_key_size": 26, "raw_value_size": 10214149, "raw_average_value_size": 1532, "num_data_blocks": 1051, "num_entries": 6663, "num_filter_entries": 6663, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759488813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:53:33 compute-0 nova_compute[351685]: 2025-10-03 10:53:33.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.338946) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 10378427 bytes
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.373479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.7 rd, 55.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 7.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(8.2) write-amplify(4.1) OK, records in: 7187, records dropped: 524 output_compression: NoCompression
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.373528) EVENT_LOG_v1 {"time_micros": 1759488813373508, "job": 74, "event": "compaction_finished", "compaction_time_micros": 188041, "compaction_time_cpu_micros": 32051, "output_level": 6, "num_output_files": 1, "total_output_size": 10378427, "num_input_records": 7187, "num_output_records": 6663, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488813374806, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488813378389, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.150376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.378649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.378656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.378659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.378662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:53:33 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:53:33.379702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:53:33 compute-0 nova_compute[351685]: 2025-10-03 10:53:33.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:36 compute-0 podman[485719]: 2025-10-03 10:53:36.827444078 +0000 UTC m=+0.080042341 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:53:36 compute-0 podman[485721]: 2025-10-03 10:53:36.83654205 +0000 UTC m=+0.080738173 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:53:36 compute-0 podman[485720]: 2025-10-03 10:53:36.860114346 +0000 UTC m=+0.106627694 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container)
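Each podman health_status event above carries the container name, current status, and failing streak as comma-separated fields. A small sketch, assuming the field order shown in these lines, that flags anything not healthy:

    import re
    import sys

    HEALTH_RE = re.compile(r' name=([^,]+), health_status=([^,]+), health_failing_streak=(\d+)')

    for line in sys.stdin:
        m = HEALTH_RE.search(line)
        if not m:
            continue
        name, status, streak = m.group(1), m.group(2), int(m.group(3))
        if status != "healthy" or streak > 0:
            print(f"ALERT {name}: {status} (failing streak {streak})")

All three containers here (podman_exporter, ceilometer_agent_ipmi, kepler) report healthy with a failing streak of 0, so this pass would print nothing.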
Oct  3 10:53:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:38 compute-0 nova_compute[351685]: 2025-10-03 10:53:38.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:38 compute-0 nova_compute[351685]: 2025-10-03 10:53:38.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
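The recurring pgmap lines are a compact cluster-health summary: all 321 PGs are active+clean and usage is steady at 264 MiB of 60 GiB. A sketch that parses them into structured fields (the regex is illustrative and tied to the exact format above):

    import re
    import sys

    PGMAP_RE = re.compile(
        r'pgmap v(\d+): (\d+) pgs: ([^;]+); '
        r'(\S+ \S+) data, (\S+ \S+) used, (\S+ \S+) / (\S+ \S+) avail'
    )

    for line in sys.stdin:
        m = PGMAP_RE.search(line)
        if m:
            ver, pgs, states, data, used, avail, total = m.groups()
            print(f"v{ver}: {pgs} pgs ({states}), {used} used of {total}")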
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.897 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.897 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
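The warning above means the [pollsters] source has more pollsters than worker threads (here just one), so they run sequentially within the cycle. A sketch that measures how long each pollster actually takes, by pairing the "Polling pollster" / "Finished polling pollster" INFO lines (timestamp format as in these logs):

    import re
    import sys
    from datetime import datetime

    POLL_RE = re.compile(
        r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO ceilometer\.polling\.manager '
        r'\[-\] (Polling|Finished polling) pollster (\S+)'
    )

    starts, durations = {}, {}
    for line in sys.stdin:
        m = POLL_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        if m.group(2) == "Polling":
            starts[m.group(3)] = ts
        elif m.group(3) in starts:
            durations[m.group(3)] = (ts - starts.pop(m.group(3))).total_seconds()

    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {secs * 1000:.0f} ms")

On the cycle below, power.state (~30 ms) and disk.device.read.bytes (~58 ms) dominate, while the network counters finish in well under 10 ms each.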
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.897 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.904 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
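The instance-data line above is a plain Python dict repr, so it can be parsed safely with ast.literal_eval rather than eval. A sketch, assuming the repr stays on one line as it does here:

    import ast
    import re
    import sys

    for line in sys.stdin:
        m = re.search(r'instance data: (\{.*\}) discover_libvirt_polling', line)
        if not m:
            continue
        inst = ast.literal_eval(m.group(1))  # literals only, so no code execution
        flavor = inst["flavor"]
        print(f'{inst["name"]} ({inst["id"]}): {flavor["vcpus"]} vCPU, '
              f'{flavor["ram"]} MB RAM, state {inst["OS-EXT-STS:vm_state"]}')

For the line above this prints: test_0 (b43db93c-a4fe-46e9-8418-eedf4f5c135a): 1 vCPU, 512 MB RAM, state running.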
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.905 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.905 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:53:40.905785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.911 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.913 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.913 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:53:40.913211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:53:40.915160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.944 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.945 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.946 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.947 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.947 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.947 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.948 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:40.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:53:40.948089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.003 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.006 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
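The disk.device.*.latency volumes are cumulative counters (they appear to map to libvirt's per-device total-time block statistics in nanoseconds, an assumption worth verifying against your ceilometer version). Dividing a latency delta by the request delta over the same interval gives a mean per-I/O latency; taking the samples above as a single interval since boot, for illustration:

    def mean_latency_ms(delta_latency_ns: int, delta_requests: int) -> float:
        # Average time per I/O over the interval, in milliseconds.
        return delta_latency_ns / delta_requests / 1e6 if delta_requests else 0.0

    # First device from the samples above: 1351272306 ns over 840 read requests.
    print(f"{mean_latency_ms(1351272306, 840):.2f} ms per read")  # ~1.61 ms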
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:53:41.006427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:53:41.008759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:53:41.011337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:53:41.013628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.015 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.017 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:53:41.015558) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:53:41.017611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.046 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
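The power.state volume of 1 is consistent with libvirt's virDomainState enum, where 1 is VIR_DOMAIN_RUNNING. A hedged sketch of reading the same state with the libvirt Python binding:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b43db93c-a4fe-46e9-8418-eedf4f5c135a')
    state, reason = dom.state()
    print(state)                                 # 1 for a running domain
    print(state == libvirt.VIR_DOMAIN_RUNNING)   # True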
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.048 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:53:41.048670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.052 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:53:41.053446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
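network.incoming.bytes.rate is skipped while the other network pollsters run: discovery results are cached per polling task, and a pollster whose discovery yields no (new) resources this cycle is short-circuited. A rough sketch of that decision, assuming nothing about ceilometer internals beyond what the log line states:

    def should_skip(name, resources):
        # Mirrors the "Skip pollster ..." message above: an empty
        # discovery result means the pollster is not executed.
        if not resources:
            print(f'Skip pollster {name}, no new resources found this cycle')
            return True
        return False

    should_skip('network.incoming.bytes.rate', [])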
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:53:41.057478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:53:41.060517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:53:41.062468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:53:41.064832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.066 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:53:41.066871) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.068 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.069 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:53:41.069194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 76400000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
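The cpu meter is a cumulative counter in nanoseconds of CPU time (76400000000 ns = 76.4 s here), so a utilization figure has to be derived from the difference between two polls. Illustrative arithmetic, with a made-up previous value and interval:

    def cpu_util_percent(ns_prev, ns_now, interval_s, vcpus):
        # Fraction of available CPU time consumed between two polls.
        return (ns_now - ns_prev) / (interval_s * 1e9 * vcpus) * 100.0

    # e.g. 0.4 s of CPU time over a 60 s interval on one vCPU:
    print(cpu_util_percent(76_000_000_000, 76_400_000_000, 60, 1))  # ~0.67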
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:53:41.071604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.072 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.073 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:53:41.073795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:53:41.076124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.077 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.077 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
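memory.usage is reported in MB, and the fractional volume is what a KiB-granular statistic looks like after division by 1024: 48.81640625 MB corresponds exactly to 49988 KiB. That the underlying source is a KiB figure is an assumption; the arithmetic itself is exact:

    kib = 49988          # hypothetical underlying value in KiB
    print(kib / 1024)    # 48.81640625, the volume logged above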
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:53:41.077456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.079 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:53:41.079058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.080 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:53:41.080542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:53:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:53:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
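The burst of "Finished processing pollster" lines closes one polling task covering every meter polled above; the "context of pollsters" each INFO line mentions is a polling source named pollsters in ceilometer's polling.yaml. A hedged reconstruction of what such a source looks like (the source name comes from the log; the interval and the abridged meter list are guesses):

    import yaml  # PyYAML

    cfg = yaml.safe_load("""
    sources:
      - name: pollsters
        interval: 120
        meters:
          - cpu
          - memory.usage
          - power.state
          - disk.device.write.bytes
          - network.incoming.bytes
    """)
    print(cfg['sources'][0]['name'])   # pollsters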
Oct  3 10:53:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:53:41.658 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:53:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:53:41.658 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:53:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:53:41.659 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
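The three lockutils lines are the standard oslo.concurrency pattern: acquire, run the critical section, release, with wait/held durations logged. The same serialization that ProcessMonitor gets here can be expressed with the lockutils decorator (a sketch, not neutron's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Only one thread at a time runs this body; oslo emits the
        # "Acquiring"/"acquired"/"released" debug lines seen above.
        pass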
Oct  3 10:53:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:43 compute-0 nova_compute[351685]: 2025-10-03 10:53:43.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:43 compute-0 nova_compute[351685]: 2025-10-03 10:53:43.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
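The recurring [POLLIN] wakeups are the OVS IDL noticing readable data on its OVSDB connection (fd 25). In plain Python terms the underlying mechanism is just poll() on a registered file descriptor; a self-contained sketch:

    import select
    import socket

    sock = socket.socket()                 # stand-in for the OVSDB connection
    poller = select.poll()
    poller.register(sock, select.POLLIN)
    events = poller.poll(1000)             # waits up to 1 s; returns (fd, mask) pairs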
Oct  3 10:53:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:53:46
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
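"Mode upmap, max misplaced 0.050000" caps how much data movement a balancer plan may cause, and "prepared 0/10 changes" means the upmap optimizer found nothing to improve within its budget of 10 changes. Against the 321 PGs reported in the surrounding pgmap lines, the misplaced cap works out to about 16 PGs:

    pgs = 321                          # from the pgmap lines nearby
    max_misplaced = 0.05
    print(int(pgs * max_misplaced))    # 16 PGs may be misplaced at once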
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:46 compute-0 nova_compute[351685]: 2025-10-03 10:53:46.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:53:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:53:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:48 compute-0 nova_compute[351685]: 2025-10-03 10:53:48.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:48 compute-0 nova_compute[351685]: 2025-10-03 10:53:48.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:53 compute-0 nova_compute[351685]: 2025-10-03 10:53:53.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:53 compute-0 nova_compute[351685]: 2025-10-03 10:53:53.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:53 compute-0 podman[485780]: 2025-10-03 10:53:53.871826037 +0000 UTC m=+0.120031133 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct  3 10:53:53 compute-0 podman[485781]: 2025-10-03 10:53:53.883739379 +0000 UTC m=+0.123936289 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:53:53 compute-0 podman[485782]: 2025-10-03 10:53:53.895109874 +0000 UTC m=+0.127518774 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:53:53 compute-0 podman[485783]: 2025-10-03 10:53:53.938650412 +0000 UTC m=+0.163182179 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  3 10:53:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:53:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4107817218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:53:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:53:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4107817218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:53:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:53:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
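The pg_autoscaler lines above pair each pool's share of raw capacity with a bias and a "pg target ... quantized to" value. A minimal sketch of that arithmetic, consistent with the numbers logged here, assuming this 3-OSD cluster and the default mon_target_pg_per_osd of 100 (this is illustrative only, not Ceph's actual pg_autoscaler code):

    # Sketch of the reported arithmetic (assumptions: 3 OSDs, default
    # mon_target_pg_per_osd=100; not Ceph's real implementation).
    def pg_target(usage_ratio, bias, n_osds=3, target_pg_per_osd=100):
        # pool's share of raw capacity x bias x cluster-wide PG budget
        return usage_ratio * bias * n_osds * target_pg_per_osd

    def quantize(target, pg_min=1):
        # round up to the next power of two, never below the pool's floor
        n = pg_min
        while n < target:
            n *= 2
        return n

    # Values taken from the log lines above:
    print(pg_target(0.000551649390343166, 1.0))             # 'vms' -> 0.1654948171029498
    print(quantize(pg_target(7.185749983720779e-06, 1.0)))  # '.mgr' -> 1
    print(quantize(pg_target(5.087256625643029e-07, 4.0), pg_min=16))  # cephfs meta -> 16

The pools reported as "quantized to 32 (current 32)" are presumably being held at their existing pg_num of 32, which acts as the floor when the computed target is far smaller.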
Oct  3 10:53:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:53:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:53:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:53:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:53:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:53:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:53:56 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d8b089a5-8932-40fa-b990-17c7c57652fc does not exist
Oct  3 10:53:56 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2d3fa4ba-7b41-4bdd-aa98-db2ab497024a does not exist
Oct  3 10:53:56 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b4f7081d-79c0-4ffe-9c64-2a525c79fccc does not exist
Oct  3 10:53:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:53:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:53:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:53:56 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:53:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:53:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:53:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:53:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:53:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:53:57 compute-0 podman[486129]: 2025-10-03 10:53:57.12659108 +0000 UTC m=+0.041133771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:53:57 compute-0 podman[486129]: 2025-10-03 10:53:57.459556577 +0000 UTC m=+0.374099218 container create b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_solomon, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct  3 10:53:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:53:57 compute-0 systemd[1]: Started libpod-conmon-b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a.scope.
Oct  3 10:53:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:53:57 compute-0 podman[486129]: 2025-10-03 10:53:57.679220689 +0000 UTC m=+0.593763340 container init b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:53:57 compute-0 podman[486129]: 2025-10-03 10:53:57.691570125 +0000 UTC m=+0.606112766 container start b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 10:53:57 compute-0 podman[486129]: 2025-10-03 10:53:57.697904728 +0000 UTC m=+0.612447389 container attach b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_solomon, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:53:57 compute-0 lucid_solomon[486145]: 167 167
Oct  3 10:53:57 compute-0 systemd[1]: libpod-b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a.scope: Deactivated successfully.
Oct  3 10:53:57 compute-0 podman[486129]: 2025-10-03 10:53:57.703072324 +0000 UTC m=+0.617614965 container died b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_solomon, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:53:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-793aba303098539575cde683c8b04b03313ec37e6b1e1f41407303966904451e-merged.mount: Deactivated successfully.
Oct  3 10:53:58 compute-0 podman[486129]: 2025-10-03 10:53:58.251893942 +0000 UTC m=+1.166436543 container remove b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:53:58 compute-0 systemd[1]: libpod-conmon-b13912315a980c54e43cf9475cf977379e1b7621242837bfd3037debf61eaa5a.scope: Deactivated successfully.
Oct  3 10:53:58 compute-0 podman[486161]: 2025-10-03 10:53:58.320567857 +0000 UTC m=+0.281987064 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:53:58 compute-0 podman[486162]: 2025-10-03 10:53:58.336818458 +0000 UTC m=+0.284479644 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:53:58 compute-0 podman[486163]: 2025-10-03 10:53:58.350015841 +0000 UTC m=+0.290721033 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct  3 10:53:58 compute-0 nova_compute[351685]: 2025-10-03 10:53:58.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:53:58 compute-0 podman[486227]: 2025-10-03 10:53:58.506120402 +0000 UTC m=+0.051343629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:53:58 compute-0 podman[486227]: 2025-10-03 10:53:58.615124801 +0000 UTC m=+0.160348028 container create 5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:53:58 compute-0 nova_compute[351685]: 2025-10-03 10:53:58.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:53:58 compute-0 systemd[1]: Started libpod-conmon-5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0.scope.
Oct  3 10:53:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a2e4e0e9efa43a82e42e394220fcab3b23269923341dfc5b9b7661fa94eae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a2e4e0e9efa43a82e42e394220fcab3b23269923341dfc5b9b7661fa94eae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a2e4e0e9efa43a82e42e394220fcab3b23269923341dfc5b9b7661fa94eae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a2e4e0e9efa43a82e42e394220fcab3b23269923341dfc5b9b7661fa94eae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00a2e4e0e9efa43a82e42e394220fcab3b23269923341dfc5b9b7661fa94eae9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:53:58 compute-0 podman[486227]: 2025-10-03 10:53:58.815006837 +0000 UTC m=+0.360230044 container init 5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:53:58 compute-0 podman[486227]: 2025-10-03 10:53:58.829574005 +0000 UTC m=+0.374797192 container start 5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 10:53:58 compute-0 podman[486227]: 2025-10-03 10:53:58.85310839 +0000 UTC m=+0.398331607 container attach 5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:53:59 compute-0 podman[157165]: time="2025-10-03T10:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:53:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47974 "" "Go-http-client/1.1"
Oct  3 10:53:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9513 "" "Go-http-client/1.1"
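The two GET requests above are libpod REST API calls arriving over the local podman socket (the "@" client field is a unix-socket peer). A hypothetical reproduction of those calls using the third-party requests-unixsocket package; the rootful socket path /run/podman/podman.sock is an assumption:

    # Hypothetical sketch of the two logged API calls; socket path assumed.
    import requests_unixsocket

    session = requests_unixsocket.Session()
    base = "http+unix://%2Frun%2Fpodman%2Fpodman.sock/v4.9.3/libpod"

    # GET /libpod/containers/json?all=true&external=false (first log line)
    containers = session.get(base + "/containers/json",
                             params={"all": "true", "external": "false"}).json()
    # GET /libpod/containers/stats?all=false&interval=1&stream=false (second)
    stats = session.get(base + "/containers/stats",
                        params={"all": "false", "interval": 1,
                                "stream": "false"}).json()
    print(len(containers), "containers;", len(stats.get("Stats", [])), "stats entries")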
Oct  3 10:54:00 compute-0 agitated_lumiere[486243]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:54:00 compute-0 agitated_lumiere[486243]: --> relative data size: 1.0
Oct  3 10:54:00 compute-0 agitated_lumiere[486243]: --> All data devices are unavailable
Oct  3 10:54:00 compute-0 systemd[1]: libpod-5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0.scope: Deactivated successfully.
Oct  3 10:54:00 compute-0 podman[486227]: 2025-10-03 10:54:00.161177357 +0000 UTC m=+1.706400574 container died 5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:54:00 compute-0 systemd[1]: libpod-5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0.scope: Consumed 1.255s CPU time.
Oct  3 10:54:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-00a2e4e0e9efa43a82e42e394220fcab3b23269923341dfc5b9b7661fa94eae9-merged.mount: Deactivated successfully.
Oct  3 10:54:00 compute-0 podman[486227]: 2025-10-03 10:54:00.255121823 +0000 UTC m=+1.800345010 container remove 5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:54:00 compute-0 systemd[1]: libpod-conmon-5eb816226c863bcb53f1770f5916417e6b58894efca792c329cd8c4053a434b0.scope: Deactivated successfully.
Oct  3 10:54:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:01 compute-0 podman[486420]: 2025-10-03 10:54:01.208546706 +0000 UTC m=+0.047232277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:54:01 compute-0 podman[486420]: 2025-10-03 10:54:01.378904044 +0000 UTC m=+0.217589545 container create d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 10:54:01 compute-0 openstack_network_exporter[367524]: ERROR   10:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:54:01 compute-0 openstack_network_exporter[367524]: ERROR   10:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:54:01 compute-0 openstack_network_exporter[367524]: ERROR   10:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:54:01 compute-0 openstack_network_exporter[367524]: ERROR   10:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:54:01 compute-0 openstack_network_exporter[367524]: ERROR   10:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:54:01 compute-0 systemd[1]: Started libpod-conmon-d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8.scope.
Oct  3 10:54:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:54:01 compute-0 podman[486420]: 2025-10-03 10:54:01.77876985 +0000 UTC m=+0.617455421 container init d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:54:01 compute-0 podman[486420]: 2025-10-03 10:54:01.798206463 +0000 UTC m=+0.636891944 container start d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:54:01 compute-0 xenodochial_babbage[486435]: 167 167
Oct  3 10:54:01 compute-0 systemd[1]: libpod-d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8.scope: Deactivated successfully.
Oct  3 10:54:01 compute-0 podman[486420]: 2025-10-03 10:54:01.829329052 +0000 UTC m=+0.668014583 container attach d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 10:54:01 compute-0 podman[486420]: 2025-10-03 10:54:01.830002594 +0000 UTC m=+0.668688095 container died d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:54:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-17df0fdc75485ebea0ea3f1a2e5024d577edf378bb7801fc4293737cfd96592f-merged.mount: Deactivated successfully.
Oct  3 10:54:02 compute-0 podman[486420]: 2025-10-03 10:54:02.240712288 +0000 UTC m=+1.079397759 container remove d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:54:02 compute-0 systemd[1]: libpod-conmon-d71bea2d04301714730e5a21568693c568f0ba474439f9fc8004da2024133bf8.scope: Deactivated successfully.
Oct  3 10:54:02 compute-0 podman[486459]: 2025-10-03 10:54:02.506007264 +0000 UTC m=+0.101133747 container create a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:54:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:02 compute-0 podman[486459]: 2025-10-03 10:54:02.463781219 +0000 UTC m=+0.058907782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:54:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:02 compute-0 systemd[1]: Started libpod-conmon-a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09.scope.
Oct  3 10:54:02 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:54:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd89bdcc58eb8c57ad111c54340ca758b19edc5871f13d9fb9aba24b1675a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:54:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd89bdcc58eb8c57ad111c54340ca758b19edc5871f13d9fb9aba24b1675a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:54:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd89bdcc58eb8c57ad111c54340ca758b19edc5871f13d9fb9aba24b1675a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:54:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd89bdcc58eb8c57ad111c54340ca758b19edc5871f13d9fb9aba24b1675a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:54:02 compute-0 podman[486459]: 2025-10-03 10:54:02.668941703 +0000 UTC m=+0.264068266 container init a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatterjee, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  3 10:54:02 compute-0 podman[486459]: 2025-10-03 10:54:02.689739541 +0000 UTC m=+0.284866014 container start a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatterjee, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:54:02 compute-0 podman[486459]: 2025-10-03 10:54:02.696433997 +0000 UTC m=+0.291560470 container attach a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 10:54:03 compute-0 nova_compute[351685]: 2025-10-03 10:54:03.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]: {
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:    "0": [
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:        {
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "devices": [
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "/dev/loop3"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            ],
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_name": "ceph_lv0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_size": "21470642176",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "name": "ceph_lv0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "tags": {
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cluster_name": "ceph",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.crush_device_class": "",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.encrypted": "0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osd_id": "0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.type": "block",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.vdo": "0"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            },
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "type": "block",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "vg_name": "ceph_vg0"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:        }
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:    ],
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:    "1": [
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:        {
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "devices": [
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "/dev/loop4"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            ],
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_name": "ceph_lv1",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_size": "21470642176",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "name": "ceph_lv1",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "tags": {
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cluster_name": "ceph",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.crush_device_class": "",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.encrypted": "0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osd_id": "1",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.type": "block",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.vdo": "0"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            },
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "type": "block",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "vg_name": "ceph_vg1"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:        }
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:    ],
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:    "2": [
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:        {
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "devices": [
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "/dev/loop5"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            ],
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_name": "ceph_lv2",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_size": "21470642176",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "name": "ceph_lv2",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "tags": {
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.cluster_name": "ceph",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.crush_device_class": "",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.encrypted": "0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osd_id": "2",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.type": "block",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:                "ceph.vdo": "0"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            },
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "type": "block",
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:            "vg_name": "ceph_vg2"
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:        }
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]:    ]
Oct  3 10:54:03 compute-0 sweet_chatterjee[486474]: }
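The JSON printed by the sweet_chatterjee one-shot container above maps OSD ids to their LVM volumes (the output shape matches a ceph-volume lvm list --format json run, though the command itself is not shown in the log). A short sketch that extracts the OSD-to-device mapping, assuming the output was captured to a file; all key names are taken directly from the output above:

    # Sketch: parse the OSD/LV inventory printed above (hypothetical capture).
    import json

    with open("ceph_volume_list.json") as f:  # assumed capture of the stdout above
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items()):
        for lv in lvs:
            devs = ",".join(lv["devices"])
            print(f"osd.{osd_id}: {lv['lv_path']} on {devs} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid 25b10821-47d4-4e0b-9b6d-d16a0463c4d0)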
Oct  3 10:54:03 compute-0 systemd[1]: libpod-a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09.scope: Deactivated successfully.
Oct  3 10:54:03 compute-0 podman[486459]: 2025-10-03 10:54:03.58976769 +0000 UTC m=+1.184894133 container died a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-abd89bdcc58eb8c57ad111c54340ca758b19edc5871f13d9fb9aba24b1675a04-merged.mount: Deactivated successfully.
Oct  3 10:54:03 compute-0 podman[486459]: 2025-10-03 10:54:03.672589999 +0000 UTC m=+1.267716442 container remove a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:54:03 compute-0 systemd[1]: libpod-conmon-a68a488137aea104094449d89890eac84baf724d52acddc969ed5f829fb02d09.scope: Deactivated successfully.
Oct  3 10:54:03 compute-0 nova_compute[351685]: 2025-10-03 10:54:03.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:04 compute-0 podman[486633]: 2025-10-03 10:54:04.820605199 +0000 UTC m=+0.073738198 container create b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 10:54:04 compute-0 podman[486633]: 2025-10-03 10:54:04.792799767 +0000 UTC m=+0.045932786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:54:04 compute-0 systemd[1]: Started libpod-conmon-b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2.scope.
Oct  3 10:54:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:54:04 compute-0 podman[486633]: 2025-10-03 10:54:04.958825005 +0000 UTC m=+0.211958034 container init b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:54:04 compute-0 podman[486633]: 2025-10-03 10:54:04.976506123 +0000 UTC m=+0.229639142 container start b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:54:04 compute-0 podman[486633]: 2025-10-03 10:54:04.98265457 +0000 UTC m=+0.235787589 container attach b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:54:04 compute-0 compassionate_wozniak[486649]: 167 167
Oct  3 10:54:04 compute-0 systemd[1]: libpod-b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2.scope: Deactivated successfully.
Oct  3 10:54:04 compute-0 podman[486633]: 2025-10-03 10:54:04.985464111 +0000 UTC m=+0.238597130 container died b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:54:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d75fb1900898dbf001a6f9de34ff8171a19dc8f17a9ded3fabab722d28405cf9-merged.mount: Deactivated successfully.
Oct  3 10:54:05 compute-0 podman[486633]: 2025-10-03 10:54:05.052992908 +0000 UTC m=+0.306125887 container remove b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wozniak, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:54:05 compute-0 systemd[1]: libpod-conmon-b5c46f75e81e43226687d5dc2c4a2c5016f1652f3e015cfa298d1179038ff0c2.scope: Deactivated successfully.
Oct  3 10:54:05 compute-0 podman[486674]: 2025-10-03 10:54:05.287450014 +0000 UTC m=+0.074707599 container create df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:54:05 compute-0 podman[486674]: 2025-10-03 10:54:05.256673566 +0000 UTC m=+0.043931241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:54:05 compute-0 systemd[1]: Started libpod-conmon-df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71.scope.
Oct  3 10:54:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1667dbb7c71d319acf4cb0e6e87b7929ffdc09b44a9c43e3fbac54d7a7fc78b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1667dbb7c71d319acf4cb0e6e87b7929ffdc09b44a9c43e3fbac54d7a7fc78b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1667dbb7c71d319acf4cb0e6e87b7929ffdc09b44a9c43e3fbac54d7a7fc78b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:54:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1667dbb7c71d319acf4cb0e6e87b7929ffdc09b44a9c43e3fbac54d7a7fc78b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:54:05 compute-0 podman[486674]: 2025-10-03 10:54:05.460085715 +0000 UTC m=+0.247343340 container init df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 10:54:05 compute-0 podman[486674]: 2025-10-03 10:54:05.481271595 +0000 UTC m=+0.268529180 container start df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:54:05 compute-0 podman[486674]: 2025-10-03 10:54:05.487532077 +0000 UTC m=+0.274789742 container attach df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]: {
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:54:06 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "osd_id": 1,
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "type": "bluestore"
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:    },
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "osd_id": 2,
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "type": "bluestore"
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:    },
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "osd_id": 0,
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:        "type": "bluestore"
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]:    }
Oct  3 10:54:06 compute-0 agitated_matsumoto[486690]: }
Oct  3 10:54:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:06 compute-0 systemd[1]: libpod-df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71.scope: Deactivated successfully.
Oct  3 10:54:06 compute-0 systemd[1]: libpod-df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71.scope: Consumed 1.100s CPU time.
Oct  3 10:54:06 compute-0 podman[486674]: 2025-10-03 10:54:06.576757589 +0000 UTC m=+1.364015244 container died df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:54:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1667dbb7c71d319acf4cb0e6e87b7929ffdc09b44a9c43e3fbac54d7a7fc78b3-merged.mount: Deactivated successfully.
Oct  3 10:54:06 compute-0 podman[486674]: 2025-10-03 10:54:06.676654846 +0000 UTC m=+1.463912441 container remove df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:54:06 compute-0 systemd[1]: libpod-conmon-df376739c283da5ab287d89c6d4ecc6c4d201d269c08199971068373af1c5d71.scope: Deactivated successfully.
Oct  3 10:54:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:54:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:54:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:54:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:54:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4792ae7f-eff2-4b2e-94b8-bc6783990717 does not exist
Oct  3 10:54:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 29459c34-914f-4a03-ac8f-8be7c6e158b3 does not exist
Oct  3 10:54:07 compute-0 podman[486759]: 2025-10-03 10:54:07.041838767 +0000 UTC m=+0.106268652 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:54:07 compute-0 podman[486760]: 2025-10-03 10:54:07.049599027 +0000 UTC m=+0.108613928 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, distribution-scope=public, release-0.7.12=, vcs-type=git, io.openshift.expose-services=)
Oct  3 10:54:07 compute-0 podman[486761]: 2025-10-03 10:54:07.059556596 +0000 UTC m=+0.114167565 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:54:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:54:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:54:08 compute-0 nova_compute[351685]: 2025-10-03 10:54:08.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:54:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:08 compute-0 nova_compute[351685]: 2025-10-03 10:54:08.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:54:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:13 compute-0 nova_compute[351685]: 2025-10-03 10:54:13.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:54:13 compute-0 nova_compute[351685]: 2025-10-03 10:54:13.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:54:13 compute-0 nova_compute[351685]: 2025-10-03 10:54:13.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:54:13 compute-0 nova_compute[351685]: 2025-10-03 10:54:13.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:54:13 compute-0 nova_compute[351685]: 2025-10-03 10:54:13.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:54:14 compute-0 nova_compute[351685]: 2025-10-03 10:54:14.065 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:54:14 compute-0 nova_compute[351685]: 2025-10-03 10:54:14.065 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:54:14 compute-0 nova_compute[351685]: 2025-10-03 10:54:14.066 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:54:14 compute-0 nova_compute[351685]: 2025-10-03 10:54:14.066 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:54:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:54:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:54:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:16 compute-0 nova_compute[351685]: 2025-10-03 10:54:16.818 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:54:16 compute-0 nova_compute[351685]: 2025-10-03 10:54:16.835 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:54:16 compute-0 nova_compute[351685]: 2025-10-03 10:54:16.836 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 10:54:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:18 compute-0 nova_compute[351685]: 2025-10-03 10:54:18.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:54:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:18 compute-0 nova_compute[351685]: 2025-10-03 10:54:18.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:54:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:20 compute-0 nova_compute[351685]: 2025-10-03 10:54:20.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:54:20 compute-0 nova_compute[351685]: 2025-10-03 10:54:20.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:54:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:22 compute-0 nova_compute[351685]: 2025-10-03 10:54:22.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:54:23 compute-0 nova_compute[351685]: 2025-10-03 10:54:23.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:54:23 compute-0 nova_compute[351685]: 2025-10-03 10:54:23.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:54:23 compute-0 nova_compute[351685]: 2025-10-03 10:54:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:54:23 compute-0 nova_compute[351685]: 2025-10-03 10:54:23.750 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:54:23 compute-0 nova_compute[351685]: 2025-10-03 10:54:23.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:54:23 compute-0 nova_compute[351685]: 2025-10-03 10:54:23.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:54:23 compute-0 nova_compute[351685]: 2025-10-03 10:54:23.751 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:54:23 compute-0 nova_compute[351685]: 2025-10-03 10:54:23.752 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:54:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:54:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058342562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:54:24 compute-0 nova_compute[351685]: 2025-10-03 10:54:24.283 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:54:24 compute-0 nova_compute[351685]: 2025-10-03 10:54:24.441 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:54:24 compute-0 nova_compute[351685]: 2025-10-03 10:54:24.442 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:54:24 compute-0 nova_compute[351685]: 2025-10-03 10:54:24.443 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:54:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:24 compute-0 podman[486872]: 2025-10-03 10:54:24.836459475 +0000 UTC m=+0.085520546 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 10:54:24 compute-0 podman[486870]: 2025-10-03 10:54:24.858774161 +0000 UTC m=+0.115167957 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 10:54:24 compute-0 podman[486871]: 2025-10-03 10:54:24.868850655 +0000 UTC m=+0.117950037 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:54:24 compute-0 podman[486873]: 2025-10-03 10:54:24.890551682 +0000 UTC m=+0.133450405 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 10:54:24 compute-0 nova_compute[351685]: 2025-10-03 10:54:24.955 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:54:24 compute-0 nova_compute[351685]: 2025-10-03 10:54:24.956 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3806MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:54:24 compute-0 nova_compute[351685]: 2025-10-03 10:54:24.956 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:54:24 compute-0 nova_compute[351685]: 2025-10-03 10:54:24.957 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.076 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.076 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.076 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.182 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:54:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:54:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3113162288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.621 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.629 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.642 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.644 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:54:25 compute-0 nova_compute[351685]: 2025-10-03 10:54:25.644 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:54:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.749384) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488867749431, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 699, "num_deletes": 251, "total_data_size": 848294, "memory_usage": 862024, "flush_reason": "Manual Compaction"}
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488867762038, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 840251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52555, "largest_seqno": 53253, "table_properties": {"data_size": 836604, "index_size": 1491, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8284, "raw_average_key_size": 19, "raw_value_size": 829301, "raw_average_value_size": 1942, "num_data_blocks": 67, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759488813, "oldest_key_time": 1759488813, "file_creation_time": 1759488867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 13504 microseconds, and 7509 cpu microseconds.
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.762888) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 840251 bytes OK
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.763415) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.808707) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.808756) EVENT_LOG_v1 {"time_micros": 1759488867808748, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.808778) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 844662, prev total WAL file size 844662, number of live WAL files 2.
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.810362) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(820KB)], [125(10135KB)]
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488867810394, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11218678, "oldest_snapshot_seqno": -1}
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 6577 keys, 9471564 bytes, temperature: kUnknown
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488867865456, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9471564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9429019, "index_size": 25024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 172557, "raw_average_key_size": 26, "raw_value_size": 9311123, "raw_average_value_size": 1415, "num_data_blocks": 991, "num_entries": 6577, "num_filter_entries": 6577, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759488867, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.865689) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9471564 bytes
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.871711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.5 rd, 171.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.9 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(24.6) write-amplify(11.3) OK, records in: 7090, records dropped: 513 output_compression: NoCompression
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.871730) EVENT_LOG_v1 {"time_micros": 1759488867871721, "job": 76, "event": "compaction_finished", "compaction_time_micros": 55130, "compaction_time_cpu_micros": 22927, "output_level": 6, "num_output_files": 1, "total_output_size": 9471564, "num_input_records": 7090, "num_output_records": 6577, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488867872015, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488867874282, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.810222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.874456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.874459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.874461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.874462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:54:27 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:54:27.874464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:54:28 compute-0 nova_compute[351685]: 2025-10-03 10:54:28.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:28 compute-0 nova_compute[351685]: 2025-10-03 10:54:28.646 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:54:28 compute-0 nova_compute[351685]: 2025-10-03 10:54:28.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:28 compute-0 podman[486973]: 2025-10-03 10:54:28.844948159 +0000 UTC m=+0.098505352 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:54:28 compute-0 podman[486975]: 2025-10-03 10:54:28.880062497 +0000 UTC m=+0.117666998 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:54:28 compute-0 podman[486974]: 2025-10-03 10:54:28.884666224 +0000 UTC m=+0.128689301 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:54:29 compute-0 nova_compute[351685]: 2025-10-03 10:54:29.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:54:29 compute-0 nova_compute[351685]: 2025-10-03 10:54:29.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:54:29 compute-0 nova_compute[351685]: 2025-10-03 10:54:29.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:54:29 compute-0 podman[157165]: time="2025-10-03T10:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:54:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:54:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9082 "" "Go-http-client/1.1"
Oct  3 10:54:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:31 compute-0 openstack_network_exporter[367524]: ERROR   10:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:54:31 compute-0 openstack_network_exporter[367524]: ERROR   10:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:54:31 compute-0 openstack_network_exporter[367524]: ERROR   10:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:54:31 compute-0 openstack_network_exporter[367524]: ERROR   10:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:54:31 compute-0 openstack_network_exporter[367524]: ERROR   10:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:54:31 compute-0 nova_compute[351685]: 2025-10-03 10:54:31.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:54:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:33 compute-0 nova_compute[351685]: 2025-10-03 10:54:33.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:33 compute-0 nova_compute[351685]: 2025-10-03 10:54:33.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Oct  3 10:54:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Oct  3 10:54:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:37 compute-0 podman[487031]: 2025-10-03 10:54:37.873074398 +0000 UTC m=+0.117081540 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:54:37 compute-0 podman[487032]: 2025-10-03 10:54:37.879589217 +0000 UTC m=+0.113773403 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release=1214.1726694543, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Oct  3 10:54:37 compute-0 podman[487033]: 2025-10-03 10:54:37.905140807 +0000 UTC m=+0.131684578 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Oct  3 10:54:38 compute-0 nova_compute[351685]: 2025-10-03 10:54:38.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Oct  3 10:54:38 compute-0 nova_compute[351685]: 2025-10-03 10:54:38.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Oct  3 10:54:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:54:41.659 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:54:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:54:41.660 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:54:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:54:41.660 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:54:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Oct  3 10:54:43 compute-0 nova_compute[351685]: 2025-10-03 10:54:43.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:43 compute-0 nova_compute[351685]: 2025-10-03 10:54:43.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:54:46
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images', 'vms']
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:54:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:54:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:48 compute-0 nova_compute[351685]: 2025-10-03 10:54:48.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  3 10:54:48 compute-0 nova_compute[351685]: 2025-10-03 10:54:48.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Oct  3 10:54:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Oct  3 10:54:53 compute-0 nova_compute[351685]: 2025-10-03 10:54:53.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:53 compute-0 nova_compute[351685]: 2025-10-03 10:54:53.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:54:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3985565206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:54:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:54:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3985565206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:54:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Oct  3 10:54:55 compute-0 podman[487091]: 2025-10-03 10:54:55.869665406 +0000 UTC m=+0.099846686 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct  3 10:54:55 compute-0 podman[487090]: 2025-10-03 10:54:55.879219033 +0000 UTC m=+0.123766374 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 10:54:55 compute-0 podman[487089]: 2025-10-03 10:54:55.896527339 +0000 UTC m=+0.148211268 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:54:55 compute-0 podman[487097]: 2025-10-03 10:54:55.934009401 +0000 UTC m=+0.157353952 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:54:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:54:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Oct  3 10:54:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:54:58 compute-0 nova_compute[351685]: 2025-10-03 10:54:58.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:54:58 compute-0 nova_compute[351685]: 2025-10-03 10:54:58.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:54:59 compute-0 podman[157165]: time="2025-10-03T10:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:54:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:54:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9083 "" "Go-http-client/1.1"
Oct  3 10:54:59 compute-0 podman[487170]: 2025-10-03 10:54:59.875779726 +0000 UTC m=+0.118725571 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:54:59 compute-0 podman[487171]: 2025-10-03 10:54:59.879549347 +0000 UTC m=+0.115887530 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  3 10:54:59 compute-0 podman[487172]: 2025-10-03 10:54:59.90893649 +0000 UTC m=+0.138190135 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 10:55:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:01 compute-0 openstack_network_exporter[367524]: ERROR   10:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:55:01 compute-0 openstack_network_exporter[367524]: ERROR   10:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:55:01 compute-0 openstack_network_exporter[367524]: ERROR   10:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:55:01 compute-0 openstack_network_exporter[367524]: ERROR   10:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:55:01 compute-0 openstack_network_exporter[367524]: ERROR   10:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:55:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:03 compute-0 nova_compute[351685]: 2025-10-03 10:55:03.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:03 compute-0 nova_compute[351685]: 2025-10-03 10:55:03.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:55:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:55:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:55:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:55:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:55:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:55:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 121d193c-25b4-4a52-a073-0426f51bfbd4 does not exist
Oct  3 10:55:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ce81b1c3-447c-476a-8bd1-cc6083933e65 does not exist
Oct  3 10:55:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 123dd2c3-501f-461e-a818-2bf17a9249dd does not exist
Oct  3 10:55:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:55:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:55:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:55:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:55:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:55:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:55:08 compute-0 nova_compute[351685]: 2025-10-03 10:55:08.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:08 compute-0 podman[487384]: 2025-10-03 10:55:08.598497082 +0000 UTC m=+0.086586452 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:55:08 compute-0 podman[487385]: 2025-10-03 10:55:08.599261355 +0000 UTC m=+0.088060777 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Oct  3 10:55:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:08 compute-0 podman[487386]: 2025-10-03 10:55:08.614553577 +0000 UTC m=+0.088181512 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
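The health_status=healthy events above come from podman's native healthchecks; each container's config_data embeds its test command (e.g. '/openstack/healthcheck podman_exporter'). A small sketch that triggers one of these checks on demand, using a container name taken from the log:

    # Run a container's healthcheck immediately; `podman healthcheck run`
    # exits 0 when the check passes. Name from container_name above.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run",
                         "podman_exporter"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")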
Oct  3 10:55:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:55:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:55:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:55:08 compute-0 nova_compute[351685]: 2025-10-03 10:55:08.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:09 compute-0 podman[487556]: 2025-10-03 10:55:09.273160217 +0000 UTC m=+0.090124504 container create f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:55:09 compute-0 podman[487556]: 2025-10-03 10:55:09.226444097 +0000 UTC m=+0.043408464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:55:09 compute-0 systemd[1]: Started libpod-conmon-f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17.scope.
Oct  3 10:55:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:55:09 compute-0 podman[487556]: 2025-10-03 10:55:09.425367532 +0000 UTC m=+0.242331829 container init f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:55:09 compute-0 podman[487556]: 2025-10-03 10:55:09.445022523 +0000 UTC m=+0.261986840 container start f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:55:09 compute-0 podman[487556]: 2025-10-03 10:55:09.452689169 +0000 UTC m=+0.269653526 container attach f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:55:09 compute-0 priceless_bouman[487571]: 167 167
Oct  3 10:55:09 compute-0 systemd[1]: libpod-f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17.scope: Deactivated successfully.
Oct  3 10:55:09 compute-0 podman[487556]: 2025-10-03 10:55:09.459977734 +0000 UTC m=+0.276942041 container died f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:55:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-318883514f77cea191aeca1651d47c6109dc964ea6bfed14eaf9745f7eab96d2-merged.mount: Deactivated successfully.
Oct  3 10:55:09 compute-0 podman[487556]: 2025-10-03 10:55:09.530106735 +0000 UTC m=+0.347071042 container remove f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 10:55:09 compute-0 systemd[1]: libpod-conmon-f0eea282017b9fd468cd5ec2701da1d6437093274adebf1a0333a14bea109c17.scope: Deactivated successfully.
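The create → init → start → attach → died → remove sequence above is one short-lived cephadm helper container (podman's name generator called it priceless_bouman); its only output was "167 167", the ceph user's uid/gid in these images. One way to watch the same lifecycle events live is `podman events`; a sketch, assuming a podman version whose events support JSON output:

    # Stream container lifecycle events like those logged above.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # Field names may vary slightly across podman releases.
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))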
Oct  3 10:55:09 compute-0 podman[487594]: 2025-10-03 10:55:09.788679854 +0000 UTC m=+0.086979253 container create 7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_benz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:55:09 compute-0 podman[487594]: 2025-10-03 10:55:09.756569844 +0000 UTC m=+0.054869323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:55:09 compute-0 systemd[1]: Started libpod-conmon-7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad.scope.
Oct  3 10:55:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f21346af42663bc9352a7e2fb534f6c4665652faefcd382d744670d9aedfb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f21346af42663bc9352a7e2fb534f6c4665652faefcd382d744670d9aedfb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f21346af42663bc9352a7e2fb534f6c4665652faefcd382d744670d9aedfb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f21346af42663bc9352a7e2fb534f6c4665652faefcd382d744670d9aedfb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f21346af42663bc9352a7e2fb534f6c4665652faefcd382d744670d9aedfb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:09 compute-0 podman[487594]: 2025-10-03 10:55:09.946770819 +0000 UTC m=+0.245070318 container init 7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:55:09 compute-0 podman[487594]: 2025-10-03 10:55:09.963901649 +0000 UTC m=+0.262201088 container start 7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 10:55:09 compute-0 podman[487594]: 2025-10-03 10:55:09.969953983 +0000 UTC m=+0.268253482 container attach 7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 10:55:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:11 compute-0 epic_benz[487610]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:55:11 compute-0 epic_benz[487610]: --> relative data size: 1.0
Oct  3 10:55:11 compute-0 epic_benz[487610]: --> All data devices are unavailable
Oct  3 10:55:11 compute-0 systemd[1]: libpod-7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad.scope: Deactivated successfully.
Oct  3 10:55:11 compute-0 systemd[1]: libpod-7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad.scope: Consumed 1.228s CPU time.
Oct  3 10:55:11 compute-0 podman[487639]: 2025-10-03 10:55:11.313970014 +0000 UTC m=+0.038331152 container died 7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_benz, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7f21346af42663bc9352a7e2fb534f6c4665652faefcd382d744670d9aedfb4-merged.mount: Deactivated successfully.
Oct  3 10:55:11 compute-0 podman[487639]: 2025-10-03 10:55:11.409213831 +0000 UTC m=+0.133574939 container remove 7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:55:11 compute-0 systemd[1]: libpod-conmon-7137535b1973f8d2cf4c79aea97752af01bdf10841c7c4310dca724f24a866ad.scope: Deactivated successfully.
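The arrow-prefixed lines from epic_benz ("passed data devices: 0 physical, 3 LVM", "relative data size: 1.0", "All data devices are unavailable") match the dry-run report of `ceph-volume lvm batch`: cephadm re-evaluates its drive group, finds all three LVs already consumed by OSDs, and creates nothing. A sketch of that dry run, with LV paths taken from the `ceph-volume lvm list` output further below (`--report` only reports; it changes nothing):

    # Dry-run OSD batch planning against the three Ceph LVs on this host.
    import subprocess

    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0",
         "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=False)  # a non-zero exit is fine for an informational report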
Oct  3 10:55:12 compute-0 podman[487791]: 2025-10-03 10:55:12.418646452 +0000 UTC m=+0.050586065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:55:12 compute-0 podman[487791]: 2025-10-03 10:55:12.538643874 +0000 UTC m=+0.170583477 container create d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:55:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:12 compute-0 systemd[1]: Started libpod-conmon-d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd.scope.
Oct  3 10:55:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:55:12 compute-0 podman[487791]: 2025-10-03 10:55:12.779370991 +0000 UTC m=+0.411310594 container init d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hertz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 10:55:12 compute-0 podman[487791]: 2025-10-03 10:55:12.79615422 +0000 UTC m=+0.428093823 container start d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hertz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:55:12 compute-0 podman[487791]: 2025-10-03 10:55:12.803695101 +0000 UTC m=+0.435634754 container attach d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:55:12 compute-0 determined_hertz[487806]: 167 167
Oct  3 10:55:12 compute-0 systemd[1]: libpod-d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd.scope: Deactivated successfully.
Oct  3 10:55:12 compute-0 podman[487791]: 2025-10-03 10:55:12.809160297 +0000 UTC m=+0.441099910 container died d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:55:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eb2e561b149ce22c6833e5636ff612f2a08a95eda1cee3a879e57f99f47df40-merged.mount: Deactivated successfully.
Oct  3 10:55:12 compute-0 podman[487791]: 2025-10-03 10:55:12.88150878 +0000 UTC m=+0.513448393 container remove d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:55:12 compute-0 systemd[1]: libpod-conmon-d5e85ff1b4ee70272aff27307e6527168fb388741c66fcd5002f1fa12ded7cbd.scope: Deactivated successfully.
Oct  3 10:55:13 compute-0 podman[487827]: 2025-10-03 10:55:13.162761327 +0000 UTC m=+0.069109799 container create c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:55:13 compute-0 podman[487827]: 2025-10-03 10:55:13.139582913 +0000 UTC m=+0.045931415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:55:13 compute-0 systemd[1]: Started libpod-conmon-c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f.scope.
Oct  3 10:55:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5404036f0191b64640cb68efdc8ae0134bf00169f630a7ec9d8ddcca912fbdf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5404036f0191b64640cb68efdc8ae0134bf00169f630a7ec9d8ddcca912fbdf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5404036f0191b64640cb68efdc8ae0134bf00169f630a7ec9d8ddcca912fbdf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5404036f0191b64640cb68efdc8ae0134bf00169f630a7ec9d8ddcca912fbdf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:13 compute-0 podman[487827]: 2025-10-03 10:55:13.333400114 +0000 UTC m=+0.239748636 container init c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:55:13 compute-0 podman[487827]: 2025-10-03 10:55:13.355851595 +0000 UTC m=+0.262200077 container start c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:55:13 compute-0 podman[487827]: 2025-10-03 10:55:13.361701113 +0000 UTC m=+0.268049685 container attach c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 10:55:13 compute-0 nova_compute[351685]: 2025-10-03 10:55:13.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:13 compute-0 nova_compute[351685]: 2025-10-03 10:55:13.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]: {
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:    "0": [
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:        {
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "devices": [
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "/dev/loop3"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            ],
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_name": "ceph_lv0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_size": "21470642176",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "name": "ceph_lv0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "tags": {
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cluster_name": "ceph",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.crush_device_class": "",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.encrypted": "0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osd_id": "0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.type": "block",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.vdo": "0"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            },
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "type": "block",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "vg_name": "ceph_vg0"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:        }
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:    ],
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:    "1": [
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:        {
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "devices": [
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "/dev/loop4"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            ],
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_name": "ceph_lv1",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_size": "21470642176",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "name": "ceph_lv1",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "tags": {
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cluster_name": "ceph",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.crush_device_class": "",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.encrypted": "0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osd_id": "1",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.type": "block",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.vdo": "0"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            },
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "type": "block",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "vg_name": "ceph_vg1"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:        }
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:    ],
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:    "2": [
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:        {
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "devices": [
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "/dev/loop5"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            ],
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_name": "ceph_lv2",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_size": "21470642176",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "name": "ceph_lv2",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "tags": {
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.cluster_name": "ceph",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.crush_device_class": "",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.encrypted": "0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osd_id": "2",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.type": "block",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:                "ceph.vdo": "0"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            },
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "type": "block",
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:            "vg_name": "ceph_vg2"
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:        }
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]:    ]
Oct  3 10:55:14 compute-0 elastic_hofstadter[487844]: }
Oct  3 10:55:14 compute-0 systemd[1]: libpod-c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f.scope: Deactivated successfully.
Oct  3 10:55:14 compute-0 podman[487827]: 2025-10-03 10:55:14.201738557 +0000 UTC m=+1.108087129 container died c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:55:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5404036f0191b64640cb68efdc8ae0134bf00169f630a7ec9d8ddcca912fbdf6-merged.mount: Deactivated successfully.
Oct  3 10:55:14 compute-0 podman[487827]: 2025-10-03 10:55:14.311376916 +0000 UTC m=+1.217725378 container remove c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:55:14 compute-0 systemd[1]: libpod-conmon-c22147e007da07a5baa517fbb7141615efd491c526c74f65c00eda1f45ac8e9f.scope: Deactivated successfully.
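The JSON printed by elastic_hofstadter above is `ceph-volume lvm list --format json` output: a map of OSD id to its logical volumes, with the authoritative metadata carried in LVM tags (cluster fsid, osd fsid, osdspec affinity). A stdlib-only sketch that reduces it to an osd_id → device map:

    # Parse `ceph-volume lvm list --format json` output (as shown above)
    # into {osd_id: (lv_path, osd_fsid, physical_devices)}.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout

    osds = {}
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            if lv.get("type") == "block":
                osds[osd_id] = (lv["lv_path"],
                                lv["tags"]["ceph.osd_fsid"],
                                lv["devices"])
    print(osds)
    # e.g. {'0': ('/dev/ceph_vg0/ceph_lv0',
    #             '25b10821-47d4-4e0b-9b6d-d16a0463c4d0', ['/dev/loop3']), ...}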
Oct  3 10:55:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:15 compute-0 podman[488002]: 2025-10-03 10:55:15.336044077 +0000 UTC m=+0.070950579 container create b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:55:15 compute-0 systemd[1]: Started libpod-conmon-b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954.scope.
Oct  3 10:55:15 compute-0 podman[488002]: 2025-10-03 10:55:15.316750787 +0000 UTC m=+0.051657329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:55:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:55:15 compute-0 podman[488002]: 2025-10-03 10:55:15.453352722 +0000 UTC m=+0.188259314 container init b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:55:15 compute-0 podman[488002]: 2025-10-03 10:55:15.471847655 +0000 UTC m=+0.206754197 container start b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:55:15 compute-0 podman[488002]: 2025-10-03 10:55:15.478628533 +0000 UTC m=+0.213535135 container attach b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:55:15 compute-0 brave_hodgkin[488016]: 167 167
Oct  3 10:55:15 compute-0 systemd[1]: libpod-b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954.scope: Deactivated successfully.
Oct  3 10:55:15 compute-0 podman[488002]: 2025-10-03 10:55:15.48505029 +0000 UTC m=+0.219956832 container died b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2baf76e98011f48983df201189c82ea24b3e6cd5c26872c4a9ef606515bc84e-merged.mount: Deactivated successfully.
Oct  3 10:55:15 compute-0 podman[488002]: 2025-10-03 10:55:15.554618503 +0000 UTC m=+0.289525025 container remove b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 10:55:15 compute-0 systemd[1]: libpod-conmon-b41d0b54295e507e63231b4d78770c6120b7874ca7f48655f0c1041147bf1954.scope: Deactivated successfully.
Oct  3 10:55:15 compute-0 nova_compute[351685]: 2025-10-03 10:55:15.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:55:15 compute-0 nova_compute[351685]: 2025-10-03 10:55:15.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:55:15 compute-0 nova_compute[351685]: 2025-10-03 10:55:15.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:55:15 compute-0 podman[488040]: 2025-10-03 10:55:15.802392965 +0000 UTC m=+0.095208637 container create fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 10:55:15 compute-0 podman[488040]: 2025-10-03 10:55:15.761629927 +0000 UTC m=+0.054445599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:55:15 compute-0 systemd[1]: Started libpod-conmon-fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110.scope.
Oct  3 10:55:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f72290c45c32e837daac56904e6f59080906417ef61d9c9b1322d86ecfd05d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f72290c45c32e837daac56904e6f59080906417ef61d9c9b1322d86ecfd05d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f72290c45c32e837daac56904e6f59080906417ef61d9c9b1322d86ecfd05d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f72290c45c32e837daac56904e6f59080906417ef61d9c9b1322d86ecfd05d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:55:16 compute-0 podman[488040]: 2025-10-03 10:55:16.007282442 +0000 UTC m=+0.300098174 container init fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 10:55:16 compute-0 podman[488040]: 2025-10-03 10:55:16.034725333 +0000 UTC m=+0.327540995 container start fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 10:55:16 compute-0 podman[488040]: 2025-10-03 10:55:16.039143145 +0000 UTC m=+0.331958807 container attach fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:55:16 compute-0 nova_compute[351685]: 2025-10-03 10:55:16.105 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:55:16 compute-0 nova_compute[351685]: 2025-10-03 10:55:16.106 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:55:16 compute-0 nova_compute[351685]: 2025-10-03 10:55:16.106 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:55:16 compute-0 nova_compute[351685]: 2025-10-03 10:55:16.107 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:55:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:55:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:55:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:55:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:55:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:55:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:55:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]: {
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "osd_id": 1,
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "type": "bluestore"
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:    },
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "osd_id": 2,
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "type": "bluestore"
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:    },
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "osd_id": 0,
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:        "type": "bluestore"
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]:    }
Oct  3 10:55:17 compute-0 friendly_dijkstra[488056]: }
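The JSON that friendly_dijkstra prints is cephadm's per-host OSD inventory (the shape matches ceph-volume raw list output), keyed by osd_uuid. A minimal sketch of reading it, assuming the blob has been captured into the string inventory_json (that variable is illustrative; cephadm really collects it from the container's stdout):

    import json

    osds = json.loads(inventory_json)  # inventory_json: the blob logged above
    for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        assert info["type"] == "bluestore"
        print(f"osd.{info['osd_id']}: {info['device']} (cluster {info['ceph_fsid']})")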
Oct  3 10:55:17 compute-0 podman[488040]: 2025-10-03 10:55:17.049102893 +0000 UTC m=+1.341918545 container died fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:55:17 compute-0 systemd[1]: libpod-fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110.scope: Deactivated successfully.
Oct  3 10:55:17 compute-0 systemd[1]: libpod-fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110.scope: Consumed 1.022s CPU time.
Oct  3 10:55:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f72290c45c32e837daac56904e6f59080906417ef61d9c9b1322d86ecfd05d7-merged.mount: Deactivated successfully.
Oct  3 10:55:17 compute-0 podman[488040]: 2025-10-03 10:55:17.147563904 +0000 UTC m=+1.440379556 container remove fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 10:55:17 compute-0 systemd[1]: libpod-conmon-fc95db810b218b980684f6a6dd9e34502db1ef82c215f951c0d4e9e801f79110.scope: Deactivated successfully.
Oct  3 10:55:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:55:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:55:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:55:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:55:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b1c7bf1d-539c-4b75-88b3-4fd0f9d21d8f does not exist
Oct  3 10:55:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e2d6178d-de51-4cbf-a15a-2132663a8d6d does not exist
Oct  3 10:55:17 compute-0 nova_compute[351685]: 2025-10-03 10:55:17.407 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:55:17 compute-0 nova_compute[351685]: 2025-10-03 10:55:17.423 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:55:17 compute-0 nova_compute[351685]: 2025-10-03 10:55:17.423 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
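The network_info blob cached above is plain JSON, so the port, MAC, fixed address, and floating IP can be pulled out mechanically. A sketch, assuming the cached list has been loaded into the string network_info_json (illustrative name):

    import json

    vif = json.loads(network_info_json)[0]  # network_info_json: the cached list above
    fixed = vif["network"]["subnets"][0]["ips"][0]
    print("port:    ", vif["id"])
    print("mac:     ", vif["address"])
    print("fixed:   ", fixed["address"])
    print("floating:", [f["address"] for f in fixed["floating_ips"]])
    print("devname: ", vif["devname"])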
Oct  3 10:55:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:55:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:55:18 compute-0 nova_compute[351685]: 2025-10-03 10:55:18.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:18 compute-0 nova_compute[351685]: 2025-10-03 10:55:18.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:21 compute-0 nova_compute[351685]: 2025-10-03 10:55:21.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:55:21 compute-0 nova_compute[351685]: 2025-10-03 10:55:21.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:55:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:23 compute-0 nova_compute[351685]: 2025-10-03 10:55:23.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:23 compute-0 nova_compute[351685]: 2025-10-03 10:55:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:55:23 compute-0 nova_compute[351685]: 2025-10-03 10:55:23.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:23 compute-0 nova_compute[351685]: 2025-10-03 10:55:23.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:55:23 compute-0 nova_compute[351685]: 2025-10-03 10:55:23.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:55:23 compute-0 nova_compute[351685]: 2025-10-03 10:55:23.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:55:23 compute-0 nova_compute[351685]: 2025-10-03 10:55:23.757 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:55:23 compute-0 nova_compute[351685]: 2025-10-03 10:55:23.758 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:55:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:55:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788623255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:55:24 compute-0 nova_compute[351685]: 2025-10-03 10:55:24.264 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
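The resource audit shells out to ceph df exactly as logged, then reads the stats section of the JSON to size available disk. A hedged re-creation of that call (same flags as in the log; the parsing shown is illustrative rather than nova's exact code):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    print("avail:", stats["total_avail_bytes"], "of", stats["total_bytes"], "bytes")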
Oct  3 10:55:24 compute-0 nova_compute[351685]: 2025-10-03 10:55:24.400 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:55:24 compute-0 nova_compute[351685]: 2025-10-03 10:55:24.401 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:55:24 compute-0 nova_compute[351685]: 2025-10-03 10:55:24.402 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:55:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:24 compute-0 nova_compute[351685]: 2025-10-03 10:55:24.901 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:55:24 compute-0 nova_compute[351685]: 2025-10-03 10:55:24.902 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3792MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:55:24 compute-0 nova_compute[351685]: 2025-10-03 10:55:24.903 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:55:24 compute-0 nova_compute[351685]: 2025-10-03 10:55:24.903 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.057 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.058 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.059 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.132 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:55:25 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:55:25 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082179951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.649 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.659 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.685 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.686 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:55:25 compute-0 nova_compute[351685]: 2025-10-03 10:55:25.687 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
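Placement derives schedulable capacity from the inventory reported above as (total - reserved) * allocation_ratio per resource class: 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk for this host. A worked check:

    # capacity = (total - reserved) * allocation_ratio, per resource class,
    # using the inventory nova reported for the provider above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2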
Oct  3 10:55:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:26 compute-0 nova_compute[351685]: 2025-10-03 10:55:26.682 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:55:26 compute-0 podman[488200]: 2025-10-03 10:55:26.879312109 +0000 UTC m=+0.119569589 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct  3 10:55:26 compute-0 podman[488199]: 2025-10-03 10:55:26.889098733 +0000 UTC m=+0.126524742 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:55:26 compute-0 podman[488198]: 2025-10-03 10:55:26.906878294 +0000 UTC m=+0.147816705 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Oct  3 10:55:26 compute-0 podman[488201]: 2025-10-03 10:55:26.919269931 +0000 UTC m=+0.142133283 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  3 10:55:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:28 compute-0 nova_compute[351685]: 2025-10-03 10:55:28.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:28 compute-0 nova_compute[351685]: 2025-10-03 10:55:28.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:55:28 compute-0 nova_compute[351685]: 2025-10-03 10:55:28.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:29 compute-0 nova_compute[351685]: 2025-10-03 10:55:29.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:55:29 compute-0 podman[157165]: time="2025-10-03T10:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:55:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:55:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9090 "" "Go-http-client/1.1"
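Those two GETs are the prometheus-podman-exporter polling podman's REST API over the root unix socket (/run/podman/podman.sock, the same CONTAINER_HOST the exporter container is configured with further down). A minimal sketch of issuing the first request by hand, standard library only:

    import json
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/podman/podman.sock")  # root podman API socket, as in the log
    sock.sendall(
        b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
        b"Host: d\r\n\r\n"
    )
    raw = b""
    while chunk := sock.recv(65536):
        raw += chunk
    sock.close()
    body = raw.split(b"\r\n\r\n", 1)[1]
    print(len(json.loads(body)), "containers")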
Oct  3 10:55:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:30 compute-0 podman[488277]: 2025-10-03 10:55:30.79851988 +0000 UTC m=+0.061836576 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:55:30 compute-0 podman[488278]: 2025-10-03 10:55:30.831279612 +0000 UTC m=+0.092700797 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct  3 10:55:30 compute-0 podman[488279]: 2025-10-03 10:55:30.831419666 +0000 UTC m=+0.090711423 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
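The health_status=healthy events in these podman lines are the results of the containers' configured healthchecks (the 'healthcheck' entries visible in config_data). The same check can be triggered on demand; a small sketch, using the iscsid container name from the log:

    import subprocess

    # Runs the container's configured healthcheck once; exit status 0 means healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "iscsid"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")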
Oct  3 10:55:31 compute-0 openstack_network_exporter[367524]: ERROR   10:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:55:31 compute-0 openstack_network_exporter[367524]: ERROR   10:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:55:31 compute-0 openstack_network_exporter[367524]: ERROR   10:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:55:31 compute-0 openstack_network_exporter[367524]: ERROR   10:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:55:31 compute-0 openstack_network_exporter[367524]: ERROR   10:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:55:31 compute-0 nova_compute[351685]: 2025-10-03 10:55:31.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:55:31 compute-0 nova_compute[351685]: 2025-10-03 10:55:31.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:55:31 compute-0 nova_compute[351685]: 2025-10-03 10:55:31.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:55:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:33 compute-0 nova_compute[351685]: 2025-10-03 10:55:33.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:33 compute-0 nova_compute[351685]: 2025-10-03 10:55:33.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:38 compute-0 nova_compute[351685]: 2025-10-03 10:55:38.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:38 compute-0 nova_compute[351685]: 2025-10-03 10:55:38.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:55:38 compute-0 podman[488339]: 2025-10-03 10:55:38.844184562 +0000 UTC m=+0.094726721 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, maintainer=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 10:55:38 compute-0 podman[488344]: 2025-10-03 10:55:38.857007623 +0000 UTC m=+0.085077061 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:55:38 compute-0 podman[488338]: 2025-10-03 10:55:38.861135226 +0000 UTC m=+0.116311605 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:55:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.897 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.898 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a940b5130>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
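The twenty-five "Registering pollster" lines above show the agent pairing each stevedore extension with one shared ThreadPoolExecutor and empty cache, history, and discovery-cache dicts. A minimal sketch of that loading-and-registration pattern, using only public stevedore and concurrent.futures APIs; the entry-point namespace and the tuple layout are assumptions, not ceilometer's exact internals:

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # Load every plugin advertised under the compute agent's entry-point
    # namespace; each plugin arrives wrapped in an Extension object, which
    # is the <stevedore.extension.Extension ...> repr seen in the log.
    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute')

    executor = ThreadPoolExecutor()  # one executor shared by all pollsters
    registered = [
        # (extension, executor, cache, pollster history, discovery cache)
        (ext, executor, {}, {}, {})
        for ext in mgr
    ]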
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.907 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
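The discovery line above dumps the full instance record that discover_libvirt_polling assembled by joining libvirt domain metadata with Nova server attributes. Reduced to its shape (values copied from the log line; this dict literal is illustrative, not the real ceilometer class):

    instance = {
        'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a',  # resource key for every sample below
        'name': 'test_0',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,
                   'disk': 1, 'ephemeral': 1, 'swap': 0},
        'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'},
        'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001',
        'OS-EXT-STS:vm_state': 'running',
    }
    # Every "volume:" line that follows is prefixed with instance['id'],
    # because samples are keyed on the discovered resource ID.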
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.907 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.908 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636

Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:55:40.907989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.912 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
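Lines 10:55:40.907 through 10:55:40.912 trace one complete pollster cycle: poll announcement, coordination check against the (empty) hashring, heartbeat, one sample, completion. A self-contained mock of that control flow, with hypothetical names; the real logic lives in AgentManager._internal_pollster_run:

    from datetime import datetime, timezone

    heartbeats = {}

    def heartbeat(name):
        # corresponds to "Pollster heartbeat update: <name>" followed by
        # "Updated heartbeat for <name> (<timestamp>)"
        heartbeats[name] = datetime.now(timezone.utc).isoformat()

    def run_pollster(name, needs_coordination, get_samples, resources):
        if needs_coordination:
            return []                    # would consult the hashring first
        heartbeat(name)                  # hashring is [None] here, so poll directly
        return list(get_samples(resources))

    samples = run_pollster(
        'network.outgoing.packets.drop',
        needs_coordination=False,
        get_samples=lambda res: [{'resource': r, 'volume': 0} for r in res],
        resources=['b43db93c-a4fe-46e9-8418-eedf4f5c135a'],
    )
    # -> one sample with volume 0, matching the _stats_to_sample line above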
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.913 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:55:40.912938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.913 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.914 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:55:40.914150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.932 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.933 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.933 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
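disk.device.capacity emits one sample per attached disk, which is why three "volume:" lines appear for a single instance: two 1073741824-byte (1 GiB) devices matching the flavor's 1 GB root and 1 GB ephemeral disks, plus one small 485376-byte device. Per-device figures like these come from libvirt's block-info call; a standalone sketch using the real python libvirt binding, with the connection URI and device names assumed:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # OS-EXT-SRV-ATTR:instance_name above

    for dev in ('vda', 'vdb', 'vdc'):             # assumed device names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity)  # 1073741824, 1073741824, 485376 in this trace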
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.934 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.934 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.934 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:55:40.934632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.967 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.968 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.968 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.969 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.970 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.970 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.970 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.970 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.971 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.971 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.972 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:55:40.970735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.974 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.974 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.975 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.975 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:55:40.975004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.976 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.977 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.978 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.978 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:55:40.978868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.981 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.982 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:55:40.982849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.986 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.987 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.989 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.991 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:55:40.987555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.992 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:40.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:55:40.992104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
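The power.state sample carries the domain's power-state code; a value of 1 matches libvirt's VIR_DOMAIN_RUNNING and the vm_state 'running' in the discovery record above. The libvirt virDomainState constants, for decoding:

    # libvirt virDomainState values (stable public constants)
    LIBVIRT_STATE = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    print(LIBVIRT_STATE[1])  # 'running'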
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.027 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:55:41.028473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.032 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:55:41.032087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.033 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.033 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.038 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
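The skip at manager.py:321 above reflects per-cycle caching: local_instances discovery already ran earlier in this cycle, and the rate pollster's source contributed no resources beyond those already handled, so the manager short-circuits instead of polling again. A hypothetical sketch of that per-cycle gating (names invented; only the skip behavior is taken from the log):

    discovery_cache = {}
    polled = set()

    def discover(method, run_discovery):
        if method not in discovery_cache:        # one discovery run per cycle
            discovery_cache[method] = run_discovery()
        return discovery_cache[method]

    def resources_for(pollster_name, method, run_discovery):
        fresh = [r for r in discover(method, run_discovery)
                 if (pollster_name, r) not in polled]
        polled.update((pollster_name, r) for r in fresh)
        return fresh                             # empty list -> "Skip pollster ..."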
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:55:41.035890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.038 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.040 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:55:41.039919) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:55:41.042543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:55:41.045207) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.046 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.049 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:55:41.047402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 78240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
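The cpu sample just logged is a cumulative counter of guest CPU time in nanoseconds (78240000000 ns is about 78.24 s), so any utilisation figure has to be derived from two consecutive samples. A worked sketch of that conversion, assuming a 10-second polling interval and one vCPU (neither value appears in the log):

    # Derive cpu_util (%) from two cumulative cpu samples, as downstream
    # rate transformations typically do; interval and vcpus are assumed.
    def cpu_util(prev_ns, curr_ns, interval_s, vcpus):
        return (curr_ns - prev_ns) / (interval_s * vcpus * 1e9) * 100.0

    prev = 78_240_000_000            # this cycle's sample (nanoseconds)
    curr = 78_240_000_000 + 2 * 10**9  # hypothetical next sample, 2 s busier
    print(cpu_util(prev, curr, interval_s=10, vcpus=1))  # -> 20.0 (%)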
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.052 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:55:41.050599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:55:41.053731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.058 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:55:41.056134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:55:41.059042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:55:41.060612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:55:41.062380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:55:41.063861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.067 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:55:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:55:41.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
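Every meter ticked off above belongs to a single polling source named "pollsters" (rate meters such as network.outgoing.bytes.rate were skipped when discovery reported no new resources for them this cycle). A sketch of a polling.yaml source definition that would produce this meter set; the interval is an assumption, since the log does not state the cadence:

    ---
    sources:
      - name: pollsters
        interval: 10   # assumed; the polling cadence is not stated in the log
        meters:
          - cpu
          - memory.usage
          - power.state
          - disk.root.size
          - disk.ephemeral.size
          - disk.device.*
          - network.incoming.*
          - network.outgoing.*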
Oct  3 10:55:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:55:41.660 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:55:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:55:41.661 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:55:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:55:41.661 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
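The three lockutils lines are oslo.concurrency's standard acquire/run/release pattern, with the wait and hold durations logged around the critical section. A minimal sketch using the same library's decorator form (the lock name is the one from the log; the function body is illustrative):

    # Same locking primitive the agent logs above; requires oslo.concurrency.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Critical section: at most one thread of this process runs it at a
        # time. At DEBUG level, oslo.concurrency emits the Acquiring /
        # acquired / released lines around this call, as seen above.
        pass

    check_child_processes()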
Oct  3 10:55:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:43 compute-0 nova_compute[351685]: 2025-10-03 10:55:43.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:43 compute-0 nova_compute[351685]: 2025-10-03 10:55:43.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:55:46
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.control', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'vms']
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:55:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:55:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.586913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488947586951, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 855, "num_deletes": 250, "total_data_size": 1184918, "memory_usage": 1207104, "flush_reason": "Manual Compaction"}
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488947599488, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 727105, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53254, "largest_seqno": 54108, "table_properties": {"data_size": 723581, "index_size": 1303, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9259, "raw_average_key_size": 20, "raw_value_size": 716114, "raw_average_value_size": 1594, "num_data_blocks": 59, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759488868, "oldest_key_time": 1759488868, "file_creation_time": 1759488947, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 12676 microseconds, and 5826 cpu microseconds.
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.599581) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 727105 bytes OK
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.599610) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.603193) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.603220) EVENT_LOG_v1 {"time_micros": 1759488947603211, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.603310) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1180723, prev total WAL file size 1180723, number of live WAL files 2.
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.604811) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323534' seq:72057594037927935, type:22 .. '6D6772737461740032353035' seq:0, type:0; will stop at (end)
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(710KB)], [128(9249KB)]
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488947604926, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10198669, "oldest_snapshot_seqno": -1}
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 6548 keys, 7345880 bytes, temperature: kUnknown
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488947658618, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 7345880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7307295, "index_size": 21128, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172103, "raw_average_key_size": 26, "raw_value_size": 7193621, "raw_average_value_size": 1098, "num_data_blocks": 831, "num_entries": 6548, "num_filter_entries": 6548, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759488947, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.658972) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 7345880 bytes
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.661753) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.5 rd, 136.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(24.1) write-amplify(10.1) OK, records in: 7026, records dropped: 478 output_compression: NoCompression
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.661791) EVENT_LOG_v1 {"time_micros": 1759488947661774, "job": 78, "event": "compaction_finished", "compaction_time_micros": 53824, "compaction_time_cpu_micros": 24355, "output_level": 6, "num_output_files": 1, "total_output_size": 7345880, "num_input_records": 7026, "num_output_records": 6548, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488947662346, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759488947666062, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.604579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.666455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.666462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.666465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.666468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:55:47 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:55:47.666471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
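The EVENT_LOG_v1 entries embedded in the monitor log are plain JSON, so the flush and compaction activity above can be summarized mechanically (the compaction itself already reports write-amplify(10.1)). A small parser sketch, assuming syslog-style lines exactly like the ones above:

    # Extract rocksdb EVENT_LOG_v1 records (flush/compaction events) from a
    # syslog file like this one and total the compaction output volume.
    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(path):
        with open(path) as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    yield json.loads(m.group(1))

    def compaction_output_bytes(path):
        return sum(e.get("total_output_size", 0)
                   for e in rocksdb_events(path)
                   if e.get("event") == "compaction_finished")

    # e.g. compaction_output_bytes("/var/log/messages") -> 7345880 for job 78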
Oct  3 10:55:48 compute-0 nova_compute[351685]: 2025-10-03 10:55:48.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:48 compute-0 nova_compute[351685]: 2025-10-03 10:55:48.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:49 compute-0 nova_compute[351685]: 2025-10-03 10:55:49.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
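_sync_scheduler_instance_info is one of nova-compute's periodic tasks, driven by oslo.service's periodic_task framework; the line above is emitted by run_periodic_tasks each time a task comes due. A minimal sketch of how such a task is declared (the class and the 60 s spacing are illustrative, not nova's actual code):

    # Minimal oslo.service periodic task, the mechanism behind the
    # "Running periodic task ..." line; requires oslo.service and oslo.config.
    from oslo_config import cfg
    from oslo_service import periodic_task


    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # run roughly every 60 s
        def _sync_scheduler_instance_info(self, context):
            print("syncing instance info to the scheduler")


    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(None)  # the service loop invokes this on a timer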
Oct  3 10:55:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:53 compute-0 nova_compute[351685]: 2025-10-03 10:55:53.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:53 compute-0 nova_compute[351685]: 2025-10-03 10:55:53.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:55:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3306394174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:55:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:55:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3306394174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:55:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:55:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
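The autoscaler arithmetic is visible in the numbers: each pool's raw PG target is its share of used capacity times its bias times a cluster-wide PG budget, and a budget of about 300 PGs (3 OSDs × the default mon_target_pg_per_osd of 100, both inferred rather than logged) reproduces the targets above, e.g. 0.000551649 × 1.0 × 300 ≈ 0.1655 for 'vms' and 5.087e-07 × 4.0 × 300 ≈ 0.00061 for 'cephfs.cephfs.meta'. A worked sketch of that computation; the quantization step is simplified, and the per-pool minimum (32 here, 16 for the CephFS metadata pool) is read off the log rather than derived:

    # Reproduce the pg_autoscaler "pg target ... quantized to N" arithmetic.
    # pg_budget = n_osds * mon_target_pg_per_osd is inferred (3 * 100 = 300).
    import math

    def raw_pg_target(usage_ratio, bias, pg_budget=300):
        return usage_ratio * bias * pg_budget

    def quantize(raw, pool_min=32):
        # Simplified: clamp to the pool's minimum, then snap to a power of two.
        return 2 ** round(math.log2(max(raw, pool_min)))

    vms = raw_pg_target(0.000551649390343166, bias=1.0)
    print(vms)            # ~0.16549, matching "pg target 0.1654948..." above
    print(quantize(vms))  # -> 32, matching "quantized to 32 (current 32)"
    meta = raw_pg_target(5.087256625643029e-07, bias=4.0)
    print(quantize(meta, pool_min=16))  # -> 16, matching "quantized to 16"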
Oct  3 10:55:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:55:57 compute-0 podman[488399]: 2025-10-03 10:55:57.85053671 +0000 UTC m=+0.091730335 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:55:57 compute-0 podman[488400]: 2025-10-03 10:55:57.853572808 +0000 UTC m=+0.092216071 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 10:55:57 compute-0 podman[488398]: 2025-10-03 10:55:57.861289845 +0000 UTC m=+0.107748920 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Oct  3 10:55:57 compute-0 podman[488406]: 2025-10-03 10:55:57.915434374 +0000 UTC m=+0.144755058 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  3 10:55:58 compute-0 nova_compute[351685]: 2025-10-03 10:55:58.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:55:58 compute-0 nova_compute[351685]: 2025-10-03 10:55:58.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:55:59 compute-0 podman[157165]: time="2025-10-03T10:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:55:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:55:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9086 "" "Go-http-client/1.1"
Oct  3 10:56:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:01 compute-0 openstack_network_exporter[367524]: ERROR   10:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:56:01 compute-0 openstack_network_exporter[367524]: ERROR   10:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:56:01 compute-0 openstack_network_exporter[367524]: ERROR   10:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:56:01 compute-0 openstack_network_exporter[367524]: ERROR   10:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:56:01 compute-0 openstack_network_exporter[367524]: ERROR   10:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
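[Note: the ERROR burst above is the openstack_network_exporter probing OVS/OVN daemons through their control sockets. ovn-northd is a control-plane daemon and likely does not run on this compute node (only ovn_controller does, per the health checks above), so the lookup fails as expected. The check amounts to looking for *.ctl files; a rough sketch, with directories assumed from the /run/openvswitch and /run/ovn volume mounts in the exporter config above:]

    # Hedged sketch of the control-socket presence check behind
    # "no control socket files found". Directories are assumptions based
    # on the exporter's /run/openvswitch and /run/ovn volume mounts.
    import glob

    patterns = {
        "ovn-northd": "/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/run/openvswitch/ovsdb-server.*.ctl",
    }
    for daemon, pattern in patterns.items():
        found = glob.glob(pattern)
        print(daemon, "->", found or "no control socket files found")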
Oct  3 10:56:01 compute-0 podman[488480]: 2025-10-03 10:56:01.822597467 +0000 UTC m=+0.084444942 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:56:01 compute-0 podman[488481]: 2025-10-03 10:56:01.841890546 +0000 UTC m=+0.100584990 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:56:01 compute-0 podman[488479]: 2025-10-03 10:56:01.849639454 +0000 UTC m=+0.103251715 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:56:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:03 compute-0 nova_compute[351685]: 2025-10-03 10:56:03.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:03 compute-0 nova_compute[351685]: 2025-10-03 10:56:03.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:08 compute-0 nova_compute[351685]: 2025-10-03 10:56:08.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:08 compute-0 nova_compute[351685]: 2025-10-03 10:56:08.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:09 compute-0 podman[488539]: 2025-10-03 10:56:09.820049593 +0000 UTC m=+0.073436508 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:56:09 compute-0 podman[488545]: 2025-10-03 10:56:09.838210686 +0000 UTC m=+0.078229262 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=edpm, org.label-schema.schema-version=1.0)
Oct  3 10:56:09 compute-0 podman[488540]: 2025-10-03 10:56:09.864273553 +0000 UTC m=+0.106443858 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543)
Oct  3 10:56:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:13 compute-0 nova_compute[351685]: 2025-10-03 10:56:13.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:13 compute-0 nova_compute[351685]: 2025-10-03 10:56:13.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:56:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:56:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:56:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:56:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:56:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:56:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:17 compute-0 nova_compute[351685]: 2025-10-03 10:56:17.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:17 compute-0 nova_compute[351685]: 2025-10-03 10:56:17.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:56:17 compute-0 nova_compute[351685]: 2025-10-03 10:56:17.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
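[Note: the three DEBUG lines above are one pass of nova-compute's periodic task loop, driven by oslo.service: run_periodic_tasks dispatches each registered method, and _heal_instance_info_cache is one of them. A minimal sketch of that mechanism, assuming oslo.service and oslo.config are installed; the class and method names here are illustrative, not nova's actual code:]

    # Minimal oslo.service periodic-task sketch, the mechanism behind the
    # "Running periodic task ComputeManager._heal_instance_info_cache"
    # lines. Class and method names are illustrative.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_info_cache(self, context):
            print("healing info cache")

    # In a real service this is invoked repeatedly on a timer loop.
    Manager().run_periodic_tasks(context=None)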
Oct  3 10:56:18 compute-0 nova_compute[351685]: 2025-10-03 10:56:18.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:56:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:56:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:56:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:56:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:56:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:56:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 330b83d1-4145-44d2-b707-053dd258fce9 does not exist
Oct  3 10:56:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a05ef75f-ce1a-4e81-ae05-06e6681e9990 does not exist
Oct  3 10:56:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 33c72866-aa80-4af2-9c66-8e65f8bd3553 does not exist
Oct  3 10:56:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:56:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:56:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:56:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:56:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:56:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:56:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:18 compute-0 nova_compute[351685]: 2025-10-03 10:56:18.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:56:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:56:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:56:19 compute-0 nova_compute[351685]: 2025-10-03 10:56:19.136 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:56:19 compute-0 nova_compute[351685]: 2025-10-03 10:56:19.137 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:56:19 compute-0 nova_compute[351685]: 2025-10-03 10:56:19.137 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:56:19 compute-0 nova_compute[351685]: 2025-10-03 10:56:19.138 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:56:19 compute-0 podman[488868]: 2025-10-03 10:56:19.762223399 +0000 UTC m=+0.043568660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:56:19 compute-0 podman[488868]: 2025-10-03 10:56:19.937881586 +0000 UTC m=+0.219226777 container create 54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_liskov, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:56:20 compute-0 systemd[1]: Started libpod-conmon-54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda.scope.
Oct  3 10:56:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:56:20 compute-0 podman[488868]: 2025-10-03 10:56:20.291857079 +0000 UTC m=+0.573202320 container init 54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_liskov, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 10:56:20 compute-0 podman[488868]: 2025-10-03 10:56:20.303341268 +0000 UTC m=+0.584686469 container start 54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_liskov, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:56:20 compute-0 distracted_liskov[488883]: 167 167
Oct  3 10:56:20 compute-0 systemd[1]: libpod-54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda.scope: Deactivated successfully.
Oct  3 10:56:20 compute-0 podman[488868]: 2025-10-03 10:56:20.461649109 +0000 UTC m=+0.742994300 container attach 54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:56:20 compute-0 podman[488868]: 2025-10-03 10:56:20.462867908 +0000 UTC m=+0.744213099 container died 54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 10:56:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce39b43089c47a28ae6593616c9ea75eca71ee1b39546b62ef52c06055b2ece2-merged.mount: Deactivated successfully.
Oct  3 10:56:21 compute-0 nova_compute[351685]: 2025-10-03 10:56:21.052 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:56:21 compute-0 nova_compute[351685]: 2025-10-03 10:56:21.202 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:56:21 compute-0 nova_compute[351685]: 2025-10-03 10:56:21.203 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
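[Note: the heal cycle above ends by writing back a VIF entry whose shape is visible in the "Updating instance_info_cache" line: port id, MAC address, and nested subnets carrying fixed and floating IPs. A small sketch of walking that structure, with the dict literal abbreviated from the log line above:]

    # Sketch: extract fixed/floating addresses from a network_info entry
    # shaped like the cache update above (structure abbreviated).
    vif = {
        "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
        "address": "fa:16:3e:a9:40:5c",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.158",
            "floating_ips": [{"address": "192.168.122.250"}],
        }]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floating = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", floating)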
Oct  3 10:56:21 compute-0 podman[488868]: 2025-10-03 10:56:21.284679956 +0000 UTC m=+1.566025147 container remove 54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 10:56:21 compute-0 systemd[1]: libpod-conmon-54a86da41c68837e11de9404cd6fd5b91d4c09e937f88f147bf58be4f98aadda.scope: Deactivated successfully.
Oct  3 10:56:21 compute-0 podman[488906]: 2025-10-03 10:56:21.538895096 +0000 UTC m=+0.044918493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:56:21 compute-0 podman[488906]: 2025-10-03 10:56:21.640122386 +0000 UTC m=+0.146145743 container create 3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:56:21 compute-0 systemd[1]: Started libpod-conmon-3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656.scope.
Oct  3 10:56:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893375e52ebf11ff89baa179a469637c698959a3df5efc09eae56aca24fc14d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893375e52ebf11ff89baa179a469637c698959a3df5efc09eae56aca24fc14d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893375e52ebf11ff89baa179a469637c698959a3df5efc09eae56aca24fc14d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893375e52ebf11ff89baa179a469637c698959a3df5efc09eae56aca24fc14d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893375e52ebf11ff89baa179a469637c698959a3df5efc09eae56aca24fc14d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:21 compute-0 podman[488906]: 2025-10-03 10:56:21.887184156 +0000 UTC m=+0.393207543 container init 3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:56:21 compute-0 podman[488906]: 2025-10-03 10:56:21.900825194 +0000 UTC m=+0.406848541 container start 3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 10:56:21 compute-0 podman[488906]: 2025-10-03 10:56:21.986909157 +0000 UTC m=+0.492932554 container attach 3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:56:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:23 compute-0 confident_cohen[488919]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:56:23 compute-0 confident_cohen[488919]: --> relative data size: 1.0
Oct  3 10:56:23 compute-0 confident_cohen[488919]: --> All data devices are unavailable
Oct  3 10:56:23 compute-0 systemd[1]: libpod-3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656.scope: Deactivated successfully.
Oct  3 10:56:23 compute-0 podman[488906]: 2025-10-03 10:56:23.129503923 +0000 UTC m=+1.635527330 container died 3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:56:23 compute-0 systemd[1]: libpod-3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656.scope: Consumed 1.183s CPU time.
Oct  3 10:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-893375e52ebf11ff89baa179a469637c698959a3df5efc09eae56aca24fc14d3-merged.mount: Deactivated successfully.
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:23 compute-0 podman[488906]: 2025-10-03 10:56:23.54220494 +0000 UTC m=+2.048228307 container remove 3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_cohen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:56:23 compute-0 systemd[1]: libpod-conmon-3a922d12a44d75b133ec7b387a09b30bab669aa283a41233c9661351849e3656.scope: Deactivated successfully.
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.764 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.764 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:56:23 compute-0 nova_compute[351685]: 2025-10-03 10:56:23.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:56:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3056738289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:56:24 compute-0 nova_compute[351685]: 2025-10-03 10:56:24.287 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
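[Note: the resource audit shells out to the ceph CLI, as the "Running cmd" / "CMD ... returned: 0" pair above shows. A hedged re-creation of the same probe, assuming the ceph CLI, the client.openstack keyring, and /etc/ceph/ceph.conf from this deployment are present on the host:]

    # Sketch of nova's "ceph df" capacity probe; requires the ceph CLI and
    # the client.openstack credentials used in this deployment.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)
    print("total bytes:", stats["stats"]["total_bytes"])
    print("avail bytes:", stats["stats"]["total_avail_bytes"])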
Oct  3 10:56:24 compute-0 nova_compute[351685]: 2025-10-03 10:56:24.408 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:56:24 compute-0 nova_compute[351685]: 2025-10-03 10:56:24.408 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:56:24 compute-0 nova_compute[351685]: 2025-10-03 10:56:24.409 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:56:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:24 compute-0 podman[489122]: 2025-10-03 10:56:24.609593692 +0000 UTC m=+0.038644962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:56:24 compute-0 nova_compute[351685]: 2025-10-03 10:56:24.753 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:56:24 compute-0 nova_compute[351685]: 2025-10-03 10:56:24.754 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3785MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:56:24 compute-0 nova_compute[351685]: 2025-10-03 10:56:24.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:56:24 compute-0 nova_compute[351685]: 2025-10-03 10:56:24.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:56:24 compute-0 podman[489122]: 2025-10-03 10:56:24.884233327 +0000 UTC m=+0.313429192 container create 0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 10:56:25 compute-0 systemd[1]: Started libpod-conmon-0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2.scope.
Oct  3 10:56:25 compute-0 nova_compute[351685]: 2025-10-03 10:56:25.209 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:56:25 compute-0 nova_compute[351685]: 2025-10-03 10:56:25.209 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:56:25 compute-0 nova_compute[351685]: 2025-10-03 10:56:25.210 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:56:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:56:25 compute-0 nova_compute[351685]: 2025-10-03 10:56:25.248 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:56:25 compute-0 podman[489122]: 2025-10-03 10:56:25.756431814 +0000 UTC m=+1.185483084 container init 0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 10:56:25 compute-0 podman[489122]: 2025-10-03 10:56:25.775374702 +0000 UTC m=+1.204425992 container start 0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 10:56:25 compute-0 goofy_sinoussi[489138]: 167 167
Oct  3 10:56:25 compute-0 systemd[1]: libpod-0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2.scope: Deactivated successfully.
Oct  3 10:56:25 compute-0 podman[489122]: 2025-10-03 10:56:25.944927595 +0000 UTC m=+1.373978935 container attach 0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:56:25 compute-0 podman[489122]: 2025-10-03 10:56:25.946077301 +0000 UTC m=+1.375128591 container died 0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:56:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:56:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186708850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:56:26 compute-0 nova_compute[351685]: 2025-10-03 10:56:26.099 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.851s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:56:26 compute-0 nova_compute[351685]: 2025-10-03 10:56:26.112 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:56:26 compute-0 nova_compute[351685]: 2025-10-03 10:56:26.166 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
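[Note: the inventory record above is where the "Final resource view" numbers turn into schedulable capacity: placement offers (total - reserved) * allocation_ratio units of each resource class, so the 8 physical VCPUs become 32 schedulable VCPUs at the 4.0 ratio. A worked check using exactly the numbers from the line above:]

    # Worked check of the placement capacity implied by the inventory
    # line above: capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable:", capacity)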
Oct  3 10:56:26 compute-0 nova_compute[351685]: 2025-10-03 10:56:26.168 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:56:26 compute-0 nova_compute[351685]: 2025-10-03 10:56:26.169 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:56:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-06e8860f53524130f9f08686f83f952c1a425899c4c1a1bcdbd71d44e186e654-merged.mount: Deactivated successfully.
Oct  3 10:56:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:26 compute-0 podman[489122]: 2025-10-03 10:56:26.697570453 +0000 UTC m=+2.126621703 container remove 0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:56:26 compute-0 systemd[1]: libpod-conmon-0066dc0dcfb07dc9f76f02d62f47be345a6fc749b668fb6577c0ec3532b5fbe2.scope: Deactivated successfully.
Oct  3 10:56:27 compute-0 podman[489183]: 2025-10-03 10:56:26.922563865 +0000 UTC m=+0.046664259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:56:27 compute-0 podman[489183]: 2025-10-03 10:56:27.1236452 +0000 UTC m=+0.247745624 container create ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 10:56:27 compute-0 nova_compute[351685]: 2025-10-03 10:56:27.164 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:27 compute-0 systemd[1]: Started libpod-conmon-ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b.scope.
Oct  3 10:56:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15741edcde559d75419e0487294ea6096be5f21791dc6f41a79077fb564e4123/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15741edcde559d75419e0487294ea6096be5f21791dc6f41a79077fb564e4123/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15741edcde559d75419e0487294ea6096be5f21791dc6f41a79077fb564e4123/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15741edcde559d75419e0487294ea6096be5f21791dc6f41a79077fb564e4123/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:27 compute-0 podman[489183]: 2025-10-03 10:56:27.701909401 +0000 UTC m=+0.826009865 container init ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:56:27 compute-0 podman[489183]: 2025-10-03 10:56:27.7196475 +0000 UTC m=+0.843747904 container start ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 10:56:27 compute-0 podman[489183]: 2025-10-03 10:56:27.755348376 +0000 UTC m=+0.879448860 container attach ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 10:56:28 compute-0 nova_compute[351685]: 2025-10-03 10:56:28.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:28 compute-0 wizardly_carver[489200]: {
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:    "0": [
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:        {
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "devices": [
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "/dev/loop3"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            ],
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_name": "ceph_lv0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_size": "21470642176",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "name": "ceph_lv0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "tags": {
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cluster_name": "ceph",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.crush_device_class": "",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.encrypted": "0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osd_id": "0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.type": "block",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.vdo": "0"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            },
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "type": "block",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "vg_name": "ceph_vg0"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:        }
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:    ],
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:    "1": [
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:        {
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "devices": [
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "/dev/loop4"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            ],
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_name": "ceph_lv1",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_size": "21470642176",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "name": "ceph_lv1",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "tags": {
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cluster_name": "ceph",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.crush_device_class": "",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.encrypted": "0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osd_id": "1",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.type": "block",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.vdo": "0"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            },
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "type": "block",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "vg_name": "ceph_vg1"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:        }
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:    ],
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:    "2": [
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:        {
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "devices": [
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "/dev/loop5"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            ],
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_name": "ceph_lv2",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_size": "21470642176",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "name": "ceph_lv2",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "tags": {
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.cluster_name": "ceph",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.crush_device_class": "",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.encrypted": "0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osd_id": "2",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.type": "block",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:                "ceph.vdo": "0"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            },
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "type": "block",
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:            "vg_name": "ceph_vg2"
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:        }
Oct  3 10:56:28 compute-0 wizardly_carver[489200]:    ]
Oct  3 10:56:28 compute-0 wizardly_carver[489200]: }
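The JSON emitted by the short-lived wizardly_carver container has the shape of a ceph-volume lvm listing in JSON format (cephadm periodically runs such device probes in throwaway containers): a dict keyed by OSD id, each value a list of LVs with their devices and ceph.* tags. A minimal parsing sketch under that assumption; osd_devices is an illustrative helper name, not from the source:

# Sketch: map OSD id -> backing device/LV from JSON shaped like the
# listing above (assumed to be ceph-volume lvm output in JSON format).
import json

def osd_devices(listing_text):
    listing = json.loads(listing_text)
    out = {}
    for osd_id, lvs in listing.items():
        for lv in lvs:
            out[osd_id] = {
                'lv_path': lv['lv_path'],            # e.g. /dev/ceph_vg0/ceph_lv0
                'devices': lv['devices'],            # e.g. ['/dev/loop3']
                'osd_fsid': lv['tags']['ceph.osd_fsid'],
            }
    return out

For the listing above this yields three entries, OSD 0/1/2 on /dev/loop3, /dev/loop4 and /dev/loop5 respectively.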
Oct  3 10:56:28 compute-0 systemd[1]: libpod-ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b.scope: Deactivated successfully.
Oct  3 10:56:28 compute-0 podman[489183]: 2025-10-03 10:56:28.610580437 +0000 UTC m=+1.734680821 container died ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 10:56:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:28 compute-0 nova_compute[351685]: 2025-10-03 10:56:28.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:28 compute-0 nova_compute[351685]: 2025-10-03 10:56:28.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:28 compute-0 nova_compute[351685]: 2025-10-03 10:56:28.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-15741edcde559d75419e0487294ea6096be5f21791dc6f41a79077fb564e4123-merged.mount: Deactivated successfully.
Oct  3 10:56:29 compute-0 podman[489210]: 2025-10-03 10:56:29.058912098 +0000 UTC m=+0.391929381 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Oct  3 10:56:29 compute-0 podman[489217]: 2025-10-03 10:56:29.0683181 +0000 UTC m=+0.383732058 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Oct  3 10:56:29 compute-0 podman[489183]: 2025-10-03 10:56:29.091928118 +0000 UTC m=+2.216028522 container remove ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Oct  3 10:56:29 compute-0 systemd[1]: libpod-conmon-ef451fd0c5a7868f6ef88db1be353028db0339ea8e3297a91da9bce707d4168b.scope: Deactivated successfully.
Oct  3 10:56:29 compute-0 podman[489216]: 2025-10-03 10:56:29.125902068 +0000 UTC m=+0.461102871 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:56:29 compute-0 podman[489218]: 2025-10-03 10:56:29.282446553 +0000 UTC m=+0.596667192 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:56:29 compute-0 podman[157165]: time="2025-10-03T10:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:56:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:56:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9087 "" "Go-http-client/1.1"
Oct  3 10:56:30 compute-0 podman[489437]: 2025-10-03 10:56:30.206087391 +0000 UTC m=+0.035057296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:56:30 compute-0 podman[489437]: 2025-10-03 10:56:30.314508072 +0000 UTC m=+0.143477917 container create bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_satoshi, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 10:56:30 compute-0 systemd[1]: Started libpod-conmon-bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0.scope.
Oct  3 10:56:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:56:30 compute-0 podman[489437]: 2025-10-03 10:56:30.491574285 +0000 UTC m=+0.320544170 container init bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 10:56:30 compute-0 podman[489437]: 2025-10-03 10:56:30.504045705 +0000 UTC m=+0.333015550 container start bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_satoshi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  3 10:56:30 compute-0 podman[489437]: 2025-10-03 10:56:30.509581613 +0000 UTC m=+0.338551528 container attach bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_satoshi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:56:30 compute-0 jovial_satoshi[489452]: 167 167
Oct  3 10:56:30 compute-0 systemd[1]: libpod-bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0.scope: Deactivated successfully.
Oct  3 10:56:30 compute-0 podman[489437]: 2025-10-03 10:56:30.518760178 +0000 UTC m=+0.347730013 container died bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 10:56:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-acaead0c3544760212c1cd4ecafb98c87385e5e934b0c845e2d7198d891798c3-merged.mount: Deactivated successfully.
Oct  3 10:56:30 compute-0 podman[489437]: 2025-10-03 10:56:30.595956045 +0000 UTC m=+0.424925880 container remove bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:56:30 compute-0 systemd[1]: libpod-conmon-bc79b61ab514976db09870decccbd09e332784b524c0d17840bc0f959f4218c0.scope: Deactivated successfully.
Oct  3 10:56:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:30 compute-0 nova_compute[351685]: 2025-10-03 10:56:30.826 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:30 compute-0 podman[489476]: 2025-10-03 10:56:30.812577149 +0000 UTC m=+0.038268029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:56:30 compute-0 podman[489476]: 2025-10-03 10:56:30.851619922 +0000 UTC m=+0.077310752 container create d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:56:30 compute-0 systemd[1]: Started libpod-conmon-d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d.scope.
Oct  3 10:56:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee31d5dd01c302b8d97723de2e5411d4e886f853a9f2c28ac862166191028d0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee31d5dd01c302b8d97723de2e5411d4e886f853a9f2c28ac862166191028d0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee31d5dd01c302b8d97723de2e5411d4e886f853a9f2c28ac862166191028d0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee31d5dd01c302b8d97723de2e5411d4e886f853a9f2c28ac862166191028d0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:56:31 compute-0 podman[489476]: 2025-10-03 10:56:31.005590194 +0000 UTC m=+0.231281004 container init d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:56:31 compute-0 podman[489476]: 2025-10-03 10:56:31.028482169 +0000 UTC m=+0.254172959 container start d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 10:56:31 compute-0 podman[489476]: 2025-10-03 10:56:31.033152829 +0000 UTC m=+0.258843639 container attach d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 10:56:31 compute-0 openstack_network_exporter[367524]: ERROR   10:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:56:31 compute-0 openstack_network_exporter[367524]: ERROR   10:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:56:31 compute-0 openstack_network_exporter[367524]: ERROR   10:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:56:31 compute-0 openstack_network_exporter[367524]: ERROR   10:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:56:31 compute-0 openstack_network_exporter[367524]: ERROR   10:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
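The exporter errors above all reduce to one condition: it cannot find the ovsdb-server/ovn-northd control sockets it calls into via appctl. A hedged diagnostic sketch; /run/openvswitch and /run/ovn are the conventional runtime directories for those *.ctl sockets, but both paths are assumptions here, not taken from the log:

# Sketch: check whether any ovs/ovn control sockets exist where the
# exporter is presumed to look (paths are assumptions).
import glob

for rundir in ('/run/openvswitch', '/run/ovn'):
    ctl_files = glob.glob(f'{rundir}/*.ctl')
    print(rundir, ctl_files or 'no control sockets found')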
Oct  3 10:56:31 compute-0 nova_compute[351685]: 2025-10-03 10:56:31.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:31 compute-0 nova_compute[351685]: 2025-10-03 10:56:31.733 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:31 compute-0 nova_compute[351685]: 2025-10-03 10:56:31.734 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]: {
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "osd_id": 1,
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "type": "bluestore"
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:    },
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "osd_id": 2,
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "type": "bluestore"
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:    },
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "osd_id": 0,
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:        "type": "bluestore"
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]:    }
Oct  3 10:56:32 compute-0 reverent_hamilton[489493]: }
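This second listing from the reverent_hamilton container is keyed by OSD fsid rather than OSD id, with the dm-mapped device, osd_id and bluestore type per entry. A short sketch cross-checking it against the LVM listing printed earlier: every ceph.osd_fsid from the LV tags should appear here with the matching osd_id. crosscheck is an illustrative helper name; the field names are taken from the two JSON blobs above:

# Sketch: verify the fsid-keyed listing agrees with the earlier
# OSD-id-keyed LVM listing (field names as logged above).
import json

def crosscheck(lvm_text, raw_text):
    lvm = json.loads(lvm_text)   # {'0': [{'tags': {'ceph.osd_fsid': ...}, ...}], ...}
    raw = json.loads(raw_text)   # {'<osd_fsid>': {'osd_id': 0, 'device': ..., ...}, ...}
    for osd_id, lvs in lvm.items():
        fsid = lvs[0]['tags']['ceph.osd_fsid']
        entry = raw.get(fsid)
        assert entry is not None and str(entry['osd_id']) == osd_id, fsid
    print('listings agree for OSDs:', sorted(lvm))

For this host both listings describe the same three bluestore OSDs (0, 1, 2) in cluster 9b4e8c9a-5555-5510-a631-4742a1182561.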
Oct  3 10:56:32 compute-0 systemd[1]: libpod-d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d.scope: Deactivated successfully.
Oct  3 10:56:32 compute-0 podman[489476]: 2025-10-03 10:56:32.176476718 +0000 UTC m=+1.402167568 container died d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 10:56:32 compute-0 systemd[1]: libpod-d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d.scope: Consumed 1.139s CPU time.
Oct  3 10:56:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee31d5dd01c302b8d97723de2e5411d4e886f853a9f2c28ac862166191028d0e-merged.mount: Deactivated successfully.
Oct  3 10:56:32 compute-0 podman[489476]: 2025-10-03 10:56:32.266869819 +0000 UTC m=+1.492560609 container remove d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_hamilton, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:56:32 compute-0 systemd[1]: libpod-conmon-d988fc436fa8e7b1fb806fc923224437d05f1cb48ecff56b04faf21ead905f4d.scope: Deactivated successfully.
Oct  3 10:56:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:56:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:56:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:56:32 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:56:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 199a0455-c3aa-4f4e-b443-7ba139e553e2 does not exist
Oct  3 10:56:32 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 08a1f32a-45ed-458c-81b6-e6536e794a1e does not exist
Oct  3 10:56:32 compute-0 podman[489526]: 2025-10-03 10:56:32.358978257 +0000 UTC m=+0.133772896 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:56:32 compute-0 podman[489534]: 2025-10-03 10:56:32.359898926 +0000 UTC m=+0.120733317 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Oct  3 10:56:32 compute-0 podman[489535]: 2025-10-03 10:56:32.363210822 +0000 UTC m=+0.125079906 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001)
Oct  3 10:56:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:56:33 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:56:33 compute-0 nova_compute[351685]: 2025-10-03 10:56:33.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:33 compute-0 nova_compute[351685]: 2025-10-03 10:56:33.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:38 compute-0 nova_compute[351685]: 2025-10-03 10:56:38.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:38 compute-0 nova_compute[351685]: 2025-10-03 10:56:38.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:40 compute-0 podman[489647]: 2025-10-03 10:56:40.831536185 +0000 UTC m=+0.081162016 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Oct  3 10:56:40 compute-0 podman[489646]: 2025-10-03 10:56:40.865064801 +0000 UTC m=+0.114699442 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 10:56:40 compute-0 podman[489648]: 2025-10-03 10:56:40.868403318 +0000 UTC m=+0.105516707 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Oct  3 10:56:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:56:41.662 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:56:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:56:41.662 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:56:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:56:41.663 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:56:41 compute-0 nova_compute[351685]: 2025-10-03 10:56:41.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:41 compute-0 nova_compute[351685]: 2025-10-03 10:56:41.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  3 10:56:41 compute-0 nova_compute[351685]: 2025-10-03 10:56:41.947 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
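_run_pending_deletes is one of the ComputeManager periodic tasks that run_periodic_tasks dispatches throughout this log. The registration pattern, sketched with the real oslo_service decorator; the class, spacing and body below are illustrative stand-ins, not nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # Illustrative stand-in for nova.compute.manager.ComputeManager.

        @periodic_task.periodic_task(spacing=600)
        def _run_pending_deletes(self, context):
            print("Cleaning up deleted instances")  # then report the count

    mgr = Manager(cfg.CONF)
    # Normally driven by a looping timer; one call dispatches any tasks
    # whose spacing has elapsed.
    mgr.run_periodic_tasks(context=None)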
Oct  3 10:56:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:43 compute-0 nova_compute[351685]: 2025-10-03 10:56:43.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:43 compute-0 nova_compute[351685]: 2025-10-03 10:56:43.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:56:46
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'vms', 'images', 'backups', 'cephfs.cephfs.data']
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
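One balancer pass, fully quiescent: upmap mode, a 5% ceiling on already-misplaced data, and 0 of at most 10 changes prepared across the listed pools (everything is active+clean, so there is nothing to move). A toy rendering of that gating, with the thresholds taken from the log and the function itself purely illustrative:

    MAX_MISPLACED = 0.05       # "max misplaced 0.050000"
    MAX_CHANGES_PER_PASS = 10  # "prepared 0/10 changes"

    def plan_upmap(misplaced_ratio, candidate_moves):
        # Don't pile more movement onto a cluster already rebalancing.
        if misplaced_ratio >= MAX_MISPLACED:
            return []
        return candidate_moves[:MAX_CHANGES_PER_PASS]

    print(plan_upmap(0.0, []))  # -> [] : prepared 0/10 changes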
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:56:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:56:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:48 compute-0 nova_compute[351685]: 2025-10-03 10:56:48.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:48 compute-0 nova_compute[351685]: 2025-10-03 10:56:48.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:53 compute-0 nova_compute[351685]: 2025-10-03 10:56:53.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:53 compute-0 nova_compute[351685]: 2025-10-03 10:56:53.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:56:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3115116980' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:56:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:56:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3115116980' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
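The audit trail shows client.openstack (most likely Cinder's periodic capacity poll, given the volumes pool) issuing "df" and "osd pool get-quota" as JSON mon commands. The same calls can be made with the real python-rados binding; the conffile path and client name below are taken from this deployment, everything else is stock API:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()

    # mon_command takes the JSON command plus an input buffer and returns
    # (retcode, output bytes, status string).
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(json.loads(out)["stats"]["total_avail_bytes"])

    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota",
                    "pool": "volumes", "format": "json"}), b"")
    print(json.loads(out))

    cluster.shutdown()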
Oct  3 10:56:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:56:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
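The autoscaler numbers above are internally consistent with pg_target = capacity_ratio * bias * root_pg_budget, where the budget for this root works out to 300 (plausibly the default mon_target_pg_per_osd=100 across 3 OSDs, though that split is an inference). A worked check against two of the logged lines:

    # Inferred: 300 = mon_target_pg_per_osd (100) x 3 OSDs. Only the
    # products below are verified against the log.
    ROOT_PG_BUDGET = 300

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * ROOT_PG_BUDGET

    # Pool 'vms', bias 1.0 -> 0.1654948171029498 (matches the log)
    print(pg_target(0.000551649390343166, 1.0))
    # Pool 'cephfs.cephfs.meta', bias 4.0 -> 0.0006104707950771635 (matches)
    print(pg_target(5.087256625643029e-07, 4.0))

The raw target is then quantized to a power of two and weighed against the current pg_num (hence "quantized to 32 (current 32)"); the per-pool minimums and hysteresis behind that step are not reproduced here.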
Oct  3 10:56:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:56:58 compute-0 nova_compute[351685]: 2025-10-03 10:56:58.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:56:58 compute-0 nova_compute[351685]: 2025-10-03 10:56:58.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:56:58 compute-0 nova_compute[351685]: 2025-10-03 10:56:58.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  3 10:56:58 compute-0 nova_compute[351685]: 2025-10-03 10:56:58.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:56:59 compute-0 podman[157165]: time="2025-10-03T10:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:56:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:56:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9092 "" "Go-http-client/1.1"
Oct  3 10:56:59 compute-0 podman[489709]: 2025-10-03 10:56:59.867998898 +0000 UTC m=+0.099159233 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct  3 10:56:59 compute-0 podman[489708]: 2025-10-03 10:56:59.886839624 +0000 UTC m=+0.120441627 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Oct  3 10:56:59 compute-0 podman[489710]: 2025-10-03 10:56:59.927413586 +0000 UTC m=+0.164949656 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 10:56:59 compute-0 podman[489711]: 2025-10-03 10:56:59.943854223 +0000 UTC m=+0.170917707 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 10:57:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:01 compute-0 openstack_network_exporter[367524]: ERROR   10:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:57:01 compute-0 openstack_network_exporter[367524]: ERROR   10:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:57:01 compute-0 openstack_network_exporter[367524]: ERROR   10:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:57:01 compute-0 openstack_network_exporter[367524]: ERROR   10:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:57:01 compute-0 openstack_network_exporter[367524]: ERROR   10:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
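These exporter errors say it cannot find the control sockets it would drive via appctl: ovn-northd simply does not run on a compute node, so those two lookups can never succeed, and the ovsdb-server/datapath failures point at a socket-path mismatch between the exporter's mounts and the host. A quick existence check, assuming the conventional OVS/OVN runtime directories for this host:

    import glob

    # Conventional runtime dirs; adjust if OVS uses another prefix here.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control sockets found")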
Oct  3 10:57:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:02 compute-0 podman[489785]: 2025-10-03 10:57:02.859137071 +0000 UTC m=+0.109974571 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:57:02 compute-0 podman[489787]: 2025-10-03 10:57:02.888568046 +0000 UTC m=+0.129730525 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 10:57:02 compute-0 podman[489786]: 2025-10-03 10:57:02.889987202 +0000 UTC m=+0.140906705 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd)
Oct  3 10:57:03 compute-0 nova_compute[351685]: 2025-10-03 10:57:03.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:03 compute-0 nova_compute[351685]: 2025-10-03 10:57:03.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:08 compute-0 nova_compute[351685]: 2025-10-03 10:57:08.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:08 compute-0 nova_compute[351685]: 2025-10-03 10:57:08.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:11 compute-0 podman[489849]: 2025-10-03 10:57:11.890171436 +0000 UTC m=+0.130234751 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 10:57:11 compute-0 podman[489850]: 2025-10-03 10:57:11.90244427 +0000 UTC m=+0.133286020 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, architecture=x86_64)
Oct  3 10:57:11 compute-0 podman[489851]: 2025-10-03 10:57:11.909128684 +0000 UTC m=+0.139599302 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm)
Oct  3 10:57:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:13 compute-0 nova_compute[351685]: 2025-10-03 10:57:13.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:13 compute-0 nova_compute[351685]: 2025-10-03 10:57:13.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:57:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:57:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:57:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:57:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:57:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:57:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:18 compute-0 nova_compute[351685]: 2025-10-03 10:57:18.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:18 compute-0 nova_compute[351685]: 2025-10-03 10:57:18.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:19 compute-0 nova_compute[351685]: 2025-10-03 10:57:19.750 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:57:19 compute-0 nova_compute[351685]: 2025-10-03 10:57:19.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:57:19 compute-0 nova_compute[351685]: 2025-10-03 10:57:19.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:57:20 compute-0 nova_compute[351685]: 2025-10-03 10:57:20.164 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:57:20 compute-0 nova_compute[351685]: 2025-10-03 10:57:20.164 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:57:20 compute-0 nova_compute[351685]: 2025-10-03 10:57:20.164 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:57:20 compute-0 nova_compute[351685]: 2025-10-03 10:57:20.165 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:57:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:21 compute-0 nova_compute[351685]: 2025-10-03 10:57:21.463 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:57:21 compute-0 nova_compute[351685]: 2025-10-03 10:57:21.479 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:57:21 compute-0 nova_compute[351685]: 2025-10-03 10:57:21.480 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
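The Acquiring/Acquired/Releasing triples around refresh_cache-<uuid> here, and around _check_child_processes and compute_resources elsewhere in this window, are oslo.concurrency's standard named-lock logging. The pattern with the real lockutils API; the lock names are the ones from the log, the guarded bodies are placeholders:

    from oslo_concurrency import lockutils

    # Context-manager form: produces the same Acquiring/Acquired/Releasing
    # DEBUG lines when oslo debug logging is enabled.
    with lockutils.lock("refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a"):
        pass  # refresh the instance's network info cache here

    # Decorator form, as used for methods like _check_child_processes.
    @lockutils.synchronized("compute_resources")
    def update_tracker():
        pass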
Oct  3 10:57:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:23 compute-0 nova_compute[351685]: 2025-10-03 10:57:23.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:23 compute-0 nova_compute[351685]: 2025-10-03 10:57:23.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:57:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:24 compute-0 nova_compute[351685]: 2025-10-03 10:57:24.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:57:24 compute-0 nova_compute[351685]: 2025-10-03 10:57:24.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:57:24 compute-0 nova_compute[351685]: 2025-10-03 10:57:24.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:57:25 compute-0 nova_compute[351685]: 2025-10-03 10:57:25.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:57:25 compute-0 nova_compute[351685]: 2025-10-03 10:57:25.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:57:25 compute-0 nova_compute[351685]: 2025-10-03 10:57:25.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:57:25 compute-0 nova_compute[351685]: 2025-10-03 10:57:25.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:57:25 compute-0 nova_compute[351685]: 2025-10-03 10:57:25.762 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:57:25 compute-0 nova_compute[351685]: 2025-10-03 10:57:25.762 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:57:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:57:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434178234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.283 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
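For RBD-backed instances the resource audit asks the Ceph cluster, not the local filesystem, for disk capacity, which is why it shells out to ceph df exactly as logged (and why the mon audit log shows the matching df dispatch two lines up). A sketch of the same call, reading the cluster totals from the top-level "stats" block of the JSON output:

    import json
    import subprocess

    # Same command line the resource tracker logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True)

    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    used = stats["total_bytes"] - stats["total_avail_bytes"]
    # Matches the pgmap lines: ~264 MiB used of 60 GiB.
    print(f"{used / gib:.3f} GiB used of {stats['total_bytes'] / gib:.0f} GiB")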
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.364 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.364 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.365 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:57:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.778 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.779 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3811MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.779 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.780 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.876 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.877 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.877 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 10:57:26 compute-0 nova_compute[351685]: 2025-10-03 10:57:26.909 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 10:57:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:57:27 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169338112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:57:27 compute-0 nova_compute[351685]: 2025-10-03 10:57:27.411 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
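The ceph df round trip above (issued at 10:57:26.909, rc 0 after 0.502s) is how the driver samples pool capacity for the disk inventory. A minimal sketch of the same call through oslo.concurrency, assuming the client.openstack keyring referenced in the audit log is readable:

    # Sketch: run the same "ceph df" the resource tracker logs above,
    # then pull total/available bytes out of the JSON reply.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print('total bytes:', stats['total_bytes'])
    print('avail bytes:', stats['total_avail_bytes'])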
Oct  3 10:57:27 compute-0 nova_compute[351685]: 2025-10-03 10:57:27.422 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 10:57:27 compute-0 nova_compute[351685]: 2025-10-03 10:57:27.574 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 10:57:27 compute-0 nova_compute[351685]: 2025-10-03 10:57:27.579 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 10:57:27 compute-0 nova_compute[351685]: 2025-10-03 10:57:27.580 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
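The inventory record above is what placement actually schedules against: per resource class, capacity is (total - reserved) x allocation_ratio. Working that through with the exact numbers in the log:

    # Worked example: effective schedulable capacity from the inventory
    # the report client logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        effective = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, '->', effective)
    # VCPU -> 32.0, MEMORY_MB -> 7167.0, DISK_GB -> 52.2
    # which is why 8 physical cores comfortably hold the single vCPU
    # currently allocated (used_vcpus=1 in the final resource view).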
Oct  3 10:57:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:28 compute-0 nova_compute[351685]: 2025-10-03 10:57:28.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:28 compute-0 nova_compute[351685]: 2025-10-03 10:57:28.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:29 compute-0 podman[157165]: time="2025-10-03T10:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:57:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:57:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9084 "" "Go-http-client/1.1"
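The two GET lines above are the podman API service answering libpod REST calls from a Go client (per the Go-http-client User-Agent). The same endpoint can be queried over the API socket; a sketch, where the socket path /run/podman/podman.sock is the usual rootful default and an assumption for this host:

    # Sketch: issue the same libpod "list containers" call seen above
    # over the podman API unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')  # host is ignored for unix sockets
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')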
Oct  3 10:57:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:30 compute-0 podman[489953]: 2025-10-03 10:57:30.834105228 +0000 UTC m=+0.096603462 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Oct  3 10:57:30 compute-0 podman[489954]: 2025-10-03 10:57:30.84473615 +0000 UTC m=+0.101986615 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 10:57:30 compute-0 podman[489955]: 2025-10-03 10:57:30.850465614 +0000 UTC m=+0.102604246 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 10:57:30 compute-0 podman[489961]: 2025-10-03 10:57:30.871532159 +0000 UTC m=+0.120950983 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 10:57:31 compute-0 openstack_network_exporter[367524]: ERROR   10:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:57:31 compute-0 openstack_network_exporter[367524]: ERROR   10:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:57:31 compute-0 openstack_network_exporter[367524]: ERROR   10:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:57:31 compute-0 openstack_network_exporter[367524]: ERROR   10:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:57:31 compute-0 openstack_network_exporter[367524]: ERROR   10:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
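These exporter errors are expected on a compute node: ovn-northd and the OVS database server it is probing for run on the controller, so their ovs-appctl control sockets never appear here. A quick way to see what the exporter is (not) finding, assuming the usual runtime directories:

    # Sketch: look for the *.ctl control sockets that ovs-appctl-style
    # tools target; the directories are the common defaults.
    import glob

    for pattern in ('/var/run/openvswitch/*.ctl', '/var/run/ovn/*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits or 'no control socket files found')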
Oct  3 10:57:31 compute-0 nova_compute[351685]: 2025-10-03 10:57:31.581 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:57:31 compute-0 nova_compute[351685]: 2025-10-03 10:57:31.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:57:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:32 compute-0 nova_compute[351685]: 2025-10-03 10:57:32.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:57:32 compute-0 nova_compute[351685]: 2025-10-03 10:57:32.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:57:33 compute-0 podman[490108]: 2025-10-03 10:57:33.0619769 +0000 UTC m=+0.103344849 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct  3 10:57:33 compute-0 podman[490106]: 2025-10-03 10:57:33.067687234 +0000 UTC m=+0.117109882 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 10:57:33 compute-0 podman[490107]: 2025-10-03 10:57:33.100183726 +0000 UTC m=+0.144104597 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct  3 10:57:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:57:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:57:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:33 compute-0 nova_compute[351685]: 2025-10-03 10:57:33.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:33 compute-0 nova_compute[351685]: 2025-10-03 10:57:33.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:57:33 compute-0 nova_compute[351685]: 2025-10-03 10:57:33.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:57:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:57:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:57:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:57:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:57:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c1cfd8a2-dd3e-42a6-bcde-aa6c68740130 does not exist
Oct  3 10:57:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f4499e3b-503f-4afe-8458-9a3eb1a25b0a does not exist
Oct  3 10:57:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0f0e4b73-7028-4e5b-a7ed-858678120031 does not exist
Oct  3 10:57:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:57:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:57:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:57:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:57:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:57:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
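The handle_command/dispatch pairs above are the monitor's view of mon commands arriving from the mgr. The same path can be driven from Python through the librados binding; a minimal sketch reusing the client.openstack identity already present in the audit log:

    # Sketch: send a mon command ("df", as dispatched earlier for
    # client.openstack) through librados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    assert ret == 0, errs
    print(json.loads(outbuf)['stats']['total_avail_bytes'])
    cluster.shutdown()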
Oct  3 10:57:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:57:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:57:35 compute-0 podman[490477]: 2025-10-03 10:57:35.634718894 +0000 UTC m=+0.096246391 container create 0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 10:57:35 compute-0 podman[490477]: 2025-10-03 10:57:35.606772287 +0000 UTC m=+0.068299784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:57:35 compute-0 systemd[1]: Started libpod-conmon-0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17.scope.
Oct  3 10:57:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:57:35 compute-0 podman[490477]: 2025-10-03 10:57:35.764002373 +0000 UTC m=+0.225529900 container init 0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 10:57:35 compute-0 podman[490477]: 2025-10-03 10:57:35.773536279 +0000 UTC m=+0.235063776 container start 0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 10:57:35 compute-0 podman[490477]: 2025-10-03 10:57:35.778017942 +0000 UTC m=+0.239545449 container attach 0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 10:57:35 compute-0 reverent_shannon[490490]: 167 167
Oct  3 10:57:35 compute-0 systemd[1]: libpod-0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17.scope: Deactivated successfully.
Oct  3 10:57:35 compute-0 podman[490477]: 2025-10-03 10:57:35.782325991 +0000 UTC m=+0.243853498 container died 0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 10:57:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0bfef09eb21e13babf73eeea4ea648f17dca4948d649478c9545e69a0927a1e-merged.mount: Deactivated successfully.
Oct  3 10:57:35 compute-0 podman[490477]: 2025-10-03 10:57:35.844157146 +0000 UTC m=+0.305684633 container remove 0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:57:35 compute-0 systemd[1]: libpod-conmon-0396580c4edcb6bf3304bf231bca40672f630934a3bb263eb1137becff7f8f17.scope: Deactivated successfully.
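The create/init/start/attach/died/remove sequence above, with a randomly generated container name (reverent_shannon), is cephadm probing the host through short-lived one-shot containers; the "167 167" line is the probe's stdout and matches the ceph uid/gid baked into the image. A rough equivalent, where the stat command is a hypothetical stand-in for whatever cephadm actually runs:

    # Sketch: a one-shot probe container like the lifecycle logged above.
    # "--rm" reproduces the immediate "container remove" event.
    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    out = subprocess.run(
        ['podman', 'run', '--rm', image,
         'stat', '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected "167 167", the ceph uid/gid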
Oct  3 10:57:36 compute-0 podman[490513]: 2025-10-03 10:57:36.097857539 +0000 UTC m=+0.087331074 container create 521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:57:36 compute-0 podman[490513]: 2025-10-03 10:57:36.054673124 +0000 UTC m=+0.044146759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:57:36 compute-0 systemd[1]: Started libpod-conmon-521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259.scope.
Oct  3 10:57:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ccae646930bd05e87d7ce056a38371c15f95015da12a4c1710d8f8c3bbae09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ccae646930bd05e87d7ce056a38371c15f95015da12a4c1710d8f8c3bbae09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ccae646930bd05e87d7ce056a38371c15f95015da12a4c1710d8f8c3bbae09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ccae646930bd05e87d7ce056a38371c15f95015da12a4c1710d8f8c3bbae09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29ccae646930bd05e87d7ce056a38371c15f95015da12a4c1710d8f8c3bbae09/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
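The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings mean these overlay bind mounts sit on an XFS filesystem formatted without bigtime, i.e. 32-bit inode timestamps; 0x7fffffff seconds after the epoch is the classic Y2038 limit:

    # Worked example: the 0x7fffffff limit the kernel prints above.
    from datetime import datetime, timezone

    print(hex(0x7fffffff), '->',
          datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 0x7fffffff -> 2038-01-19 03:14:07+00:00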
Oct  3 10:57:36 compute-0 podman[490513]: 2025-10-03 10:57:36.249578559 +0000 UTC m=+0.239052104 container init 521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_noyce, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:57:36 compute-0 podman[490513]: 2025-10-03 10:57:36.27044617 +0000 UTC m=+0.259919705 container start 521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_noyce, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:57:36 compute-0 podman[490513]: 2025-10-03 10:57:36.27514623 +0000 UTC m=+0.264619785 container attach 521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_noyce, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 10:57:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:37 compute-0 elastic_noyce[490529]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:57:37 compute-0 elastic_noyce[490529]: --> relative data size: 1.0
Oct  3 10:57:37 compute-0 elastic_noyce[490529]: --> All data devices are unavailable
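"passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is a ceph-volume batch report concluding that the three LVs offered by the drive group are already consumed as OSDs (the inventory dump a few seconds later shows each carrying ceph.osd_id tags). A sketch of checking that from the LVM tags, assuming ceph-volume is on PATH inside the same environment:

    # Sketch: ask ceph-volume which LVs already belong to OSDs, which is
    # why the batch report above rejects all three data devices.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True)
    for osd_id, lvs in json.loads(out.stdout).items():
        for lv in lvs:
            print('osd.%s already on %s (%s)' %
                  (osd_id, lv['lv_path'], lv['tags']['ceph.osd_fsid']))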
Oct  3 10:57:37 compute-0 systemd[1]: libpod-521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259.scope: Deactivated successfully.
Oct  3 10:57:37 compute-0 systemd[1]: libpod-521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259.scope: Consumed 1.113s CPU time.
Oct  3 10:57:37 compute-0 podman[490558]: 2025-10-03 10:57:37.529660038 +0000 UTC m=+0.040605854 container died 521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:57:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-29ccae646930bd05e87d7ce056a38371c15f95015da12a4c1710d8f8c3bbae09-merged.mount: Deactivated successfully.
Oct  3 10:57:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:37 compute-0 podman[490558]: 2025-10-03 10:57:37.629129311 +0000 UTC m=+0.140075087 container remove 521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_noyce, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:57:37 compute-0 systemd[1]: libpod-conmon-521b967e022c2c9cb175365eeaf4b6e8d97305222749c55ba878caa44bfd5259.scope: Deactivated successfully.
Oct  3 10:57:38 compute-0 nova_compute[351685]: 2025-10-03 10:57:38.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:38 compute-0 podman[490708]: 2025-10-03 10:57:38.618161977 +0000 UTC m=+0.083796840 container create 0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:57:38 compute-0 podman[490708]: 2025-10-03 10:57:38.578767313 +0000 UTC m=+0.044402216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:57:38 compute-0 systemd[1]: Started libpod-conmon-0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19.scope.
Oct  3 10:57:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:57:38 compute-0 podman[490708]: 2025-10-03 10:57:38.760687302 +0000 UTC m=+0.226322225 container init 0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:57:38 compute-0 podman[490708]: 2025-10-03 10:57:38.779367141 +0000 UTC m=+0.245002004 container start 0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:57:38 compute-0 podman[490708]: 2025-10-03 10:57:38.785947753 +0000 UTC m=+0.251582666 container attach 0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 10:57:38 compute-0 elegant_mclaren[490724]: 167 167
Oct  3 10:57:38 compute-0 systemd[1]: libpod-0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19.scope: Deactivated successfully.
Oct  3 10:57:38 compute-0 podman[490708]: 2025-10-03 10:57:38.791099718 +0000 UTC m=+0.256734601 container died 0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:57:38 compute-0 nova_compute[351685]: 2025-10-03 10:57:38.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e8f41550b80e2df537e4c0c2a197338427dca7b6d352a64629d3794d2be2e1-merged.mount: Deactivated successfully.
Oct  3 10:57:38 compute-0 podman[490708]: 2025-10-03 10:57:38.880919761 +0000 UTC m=+0.346554584 container remove 0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Oct  3 10:57:38 compute-0 systemd[1]: libpod-conmon-0179555b8d6924f931a7ffd1aa6952db445c79bb605541bea782aec7683f8a19.scope: Deactivated successfully.
Oct  3 10:57:39 compute-0 podman[490747]: 2025-10-03 10:57:39.14537275 +0000 UTC m=+0.080922779 container create b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 10:57:39 compute-0 podman[490747]: 2025-10-03 10:57:39.117686341 +0000 UTC m=+0.053236340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:57:39 compute-0 systemd[1]: Started libpod-conmon-b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544.scope.
Oct  3 10:57:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba74e1cf7d7ede87507ab8e28690a03603370e1012c1b81518b3656ca41faa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba74e1cf7d7ede87507ab8e28690a03603370e1012c1b81518b3656ca41faa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba74e1cf7d7ede87507ab8e28690a03603370e1012c1b81518b3656ca41faa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba74e1cf7d7ede87507ab8e28690a03603370e1012c1b81518b3656ca41faa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:39 compute-0 podman[490747]: 2025-10-03 10:57:39.290942403 +0000 UTC m=+0.226492442 container init b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 10:57:39 compute-0 podman[490747]: 2025-10-03 10:57:39.314804248 +0000 UTC m=+0.250354267 container start b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:57:39 compute-0 podman[490747]: 2025-10-03 10:57:39.321153342 +0000 UTC m=+0.256703421 container attach b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 10:57:40 compute-0 serene_davinci[490763]: {
Oct  3 10:57:40 compute-0 serene_davinci[490763]:    "0": [
Oct  3 10:57:40 compute-0 serene_davinci[490763]:        {
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "devices": [
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "/dev/loop3"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            ],
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_name": "ceph_lv0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_size": "21470642176",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "name": "ceph_lv0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "tags": {
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cluster_name": "ceph",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.crush_device_class": "",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.encrypted": "0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osd_id": "0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.type": "block",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.vdo": "0"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            },
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "type": "block",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "vg_name": "ceph_vg0"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:        }
Oct  3 10:57:40 compute-0 serene_davinci[490763]:    ],
Oct  3 10:57:40 compute-0 serene_davinci[490763]:    "1": [
Oct  3 10:57:40 compute-0 serene_davinci[490763]:        {
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "devices": [
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "/dev/loop4"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            ],
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_name": "ceph_lv1",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_size": "21470642176",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "name": "ceph_lv1",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "tags": {
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cluster_name": "ceph",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.crush_device_class": "",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.encrypted": "0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osd_id": "1",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.type": "block",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.vdo": "0"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            },
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "type": "block",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "vg_name": "ceph_vg1"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:        }
Oct  3 10:57:40 compute-0 serene_davinci[490763]:    ],
Oct  3 10:57:40 compute-0 serene_davinci[490763]:    "2": [
Oct  3 10:57:40 compute-0 serene_davinci[490763]:        {
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "devices": [
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "/dev/loop5"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            ],
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_name": "ceph_lv2",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_size": "21470642176",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "name": "ceph_lv2",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "tags": {
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.cluster_name": "ceph",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.crush_device_class": "",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.encrypted": "0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osd_id": "2",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.type": "block",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:                "ceph.vdo": "0"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            },
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "type": "block",
Oct  3 10:57:40 compute-0 serene_davinci[490763]:            "vg_name": "ceph_vg2"
Oct  3 10:57:40 compute-0 serene_davinci[490763]:        }
Oct  3 10:57:40 compute-0 serene_davinci[490763]:    ]
Oct  3 10:57:40 compute-0 serene_davinci[490763]: }
Oct  3 10:57:40 compute-0 systemd[1]: libpod-b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544.scope: Deactivated successfully.
Oct  3 10:57:40 compute-0 podman[490747]: 2025-10-03 10:57:40.136609888 +0000 UTC m=+1.072159897 container died b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 10:57:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba74e1cf7d7ede87507ab8e28690a03603370e1012c1b81518b3656ca41faa5-merged.mount: Deactivated successfully.
Oct  3 10:57:40 compute-0 podman[490747]: 2025-10-03 10:57:40.207953677 +0000 UTC m=+1.143503666 container remove b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:57:40 compute-0 systemd[1]: libpod-conmon-b4864f1af504a165b1d2cad252f3249a994e4a6f50b3ff14a73e7f03473e6544.scope: Deactivated successfully.
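
The JSON block printed by the short-lived serene_davinci container above appears to be "ceph-volume lvm list --format json" output, keyed by OSD id: three OSDs (0, 1, 2), each on a roughly 20 GiB logical volume backed by a loop device, consistent with the 60 GiB total the ceph-mgr pgmap line reports just below. A minimal Python sketch for turning a captured copy of that JSON into a per-OSD summary (the file name osd_list.json is hypothetical):

    import json

    # Hypothetical capture: the JSON block logged above, saved to a file.
    with open("osd_list.json") as fh:
        osds = json.load(fh)

    # One line per OSD: logical volume, backing device, fsid, size.
    for osd_id, volumes in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:
            tags = vol["tags"]
            print(f"osd.{osd_id}: lv={vol['lv_path']} "
                  f"devices={','.join(vol['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"size={int(vol['lv_size']) / 2**30:.1f} GiB")

For this log the sketch would print, for example: osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0 size=20.0 GiB.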
Oct  3 10:57:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.897 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.898 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
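
The two DEBUG lines above say the [pollsters] source has more pollsters than worker threads, so the pollsters run one after another on a single thread. A minimal sketch of that effect using Python's stdlib executor (this is illustrative, not ceilometer's code):

    from concurrent.futures import ThreadPoolExecutor
    import time

    def poll(meter):
        time.sleep(0.1)        # stand-in for one pollster's work
        return meter

    meters = [f"meter.{i}" for i in range(10)]

    # With max_workers=1 (the situation the DEBUG line warns about), the ten
    # tasks run back to back: roughly 10 * 0.1 s instead of ~0.1 s in parallel.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:
        list(pool.map(poll, meters))
    print(f"serialized polling took {time.monotonic() - start:.2f}s")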
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a973383e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.904 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.905 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.905 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:57:40.905300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.910 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
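
The DEBUG/INFO lines from 10:57:40.899 to 10:57:40.911 trace one complete pollster pass: discovery, a coordination check (skipped here, since no hashring is configured), a heartbeat update, one sample per discovered resource, then the "Finished polling" marker. An illustrative reconstruction of that flow (all names and callables below are hypothetical, not ceilometer's actual API):

    # One pollster pass as traced by the log: discovery -> heartbeat ->
    # samples -> finish. Names here are made up for clarity.
    def run_pollster(name, discover, get_samples, heartbeat):
        resources = discover()              # "Executing discovery process ..."
        heartbeat(name)                     # "Pollster heartbeat update: ..."
        for resource, volume in get_samples(resources):
            print(f"{resource}/{name} volume: {volume}")
        print(f"Finished polling pollster {name}")

    run_pollster(
        "network.outgoing.packets.drop",
        discover=lambda: ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"],
        get_samples=lambda res: [(r, 0) for r in res],
        heartbeat=lambda n: print(f"heartbeat: {n}"),
    )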
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.911 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.911 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.912 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:57:40.911633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:57:40.912625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.940 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.940 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.941 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
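
The three disk.device.capacity samples for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a line up with the m1.small flavor in the discovery record at 10:57:40.904 (disk: 1, ephemeral: 1): two 1 GiB block devices plus a third, much smaller one that is presumably the config drive. A quick arithmetic check (the config-drive interpretation is an assumption; the byte counts are read off the log):

    root_disk      = 1 * 2**30   # flavor disk: 1      -> 1073741824 bytes
    ephemeral_disk = 1 * 2**30   # flavor ephemeral: 1 -> 1073741824 bytes
    third_device   = 485376      # third sample above; presumably config drive

    assert root_disk == ephemeral_disk == 1073741824
    print(third_device / 1024)   # 474.0 (KiB)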
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.941 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.942 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.942 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:57:40.942291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.997 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.997 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.998 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.999 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:57:40.997502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:57:40.999022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.000 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.000 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.000 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.000 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:57:41.000491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.001 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:57:41.001840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.003 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.004 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.004 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:57:41.003359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.004 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:57:41.004679) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
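
The power.state sample of 1 reports the libvirt domain state, where 1 is VIR_DOMAIN_RUNNING in libvirt's virDomainState enum, matching the 'running' vm_state in the discovery record. An illustrative lookup (the dict below restates the libvirt enum; it is not ceilometer's code):

    # libvirt virDomainState values; a sample volume of 1 means "running".
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])   # running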
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.033 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.033 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.033 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.034 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.034 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:57:41.033874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.034 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:57:41.035313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.035 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.036 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.036 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.036 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:57:41.036601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.037 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.038 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.038 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.038 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:57:41.037901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.039 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:57:41.038832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.040 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.040 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:57:41.039753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.041 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.041 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 80080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
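
The cpu sample above is cumulative guest CPU time in nanoseconds (80080000000 ns, roughly 80 s of CPU time consumed so far), so a utilization percentage has to be derived from two successive polls. A quick sketch, using an invented value for the hypothetical next poll:

    def cpu_util_percent(ns_prev, ns_curr, interval_s, vcpus=1):
        # fraction of wall-clock time the guest spent on CPU, across all vCPUs
        return 100.0 * (ns_curr - ns_prev) / (interval_s * 1e9 * vcpus)

    # e.g. a hypothetical poll 30s later reading 80110000000 ns:
    print(cpu_util_percent(80080000000, 80110000000, 30))  # => ~0.1
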
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.042 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.043 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:57:41.040849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:57:41.041802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:57:41.042808) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.045 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:57:41.043622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:57:41.044420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.045 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:57:41.045480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.046 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:57:41.046689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:57:41.047573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.048 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.049 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.050 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:57:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:57:41.050 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
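
The burst of "Finished processing pollster [...]" lines marks the end of the polling task: the manager walks every meter configured in the task and logs completion for each. Schematically, reusing run_pollster from the earlier sketch (again illustrative, not the real code):

    def execute_polling_task(pollsters, discover, heartbeats):
        for pollster in pollsters:
            run_pollster(pollster, discover, heartbeats)
            print(f"Finished processing pollster [{pollster.name}].")
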
Oct  3 10:57:41 compute-0 podman[490921]: 2025-10-03 10:57:41.193594605 +0000 UTC m=+0.049661365 container create a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 10:57:41 compute-0 systemd[1]: Started libpod-conmon-a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55.scope.
Oct  3 10:57:41 compute-0 podman[490921]: 2025-10-03 10:57:41.174937806 +0000 UTC m=+0.031004586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:57:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:57:41 compute-0 podman[490921]: 2025-10-03 10:57:41.306734657 +0000 UTC m=+0.162801447 container init a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:57:41 compute-0 podman[490921]: 2025-10-03 10:57:41.316353495 +0000 UTC m=+0.172420275 container start a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 10:57:41 compute-0 podman[490921]: 2025-10-03 10:57:41.321357015 +0000 UTC m=+0.177423795 container attach a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:57:41 compute-0 mystifying_bassi[490938]: 167 167
Oct  3 10:57:41 compute-0 systemd[1]: libpod-a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55.scope: Deactivated successfully.
Oct  3 10:57:41 compute-0 podman[490921]: 2025-10-03 10:57:41.323750042 +0000 UTC m=+0.179816812 container died a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:57:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-13a278c43af8920334fc6a28f28938eea7223155c2f6acd7abb031adb9034783-merged.mount: Deactivated successfully.
Oct  3 10:57:41 compute-0 podman[490921]: 2025-10-03 10:57:41.384052748 +0000 UTC m=+0.240119518 container remove a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 10:57:41 compute-0 systemd[1]: libpod-conmon-a995a68bfe92a42e5012d32294293ceeba2654151065320439eb6eac785a1b55.scope: Deactivated successfully.
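
The six podman events above (create, init, start, attach, died, remove) are the footprint of a single short-lived "podman run --rm"-style invocation against the Ceph image: the container lives for roughly 200 ms and prints "167 167", which looks like a uid/gid probe of the ceph user inside the image (cephadm performs checks of this shape, though the log does not show the actual command). A reproduction sketch under that assumption; the stat command and path are guesses, not taken from the log:

    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # assumed probe command
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected "167 167" for this image
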
Oct  3 10:57:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:57:41.663 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:57:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:57:41.665 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:57:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:57:41.666 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
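
Those three lockutils lines are oslo.concurrency's standard trace around a named lock: the acquire request, the acquisition (with time waited), and the release (with time held). Here neutron is guarding its child-process health check; the same pattern in application code looks like this sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # runs with the named lock held; oslo.concurrency emits the
        # Acquiring/acquired/released DEBUG lines automatically
        ...
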
Oct  3 10:57:41 compute-0 podman[490962]: 2025-10-03 10:57:41.667746705 +0000 UTC m=+0.099095883 container create 64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 10:57:41 compute-0 podman[490962]: 2025-10-03 10:57:41.629734424 +0000 UTC m=+0.061083672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:57:41 compute-0 systemd[1]: Started libpod-conmon-64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402.scope.
Oct  3 10:57:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a656714d1d14e08a966b3dfb57fba15429c2588d9f5fc21a5d9f4e0c5f54c6f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a656714d1d14e08a966b3dfb57fba15429c2588d9f5fc21a5d9f4e0c5f54c6f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a656714d1d14e08a966b3dfb57fba15429c2588d9f5fc21a5d9f4e0c5f54c6f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a656714d1d14e08a966b3dfb57fba15429c2588d9f5fc21a5d9f4e0c5f54c6f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:57:41 compute-0 podman[490962]: 2025-10-03 10:57:41.83149796 +0000 UTC m=+0.262847198 container init 64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 10:57:41 compute-0 podman[490962]: 2025-10-03 10:57:41.841978577 +0000 UTC m=+0.273327755 container start 64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:57:41 compute-0 podman[490962]: 2025-10-03 10:57:41.849456907 +0000 UTC m=+0.280806145 container attach 64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:57:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:42 compute-0 podman[490998]: 2025-10-03 10:57:42.860207841 +0000 UTC m=+0.099889458 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 10:57:42 compute-0 podman[490994]: 2025-10-03 10:57:42.864992644 +0000 UTC m=+0.112073008 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:57:42 compute-0 podman[490995]: 2025-10-03 10:57:42.866637917 +0000 UTC m=+0.109443735 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-type=git, io.openshift.expose-services=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 10:57:42 compute-0 adoring_brown[490977]: {
Oct  3 10:57:42 compute-0 adoring_brown[490977]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "osd_id": 1,
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "type": "bluestore"
Oct  3 10:57:42 compute-0 adoring_brown[490977]:    },
Oct  3 10:57:42 compute-0 adoring_brown[490977]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "osd_id": 2,
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "type": "bluestore"
Oct  3 10:57:42 compute-0 adoring_brown[490977]:    },
Oct  3 10:57:42 compute-0 adoring_brown[490977]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "osd_id": 0,
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:57:42 compute-0 adoring_brown[490977]:        "type": "bluestore"
Oct  3 10:57:42 compute-0 adoring_brown[490977]:    }
Oct  3 10:57:42 compute-0 adoring_brown[490977]: }
Oct  3 10:57:42 compute-0 systemd[1]: libpod-64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402.scope: Deactivated successfully.
Oct  3 10:57:42 compute-0 podman[490962]: 2025-10-03 10:57:42.988633593 +0000 UTC m=+1.419982791 container died 64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:57:42 compute-0 systemd[1]: libpod-64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402.scope: Consumed 1.142s CPU time.
Oct  3 10:57:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a656714d1d14e08a966b3dfb57fba15429c2588d9f5fc21a5d9f4e0c5f54c6f6-merged.mount: Deactivated successfully.
Oct  3 10:57:43 compute-0 podman[490962]: 2025-10-03 10:57:43.075033296 +0000 UTC m=+1.506382444 container remove 64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brown, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 10:57:43 compute-0 systemd[1]: libpod-conmon-64520b2a398e3c2ffcac2374555eb6fcbf9f98e4953234cca9b5ee8068107402.scope: Deactivated successfully.
Oct  3 10:57:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:57:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:57:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8ae02edd-783f-47b2-a8cf-52c1a4caf74b does not exist
Oct  3 10:57:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 13a953fc-757a-41de-a5da-16d7470e4e38 does not exist
Oct  3 10:57:43 compute-0 nova_compute[351685]: 2025-10-03 10:57:43.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:43 compute-0 nova_compute[351685]: 2025-10-03 10:57:43.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:57:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2712: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:57:46
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'vms', 'images', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:57:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:57:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:48 compute-0 nova_compute[351685]: 2025-10-03 10:57:48.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:48 compute-0 nova_compute[351685]: 2025-10-03 10:57:48.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:52 compute-0 nova_compute[351685]: 2025-10-03 10:57:52.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:57:53 compute-0 nova_compute[351685]: 2025-10-03 10:57:53.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:53 compute-0 nova_compute[351685]: 2025-10-03 10:57:53.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:57:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/42842839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:57:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:57:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/42842839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:57:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:57:55 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 10:57:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:57:58 compute-0 nova_compute[351685]: 2025-10-03 10:57:58.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:57:58 compute-0 nova_compute[351685]: 2025-10-03 10:57:58.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:57:59 compute-0 podman[157165]: time="2025-10-03T10:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:57:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:57:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9092 "" "Go-http-client/1.1"
Oct  3 10:58:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:01 compute-0 openstack_network_exporter[367524]: ERROR   10:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:58:01 compute-0 openstack_network_exporter[367524]: ERROR   10:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:58:01 compute-0 openstack_network_exporter[367524]: ERROR   10:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:58:01 compute-0 openstack_network_exporter[367524]: ERROR   10:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:58:01 compute-0 openstack_network_exporter[367524]: ERROR   10:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:58:01 compute-0 podman[491138]: 2025-10-03 10:58:01.877144893 +0000 UTC m=+0.106630874 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
Oct  3 10:58:01 compute-0 podman[491136]: 2025-10-03 10:58:01.881539734 +0000 UTC m=+0.126149860 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 10:58:01 compute-0 podman[491137]: 2025-10-03 10:58:01.907748075 +0000 UTC m=+0.147307758 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 10:58:01 compute-0 podman[491139]: 2025-10-03 10:58:01.934619388 +0000 UTC m=+0.166545997 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  3 10:58:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:03 compute-0 nova_compute[351685]: 2025-10-03 10:58:03.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:03 compute-0 podman[491215]: 2025-10-03 10:58:03.819450478 +0000 UTC m=+0.079736210 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:58:03 compute-0 podman[491216]: 2025-10-03 10:58:03.82418962 +0000 UTC m=+0.083904064 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible)
Oct  3 10:58:03 compute-0 podman[491217]: 2025-10-03 10:58:03.824897623 +0000 UTC m=+0.079890275 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  3 10:58:03 compute-0 nova_compute[351685]: 2025-10-03 10:58:03.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:08 compute-0 nova_compute[351685]: 2025-10-03 10:58:08.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:08 compute-0 nova_compute[351685]: 2025-10-03 10:58:08.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:13 compute-0 nova_compute[351685]: 2025-10-03 10:58:13.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:13 compute-0 podman[491271]: 2025-10-03 10:58:13.818184086 +0000 UTC m=+0.075683270 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.4, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., release=1214.1726694543)
Oct  3 10:58:13 compute-0 podman[491272]: 2025-10-03 10:58:13.82298938 +0000 UTC m=+0.075359780 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 10:58:13 compute-0 podman[491270]: 2025-10-03 10:58:13.838037043 +0000 UTC m=+0.097209521 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:58:13 compute-0 nova_compute[351685]: 2025-10-03 10:58:13.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:58:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:58:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:58:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:58:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:58:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.177389) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489096177426, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1435, "num_deletes": 251, "total_data_size": 2238315, "memory_usage": 2270944, "flush_reason": "Manual Compaction"}
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489096197893, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 2194416, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54109, "largest_seqno": 55543, "table_properties": {"data_size": 2187736, "index_size": 3814, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13875, "raw_average_key_size": 19, "raw_value_size": 2174354, "raw_average_value_size": 3110, "num_data_blocks": 172, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759488947, "oldest_key_time": 1759488947, "file_creation_time": 1759489096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 20612 microseconds, and 10660 cpu microseconds.
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.197992) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 2194416 bytes OK
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.198024) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.201158) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.201182) EVENT_LOG_v1 {"time_micros": 1759489096201175, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.201205) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 2232003, prev total WAL file size 2232003, number of live WAL files 2.
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.202820) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(2142KB)], [131(7173KB)]
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489096202870, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 9540296, "oldest_snapshot_seqno": -1}
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 6733 keys, 7797285 bytes, temperature: kUnknown
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489096256926, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 7797285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7757015, "index_size": 22301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 176551, "raw_average_key_size": 26, "raw_value_size": 7639640, "raw_average_value_size": 1134, "num_data_blocks": 877, "num_entries": 6733, "num_filter_entries": 6733, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759489096, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.257511) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 7797285 bytes
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.259735) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.1 rd, 143.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.9) write-amplify(3.6) OK, records in: 7247, records dropped: 514 output_compression: NoCompression
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.259757) EVENT_LOG_v1 {"time_micros": 1759489096259743, "job": 80, "event": "compaction_finished", "compaction_time_micros": 54167, "compaction_time_cpu_micros": 34371, "output_level": 6, "num_output_files": 1, "total_output_size": 7797285, "num_input_records": 7247, "num_output_records": 6733, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489096260379, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489096262079, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.202689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.262367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.262376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.262379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.262382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:58:16 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-10:58:16.262385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 10:58:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:18 compute-0 nova_compute[351685]: 2025-10-03 10:58:18.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:18 compute-0 nova_compute[351685]: 2025-10-03 10:58:18.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:19 compute-0 nova_compute[351685]: 2025-10-03 10:58:19.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:58:19 compute-0 nova_compute[351685]: 2025-10-03 10:58:19.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 10:58:19 compute-0 nova_compute[351685]: 2025-10-03 10:58:19.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 10:58:20 compute-0 nova_compute[351685]: 2025-10-03 10:58:20.229 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 10:58:20 compute-0 nova_compute[351685]: 2025-10-03 10:58:20.230 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 10:58:20 compute-0 nova_compute[351685]: 2025-10-03 10:58:20.230 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 10:58:20 compute-0 nova_compute[351685]: 2025-10-03 10:58:20.231 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 10:58:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:21 compute-0 nova_compute[351685]: 2025-10-03 10:58:21.405 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:58:21 compute-0 nova_compute[351685]: 2025-10-03 10:58:21.422 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:58:21 compute-0 nova_compute[351685]: 2025-10-03 10:58:21.423 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
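The req-f389a046 lines above trace one pass of Nova's _heal_instance_info_cache periodic task: take the per-instance "refresh_cache-<uuid>" lock, force-refresh the network info from Neutron, persist the cache, release the lock. The Acquiring/Acquired/Releasing messages (lockutils.py:312/315/333) come from the oslo.concurrency lock() context manager; a minimal sketch of that shape, with hypothetical helpers standing in for Nova's internals:

    from oslo_concurrency import lockutils

    def heal_info_cache(instance_uuid):
        # Same lock-guarded refresh shape as the log lines above; the two
        # helpers are placeholders, not Nova's actual functions.
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            nw_info = fetch_network_info(instance_uuid)   # hypothetical
            save_info_cache(instance_uuid, nw_info)       # hypothetical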
Oct  3 10:58:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:23 compute-0 nova_compute[351685]: 2025-10-03 10:58:23.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:23 compute-0 nova_compute[351685]: 2025-10-03 10:58:23.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.419 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:58:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.758 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.758 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
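The "acquired by ... :: waited 0.000s" / "released ... :: held 0.000s" pair above (lockutils.py:404/409/423, function `inner`) is emitted by oslo.concurrency's synchronized decorator wrapping the resource tracker's methods. A minimal sketch of that usage:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body runs under the "compute_resources" lock; the decorator logs
        # the Acquiring/acquired/released lines with waited/held timings.
        pass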
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.759 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:58:26 compute-0 nova_compute[351685]: 2025-10-03 10:58:26.759 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:58:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:58:27 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875319816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.237 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
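For the storage side of the audit, the resource tracker shells out to the exact command shown in the processutils lines above. A sketch of the same call and one plausible read of its output (the "stats"/"total_avail_bytes" field names are an assumption about the ceph df JSON layout, not something recorded in this log):

    import json
    from oslo_concurrency import processutils

    # Same command string as the "Running cmd (subprocess)" line above.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)["stats"]            # assumed field names
    print(f'{stats["total_avail_bytes"] / 2**30:.1f} GiB available')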
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.315 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.315 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.315 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:58:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.772 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.774 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3790MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.866 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.866 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.867 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.882 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.908 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.908 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
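Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. Worked out for the figures above:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {"VCPU": (8, 0, 4.0), "MEMORY_MB": (7679, 512, 1.0),
                 "DISK_GB": (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2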
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.929 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 10:58:27 compute-0 nova_compute[351685]: 2025-10-03 10:58:27.963 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 10:58:28 compute-0 nova_compute[351685]: 2025-10-03 10:58:28.013 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:58:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:58:28 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3262539114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:58:28 compute-0 nova_compute[351685]: 2025-10-03 10:58:28.553 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:58:28 compute-0 nova_compute[351685]: 2025-10-03 10:58:28.566 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:58:28 compute-0 nova_compute[351685]: 2025-10-03 10:58:28.590 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:58:28 compute-0 nova_compute[351685]: 2025-10-03 10:58:28.594 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:58:28 compute-0 nova_compute[351685]: 2025-10-03 10:58:28.595 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:58:28 compute-0 nova_compute[351685]: 2025-10-03 10:58:28.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:28 compute-0 nova_compute[351685]: 2025-10-03 10:58:28.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:29 compute-0 podman[157165]: time="2025-10-03T10:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:58:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:58:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9091 "" "Go-http-client/1.1"
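The two GET requests above are a collector polling podman's libpod REST API over its unix socket (podman_exporter mounts /run/podman/podman.sock per its config later in this log). A sketch of the same containers/json query from Python, assuming that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket; the libpod API is not on TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")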
Oct  3 10:58:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:31 compute-0 openstack_network_exporter[367524]: ERROR   10:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:58:31 compute-0 openstack_network_exporter[367524]: ERROR   10:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:58:31 compute-0 openstack_network_exporter[367524]: ERROR   10:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:58:31 compute-0 openstack_network_exporter[367524]: ERROR   10:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:58:31 compute-0 openstack_network_exporter[367524]: ERROR   10:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
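The exporter errors above mean no appctl control sockets were found: ovs/ovn daemons publish <daemon>.<pid>.ctl files in their run directories, and the dpif-netdev calls additionally require a userspace datapath, while this host's ports bind with datapath_type "system" (kernel datapath, per the network_info earlier in the log), so pmd-perf-show/pmd-rxq-show have nothing to report. A quick probe for the sockets (default run directories assumed; they move if OVS_RUNDIR/OVN_RUNDIR are overridden):

    import glob

    # Look for the <daemon>.<pid>.ctl control sockets the exporter probes for.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")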
Oct  3 10:58:31 compute-0 nova_compute[351685]: 2025-10-03 10:58:31.597 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:58:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:32 compute-0 nova_compute[351685]: 2025-10-03 10:58:32.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:58:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:32 compute-0 podman[491377]: 2025-10-03 10:58:32.878919613 +0000 UTC m=+0.120520460 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 10:58:32 compute-0 podman[491378]: 2025-10-03 10:58:32.882848789 +0000 UTC m=+0.120427956 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:58:32 compute-0 podman[491379]: 2025-10-03 10:58:32.913594216 +0000 UTC m=+0.139087646 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 10:58:32 compute-0 podman[491380]: 2025-10-03 10:58:32.919798535 +0000 UTC m=+0.144048495 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller)
Oct  3 10:58:33 compute-0 nova_compute[351685]: 2025-10-03 10:58:33.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:33 compute-0 nova_compute[351685]: 2025-10-03 10:58:33.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:34 compute-0 nova_compute[351685]: 2025-10-03 10:58:34.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:58:34 compute-0 nova_compute[351685]: 2025-10-03 10:58:34.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:58:34 compute-0 nova_compute[351685]: 2025-10-03 10:58:34.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:58:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:34 compute-0 podman[491457]: 2025-10-03 10:58:34.880541141 +0000 UTC m=+0.126090698 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 10:58:34 compute-0 podman[491459]: 2025-10-03 10:58:34.884785538 +0000 UTC m=+0.125410227 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3)
Oct  3 10:58:34 compute-0 podman[491458]: 2025-10-03 10:58:34.918160169 +0000 UTC m=+0.159828331 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible)
Oct  3 10:58:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:38 compute-0 nova_compute[351685]: 2025-10-03 10:58:38.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:38 compute-0 nova_compute[351685]: 2025-10-03 10:58:38.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:58:41.664 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:58:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:58:41.665 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:58:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:58:41.665 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:58:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:43 compute-0 nova_compute[351685]: 2025-10-03 10:58:43.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:43 compute-0 nova_compute[351685]: 2025-10-03 10:58:43.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:43 compute-0 podman[491613]: 2025-10-03 10:58:43.917657936 +0000 UTC m=+0.062701393 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 10:58:43 compute-0 podman[491612]: 2025-10-03 10:58:43.925892911 +0000 UTC m=+0.074444171 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler)
Oct  3 10:58:44 compute-0 podman[491649]: 2025-10-03 10:58:44.033181445 +0000 UTC m=+0.074491493 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 10:58:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 10:58:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:58:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:58:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:58:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:58:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:58:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:58:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 10:58:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:58:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:58:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2f5ab038-246c-4cac-898f-26f3a9ff63cd does not exist
Oct  3 10:58:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f63d8918-3a07-4ff2-be64-98bda5afd7fd does not exist
Oct  3 10:58:44 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2fd49bdf-ad2a-4a90-bad1-b4d7be88622b does not exist
Oct  3 10:58:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:58:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:58:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:58:44 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:58:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:58:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:58:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:45 compute-0 podman[491838]: 2025-10-03 10:58:45.260735937 +0000 UTC m=+0.047112613 container create 3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cray, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct  3 10:58:45 compute-0 systemd[1]: Started libpod-conmon-3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7.scope.
Oct  3 10:58:45 compute-0 podman[491838]: 2025-10-03 10:58:45.24121527 +0000 UTC m=+0.027591966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:58:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:58:45 compute-0 podman[491838]: 2025-10-03 10:58:45.369053574 +0000 UTC m=+0.155430270 container init 3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cray, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 10:58:45 compute-0 podman[491838]: 2025-10-03 10:58:45.377943299 +0000 UTC m=+0.164319975 container start 3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cray, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct  3 10:58:45 compute-0 podman[491838]: 2025-10-03 10:58:45.382547057 +0000 UTC m=+0.168923753 container attach 3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cray, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 10:58:45 compute-0 flamboyant_cray[491854]: 167 167
Oct  3 10:58:45 compute-0 systemd[1]: libpod-3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7.scope: Deactivated successfully.
Oct  3 10:58:45 compute-0 podman[491838]: 2025-10-03 10:58:45.386956959 +0000 UTC m=+0.173333645 container died 3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:58:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:58:45 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:58:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-52848d93ef450b89fa6f5cd4ae040bcc53b3982c4d5f22d4a07c443ede62ffe4-merged.mount: Deactivated successfully.
Oct  3 10:58:45 compute-0 podman[491838]: 2025-10-03 10:58:45.452056468 +0000 UTC m=+0.238433154 container remove 3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_cray, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:58:45 compute-0 systemd[1]: libpod-conmon-3b3228d27ff1968e22daf54e142bf7db2012bcec598d07f863f87ab38e6d56d7.scope: Deactivated successfully.
Oct  3 10:58:45 compute-0 podman[491878]: 2025-10-03 10:58:45.650293431 +0000 UTC m=+0.055792662 container create a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 10:58:45 compute-0 systemd[1]: Started libpod-conmon-a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2.scope.
Oct  3 10:58:45 compute-0 podman[491878]: 2025-10-03 10:58:45.63093002 +0000 UTC m=+0.036429271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:58:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f60148044b990a89c21e37407c460b6205447a60a9986c8ce96ed83c884d94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f60148044b990a89c21e37407c460b6205447a60a9986c8ce96ed83c884d94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f60148044b990a89c21e37407c460b6205447a60a9986c8ce96ed83c884d94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f60148044b990a89c21e37407c460b6205447a60a9986c8ce96ed83c884d94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53f60148044b990a89c21e37407c460b6205447a60a9986c8ce96ed83c884d94/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:45 compute-0 podman[491878]: 2025-10-03 10:58:45.780822732 +0000 UTC m=+0.186321993 container init a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 10:58:45 compute-0 podman[491878]: 2025-10-03 10:58:45.796760993 +0000 UTC m=+0.202260224 container start a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:58:45 compute-0 podman[491878]: 2025-10-03 10:58:45.80228425 +0000 UTC m=+0.207783501 container attach a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:58:46
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', '.rgw.root', 'images', 'backups', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control']
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:58:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:58:47 compute-0 cranky_brattain[491894]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:58:47 compute-0 cranky_brattain[491894]: --> relative data size: 1.0
Oct  3 10:58:47 compute-0 cranky_brattain[491894]: --> All data devices are unavailable
Oct  3 10:58:47 compute-0 systemd[1]: libpod-a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2.scope: Deactivated successfully.
Oct  3 10:58:47 compute-0 systemd[1]: libpod-a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2.scope: Consumed 1.238s CPU time.
Oct  3 10:58:47 compute-0 podman[491923]: 2025-10-03 10:58:47.142785288 +0000 UTC m=+0.037863656 container died a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:58:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-53f60148044b990a89c21e37407c460b6205447a60a9986c8ce96ed83c884d94-merged.mount: Deactivated successfully.
Oct  3 10:58:47 compute-0 podman[491923]: 2025-10-03 10:58:47.221463494 +0000 UTC m=+0.116541812 container remove a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct  3 10:58:47 compute-0 systemd[1]: libpod-conmon-a1e1fc652ae1214654d5b50dd75e95d49b16dd2ede6461ecf567f6a6e9563fa2.scope: Deactivated successfully.
Oct  3 10:58:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:48 compute-0 podman[492078]: 2025-10-03 10:58:48.186230891 +0000 UTC m=+0.072603372 container create 4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:58:48 compute-0 systemd[1]: Started libpod-conmon-4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213.scope.
Oct  3 10:58:48 compute-0 podman[492078]: 2025-10-03 10:58:48.15160233 +0000 UTC m=+0.037974891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:58:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:58:48 compute-0 podman[492078]: 2025-10-03 10:58:48.310708477 +0000 UTC m=+0.197080978 container init 4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 10:58:48 compute-0 podman[492078]: 2025-10-03 10:58:48.322435893 +0000 UTC m=+0.208808384 container start 4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 10:58:48 compute-0 loving_pike[492094]: 167 167
Oct  3 10:58:48 compute-0 systemd[1]: libpod-4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213.scope: Deactivated successfully.
Oct  3 10:58:48 compute-0 podman[492078]: 2025-10-03 10:58:48.336714822 +0000 UTC m=+0.223087383 container attach 4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:58:48 compute-0 podman[492078]: 2025-10-03 10:58:48.33730829 +0000 UTC m=+0.223680791 container died 4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 10:58:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-19fc19f24e95cbe95b828d73b80ec2e230a4b457607353ec7f2f25d6a2d644de-merged.mount: Deactivated successfully.
Oct  3 10:58:48 compute-0 podman[492078]: 2025-10-03 10:58:48.453367296 +0000 UTC m=+0.339739767 container remove 4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct  3 10:58:48 compute-0 systemd[1]: libpod-conmon-4012fbc1d0263ced0977c30098d2f40b7e58317029596c0a9e731d8858aed213.scope: Deactivated successfully.
Oct  3 10:58:48 compute-0 nova_compute[351685]: 2025-10-03 10:58:48.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:48 compute-0 podman[492116]: 2025-10-03 10:58:48.728418055 +0000 UTC m=+0.098711910 container create 3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  3 10:58:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:48 compute-0 podman[492116]: 2025-10-03 10:58:48.692175012 +0000 UTC m=+0.062468937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:58:48 compute-0 systemd[1]: Started libpod-conmon-3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a.scope.
Oct  3 10:58:48 compute-0 nova_compute[351685]: 2025-10-03 10:58:48.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:58:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b184bf784650bad30e3c4fa82ddd3fd0a2cf7e09f29d41b27b73dfc2007eac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b184bf784650bad30e3c4fa82ddd3fd0a2cf7e09f29d41b27b73dfc2007eac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b184bf784650bad30e3c4fa82ddd3fd0a2cf7e09f29d41b27b73dfc2007eac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b184bf784650bad30e3c4fa82ddd3fd0a2cf7e09f29d41b27b73dfc2007eac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:48 compute-0 podman[492116]: 2025-10-03 10:58:48.914343042 +0000 UTC m=+0.284636967 container init 3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 10:58:48 compute-0 podman[492116]: 2025-10-03 10:58:48.934448398 +0000 UTC m=+0.304742263 container start 3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 10:58:48 compute-0 podman[492116]: 2025-10-03 10:58:48.943174628 +0000 UTC m=+0.313468493 container attach 3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 10:58:49 compute-0 hungry_swanson[492132]: {
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:    "0": [
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:        {
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "devices": [
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "/dev/loop3"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            ],
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_name": "ceph_lv0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_size": "21470642176",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "name": "ceph_lv0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "tags": {
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cluster_name": "ceph",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.crush_device_class": "",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.encrypted": "0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osd_id": "0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.type": "block",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.vdo": "0"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            },
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "type": "block",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "vg_name": "ceph_vg0"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:        }
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:    ],
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:    "1": [
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:        {
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "devices": [
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "/dev/loop4"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            ],
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_name": "ceph_lv1",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_size": "21470642176",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "name": "ceph_lv1",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "tags": {
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cluster_name": "ceph",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.crush_device_class": "",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.encrypted": "0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osd_id": "1",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.type": "block",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.vdo": "0"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            },
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "type": "block",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "vg_name": "ceph_vg1"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:        }
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:    ],
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:    "2": [
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:        {
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "devices": [
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "/dev/loop5"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            ],
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_name": "ceph_lv2",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_size": "21470642176",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "name": "ceph_lv2",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "tags": {
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cephx_lockbox_secret": "",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.cluster_name": "ceph",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.crush_device_class": "",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.encrypted": "0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osd_id": "2",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.type": "block",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:                "ceph.vdo": "0"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            },
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "type": "block",
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:            "vg_name": "ceph_vg2"
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:        }
Oct  3 10:58:49 compute-0 hungry_swanson[492132]:    ]
Oct  3 10:58:49 compute-0 hungry_swanson[492132]: }
Oct  3 10:58:49 compute-0 systemd[1]: libpod-3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a.scope: Deactivated successfully.
Oct  3 10:58:49 compute-0 podman[492116]: 2025-10-03 10:58:49.786192968 +0000 UTC m=+1.156486833 container died 3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:58:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b184bf784650bad30e3c4fa82ddd3fd0a2cf7e09f29d41b27b73dfc2007eac-merged.mount: Deactivated successfully.
Oct  3 10:58:49 compute-0 podman[492116]: 2025-10-03 10:58:49.903213704 +0000 UTC m=+1.273507539 container remove 3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swanson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 10:58:49 compute-0 systemd[1]: libpod-conmon-3cc3cd44dcb79be90a32309d9f65d46b78ab6571b2152af6e2f9b02b66a52a0a.scope: Deactivated successfully.
Oct  3 10:58:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:50 compute-0 podman[492289]: 2025-10-03 10:58:50.977960951 +0000 UTC m=+0.063951013 container create ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:58:51 compute-0 systemd[1]: Started libpod-conmon-ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de.scope.
Oct  3 10:58:51 compute-0 podman[492289]: 2025-10-03 10:58:50.956048018 +0000 UTC m=+0.042038110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:58:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:58:51 compute-0 podman[492289]: 2025-10-03 10:58:51.11810229 +0000 UTC m=+0.204092412 container init ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:58:51 compute-0 podman[492289]: 2025-10-03 10:58:51.134657621 +0000 UTC m=+0.220647713 container start ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 10:58:51 compute-0 podman[492289]: 2025-10-03 10:58:51.141433769 +0000 UTC m=+0.227423841 container attach ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_heyrovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:58:51 compute-0 priceless_heyrovsky[492304]: 167 167
Oct  3 10:58:51 compute-0 systemd[1]: libpod-ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de.scope: Deactivated successfully.
Oct  3 10:58:51 compute-0 podman[492289]: 2025-10-03 10:58:51.146717329 +0000 UTC m=+0.232707371 container died ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_heyrovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct  3 10:58:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-78db8ae74f0207071ed2d6785a9bd5f6ece4b874bb40693e5dc3c959bf0d4d26-merged.mount: Deactivated successfully.
Oct  3 10:58:51 compute-0 podman[492289]: 2025-10-03 10:58:51.200451713 +0000 UTC m=+0.286441765 container remove ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 10:58:51 compute-0 systemd[1]: libpod-conmon-ce8453bc81935fe5cc1887e5209d0e1096dc4bab0e9493ad644266f6bc0ac6de.scope: Deactivated successfully.
Oct  3 10:58:51 compute-0 podman[492326]: 2025-10-03 10:58:51.445586652 +0000 UTC m=+0.070881147 container create 9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 10:58:51 compute-0 podman[492326]: 2025-10-03 10:58:51.414172523 +0000 UTC m=+0.039467068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:58:51 compute-0 systemd[1]: Started libpod-conmon-9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31.scope.
Oct  3 10:58:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a94b80704b232d6da30da451fc7631277ca85eb7d70f986837b859a4fb1825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a94b80704b232d6da30da451fc7631277ca85eb7d70f986837b859a4fb1825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a94b80704b232d6da30da451fc7631277ca85eb7d70f986837b859a4fb1825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a94b80704b232d6da30da451fc7631277ca85eb7d70f986837b859a4fb1825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:58:51 compute-0 podman[492326]: 2025-10-03 10:58:51.623613727 +0000 UTC m=+0.248908202 container init 9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 10:58:51 compute-0 podman[492326]: 2025-10-03 10:58:51.648304869 +0000 UTC m=+0.273599344 container start 9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:58:51 compute-0 podman[492326]: 2025-10-03 10:58:51.653561967 +0000 UTC m=+0.278856472 container attach 9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:58:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:58:52 compute-0 compassionate_newton[492342]: {
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "osd_id": 1,
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "type": "bluestore"
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:    },
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "osd_id": 2,
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "type": "bluestore"
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:    },
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "osd_id": 0,
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:        "type": "bluestore"
Oct  3 10:58:52 compute-0 compassionate_newton[492342]:    }
Oct  3 10:58:52 compute-0 compassionate_newton[492342]: }
Oct  3 10:58:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:52 compute-0 systemd[1]: libpod-9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31.scope: Deactivated successfully.
Oct  3 10:58:52 compute-0 systemd[1]: libpod-9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31.scope: Consumed 1.135s CPU time.
Oct  3 10:58:52 compute-0 podman[492375]: 2025-10-03 10:58:52.860729786 +0000 UTC m=+0.057866519 container died 9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 10:58:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7a94b80704b232d6da30da451fc7631277ca85eb7d70f986837b859a4fb1825-merged.mount: Deactivated successfully.
Oct  3 10:58:52 compute-0 podman[492375]: 2025-10-03 10:58:52.972449992 +0000 UTC m=+0.169586705 container remove 9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:58:52 compute-0 systemd[1]: libpod-conmon-9e846cb98036465aa9ab1f35d6a78f48d81e36614ab08fbb14055a951994ad31.scope: Deactivated successfully.
Oct  3 10:58:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 10:58:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:58:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 10:58:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:58:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 08950afa-5dd3-4640-a5d5-94ec80290ae3 does not exist
Oct  3 10:58:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0444e30a-478d-41db-b735-cd501ed3a667 does not exist
Oct  3 10:58:53 compute-0 nova_compute[351685]: 2025-10-03 10:58:53.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:53 compute-0 nova_compute[351685]: 2025-10-03 10:58:53.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:58:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:58:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560568641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:58:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:58:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560568641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:58:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:58:54 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:58:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
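
Each pg_autoscaler pass above applies the same arithmetic: the pool's fraction of raw space ("using X of space"), times its bias, times a constant factor of 300, which is consistent with mon_target_pg_per_osd=100 across 3 OSDs in this ~60 GiB cluster (both inferred, not logged directly); the result is then rounded to a power of two and clamped to the pool's minimum PG count. A short sketch that reproduces the logged numbers under those assumptions (pg_num_min taken as 32 by default, 16 for the CephFS metadata pool, 1 for .mgr):

    # Reproduce the pg_autoscaler lines above. Assumed parameters (inferred
    # from the constant x300 factor in the log): mon_target_pg_per_osd=100,
    # 3 OSDs; the pg_num_min values are per-pool assumptions.
    def pg_target(usage_ratio, bias, pg_num_min=32,
                  target_pg_per_osd=100, n_osds=3):
        raw = usage_ratio * bias * target_pg_per_osd * n_osds
        pow2 = 1
        while pow2 < raw:            # quantize up to a power of two
            pow2 *= 2
        return raw, max(pg_num_min, pow2)

    print(pg_target(0.000551649390343166, 1.0))                  # 'vms':  (0.16549..., 32)
    print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # meta:   (0.00061..., 16)
    print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # '.mgr': (0.00215..., 1)
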
Oct  3 10:58:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
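
The recurring _set_new_cache_sizes line is the monitor's memory autotuner re-splitting its roughly 0.95 GiB cache budget between incremental osdmaps, full osdmaps, and the RocksDB cache; the three allocations sum to just under the reported cache_size. A quick arithmetic check of the logged values:

    # Sanity-check the monitor cache split from the log line above.
    cache_size, inc, full, kv = 1020054731, 348127232, 348127232, 318767104
    total = inc + full + kv
    print(total, "<=", cache_size, total <= cache_size)          # 1015021568 <= 1020054731 True
    print(f"{inc/cache_size:.0%} inc / {full/cache_size:.0%} full / {kv/cache_size:.0%} kv")
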
Oct  3 10:58:58 compute-0 nova_compute[351685]: 2025-10-03 10:58:58.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:58:58 compute-0 nova_compute[351685]: 2025-10-03 10:58:58.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:58:59 compute-0 podman[157165]: time="2025-10-03T10:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:58:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:58:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9078 "" "Go-http-client/1.1"
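
These GET lines are a client (apparently the prometheus-podman-exporter, judging by the podman_exporter container configured later in this log with CONTAINER_HOST=unix:///run/podman/podman.sock) polling podman's libpod REST API over the unix socket; the `last=0` query parameter is what triggers the "overwriting `limit`" info message. A minimal sketch of the same container-list query, assuming access to /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (the libpod API is HTTP over UDS)."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])   # names of all containers, running or not
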
Oct  3 10:59:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:01 compute-0 openstack_network_exporter[367524]: ERROR   10:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:59:01 compute-0 openstack_network_exporter[367524]: ERROR   10:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:59:01 compute-0 openstack_network_exporter[367524]: ERROR   10:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:59:01 compute-0 openstack_network_exporter[367524]: ERROR   10:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:59:01 compute-0 openstack_network_exporter[367524]: ERROR   10:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
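
This block of ERROR lines repeats on a 30-second cycle (it recurs at 10:59:31 below): the exporter locates ovsdb-server and ovn-northd through their appctl control sockets, but ovn-northd does not run on a compute node, and the dpif-netdev queries require a userspace datapath that this host (kernel datapath) does not have, so those collectors fail every scrape while the rest of the exporter keeps working. A sketch of the pre-flight check it is effectively doing, assuming the standard socket directories are mounted into the container:

    import glob

    # ovs-appctl/ovn-appctl find their targets via *.ctl sockets in the run dirs.
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")
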
Oct  3 10:59:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:03 compute-0 nova_compute[351685]: 2025-10-03 10:59:03.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:03 compute-0 nova_compute[351685]: 2025-10-03 10:59:03.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:03 compute-0 podman[492440]: 2025-10-03 10:59:03.87646308 +0000 UTC m=+0.104570838 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 10:59:03 compute-0 podman[492441]: 2025-10-03 10:59:03.8798788 +0000 UTC m=+0.116868533 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 10:59:03 compute-0 podman[492439]: 2025-10-03 10:59:03.914359866 +0000 UTC m=+0.157986222 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Oct  3 10:59:03 compute-0 podman[492442]: 2025-10-03 10:59:03.941223419 +0000 UTC m=+0.180087252 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 10:59:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:05 compute-0 podman[492517]: 2025-10-03 10:59:05.82419982 +0000 UTC m=+0.082454937 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 10:59:05 compute-0 podman[492518]: 2025-10-03 10:59:05.848047236 +0000 UTC m=+0.092879163 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd)
Oct  3 10:59:05 compute-0 podman[492519]: 2025-10-03 10:59:05.857594072 +0000 UTC m=+0.104387911 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
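
Each container health_status line above is the result of a podman healthcheck run: the configured test (the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/<name>) executes inside the container, and a passing run keeps health_failing_streak at 0. The same check can be triggered by hand; a sketch, assuming root access to the podman instance that owns these containers:

    import subprocess

    # Run one healthcheck pass and read back the recorded state for a few of
    # the containers named in the log above.
    for name in ("ovn_controller", "ovn_metadata_agent", "multipathd"):
        subprocess.run(["podman", "healthcheck", "run", name], check=True)
        status = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(name, status)   # expect 'healthy', matching health_status above
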
Oct  3 10:59:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:08 compute-0 nova_compute[351685]: 2025-10-03 10:59:08.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:08 compute-0 nova_compute[351685]: 2025-10-03 10:59:08.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:13 compute-0 nova_compute[351685]: 2025-10-03 10:59:13.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:13 compute-0 nova_compute[351685]: 2025-10-03 10:59:13.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:14 compute-0 podman[492580]: 2025-10-03 10:59:14.762327827 +0000 UTC m=+0.072413355 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Oct  3 10:59:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2757: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:14 compute-0 podman[492578]: 2025-10-03 10:59:14.779115236 +0000 UTC m=+0.098645317 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 10:59:14 compute-0 podman[492579]: 2025-10-03 10:59:14.792592609 +0000 UTC m=+0.101507949 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 10:59:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:59:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:59:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:59:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:59:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:59:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:59:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:18 compute-0 nova_compute[351685]: 2025-10-03 10:59:18.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:18 compute-0 nova_compute[351685]: 2025-10-03 10:59:18.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:19 compute-0 nova_compute[351685]: 2025-10-03 10:59:19.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:59:19 compute-0 nova_compute[351685]: 2025-10-03 10:59:19.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 10:59:19 compute-0 nova_compute[351685]: 2025-10-03 10:59:19.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 10:59:20 compute-0 nova_compute[351685]: 2025-10-03 10:59:20.256 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 10:59:20 compute-0 nova_compute[351685]: 2025-10-03 10:59:20.257 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 10:59:20 compute-0 nova_compute[351685]: 2025-10-03 10:59:20.257 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 10:59:20 compute-0 nova_compute[351685]: 2025-10-03 10:59:20.257 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 10:59:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:21 compute-0 nova_compute[351685]: 2025-10-03 10:59:21.540 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 10:59:21 compute-0 nova_compute[351685]: 2025-10-03 10:59:21.554 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 10:59:21 compute-0 nova_compute[351685]: 2025-10-03 10:59:21.555 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
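
The 10:59:19-10:59:21 block is one complete pass of nova-compute's _heal_instance_info_cache periodic task: pick an instance to heal (b43db93c-...), take its refresh_cache lock, rebuild network_info from Neutron, write the refreshed cache back, release the lock. These are ordinary oslo.service periodic tasks; a minimal sketch of the pattern, assuming a 60-second spacing (the log shows roughly one-minute gaps between periodic-task batches):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # run_immediately=True so the stub fires on the first invocation too
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            # nova's real task refreshes one instance's info cache per pass;
            # this stub only marks where that work happens
            print("healing instance info cache")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)   # the service loop calls this repeatedly
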
Oct  3 10:59:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:23 compute-0 nova_compute[351685]: 2025-10-03 10:59:23.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:23 compute-0 nova_compute[351685]: 2025-10-03 10:59:23.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:26 compute-0 nova_compute[351685]: 2025-10-03 10:59:26.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:59:26 compute-0 nova_compute[351685]: 2025-10-03 10:59:26.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:59:26 compute-0 nova_compute[351685]: 2025-10-03 10:59:26.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:59:26 compute-0 nova_compute[351685]: 2025-10-03 10:59:26.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:59:26 compute-0 nova_compute[351685]: 2025-10-03 10:59:26.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 10:59:26 compute-0 nova_compute[351685]: 2025-10-03 10:59:26.756 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 10:59:26 compute-0 nova_compute[351685]: 2025-10-03 10:59:26.757 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:59:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:59:27 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300817809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:59:27 compute-0 nova_compute[351685]: 2025-10-03 10:59:27.219 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:59:27 compute-0 nova_compute[351685]: 2025-10-03 10:59:27.476 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:59:27 compute-0 nova_compute[351685]: 2025-10-03 10:59:27.476 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 10:59:27 compute-0 nova_compute[351685]: 2025-10-03 10:59:27.477 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
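
The repeated "skipping disk for instance-00000001" DEBUG lines are expected on an RBD-backed host: the libvirt driver walks each <disk> element of the domain XML, and network disks (protocol='rbd') carry no local file path, so local disk accounting skips them. A toy illustration of that distinction, using a representative (not actual) XML snippet:

    import xml.etree.ElementTree as ET

    # Hypothetical domain fragment: one RBD-backed disk, one file-backed disk.
    domain_xml = """
    <devices>
      <disk type='network' device='disk'>
        <source protocol='rbd' name='vms/b43db93c-a4fe-46e9-8418-eedf4f5c135a_disk'/>
      </disk>
      <disk type='file' device='disk'>
        <source file='/var/lib/nova/instances/x/disk'/>
      </disk>
    </devices>"""

    for disk in ET.fromstring(domain_xml).iter("disk"):
        path = disk.find("source").get("file")
        # RBD disks have no 'file' attribute, so they are skipped, as logged
        print("skip" if path is None else f"account {path}")
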
Oct  3 10:59:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:27 compute-0 nova_compute[351685]: 2025-10-03 10:59:27.864 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 10:59:27 compute-0 nova_compute[351685]: 2025-10-03 10:59:27.865 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3790MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 10:59:27 compute-0 nova_compute[351685]: 2025-10-03 10:59:27.866 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 10:59:27 compute-0 nova_compute[351685]: 2025-10-03 10:59:27.866 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.203 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.203 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.204 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.411 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 10:59:28 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3917107462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.896 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.908 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.949 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.951 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 10:59:28 compute-0 nova_compute[351685]: 2025-10-03 10:59:28.951 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
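
This update_available_resource pass ties the numbers together: `ceph df` supplies the RBD pool capacity behind free_disk=59.96GB (the 60 GiB avail in the pgmap lines), and the inventory pushed to placement carries the totals, reservations, and allocation ratios from the log. Worked arithmetic for the schedulable capacity placement derives from that inventory, using capacity = (total - reserved) * allocation_ratio:

    # Effective capacity per resource class, from the inventory line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

With one instance consuming {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}, that leaves ample headroom, which is why the inventory is reported unchanged.
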
Oct  3 10:59:29 compute-0 podman[157165]: time="2025-10-03T10:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:59:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:59:29 compute-0 podman[157165]: @ - - [03/Oct/2025:10:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9084 "" "Go-http-client/1.1"
Oct  3 10:59:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:30 compute-0 nova_compute[351685]: 2025-10-03 10:59:30.953 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:59:30 compute-0 nova_compute[351685]: 2025-10-03 10:59:30.954 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:59:30 compute-0 nova_compute[351685]: 2025-10-03 10:59:30.954 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
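
The "CONF.reclaim_instance_interval <= 0, skipping" line means soft-delete reclaim is disabled: with the option at its default of 0, deleted instances are removed immediately rather than soft-deleted and reclaimed later, so the periodic task has nothing to do. A sketch of the option check, assuming nova's default value:

    from oslo_config import cfg

    opts = [cfg.IntOpt("reclaim_instance_interval", default=0)]
    conf = cfg.ConfigOpts()
    conf.register_opts(opts)
    conf([])  # parse an empty command line so defaults take effect
    print(conf.reclaim_instance_interval <= 0)  # True -> 'skipping...' as logged
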
Oct  3 10:59:31 compute-0 openstack_network_exporter[367524]: ERROR   10:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 10:59:31 compute-0 openstack_network_exporter[367524]: ERROR   10:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:59:31 compute-0 openstack_network_exporter[367524]: ERROR   10:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 10:59:31 compute-0 openstack_network_exporter[367524]: ERROR   10:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 10:59:31 compute-0 openstack_network_exporter[367524]: ERROR   10:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 10:59:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:33 compute-0 nova_compute[351685]: 2025-10-03 10:59:33.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:33 compute-0 nova_compute[351685]: 2025-10-03 10:59:33.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:59:33 compute-0 nova_compute[351685]: 2025-10-03 10:59:33.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:34 compute-0 nova_compute[351685]: 2025-10-03 10:59:34.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:59:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:34 compute-0 podman[492684]: 2025-10-03 10:59:34.824731978 +0000 UTC m=+0.083221642 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Oct  3 10:59:34 compute-0 podman[492685]: 2025-10-03 10:59:34.851332273 +0000 UTC m=+0.097687257 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 10:59:34 compute-0 podman[492691]: 2025-10-03 10:59:34.864530046 +0000 UTC m=+0.094186454 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Oct  3 10:59:34 compute-0 podman[492692]: 2025-10-03 10:59:34.919925304 +0000 UTC m=+0.152323860 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
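The three health_status events above are emitted each time podman executes the healthcheck configured for the container (the 'test': '/openstack/healthcheck' entry in config_data). A minimal sketch, assuming podman is available on the host and these container names exist, of invoking the same checks by hand:

import subprocess

# Hypothetical helper (not part of the deployment tooling): re-runs the
# same check podman performs for the health_status events above.
# `podman healthcheck run NAME` exits 0 when the container's configured
# test passes.
def run_healthchecks(names):
    results = {}
    for name in names:
        proc = subprocess.run(["podman", "healthcheck", "run", name],
                              capture_output=True, text=True)
        results[name] = "healthy" if proc.returncode == 0 else "unhealthy"
    return results

print(run_healthchecks(["ovn_metadata_agent", "ceilometer_agent_compute",
                        "ovn_controller"]))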
Oct  3 10:59:35 compute-0 nova_compute[351685]: 2025-10-03 10:59:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 10:59:35 compute-0 nova_compute[351685]: 2025-10-03 10:59:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
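These "Running periodic task" lines are produced by oslo.service's periodic_task framework, which nova's ComputeManager builds on. A minimal sketch, assuming oslo.service and oslo.config are installed (the task body here is illustrative, not nova's actual implementation):

from oslo_config import cfg
from oslo_service import periodic_task

# Toy analogue of ComputeManager's periodic tasks.
class Manager(periodic_task.PeriodicTasks):

    @periodic_task.periodic_task(spacing=60)
    def _poll_rescued_instances(self, context):
        print("checking for expired rescued instances")

mgr = Manager(cfg.CONF)
# oslo.service invokes this on a timer; each invocation produces the
# "Running periodic task ..." DEBUG lines seen above.
mgr.run_periodic_tasks(context=None)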
Oct  3 10:59:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
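The pgmap line reports 78 MiB of logical data against 264 MiB raw usage. A rough consistency check, assuming the default replicated pool size of 3 (the log itself does not state the pool size):

# 78 MiB of data replicated 3x accounts for most of the 264 MiB used;
# the remainder is per-OSD overhead.
data_mib = 78
replicas = 3
raw_mib = data_mib * replicas        # 234 MiB of replicated payload
print(264 - raw_mib, "MiB attributable to per-OSD overhead")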
Oct  3 10:59:36 compute-0 podman[492764]: 2025-10-03 10:59:36.830746299 +0000 UTC m=+0.077724416 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  3 10:59:36 compute-0 podman[492763]: 2025-10-03 10:59:36.847425825 +0000 UTC m=+0.101438448 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 10:59:36 compute-0 podman[492765]: 2025-10-03 10:59:36.849962446 +0000 UTC m=+0.095014681 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
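Each health_status record embeds the container's config_data dict, the same structure edpm_ansible feeds to podman. A hypothetical sketch (the real translation is done by edpm_ansible/kolla, not this code) of mapping the keys shown in these logs onto a podman run argument vector:

def config_to_podman_args(name, cfg):
    # Covers only the config_data keys visible in the log lines above.
    args = ["podman", "run", "--name", name, "--detach"]
    if cfg.get("net") == "host":
        args += ["--network", "host"]
    if cfg.get("privileged"):
        args.append("--privileged")
    if "user" in cfg:
        args += ["--user", cfg["user"]]
    for key, value in cfg.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    for volume in cfg.get("volumes", []):
        args += ["--volume", volume]
    args.append(cfg["image"])
    return args

example = {
    "image": "quay.io/prometheus/node-exporter:v1.5.0",
    "net": "host",
    "privileged": True,
    "user": "root",
    "environment": {"OS_ENDPOINT_TYPE": "internal"},
    "volumes": ["/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw"],
}
print(" ".join(config_to_podman_args("node_exporter", example)))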
Oct  3 10:59:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:38 compute-0 nova_compute[351685]: 2025-10-03 10:59:38.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:59:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:38 compute-0 nova_compute[351685]: 2025-10-03 10:59:38.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
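The [POLLIN] wakeups come from the OVS IDL noticing readable data on its OVSDB socket (fd 25 here). A minimal sketch of the same wait-and-wake pattern, assuming the python ovs bindings are installed; a socketpair stands in for the OVSDB connection:

import socket
import ovs.poller

a, b = socket.socketpair()
b.send(b"ping")                        # makes fd `a` readable

poller = ovs.poller.Poller()
poller.fd_wait(a.fileno(), ovs.poller.POLLIN)
poller.timer_wait(1000)                # 1 s timeout, like the IDL keepalive
poller.block()                         # returns once the fd is readable
print("POLLIN serviced:", a.recv(4))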
Oct  3 10:59:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.898 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.898 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.898 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
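The registration run above queues every pollster onto a ThreadPoolExecutor with a single worker, which is why the manager warned at the start of this cycle that polling will take longer. A stdlib-only sketch of the same pattern:

from concurrent.futures import ThreadPoolExecutor

# Many pollsters submitted to an executor with one worker run strictly
# one after another, stretching out the polling cycle.
def poll(meter):
    return f"polled {meter}"

meters = ["network.outgoing.packets.drop", "disk.device.capacity",
          "disk.device.read.bytes", "power.state"]

with ThreadPoolExecutor(max_workers=1) as executor:   # [1] thread, as logged
    for future in [executor.submit(poll, m) for m in meters]:
        print(future.result())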
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.905 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.905 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.905 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T10:59:40.905742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.909 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T10:59:40.910469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T10:59:40.911314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.931 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.931 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.932 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
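The three disk.device.capacity samples line up with the instance discovered earlier (flavor m1.small: disk=1, ephemeral=1, sizes in GiB). A short arithmetic check:

# Two 1073741824-byte devices match the 1 GiB root and 1 GiB ephemeral
# disks; the third, much smaller device is not named in the log
# (plausibly a config drive, an assumption).
gib = 1024 ** 3
assert 1073741824 == 1 * gib
print(485376 / 1024, "KiB")   # 474 KiB third device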
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.932 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.932 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T10:59:40.932920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.975 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.975 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.976 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.976 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.977 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.977 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.977 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.977 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.977 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T10:59:40.977524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
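The disk.device.*.latency meters are cumulative totals in nanoseconds, so a per-request average needs two consecutive polls: delta latency divided by delta requests. A sketch, where only the first sample comes from the log and the second is hypothetical:

prev = {"latency_ns": 1351272306, "requests": 840}
curr = {"latency_ns": 1360272306, "requests": 850}   # hypothetical next poll

d_ns = curr["latency_ns"] - prev["latency_ns"]
d_req = curr["requests"] - prev["requests"]
print(f"{d_ns / d_req / 1e6:.2f} ms per read")       # 0.90 ms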
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.979 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.979 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T10:59:40.979749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.981 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.981 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.981 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.982 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.982 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.982 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.983 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.983 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.983 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.983 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.984 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T10:59:40.981866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T10:59:40.983729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.985 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.986 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.986 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T10:59:40.986145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.986 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.986 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.987 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.987 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.988 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:40.988 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T10:59:40.988301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
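The power.state sample carries the libvirt domain state code, so volume: 1 above reports a running domain. A small lookup sketch following libvirt's virDomainState enum:

VIR_DOMAIN_STATE = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}
print(VIR_DOMAIN_STATE[1])   # -> running, matching OS-EXT-STS:vm_state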
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.006 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.007 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T10:59:41.006760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T10:59:41.008062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.009 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T10:59:41.009559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T10:59:41.010713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.011 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T10:59:41.011598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.012 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T10:59:41.012723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.013 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T10:59:41.013781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.014 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T10:59:41.014852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 81810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T10:59:41.015893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.016 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T10:59:41.017124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.018 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T10:59:41.018689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.020 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T10:59:41.019801) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.023 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T10:59:41.021385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T10:59:41.022560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.024 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 10:59:41.025 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 10:59:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:59:41.665 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 10:59:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:59:41.667 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 10:59:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 10:59:41.667 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 10:59:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:43 compute-0 nova_compute[351685]: 2025-10-03 10:59:43.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:59:43 compute-0 nova_compute[351685]: 2025-10-03 10:59:43.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:59:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:45 compute-0 podman[492824]: 2025-10-03 10:59:45.836803007 +0000 UTC m=+0.088668348 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm)
Oct  3 10:59:45 compute-0 podman[492822]: 2025-10-03 10:59:45.850797766 +0000 UTC m=+0.113444073 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 10:59:45 compute-0 podman[492823]: 2025-10-03 10:59:45.859766694 +0000 UTC m=+0.104903138 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_10:59:46
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images']
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:59:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 10:59:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:48 compute-0 nova_compute[351685]: 2025-10-03 10:59:48.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:59:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:48 compute-0 nova_compute[351685]: 2025-10-03 10:59:48.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:59:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:53 compute-0 nova_compute[351685]: 2025-10-03 10:59:53.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:59:53 compute-0 nova_compute[351685]: 2025-10-03 10:59:53.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 10:59:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 10:59:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/515721573' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 10:59:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 10:59:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/515721573' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 10:59:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:59:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:59:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 10:59:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:59:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 10:59:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:59:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2158e4fa-7212-4b29-bfbc-a3df59418d05 does not exist
Oct  3 10:59:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a1631303-d7fa-45d4-8d11-3d09487be0dc does not exist
Oct  3 10:59:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 10878818-d7f7-41a4-bcb0-efe072664258 does not exist
Oct  3 10:59:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 10:59:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 10:59:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 10:59:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:59:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 10:59:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 10:59:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 10:59:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 10:59:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 10:59:55 compute-0 podman[493151]: 2025-10-03 10:59:55.549179316 +0000 UTC m=+0.058702986 container create 11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:59:55 compute-0 podman[493151]: 2025-10-03 10:59:55.528387128 +0000 UTC m=+0.037910858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:59:55 compute-0 systemd[1]: Started libpod-conmon-11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0.scope.
Oct  3 10:59:55 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:59:55 compute-0 podman[493151]: 2025-10-03 10:59:55.764644092 +0000 UTC m=+0.274167812 container init 11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elbakyan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 10:59:55 compute-0 podman[493151]: 2025-10-03 10:59:55.782865627 +0000 UTC m=+0.292389277 container start 11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elbakyan, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:59:55 compute-0 goofy_elbakyan[493166]: 167 167
Oct  3 10:59:55 compute-0 systemd[1]: libpod-11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0.scope: Deactivated successfully.
Oct  3 10:59:55 compute-0 conmon[493166]: conmon 11a75daf52862119eeac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0.scope/container/memory.events
Oct  3 10:59:55 compute-0 podman[493151]: 2025-10-03 10:59:55.82221825 +0000 UTC m=+0.331741920 container attach 11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:59:55 compute-0 podman[493151]: 2025-10-03 10:59:55.823764379 +0000 UTC m=+0.333288059 container died 11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elbakyan, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 10:59:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c37d71d0d2f813966981fd5b00551321d17876873b81ee94d301921acf965e5b-merged.mount: Deactivated successfully.
Oct  3 10:59:55 compute-0 podman[493151]: 2025-10-03 10:59:55.977615598 +0000 UTC m=+0.487139288 container remove 11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 10:59:56 compute-0 systemd[1]: libpod-conmon-11a75daf52862119eeac3570564a721e0e62a56a6ba222cf678d035a49c809f0.scope: Deactivated successfully.
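
[annotation] The six podman events above (create, init, start, attach, died, remove) trace a single short-lived probe container, goofy_elbakyan, from creation at m=+0.058 to removal at m=+0.487, i.e. under half a second of wall time; cephadm routinely launches such one-shot containers against the ceph image. A minimal Python sketch for pulling that timing out of journal lines of this shape (the regex encodes the field layout seen here, not any podman-defined schema):

    import re

    # Matches podman journal lines such as:
    #   podman[493151]: 2025-10-03 10:59:55.549179316 +0000 UTC m=+0.058702986
    #   container create 11a75daf5286... (image=..., name=goofy_elbakyan, ...)
    EVENT_RE = re.compile(
        r"podman\[\d+\]: \S+ \S+ \S+ UTC m=\+(?P<mono>[\d.]+) "
        r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def lifecycle(lines):
        """Map container id -> {event: seconds since that podman process started}."""
        events = {}
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                events.setdefault(m["cid"], {})[m["event"]] = float(m["mono"])
        return events

    # events = lifecycle(open("/var/log/messages"))
    # events[cid]["remove"] - events[cid]["create"] gives the container
    # lifetime; valid here because all six events come from one podman run.
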
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
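
[annotation] The ten pg_autoscaler evaluations above all follow the same arithmetic: pg target = (fraction of space used) x bias x 300, where the 300 is consistent with the default mon_target_pg_per_osd = 100 times the 3 OSDs backing this 60 GiB cluster (an inference from the numbers, not something the log states). The result is then quantized to a power of two, floored at the pool's pg_num_min (16 for the cephfs metadata pool), and only applied when it drifts far enough from the current value, which is why every pool keeps its current pg count here. A quick check against the logged figures:

    # Reproduces the "pg target" values logged by the pg_autoscaler above.
    # 300 = 3 OSDs * mon_target_pg_per_osd (default 100) -- an assumption
    # inferred from the logged numbers.
    TARGET_PGS = 3 * 100

    pools = {                      # name: (fraction of space used, bias)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.000551649390343166,  1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }

    for name, (usage, bias) in pools.items():
        print(name, usage * bias * TARGET_PGS)
    # .mgr               -> 0.0021557249951162337  (matches the log)
    # vms                -> 0.1654948171029498     (matches the log)
    # cephfs.cephfs.meta -> 0.0006104707950771635  (matches the log)
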
Oct  3 10:59:56 compute-0 podman[493188]: 2025-10-03 10:59:56.230905478 +0000 UTC m=+0.047407942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 10:59:56 compute-0 podman[493188]: 2025-10-03 10:59:56.628398517 +0000 UTC m=+0.444900951 container create d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 10:59:56 compute-0 nova_compute[351685]: 2025-10-03 10:59:56.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 10:59:56 compute-0 systemd[1]: Started libpod-conmon-d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd.scope.
Oct  3 10:59:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 10:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8410fc5574ff810f20cbedf00bbc5fa90eb1351272e127f493b8ed71d56c8e9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 10:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8410fc5574ff810f20cbedf00bbc5fa90eb1351272e127f493b8ed71d56c8e9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 10:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8410fc5574ff810f20cbedf00bbc5fa90eb1351272e127f493b8ed71d56c8e9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 10:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8410fc5574ff810f20cbedf00bbc5fa90eb1351272e127f493b8ed71d56c8e9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 10:59:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8410fc5574ff810f20cbedf00bbc5fa90eb1351272e127f493b8ed71d56c8e9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 10:59:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:56 compute-0 podman[493188]: 2025-10-03 10:59:56.894046484 +0000 UTC m=+0.710549018 container init d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 10:59:56 compute-0 podman[493188]: 2025-10-03 10:59:56.902969301 +0000 UTC m=+0.719471745 container start d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:59:56 compute-0 podman[493188]: 2025-10-03 10:59:56.996657207 +0000 UTC m=+0.813159651 container attach d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 10:59:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 10:59:58 compute-0 quizzical_carver[493203]: --> passed data devices: 0 physical, 3 LVM
Oct  3 10:59:58 compute-0 quizzical_carver[493203]: --> relative data size: 1.0
Oct  3 10:59:58 compute-0 quizzical_carver[493203]: --> All data devices are unavailable
Oct  3 10:59:58 compute-0 systemd[1]: libpod-d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd.scope: Deactivated successfully.
Oct  3 10:59:58 compute-0 systemd[1]: libpod-d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd.scope: Consumed 1.212s CPU time.
Oct  3 10:59:58 compute-0 podman[493188]: 2025-10-03 10:59:58.191452769 +0000 UTC m=+2.007955243 container died d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 10:59:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8410fc5574ff810f20cbedf00bbc5fa90eb1351272e127f493b8ed71d56c8e9c-merged.mount: Deactivated successfully.
Oct  3 10:59:58 compute-0 nova_compute[351685]: 2025-10-03 10:59:58.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 10:59:58 compute-0 podman[493188]: 2025-10-03 10:59:58.8416698 +0000 UTC m=+2.658172254 container remove d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 10:59:58 compute-0 systemd[1]: libpod-conmon-d2e893e8ea3fd8a2bf2554cb52e1b74f3a2c26ebb09961210feb52ba5a6207fd.scope: Deactivated successfully.
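
[annotation] The quizzical_carver run above ("passed data devices: 0 physical, 3 LVM ... All data devices are unavailable") has the shape of cephadm's recurring ceph-volume batch report: the three LVM data devices in the default_drive_group spec are already consumed by existing OSDs, so there is nothing new to deploy and the probe exits after about a second. Whether an LV is already claimed is recorded in LVM tags, which can be checked on the host without entering a container; a sketch, assuming the lvm2 tools are installed:

    import json, subprocess

    # ceph-volume marks the LVs it has prepared with ceph.* LVM tags
    # (the full tag set is dumped by the lvm-list container further below).
    report = json.loads(subprocess.check_output(
        ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"]
    ))
    for lv in report["report"][0]["lv"]:
        if "ceph.osd_id=" in lv["lv_tags"]:
            print(f"{lv['vg_name']}/{lv['lv_name']} is claimed: {lv['lv_tags']}")
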
Oct  3 10:59:58 compute-0 nova_compute[351685]: 2025-10-03 10:59:58.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 10:59:59 compute-0 podman[157165]: time="2025-10-03T10:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 10:59:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 10:59:59 compute-0 podman[157165]: @ - - [03/Oct/2025:10:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9086 "" "Go-http-client/1.1"
Oct  3 11:00:00 compute-0 podman[493379]: 2025-10-03 10:59:59.950172301 +0000 UTC m=+0.050335427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:00:00 compute-0 podman[493379]: 2025-10-03 11:00:00.063592042 +0000 UTC m=+0.163755188 container create 4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 11:00:00 compute-0 systemd[1]: Started libpod-conmon-4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52.scope.
Oct  3 11:00:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:00:00 compute-0 podman[493379]: 2025-10-03 11:00:00.335286833 +0000 UTC m=+0.435449989 container init 4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:00:00 compute-0 podman[493379]: 2025-10-03 11:00:00.344598302 +0000 UTC m=+0.444761398 container start 4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 11:00:00 compute-0 tender_mclean[493395]: 167 167
Oct  3 11:00:00 compute-0 systemd[1]: libpod-4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52.scope: Deactivated successfully.
Oct  3 11:00:00 compute-0 podman[493379]: 2025-10-03 11:00:00.549872131 +0000 UTC m=+0.650035237 container attach 4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:00:00 compute-0 podman[493379]: 2025-10-03 11:00:00.550391198 +0000 UTC m=+0.650554324 container died 4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:00:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8436cb635773f85ec3aede2fd1881c836bc40ddae71871fc569c69f2ca7ef5f4-merged.mount: Deactivated successfully.
Oct  3 11:00:01 compute-0 podman[493379]: 2025-10-03 11:00:01.260776659 +0000 UTC m=+1.360939765 container remove 4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:00:01 compute-0 systemd[1]: libpod-conmon-4562f0bb53cddb6d5aa13d809bfd73c32d53b20491b79c4ab103d241eca1ea52.scope: Deactivated successfully.
Oct  3 11:00:01 compute-0 openstack_network_exporter[367524]: ERROR   11:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:00:01 compute-0 openstack_network_exporter[367524]: ERROR   11:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:00:01 compute-0 openstack_network_exporter[367524]: ERROR   11:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:00:01 compute-0 openstack_network_exporter[367524]: ERROR   11:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:00:01 compute-0 openstack_network_exporter[367524]: ERROR   11:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
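
[annotation] The openstack_network_exporter errors recur on a fixed interval: it cannot find the ovs-appctl control sockets for ovsdb-server, and the two ovn-northd lookups are expected to fail on a compute node, where no northd runs. ovsdb-server and ovs-vswitchd create *.ctl control sockets under their run directory, which the exporter reaches through the volume mounts visible in its config_data further below; a quick existence check on the host (the path is the conventional OVS run directory, an assumption here):

    import glob

    # ovs-appctl-style calls need the daemon's *.ctl control socket.
    sockets = glob.glob("/var/run/openvswitch/*.ctl")
    print("OVS control sockets:", sockets if sockets else
          "none found - OVS not running, or using a different run directory")
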
Oct  3 11:00:01 compute-0 podman[493418]: 2025-10-03 11:00:01.535509788 +0000 UTC m=+0.093031618 container create 1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:00:01 compute-0 podman[493418]: 2025-10-03 11:00:01.483154018 +0000 UTC m=+0.040675828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:00:01 compute-0 systemd[1]: Started libpod-conmon-1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a.scope.
Oct  3 11:00:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee829a47e90cdece05f064b6d86e1e56a5d4cfe8b991e11f1a69f850c1c7fa6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee829a47e90cdece05f064b6d86e1e56a5d4cfe8b991e11f1a69f850c1c7fa6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee829a47e90cdece05f064b6d86e1e56a5d4cfe8b991e11f1a69f850c1c7fa6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee829a47e90cdece05f064b6d86e1e56a5d4cfe8b991e11f1a69f850c1c7fa6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:00:01 compute-0 podman[493418]: 2025-10-03 11:00:01.921109854 +0000 UTC m=+0.478631664 container init 1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:00:01 compute-0 podman[493418]: 2025-10-03 11:00:01.937491871 +0000 UTC m=+0.495013651 container start 1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:00:02 compute-0 podman[493418]: 2025-10-03 11:00:02.028642906 +0000 UTC m=+0.586164806 container attach 1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:00:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]: {
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:    "0": [
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:        {
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "devices": [
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "/dev/loop3"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            ],
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_name": "ceph_lv0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_size": "21470642176",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "name": "ceph_lv0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "tags": {
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cluster_name": "ceph",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.crush_device_class": "",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.encrypted": "0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osd_id": "0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.type": "block",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.vdo": "0"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            },
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "type": "block",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "vg_name": "ceph_vg0"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:        }
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:    ],
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:    "1": [
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:        {
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "devices": [
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "/dev/loop4"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            ],
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_name": "ceph_lv1",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_size": "21470642176",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "name": "ceph_lv1",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "tags": {
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cluster_name": "ceph",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.crush_device_class": "",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.encrypted": "0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osd_id": "1",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.type": "block",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.vdo": "0"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            },
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "type": "block",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "vg_name": "ceph_vg1"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:        }
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:    ],
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:    "2": [
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:        {
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "devices": [
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "/dev/loop5"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            ],
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_name": "ceph_lv2",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_size": "21470642176",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "name": "ceph_lv2",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "tags": {
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.cluster_name": "ceph",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.crush_device_class": "",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.encrypted": "0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osd_id": "2",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.type": "block",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:                "ceph.vdo": "0"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            },
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "type": "block",
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:            "vg_name": "ceph_vg2"
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:        }
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]:    ]
Oct  3 11:00:02 compute-0 festive_dijkstra[493434]: }
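
[annotation] The JSON the festive_dijkstra container just printed has the shape of ceph-volume lvm list --format json output: OSDs 0-2 map to ceph_lv0-2 on /dev/loop3-5, each 21470642176 bytes (just under 20 GiB), which accounts for the 60 GiB cluster capacity seen in the pgmap lines. A sketch that condenses such a dump into one line per OSD (the capture filename is hypothetical):

    import json

    # data = the JSON object printed above, e.g. saved to a file while
    # reading the journal.
    data = json.load(open("ceph-volume-lvm-list.json"))
    for osd_id, lvs in sorted(data.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({','.join(lv['devices'])}, {int(lv['lv_size'])} bytes, "
                  f"fsid {lv['tags']['ceph.osd_fsid']})")
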
Oct  3 11:00:02 compute-0 systemd[1]: libpod-1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a.scope: Deactivated successfully.
Oct  3 11:00:02 compute-0 conmon[493434]: conmon 1824a375c667ecdbd0c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a.scope/container/memory.events
Oct  3 11:00:02 compute-0 podman[493418]: 2025-10-03 11:00:02.785038356 +0000 UTC m=+1.342560216 container died 1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 11:00:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee829a47e90cdece05f064b6d86e1e56a5d4cfe8b991e11f1a69f850c1c7fa6e-merged.mount: Deactivated successfully.
Oct  3 11:00:03 compute-0 podman[493418]: 2025-10-03 11:00:03.462050757 +0000 UTC m=+2.019572577 container remove 1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 11:00:03 compute-0 systemd[1]: libpod-conmon-1824a375c667ecdbd0c2e485e691a71eee1a4d891b92e3c907437a8e01fcbe1a.scope: Deactivated successfully.
Oct  3 11:00:03 compute-0 nova_compute[351685]: 2025-10-03 11:00:03.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:03 compute-0 nova_compute[351685]: 2025-10-03 11:00:03.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:04 compute-0 podman[493594]: 2025-10-03 11:00:04.461750976 +0000 UTC m=+0.026715138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:00:04 compute-0 podman[493594]: 2025-10-03 11:00:04.634935175 +0000 UTC m=+0.199899257 container create ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:00:04 compute-0 systemd[1]: Started libpod-conmon-ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe.scope.
Oct  3 11:00:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:00:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:04 compute-0 podman[493594]: 2025-10-03 11:00:04.960143544 +0000 UTC m=+0.525107666 container init ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:00:04 compute-0 podman[493594]: 2025-10-03 11:00:04.970781025 +0000 UTC m=+0.535745107 container start ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilbur, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 11:00:04 compute-0 podman[493594]: 2025-10-03 11:00:04.976682585 +0000 UTC m=+0.541646687 container attach ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilbur, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:00:04 compute-0 stoic_wilbur[493610]: 167 167
Oct  3 11:00:04 compute-0 systemd[1]: libpod-ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe.scope: Deactivated successfully.
Oct  3 11:00:04 compute-0 podman[493594]: 2025-10-03 11:00:04.983418042 +0000 UTC m=+0.548382164 container died ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:00:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e531400d2be9365b6676081db06da1769adee4fb4908d77b3f3f7953143deb3-merged.mount: Deactivated successfully.
Oct  3 11:00:05 compute-0 podman[493594]: 2025-10-03 11:00:05.054766461 +0000 UTC m=+0.619730543 container remove ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:00:05 compute-0 systemd[1]: libpod-conmon-ba00ce70e3d1e37a7906b2f75f32358febd4f4ceaddc3ecb5190389966b03ebe.scope: Deactivated successfully.
Oct  3 11:00:05 compute-0 podman[493626]: 2025-10-03 11:00:05.135892535 +0000 UTC m=+0.102994107 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 11:00:05 compute-0 podman[493624]: 2025-10-03 11:00:05.141758574 +0000 UTC m=+0.119304401 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:00:05 compute-0 podman[493616]: 2025-10-03 11:00:05.150747302 +0000 UTC m=+0.127633448 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9)
Oct  3 11:00:05 compute-0 podman[493627]: 2025-10-03 11:00:05.163723039 +0000 UTC m=+0.126971226 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 11:00:05 compute-0 podman[493710]: 2025-10-03 11:00:05.291709237 +0000 UTC m=+0.098400679 container create b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 11:00:05 compute-0 podman[493710]: 2025-10-03 11:00:05.220354347 +0000 UTC m=+0.027045769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:00:05 compute-0 systemd[1]: Started libpod-conmon-b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7.scope.
Oct  3 11:00:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da7308c9b1229450e979c45e51ab0c08366633f940777b9ad798fcd81a081a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da7308c9b1229450e979c45e51ab0c08366633f940777b9ad798fcd81a081a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da7308c9b1229450e979c45e51ab0c08366633f940777b9ad798fcd81a081a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76da7308c9b1229450e979c45e51ab0c08366633f940777b9ad798fcd81a081a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:00:05 compute-0 podman[493710]: 2025-10-03 11:00:05.656163785 +0000 UTC m=+0.462855257 container init b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 11:00:05 compute-0 podman[493710]: 2025-10-03 11:00:05.673056678 +0000 UTC m=+0.479748080 container start b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:00:05 compute-0 podman[493710]: 2025-10-03 11:00:05.677924525 +0000 UTC m=+0.484616007 container attach b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curie, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:00:06 compute-0 beautiful_curie[493726]: {
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "osd_id": 1,
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "type": "bluestore"
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:    },
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "osd_id": 2,
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "type": "bluestore"
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:    },
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "osd_id": 0,
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:        "type": "bluestore"
Oct  3 11:00:06 compute-0 beautiful_curie[493726]:    }
Oct  3 11:00:06 compute-0 beautiful_curie[493726]: }
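The JSON block above maps each osd_uuid to its ceph_fsid, backing logical volume, osd_id, and object-store type; the shape matches what `ceph-volume lvm list --format json` prints, which is presumably what cephadm ran inside the short-lived ceph container (the log never shows the exact command line, so treat that as an assumption). A minimal Python sketch for turning the blob into an id-to-device table, assuming it has been saved to osd_inventory.json (hypothetical filename):

import json

# Load the inventory emitted by the one-shot ceph container (hypothetical file).
with open("osd_inventory.json") as f:
    inventory = json.load(f)

# Top-level keys are osd_uuids; each value describes one OSD.
for osd_uuid, osd in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{osd['osd_id']}: device={osd['device']} "
          f"type={osd['type']} fsid={osd['ceph_fsid']}")

# Expected output for the three OSDs above:
# osd.0: device=/dev/mapper/ceph_vg0-ceph_lv0 type=bluestore fsid=9b4e8c9a-5555-5510-a631-4742a1182561
# osd.1: device=/dev/mapper/ceph_vg1-ceph_lv1 type=bluestore fsid=9b4e8c9a-5555-5510-a631-4742a1182561
# osd.2: device=/dev/mapper/ceph_vg2-ceph_lv2 type=bluestore fsid=9b4e8c9a-5555-5510-a631-4742a1182561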
Oct  3 11:00:06 compute-0 systemd[1]: libpod-b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7.scope: Deactivated successfully.
Oct  3 11:00:06 compute-0 systemd[1]: libpod-b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7.scope: Consumed 1.040s CPU time.
Oct  3 11:00:06 compute-0 podman[493759]: 2025-10-03 11:00:06.765549585 +0000 UTC m=+0.041422260 container died b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:00:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2783: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-76da7308c9b1229450e979c45e51ab0c08366633f940777b9ad798fcd81a081a-merged.mount: Deactivated successfully.
Oct  3 11:00:06 compute-0 podman[493759]: 2025-10-03 11:00:06.862464365 +0000 UTC m=+0.138337030 container remove b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct  3 11:00:06 compute-0 systemd[1]: libpod-conmon-b1197f7bfe102a8846a46e4af725ec3b7486a105a2d52a48ecde1a7e24af4dc7.scope: Deactivated successfully.
Oct  3 11:00:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:00:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:00:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:00:06 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:00:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b7d83aa9-1e6c-45c9-8b94-0a2b7453004f does not exist
Oct  3 11:00:06 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f9e04187-29b6-4b99-8a60-4df2b7664f05 does not exist
Oct  3 11:00:07 compute-0 podman[493773]: 2025-10-03 11:00:07.025901032 +0000 UTC m=+0.109548347 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:00:07 compute-0 podman[493774]: 2025-10-03 11:00:07.03829174 +0000 UTC m=+0.115883341 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:00:07 compute-0 podman[493775]: 2025-10-03 11:00:07.056032099 +0000 UTC m=+0.123985150 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:00:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:00:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:00:08 compute-0 nova_compute[351685]: 2025-10-03 11:00:08.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:08 compute-0 nova_compute[351685]: 2025-10-03 11:00:08.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:13 compute-0 nova_compute[351685]: 2025-10-03 11:00:13.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:13 compute-0 nova_compute[351685]: 2025-10-03 11:00:13.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:00:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:00:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2788: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:16 compute-0 podman[493886]: 2025-10-03 11:00:16.860534818 +0000 UTC m=+0.083817791 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 11:00:16 compute-0 podman[493884]: 2025-10-03 11:00:16.861619024 +0000 UTC m=+0.105079775 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:00:16 compute-0 podman[493885]: 2025-10-03 11:00:16.886511472 +0000 UTC m=+0.126980567 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, name=ubi9, release=1214.1726694543)
Oct  3 11:00:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:18 compute-0 nova_compute[351685]: 2025-10-03 11:00:18.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:18 compute-0 nova_compute[351685]: 2025-10-03 11:00:18.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:20 compute-0 nova_compute[351685]: 2025-10-03 11:00:20.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:20 compute-0 nova_compute[351685]: 2025-10-03 11:00:20.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:00:20 compute-0 nova_compute[351685]: 2025-10-03 11:00:20.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:00:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:22 compute-0 nova_compute[351685]: 2025-10-03 11:00:22.185 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:00:22 compute-0 nova_compute[351685]: 2025-10-03 11:00:22.185 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:00:22 compute-0 nova_compute[351685]: 2025-10-03 11:00:22.186 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:00:22 compute-0 nova_compute[351685]: 2025-10-03 11:00:22.186 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:00:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:23 compute-0 nova_compute[351685]: 2025-10-03 11:00:23.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:23 compute-0 nova_compute[351685]: 2025-10-03 11:00:23.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:25 compute-0 nova_compute[351685]: 2025-10-03 11:00:25.523 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:00:25 compute-0 nova_compute[351685]: 2025-10-03 11:00:25.559 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:00:25 compute-0 nova_compute[351685]: 2025-10-03 11:00:25.560 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
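The network_info payload nova just cached is a list with one entry per port: fixed IPs sit under network.subnets[].ips, and any floating IPs are nested inside each fixed address. A short sketch that flattens the structure into address pairs, assuming the logged list has been dumped to network_info.json (hypothetical filename):

import json

with open("network_info.json") as f:  # hypothetical dump of the logged list
    vifs = json.load(f)

for vif in vifs:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floating = [f["address"] for f in ip.get("floating_ips", [])]
            print(f"port={vif['id']} mac={vif['address']} "
                  f"fixed={ip['address']} floating={floating}")

# For the entry above:
# port=a8897fbc-9fd1-4981-b049-6e702bcb7e2d mac=fa:16:3e:a9:40:5c fixed=192.168.0.158 floating=['192.168.122.250']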
Oct  3 11:00:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:27 compute-0 nova_compute[351685]: 2025-10-03 11:00:27.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:27 compute-0 nova_compute[351685]: 2025-10-03 11:00:27.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:27 compute-0 nova_compute[351685]: 2025-10-03 11:00:27.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:00:27 compute-0 nova_compute[351685]: 2025-10-03 11:00:27.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:00:27 compute-0 nova_compute[351685]: 2025-10-03 11:00:27.769 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:00:27 compute-0 nova_compute[351685]: 2025-10-03 11:00:27.769 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:00:27 compute-0 nova_compute[351685]: 2025-10-03 11:00:27.770 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:00:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:00:28 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025722476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.285 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
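The resource audit gathers storage capacity by shelling out to the exact command shown in the log, `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, then parsing the JSON reply. A hedged reproduction using only the standard library; the stats.total_bytes / stats.total_avail_bytes field names follow the reef-era ceph df JSON layout, so verify them against your cluster:

import json
import subprocess

cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
df = json.loads(out)

stats = df["stats"]  # cluster-wide totals
gib = 1024 ** 3
print(f"total={stats['total_bytes'] / gib:.1f} GiB "
      f"avail={stats['total_avail_bytes'] / gib:.1f} GiB")
# Should agree with the mgr pgmap lines: 60 GiB / 60 GiB avail.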
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.401 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.401 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.402 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.934 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.935 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3794MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.935 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:00:28 compute-0 nova_compute[351685]: 2025-10-03 11:00:28.936 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.002 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.003 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.003 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.040 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:00:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:00:29 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/952782673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.541 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.554 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.573 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
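Placement derives allocatable capacity from that inventory as (total - reserved) * allocation_ratio. Plugging in the logged numbers (plain arithmetic on the values above, not nova code):

# Allocatable capacity per resource class, using the inventory logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    allocatable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {allocatable:g} allocatable")

# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2 -- consistent with the resource
# view above, where one instance (1 VCPU / 512 MB / 2 GB) leaves 7 of the 8
# physical vcpus free before the 4.0 ratio is applied.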
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.577 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:00:29 compute-0 nova_compute[351685]: 2025-10-03 11:00:29.577 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:00:29 compute-0 podman[157165]: time="2025-10-03T11:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:00:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:00:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9090 "" "Go-http-client/1.1"
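The two GET lines above are the podman_exporter polling the libpod REST API through /run/podman/podman.sock, the socket its config_data mounts into the container. A minimal sketch of the same containers/json query; requests-unixsocket is an assumed third-party helper (any HTTP-over-unix-socket client would do):

import requests_unixsocket  # assumption: pip install requests-unixsocket

session = requests_unixsocket.Session()
sock = "%2Frun%2Fpodman%2Fpodman.sock"  # URL-encoded /run/podman/podman.sock
resp = session.get(f"http+unix://{sock}/v4.9.3/libpod/containers/json?all=true")
resp.raise_for_status()

for ctr in resp.json():
    # libpod returns Names as a list and State as a string.
    print(",".join(ctr.get("Names", [])), ctr.get("State"))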
Oct  3 11:00:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:31 compute-0 openstack_network_exporter[367524]: ERROR   11:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:00:31 compute-0 openstack_network_exporter[367524]: ERROR   11:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:00:31 compute-0 openstack_network_exporter[367524]: ERROR   11:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:00:31 compute-0 openstack_network_exporter[367524]: ERROR   11:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:00:31 compute-0 openstack_network_exporter[367524]: ERROR   11:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:00:32 compute-0 nova_compute[351685]: 2025-10-03 11:00:32.578 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:32 compute-0 nova_compute[351685]: 2025-10-03 11:00:32.578 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:32 compute-0 nova_compute[351685]: 2025-10-03 11:00:32.579 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:00:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:33 compute-0 nova_compute[351685]: 2025-10-03 11:00:33.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:33 compute-0 nova_compute[351685]: 2025-10-03 11:00:33.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:35 compute-0 nova_compute[351685]: 2025-10-03 11:00:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:35 compute-0 podman[493992]: 2025-10-03 11:00:35.853735423 +0000 UTC m=+0.097757289 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:00:35 compute-0 podman[493991]: 2025-10-03 11:00:35.860044636 +0000 UTC m=+0.118042570 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64)
Oct  3 11:00:35 compute-0 podman[493993]: 2025-10-03 11:00:35.86515531 +0000 UTC m=+0.103341658 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct  3 11:00:35 compute-0 podman[493994]: 2025-10-03 11:00:35.897228549 +0000 UTC m=+0.144939614 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  3 11:00:36 compute-0 nova_compute[351685]: 2025-10-03 11:00:36.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:36 compute-0 nova_compute[351685]: 2025-10-03 11:00:36.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:37 compute-0 nova_compute[351685]: 2025-10-03 11:00:37.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:00:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:37 compute-0 podman[494069]: 2025-10-03 11:00:37.851588471 +0000 UTC m=+0.095510987 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:00:37 compute-0 podman[494067]: 2025-10-03 11:00:37.867229933 +0000 UTC m=+0.113089811 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:00:37 compute-0 podman[494068]: 2025-10-03 11:00:37.867738959 +0000 UTC m=+0.108331918 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:00:38 compute-0 nova_compute[351685]: 2025-10-03 11:00:38.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:38 compute-0 nova_compute[351685]: 2025-10-03 11:00:38.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:00:41.666 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:00:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:00:41.667 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:00:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:00:41.668 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:00:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:43 compute-0 nova_compute[351685]: 2025-10-03 11:00:43.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:43 compute-0 nova_compute[351685]: 2025-10-03 11:00:43.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:00:46
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'vms', 'volumes', 'default.rgw.control', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta']
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:00:46 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:00:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:00:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:00:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:00:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:00:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:00:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:00:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:00:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:47 compute-0 podman[494128]: 2025-10-03 11:00:47.826586853 +0000 UTC m=+0.085664770 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0)
Oct  3 11:00:47 compute-0 podman[494129]: 2025-10-03 11:00:47.829883719 +0000 UTC m=+0.082527891 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:00:47 compute-0 podman[494127]: 2025-10-03 11:00:47.840158908 +0000 UTC m=+0.105864818 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:00:48 compute-0 nova_compute[351685]: 2025-10-03 11:00:48.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:48 compute-0 nova_compute[351685]: 2025-10-03 11:00:48.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:53 compute-0 nova_compute[351685]: 2025-10-03 11:00:53.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:53 compute-0 nova_compute[351685]: 2025-10-03 11:00:53.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:00:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3884635117' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:00:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:00:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3884635117' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:00:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:00:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 12K writes, 56K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1306 writes, 5666 keys, 1306 commit groups, 1.0 writes per commit group, ingest: 8.45 MB, 0.01 MB/s#012Interval WAL: 1306 writes, 1306 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     26.5      2.64              0.28        40    0.066       0      0       0.0       0.0#012  L6      1/0    7.44 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.4     87.2     72.4      4.22              1.10        39    0.108    223K    21K       0.0       0.0#012 Sum      1/0    7.44 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.4     53.7     54.8      6.86              1.38        79    0.087    223K    21K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.5     47.4     47.2      0.83              0.15         8    0.104     28K   2029       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     87.2     72.4      4.22              1.10        39    0.108    223K    21K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     26.6      2.63              0.28        39    0.067       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.068, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.37 GB write, 0.07 MB/s write, 0.36 GB read, 0.07 MB/s read, 6.9 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 44.68 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000325 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2829,43.08 MB,14.1704%) FilterBlock(80,634.98 KB,0.203981%) IndexBlock(80,1005.27 KB,0.322929%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:00:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:00:58 compute-0 nova_compute[351685]: 2025-10-03 11:00:58.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:00:58 compute-0 nova_compute[351685]: 2025-10-03 11:00:58.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:00:59 compute-0 podman[157165]: time="2025-10-03T11:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:00:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:00:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9089 "" "Go-http-client/1.1"
Oct  3 11:01:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:01 compute-0 openstack_network_exporter[367524]: ERROR   11:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:01:01 compute-0 openstack_network_exporter[367524]: ERROR   11:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:01:01 compute-0 openstack_network_exporter[367524]: ERROR   11:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:01:01 compute-0 openstack_network_exporter[367524]: ERROR   11:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:01:01 compute-0 openstack_network_exporter[367524]: ERROR   11:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:01:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:03 compute-0 nova_compute[351685]: 2025-10-03 11:01:03.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:03 compute-0 nova_compute[351685]: 2025-10-03 11:01:03.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:06 compute-0 podman[494202]: 2025-10-03 11:01:06.859936005 +0000 UTC m=+0.105151196 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_id=edpm)
Oct  3 11:01:06 compute-0 podman[494203]: 2025-10-03 11:01:06.8868883 +0000 UTC m=+0.124544339 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 11:01:06 compute-0 podman[494204]: 2025-10-03 11:01:06.895941191 +0000 UTC m=+0.121047067 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:01:06 compute-0 podman[494205]: 2025-10-03 11:01:06.931664097 +0000 UTC m=+0.144448348 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  3 11:01:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:08 compute-0 podman[494419]: 2025-10-03 11:01:08.223149612 +0000 UTC m=+0.083183842 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:01:08 compute-0 podman[494420]: 2025-10-03 11:01:08.244663782 +0000 UTC m=+0.101443047 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:01:08 compute-0 podman[494421]: 2025-10-03 11:01:08.256490132 +0000 UTC m=+0.112340547 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:01:08 compute-0 podman[494504]: 2025-10-03 11:01:08.65459751 +0000 UTC m=+0.308451021 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:01:08 compute-0 nova_compute[351685]: 2025-10-03 11:01:08.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:08 compute-0 podman[494504]: 2025-10-03 11:01:08.775359896 +0000 UTC m=+0.429213377 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:01:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:08 compute-0 nova_compute[351685]: 2025-10-03 11:01:08.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:01:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:01:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:01:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:01:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:01:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:01:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:01:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:11 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:01:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8624f8c1-aa8c-441d-81fb-44969f2bb08e does not exist
Oct  3 11:01:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5a632c98-5ffd-471a-b86b-289ee7d14206 does not exist
Oct  3 11:01:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0220bb1d-35c0-429c-987c-b163d1137518 does not exist
Oct  3 11:01:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:01:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:01:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:01:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:01:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:01:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:01:12 compute-0 podman[494926]: 2025-10-03 11:01:12.1123963 +0000 UTC m=+0.098601065 container create 792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 11:01:12 compute-0 podman[494926]: 2025-10-03 11:01:12.043621983 +0000 UTC m=+0.029826838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:01:12 compute-0 systemd[1]: Started libpod-conmon-792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0.scope.
Oct  3 11:01:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:01:12 compute-0 podman[494926]: 2025-10-03 11:01:12.451468335 +0000 UTC m=+0.437673180 container init 792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:01:12 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:12 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:01:12 compute-0 podman[494926]: 2025-10-03 11:01:12.470205637 +0000 UTC m=+0.456410402 container start 792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 11:01:12 compute-0 wonderful_almeida[494941]: 167 167
Oct  3 11:01:12 compute-0 systemd[1]: libpod-792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0.scope: Deactivated successfully.
Oct  3 11:01:12 compute-0 podman[494926]: 2025-10-03 11:01:12.656706273 +0000 UTC m=+0.642911048 container attach 792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 11:01:12 compute-0 podman[494926]: 2025-10-03 11:01:12.657891621 +0000 UTC m=+0.644096426 container died 792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:01:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2816: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:12 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct  3 11:01:12 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:12.897934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:01:12 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct  3 11:01:12 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489272898025, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1641, "num_deletes": 256, "total_data_size": 2635444, "memory_usage": 2680176, "flush_reason": "Manual Compaction"}
Oct  3 11:01:12 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489273077671, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2588062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55544, "largest_seqno": 57184, "table_properties": {"data_size": 2580484, "index_size": 4520, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15354, "raw_average_key_size": 19, "raw_value_size": 2565322, "raw_average_value_size": 3284, "num_data_blocks": 202, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759489097, "oldest_key_time": 1759489097, "file_creation_time": 1759489272, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 179801 microseconds, and 12963 cpu microseconds.
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.077737) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2588062 bytes OK
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.077757) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.085736) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.085771) EVENT_LOG_v1 {"time_micros": 1759489273085762, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.085794) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2628333, prev total WAL file size 2628333, number of live WAL files 2.
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.087223) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323632' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2527KB)], [134(7614KB)]
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489273087295, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 10385347, "oldest_snapshot_seqno": -1}
Oct  3 11:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-96ff965f3975dc8013262356b50d7dd188a65b55c2f6c6d7699b6a5a56e9f68e-merged.mount: Deactivated successfully.
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 6990 keys, 10285439 bytes, temperature: kUnknown
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489273337695, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 10285439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10240361, "index_size": 26469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 182761, "raw_average_key_size": 26, "raw_value_size": 10115285, "raw_average_value_size": 1447, "num_data_blocks": 1056, "num_entries": 6990, "num_filter_entries": 6990, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759489273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.337961) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 10285439 bytes
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.390606) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 41.5 rd, 41.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(8.0) write-amplify(4.0) OK, records in: 7514, records dropped: 524 output_compression: NoCompression
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.390653) EVENT_LOG_v1 {"time_micros": 1759489273390635, "job": 82, "event": "compaction_finished", "compaction_time_micros": 250461, "compaction_time_cpu_micros": 24723, "output_level": 6, "num_output_files": 1, "total_output_size": 10285439, "num_input_records": 7514, "num_output_records": 6990, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489273391655, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489273394088, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.087049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.394373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.394379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.394382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.394385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:01:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:01:13.394388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:01:13 compute-0 podman[494926]: 2025-10-03 11:01:13.671342732 +0000 UTC m=+1.657547497 container remove 792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:01:13 compute-0 systemd[1]: libpod-conmon-792b156897c4d3e2b73c4541a992138894df3d752e21dae9cea544865e5ca8d0.scope: Deactivated successfully.
Oct  3 11:01:13 compute-0 nova_compute[351685]: 2025-10-03 11:01:13.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:13 compute-0 podman[494964]: 2025-10-03 11:01:13.941315628 +0000 UTC m=+0.123992921 container create 86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:01:13 compute-0 podman[494964]: 2025-10-03 11:01:13.860936318 +0000 UTC m=+0.043613661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:01:13 compute-0 nova_compute[351685]: 2025-10-03 11:01:13.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:14 compute-0 systemd[1]: Started libpod-conmon-86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae.scope.
Oct  3 11:01:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc13458e9220f671104632506a80ae0c4cbe5475e5547cdf656812c221e9ef1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc13458e9220f671104632506a80ae0c4cbe5475e5547cdf656812c221e9ef1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc13458e9220f671104632506a80ae0c4cbe5475e5547cdf656812c221e9ef1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc13458e9220f671104632506a80ae0c4cbe5475e5547cdf656812c221e9ef1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc13458e9220f671104632506a80ae0c4cbe5475e5547cdf656812c221e9ef1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:14 compute-0 podman[494964]: 2025-10-03 11:01:14.273751239 +0000 UTC m=+0.456428562 container init 86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swartz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:01:14 compute-0 podman[494964]: 2025-10-03 11:01:14.284090371 +0000 UTC m=+0.466767664 container start 86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swartz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 11:01:14 compute-0 podman[494964]: 2025-10-03 11:01:14.408472083 +0000 UTC m=+0.591149406 container attach 86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swartz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:01:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:15 compute-0 quizzical_swartz[494981]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:01:15 compute-0 quizzical_swartz[494981]: --> relative data size: 1.0
Oct  3 11:01:15 compute-0 quizzical_swartz[494981]: --> All data devices are unavailable
Oct  3 11:01:15 compute-0 systemd[1]: libpod-86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae.scope: Deactivated successfully.
Oct  3 11:01:15 compute-0 systemd[1]: libpod-86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae.scope: Consumed 1.117s CPU time.
Oct  3 11:01:15 compute-0 podman[494964]: 2025-10-03 11:01:15.481190606 +0000 UTC m=+1.663867909 container died 86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swartz, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 11:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbc13458e9220f671104632506a80ae0c4cbe5475e5547cdf656812c221e9ef1-merged.mount: Deactivated successfully.
Oct  3 11:01:15 compute-0 podman[494964]: 2025-10-03 11:01:15.546845674 +0000 UTC m=+1.729522987 container remove 86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:01:15 compute-0 systemd[1]: libpod-conmon-86f5fb2762cef1a4df6b87c5028f257033415b020dd4b60694c3dd52545595ae.scope: Deactivated successfully.
Oct  3 11:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:01:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:01:16 compute-0 podman[495159]: 2025-10-03 11:01:16.390344648 +0000 UTC m=+0.077188318 container create 84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:01:16 compute-0 podman[495159]: 2025-10-03 11:01:16.353859127 +0000 UTC m=+0.040702817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:01:16 compute-0 systemd[1]: Started libpod-conmon-84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5.scope.
Oct  3 11:01:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:01:16 compute-0 podman[495159]: 2025-10-03 11:01:16.532178871 +0000 UTC m=+0.219022601 container init 84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:01:16 compute-0 podman[495159]: 2025-10-03 11:01:16.544222927 +0000 UTC m=+0.231066587 container start 84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:01:16 compute-0 zen_benz[495175]: 167 167
Oct  3 11:01:16 compute-0 systemd[1]: libpod-84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5.scope: Deactivated successfully.
Oct  3 11:01:16 compute-0 podman[495159]: 2025-10-03 11:01:16.562347379 +0000 UTC m=+0.249191119 container attach 84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:01:16 compute-0 podman[495159]: 2025-10-03 11:01:16.563878208 +0000 UTC m=+0.250721848 container died 84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:01:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-21af1b7fc5a9936ff1dd8710deeb4ea109527a3eae44f2bc9df143cab772ff0f-merged.mount: Deactivated successfully.
Oct  3 11:01:16 compute-0 podman[495159]: 2025-10-03 11:01:16.677757534 +0000 UTC m=+0.364601184 container remove 84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_benz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:01:16 compute-0 systemd[1]: libpod-conmon-84ec4b2d60b30a05806465b4b1b47628f17178534df4d46c8eda67110b5888b5.scope: Deactivated successfully.
Oct  3 11:01:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2818: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:16 compute-0 podman[495199]: 2025-10-03 11:01:16.96239266 +0000 UTC m=+0.083357896 container create 643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:01:17 compute-0 podman[495199]: 2025-10-03 11:01:16.932678467 +0000 UTC m=+0.053643753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:01:17 compute-0 systemd[1]: Started libpod-conmon-643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2.scope.
Oct  3 11:01:17 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc31d7f3136e3b99f5ce6310220735cc55102984445157db2a6b029df6c6315c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc31d7f3136e3b99f5ce6310220735cc55102984445157db2a6b029df6c6315c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc31d7f3136e3b99f5ce6310220735cc55102984445157db2a6b029df6c6315c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc31d7f3136e3b99f5ce6310220735cc55102984445157db2a6b029df6c6315c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:17 compute-0 podman[495199]: 2025-10-03 11:01:17.144823086 +0000 UTC m=+0.265788342 container init 643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 11:01:17 compute-0 podman[495199]: 2025-10-03 11:01:17.166970637 +0000 UTC m=+0.287935883 container start 643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dijkstra, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:01:17 compute-0 podman[495199]: 2025-10-03 11:01:17.173100144 +0000 UTC m=+0.294065430 container attach 643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dijkstra, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:01:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]: {
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:    "0": [
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:        {
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "devices": [
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "/dev/loop3"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            ],
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_name": "ceph_lv0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_size": "21470642176",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "name": "ceph_lv0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "tags": {
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cluster_name": "ceph",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.crush_device_class": "",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.encrypted": "0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osd_id": "0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.type": "block",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.vdo": "0"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            },
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "type": "block",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "vg_name": "ceph_vg0"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:        }
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:    ],
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:    "1": [
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:        {
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "devices": [
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "/dev/loop4"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            ],
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_name": "ceph_lv1",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_size": "21470642176",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "name": "ceph_lv1",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "tags": {
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cluster_name": "ceph",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.crush_device_class": "",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.encrypted": "0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osd_id": "1",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.type": "block",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.vdo": "0"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            },
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "type": "block",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "vg_name": "ceph_vg1"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:        }
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:    ],
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:    "2": [
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:        {
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "devices": [
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "/dev/loop5"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            ],
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_name": "ceph_lv2",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_size": "21470642176",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "name": "ceph_lv2",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "tags": {
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.cluster_name": "ceph",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.crush_device_class": "",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.encrypted": "0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osd_id": "2",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.type": "block",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:                "ceph.vdo": "0"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            },
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "type": "block",
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:            "vg_name": "ceph_vg2"
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:        }
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]:    ]
Oct  3 11:01:17 compute-0 clever_dijkstra[495215]: }
Oct  3 11:01:18 compute-0 systemd[1]: libpod-643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2.scope: Deactivated successfully.
Oct  3 11:01:18 compute-0 podman[495199]: 2025-10-03 11:01:18.000494892 +0000 UTC m=+1.121460158 container died 643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc31d7f3136e3b99f5ce6310220735cc55102984445157db2a6b029df6c6315c-merged.mount: Deactivated successfully.
Oct  3 11:01:18 compute-0 podman[495199]: 2025-10-03 11:01:18.078560637 +0000 UTC m=+1.199525873 container remove 643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_dijkstra, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:01:18 compute-0 systemd[1]: libpod-conmon-643e6b9c8465e60273e7f780971790bb05f201290cb6e75adc581c65442577c2.scope: Deactivated successfully.
Oct  3 11:01:18 compute-0 podman[495225]: 2025-10-03 11:01:18.130794265 +0000 UTC m=+0.098023498 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:01:18 compute-0 podman[495227]: 2025-10-03 11:01:18.152704068 +0000 UTC m=+0.111890673 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, container_name=kepler, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=)
Oct  3 11:01:18 compute-0 podman[495228]: 2025-10-03 11:01:18.1646507 +0000 UTC m=+0.112678017 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:01:18 compute-0 nova_compute[351685]: 2025-10-03 11:01:18.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:18 compute-0 nova_compute[351685]: 2025-10-03 11:01:18.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:19 compute-0 podman[495429]: 2025-10-03 11:01:19.17121292 +0000 UTC m=+0.098331647 container create 5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_varahamihira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:01:19 compute-0 podman[495429]: 2025-10-03 11:01:19.135523515 +0000 UTC m=+0.062642302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:01:19 compute-0 systemd[1]: Started libpod-conmon-5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49.scope.
Oct  3 11:01:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:01:19 compute-0 podman[495429]: 2025-10-03 11:01:19.335182313 +0000 UTC m=+0.262301040 container init 5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:01:19 compute-0 podman[495429]: 2025-10-03 11:01:19.352329173 +0000 UTC m=+0.279447860 container start 5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 11:01:19 compute-0 podman[495429]: 2025-10-03 11:01:19.35718708 +0000 UTC m=+0.284305857 container attach 5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:01:19 compute-0 strange_varahamihira[495445]: 167 167
Oct  3 11:01:19 compute-0 systemd[1]: libpod-5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49.scope: Deactivated successfully.
Oct  3 11:01:19 compute-0 conmon[495445]: conmon 5796f4799833001d0bc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49.scope/container/memory.events
Oct  3 11:01:19 compute-0 podman[495429]: 2025-10-03 11:01:19.37090006 +0000 UTC m=+0.298018777 container died 5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_varahamihira, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:01:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c36295cfffc6d579ddeaf76fec72728dac3d34ef07ac7e8bb7cedacc52e9e02-merged.mount: Deactivated successfully.
Oct  3 11:01:19 compute-0 podman[495429]: 2025-10-03 11:01:19.427192077 +0000 UTC m=+0.354310764 container remove 5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:01:19 compute-0 systemd[1]: libpod-conmon-5796f4799833001d0bc853f0c39e3a0983be6c311fbe8fcd5f7a0a88825c3b49.scope: Deactivated successfully.
Oct  3 11:01:19 compute-0 podman[495468]: 2025-10-03 11:01:19.633677155 +0000 UTC m=+0.040581144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:01:19 compute-0 podman[495468]: 2025-10-03 11:01:19.90269275 +0000 UTC m=+0.309596699 container create a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leavitt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:01:20 compute-0 systemd[1]: Started libpod-conmon-a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05.scope.
Oct  3 11:01:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c8226b8dc9997dc6f5f80e43417a5ec98fff06c8c59b100ccb9823658861ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c8226b8dc9997dc6f5f80e43417a5ec98fff06c8c59b100ccb9823658861ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c8226b8dc9997dc6f5f80e43417a5ec98fff06c8c59b100ccb9823658861ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c8226b8dc9997dc6f5f80e43417a5ec98fff06c8c59b100ccb9823658861ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:01:20 compute-0 podman[495468]: 2025-10-03 11:01:20.359860204 +0000 UTC m=+0.766764153 container init a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 11:01:20 compute-0 podman[495468]: 2025-10-03 11:01:20.379733012 +0000 UTC m=+0.786636961 container start a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 11:01:20 compute-0 podman[495468]: 2025-10-03 11:01:20.477671246 +0000 UTC m=+0.884575275 container attach a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leavitt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:01:20 compute-0 nova_compute[351685]: 2025-10-03 11:01:20.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:20 compute-0 nova_compute[351685]: 2025-10-03 11:01:20.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:01:20 compute-0 nova_compute[351685]: 2025-10-03 11:01:20.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:01:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:21 compute-0 nova_compute[351685]: 2025-10-03 11:01:21.272 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:01:21 compute-0 nova_compute[351685]: 2025-10-03 11:01:21.273 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:01:21 compute-0 nova_compute[351685]: 2025-10-03 11:01:21.274 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:01:21 compute-0 nova_compute[351685]: 2025-10-03 11:01:21.275 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]: {
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "osd_id": 1,
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "type": "bluestore"
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:    },
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "osd_id": 2,
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "type": "bluestore"
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:    },
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "osd_id": 0,
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:        "type": "bluestore"
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]:    }
Oct  3 11:01:21 compute-0 quirky_leavitt[495484]: }
Oct  3 11:01:21 compute-0 systemd[1]: libpod-a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05.scope: Deactivated successfully.
Oct  3 11:01:21 compute-0 systemd[1]: libpod-a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05.scope: Consumed 1.198s CPU time.
Oct  3 11:01:21 compute-0 podman[495517]: 2025-10-03 11:01:21.64698453 +0000 UTC m=+0.050959137 container died a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leavitt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-92c8226b8dc9997dc6f5f80e43417a5ec98fff06c8c59b100ccb9823658861ea-merged.mount: Deactivated successfully.
Oct  3 11:01:22 compute-0 podman[495517]: 2025-10-03 11:01:22.445461569 +0000 UTC m=+0.849436166 container remove a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_leavitt, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:01:22 compute-0 systemd[1]: libpod-conmon-a80811e8970faf44e1243d76df6f9917fd6965bb714633d0aa21c78e75296f05.scope: Deactivated successfully.
Oct  3 11:01:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:01:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:01:22 compute-0 nova_compute[351685]: 2025-10-03 11:01:22.709 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:01:22 compute-0 nova_compute[351685]: 2025-10-03 11:01:22.739 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:01:22 compute-0 nova_compute[351685]: 2025-10-03 11:01:22.740 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 11:01:22 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:22 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5e02951b-950f-4173-aec7-6dc5492aac10 does not exist
Oct  3 11:01:22 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev daa15d39-02aa-4db2-84e4-364e1e6b9ae8 does not exist
Oct  3 11:01:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:23 compute-0 nova_compute[351685]: 2025-10-03 11:01:23.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:23 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:01:23 compute-0 nova_compute[351685]: 2025-10-03 11:01:23.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2823: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:28 compute-0 nova_compute[351685]: 2025-10-03 11:01:28.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:28 compute-0 nova_compute[351685]: 2025-10-03 11:01:28.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:29 compute-0 nova_compute[351685]: 2025-10-03 11:01:29.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:29 compute-0 nova_compute[351685]: 2025-10-03 11:01:29.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:29 compute-0 podman[157165]: time="2025-10-03T11:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:01:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:01:29 compute-0 nova_compute[351685]: 2025-10-03 11:01:29.780 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:01:29 compute-0 nova_compute[351685]: 2025-10-03 11:01:29.780 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:01:29 compute-0 nova_compute[351685]: 2025-10-03 11:01:29.780 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:01:29 compute-0 nova_compute[351685]: 2025-10-03 11:01:29.780 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:01:29 compute-0 nova_compute[351685]: 2025-10-03 11:01:29.781 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:01:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9082 "" "Go-http-client/1.1"
Oct  3 11:01:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:01:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3349332976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:01:30 compute-0 nova_compute[351685]: 2025-10-03 11:01:30.250 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:01:30 compute-0 nova_compute[351685]: 2025-10-03 11:01:30.382 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:01:30 compute-0 nova_compute[351685]: 2025-10-03 11:01:30.383 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:01:30 compute-0 nova_compute[351685]: 2025-10-03 11:01:30.383 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:01:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:30 compute-0 nova_compute[351685]: 2025-10-03 11:01:30.885 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:01:30 compute-0 nova_compute[351685]: 2025-10-03 11:01:30.886 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3787MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:01:30 compute-0 nova_compute[351685]: 2025-10-03 11:01:30.886 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:01:30 compute-0 nova_compute[351685]: 2025-10-03 11:01:30.887 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.001 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.001 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.002 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.046 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:01:31 compute-0 openstack_network_exporter[367524]: ERROR   11:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:01:31 compute-0 openstack_network_exporter[367524]: ERROR   11:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:01:31 compute-0 openstack_network_exporter[367524]: ERROR   11:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:01:31 compute-0 openstack_network_exporter[367524]: ERROR   11:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:01:31 compute-0 openstack_network_exporter[367524]: ERROR   11:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:01:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:01:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245350957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.610 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.620 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.731 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.732 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:01:31 compute-0 nova_compute[351685]: 2025-10-03 11:01:31.732 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:01:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:33 compute-0 nova_compute[351685]: 2025-10-03 11:01:33.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:33 compute-0 nova_compute[351685]: 2025-10-03 11:01:33.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:34 compute-0 nova_compute[351685]: 2025-10-03 11:01:34.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:34 compute-0 nova_compute[351685]: 2025-10-03 11:01:34.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:34 compute-0 nova_compute[351685]: 2025-10-03 11:01:34.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:01:34 compute-0 nova_compute[351685]: 2025-10-03 11:01:34.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:36 compute-0 nova_compute[351685]: 2025-10-03 11:01:36.743 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:36 compute-0 nova_compute[351685]: 2025-10-03 11:01:36.744 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:37 compute-0 nova_compute[351685]: 2025-10-03 11:01:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:37 compute-0 podman[495626]: 2025-10-03 11:01:37.859415197 +0000 UTC m=+0.099072781 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 11:01:37 compute-0 podman[495625]: 2025-10-03 11:01:37.87667665 +0000 UTC m=+0.114237397 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.tags=minimal rhel9, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:01:37 compute-0 podman[495627]: 2025-10-03 11:01:37.885484263 +0000 UTC m=+0.110175467 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct  3 11:01:37 compute-0 podman[495628]: 2025-10-03 11:01:37.960542092 +0000 UTC m=+0.178886663 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct  3 11:01:38 compute-0 nova_compute[351685]: 2025-10-03 11:01:38.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:38 compute-0 podman[495703]: 2025-10-03 11:01:38.87887994 +0000 UTC m=+0.116842351 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:01:38 compute-0 podman[495704]: 2025-10-03 11:01:38.897513319 +0000 UTC m=+0.125563602 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  3 11:01:38 compute-0 podman[495702]: 2025-10-03 11:01:38.903182201 +0000 UTC m=+0.145305685 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:01:38 compute-0 nova_compute[351685]: 2025-10-03 11:01:38.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:01:39 compute-0 nova_compute[351685]: 2025-10-03 11:01:39.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:01:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.899 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.899 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.899 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
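The registration lines above show each stevedore extension being bound to one shared concurrent.futures ThreadPoolExecutor together with empty cache, history, and discovery-cache dicts, which is why a single agent PID (362317) later produces interleaved lines for many meters. A minimal sketch of that dispatch pattern, using made-up pollster objects rather than ceilometer's real classes:

    # Minimal mimic of the registration loop above: every pollster is bound to
    # one shared thread pool together with empty cache/history/discovery dicts.
    # SimplePollster and its run() signature are illustrative, not ceilometer's API.
    from concurrent.futures import ThreadPoolExecutor

    class SimplePollster:
        def __init__(self, name):
            self.name = name

        def run(self, cache, history, discovery_cache):
            # A real pollster would discover resources and emit samples here.
            return f"{self.name}: polled (cache={cache})"

    executor = ThreadPoolExecutor(max_workers=4)
    cache, history, discovery_cache = {}, {}, {}
    futures = [executor.submit(p.run, cache, history, discovery_cache)
               for p in (SimplePollster("network.outgoing.packets.drop"),
                         SimplePollster("disk.device.capacity"))]
    for f in futures:
        print(f.result())
    executor.shutdown()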
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.911 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.913 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:01:40.913138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.919 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.920 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
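The seven lines above trace one complete pollster run: discovery, a coordination check against an empty hashring list, a heartbeat, then a sample (volume 0 here). A hedged mimic of that control flow, with every name invented for illustration:

    # Illustrative control flow for a single pollster run (not ceilometer code):
    # coordination check, heartbeat, then stats-to-sample conversion.
    from datetime import datetime, timezone

    def run_pollster(name, hashrings=None, get_volume=lambda: 0):
        if not hashrings:
            # Matches the log: no source requires coordination, so poll locally.
            print(f"The pollster {name} is not configured for coordination")
        ts = datetime.now(timezone.utc).isoformat()
        print(f"Updated heartbeat for {name} ({ts})")
        print(f"{name} volume: {get_volume()}")

    run_pollster("network.outgoing.packets.drop")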
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.921 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.921 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.921 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.922 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.923 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.923 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:01:40.921799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.924 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:01:40.924082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.952 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.953 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.953 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
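The three disk.device.capacity samples (two 1073741824-byte devices plus one of 485376 bytes) line up with the flavor in the discovery record above: 1 GiB root disk, 1 GiB ephemeral, plus a small config drive. With the libvirt Python binding, per-device capacity can be read as below; the connection URI and device names are assumptions, only the domain name comes from the log:

    # Sketch: per-device capacity/allocation/physical sizes in bytes via the
    # libvirt Python binding (dom.blockInfo returns a 3-element list).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")   # URI is an assumption
    dom = conn.lookupByName("instance-00000001")    # name taken from the log
    for dev in ("vda", "vdb", "vdc"):               # device names assumed
        capacity, allocation, physical = dom.blockInfo(dev)
        print(f"{dev}: capacity={capacity} allocation={allocation} "
              f"physical={physical}")
    conn.close()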
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.954 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.955 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:40.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:01:40.955734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
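Note the worker column in the heartbeat pairs: worker 14 prints the "heartbeat update" line and worker 12 prints the "_update_status" confirmation a moment later, so heartbeats are produced in the polling worker and recorded by a separate status worker. A minimal queue-based mimic of that hand-off (all names illustrative):

    # Two-thread mimic of the heartbeat hand-off visible in the log: the
    # polling worker pushes (meter, timestamp) events, a status worker logs them.
    import queue
    import threading
    from datetime import datetime, timezone

    events = queue.Queue()
    status = {}

    def update_status():
        while True:
            name, ts = events.get()
            if name is None:            # sentinel: stop the status worker
                break
            status[name] = ts
            print(f"Updated heartbeat for {name} ({ts})")

    worker = threading.Thread(target=update_status)
    worker.start()
    events.put(("disk.device.read.bytes",
                datetime.now(timezone.utc).isoformat()))
    events.put((None, None))
    worker.join()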
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.016 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.016 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:01:41.016782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:01:41.020153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:01:41.023024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.026 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:01:41.026021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:01:41.028926) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.031 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:01:41.031821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
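The power.state sample reports volume 1, which is libvirt's value for a running domain and is consistent with 'OS-EXT-STS:vm_state': 'running' in the discovery record. A sketch of reading it directly, under the same assumptions as above:

    # Sketch: the integer in the power.state sample is the libvirt domain
    # state; libvirt.VIR_DOMAIN_RUNNING == 1 for a running guest.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")   # URI is an assumption
    dom = conn.lookupByName("instance-00000001")    # name taken from the log
    state, reason = dom.state()
    print(f"power.state volume: {state}")           # 1 while the guest runs
    conn.close()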
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:01:41.060021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:01:41.063684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.064 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.066 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:01:41.066656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.066 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
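Unlike the meters before it, network.incoming.bytes.rate is skipped because discovery produced no new resources this cycle. A toy version of that gate; the caching semantics here are an assumption, not ceilometer's actual rule:

    # Toy gate for the "Skip pollster ... no new resources" message (assumption:
    # a pollster is skipped when discovery yields nothing beyond what the
    # per-cycle discovery cache already holds).
    def maybe_poll(name, discovered, discovery_cache):
        seen = discovery_cache.setdefault(name, set())
        new = [r for r in discovered if r not in seen]
        if not new:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return
        seen.update(new)
        print(f"Polling pollster {name} for {len(new)} resource(s)")

    cache = {"network.incoming.bytes.rate": {"b43db93c-a4fe-46e9-8418-eedf4f5c135a"}}
    maybe_poll("network.incoming.bytes.rate",
               ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"], cache)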
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.069 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:01:41.069121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.070 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.071 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
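network.incoming.packets reports 33 packets for the instance's vNIC. With the libvirt binding that counter is the rx_packets field of interfaceStats; the tap device name below is an assumption:

    # Sketch: read per-interface counters with the libvirt Python binding.
    # interfaceStats returns an 8-tuple: (rx_bytes, rx_packets, rx_errs,
    # rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")   # URI is an assumption
    dom = conn.lookupByName("instance-00000001")    # name taken from the log
    stats = dom.interfaceStats("tap0")              # tap device name assumed
    print(f"network.incoming.packets volume: {stats[1]}")
    conn.close()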
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:01:41.070912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.074 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:01:41.073027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:01:41.074797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.075 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.076 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 83620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:01:41.076595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.078 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:01:41.078686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.080 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.081 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:01:41.080585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.082 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.083 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.084 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:01:41.082589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:01:41.084472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.086 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.087 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:01:41.086869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.088 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.088 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.088 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.088 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:01:41.088744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:01:41.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:01:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:01:41.667 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:01:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:01:41.668 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:01:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:01:41.669 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:01:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:43 compute-0 nova_compute[351685]: 2025-10-03 11:01:43.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:43 compute-0 nova_compute[351685]: 2025-10-03 11:01:43.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:44 compute-0 nova_compute[351685]: 2025-10-03 11:01:44.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:01:44 compute-0 nova_compute[351685]: 2025-10-03 11:01:44.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 11:01:44 compute-0 nova_compute[351685]: 2025-10-03 11:01:44.753 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 11:01:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:01:46
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'images', 'volumes', '.mgr', 'vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:46 compute-0 ceph-mgr[192071]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3262515590
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:01:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:01:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:48 compute-0 nova_compute[351685]: 2025-10-03 11:01:48.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:48 compute-0 podman[495762]: 2025-10-03 11:01:48.888153855 +0000 UTC m=+0.134082894 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:01:48 compute-0 podman[495764]: 2025-10-03 11:01:48.896885826 +0000 UTC m=+0.136214953 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Oct  3 11:01:48 compute-0 podman[495763]: 2025-10-03 11:01:48.910950547 +0000 UTC m=+0.152493336 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, container_name=kepler, name=ubi9, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Oct  3 11:01:48 compute-0 nova_compute[351685]: 2025-10-03 11:01:48.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:53 compute-0 nova_compute[351685]: 2025-10-03 11:01:53.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:53 compute-0 nova_compute[351685]: 2025-10-03 11:01:53.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:01:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/745617593' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:01:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:01:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/745617593' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:01:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:01:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:01:58 compute-0 nova_compute[351685]: 2025-10-03 11:01:58.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:01:58 compute-0 nova_compute[351685]: 2025-10-03 11:01:58.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:01:59 compute-0 podman[157165]: time="2025-10-03T11:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:01:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:01:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9092 "" "Go-http-client/1.1"
Oct  3 11:02:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:01 compute-0 openstack_network_exporter[367524]: ERROR   11:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:02:01 compute-0 openstack_network_exporter[367524]: ERROR   11:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:02:01 compute-0 openstack_network_exporter[367524]: ERROR   11:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:02:01 compute-0 openstack_network_exporter[367524]: ERROR   11:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:02:01 compute-0 openstack_network_exporter[367524]: ERROR   11:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:02:01 compute-0 nova_compute[351685]: 2025-10-03 11:02:01.747 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:02:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:03 compute-0 nova_compute[351685]: 2025-10-03 11:02:03.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:03 compute-0 nova_compute[351685]: 2025-10-03 11:02:03.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.184316) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489325184435, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 659, "num_deletes": 251, "total_data_size": 811754, "memory_usage": 824616, "flush_reason": "Manual Compaction"}
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489325195395, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 804483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57185, "largest_seqno": 57843, "table_properties": {"data_size": 800971, "index_size": 1419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7838, "raw_average_key_size": 19, "raw_value_size": 794007, "raw_average_value_size": 1946, "num_data_blocks": 63, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759489273, "oldest_key_time": 1759489273, "file_creation_time": 1759489325, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 11158 microseconds, and 5671 cpu microseconds.
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.195492) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 804483 bytes OK
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.195526) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.198292) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.198318) EVENT_LOG_v1 {"time_micros": 1759489325198309, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.198349) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 808282, prev total WAL file size 808282, number of live WAL files 2.
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.199311) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(785KB)], [137(10044KB)]
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489325199395, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11089922, "oldest_snapshot_seqno": -1}
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 6886 keys, 9355231 bytes, temperature: kUnknown
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489325292569, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 9355231, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9311793, "index_size": 25149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 181247, "raw_average_key_size": 26, "raw_value_size": 9189410, "raw_average_value_size": 1334, "num_data_blocks": 993, "num_entries": 6886, "num_filter_entries": 6886, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759489325, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.293385) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9355231 bytes
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.298342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.3 rd, 99.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.8 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(25.4) write-amplify(11.6) OK, records in: 7398, records dropped: 512 output_compression: NoCompression
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.298396) EVENT_LOG_v1 {"time_micros": 1759489325298375, "job": 84, "event": "compaction_finished", "compaction_time_micros": 93711, "compaction_time_cpu_micros": 32353, "output_level": 6, "num_output_files": 1, "total_output_size": 9355231, "num_input_records": 7398, "num_output_records": 6886, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489325300896, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489325306466, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.198994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.307274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.307284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.307287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.307289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:02:05 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:02:05.307291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:02:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:08 compute-0 nova_compute[351685]: 2025-10-03 11:02:08.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:08 compute-0 podman[495825]: 2025-10-03 11:02:08.844761752 +0000 UTC m=+0.094605159 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 11:02:08 compute-0 podman[495826]: 2025-10-03 11:02:08.855076593 +0000 UTC m=+0.107125260 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 11:02:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:08 compute-0 podman[495824]: 2025-10-03 11:02:08.877620566 +0000 UTC m=+0.127883255 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Oct  3 11:02:08 compute-0 podman[495827]: 2025-10-03 11:02:08.894322562 +0000 UTC m=+0.132169563 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  3 11:02:08 compute-0 nova_compute[351685]: 2025-10-03 11:02:08.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:09 compute-0 podman[495904]: 2025-10-03 11:02:09.002653689 +0000 UTC m=+0.072143996 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct  3 11:02:09 compute-0 podman[495908]: 2025-10-03 11:02:09.012994612 +0000 UTC m=+0.066818807 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:02:09 compute-0 podman[495905]: 2025-10-03 11:02:09.017063932 +0000 UTC m=+0.077067964 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:02:09 compute-0 nova_compute[351685]: 2025-10-03 11:02:09.058 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:02:09 compute-0 nova_compute[351685]: 2025-10-03 11:02:09.078 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 11:02:09 compute-0 nova_compute[351685]: 2025-10-03 11:02:09.079 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:02:09 compute-0 nova_compute[351685]: 2025-10-03 11:02:09.080 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:02:09 compute-0 nova_compute[351685]: 2025-10-03 11:02:09.104 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:02:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:11 compute-0 nova_compute[351685]: 2025-10-03 11:02:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:02:11 compute-0 nova_compute[351685]: 2025-10-03 11:02:11.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 11:02:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2846: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:13 compute-0 nova_compute[351685]: 2025-10-03 11:02:13.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:13 compute-0 nova_compute[351685]: 2025-10-03 11:02:13.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2847: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:02:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:02:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:18 compute-0 nova_compute[351685]: 2025-10-03 11:02:18.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2849: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:18 compute-0 nova_compute[351685]: 2025-10-03 11:02:18.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:19 compute-0 podman[495968]: 2025-10-03 11:02:19.856978864 +0000 UTC m=+0.103281876 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, io.buildah.version=1.29.0, version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible)
Oct  3 11:02:19 compute-0 podman[495967]: 2025-10-03 11:02:19.864201207 +0000 UTC m=+0.119670253 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:02:19 compute-0 podman[495969]: 2025-10-03 11:02:19.867844643 +0000 UTC m=+0.127423011 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:02:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:22 compute-0 nova_compute[351685]: 2025-10-03 11:02:22.749 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:02:22 compute-0 nova_compute[351685]: 2025-10-03 11:02:22.750 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:02:22 compute-0 nova_compute[351685]: 2025-10-03 11:02:22.750 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:02:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:23 compute-0 nova_compute[351685]: 2025-10-03 11:02:23.333 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:02:23 compute-0 nova_compute[351685]: 2025-10-03 11:02:23.333 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:02:23 compute-0 nova_compute[351685]: 2025-10-03 11:02:23.333 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:02:23 compute-0 nova_compute[351685]: 2025-10-03 11:02:23.334 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:02:23 compute-0 nova_compute[351685]: 2025-10-03 11:02:23.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:24 compute-0 nova_compute[351685]: 2025-10-03 11:02:24.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:02:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:02:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:02:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:02:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:02:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:02:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:02:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 16781862-9203-4595-9407-19022e7121ef does not exist
Oct  3 11:02:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9ab5d3bc-5a39-417c-a705-231a2390d795 does not exist
Oct  3 11:02:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7bda16fc-064a-4273-8845-7b56c21f4698 does not exist
Oct  3 11:02:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:02:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:02:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:02:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:02:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:02:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:02:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:02:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:02:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:02:24 compute-0 nova_compute[351685]: 2025-10-03 11:02:24.795 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:02:24 compute-0 nova_compute[351685]: 2025-10-03 11:02:24.823 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:02:24 compute-0 nova_compute[351685]: 2025-10-03 11:02:24.823 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:02:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:25 compute-0 podman[496297]: 2025-10-03 11:02:25.419514157 +0000 UTC m=+0.075352219 container create bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_herschel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:02:25 compute-0 podman[496297]: 2025-10-03 11:02:25.386394844 +0000 UTC m=+0.042232956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:02:25 compute-0 systemd[1]: Started libpod-conmon-bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8.scope.
Oct  3 11:02:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:02:25 compute-0 podman[496297]: 2025-10-03 11:02:25.558389976 +0000 UTC m=+0.214228028 container init bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_herschel, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:02:25 compute-0 podman[496297]: 2025-10-03 11:02:25.576507447 +0000 UTC m=+0.232345509 container start bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:02:25 compute-0 podman[496297]: 2025-10-03 11:02:25.582649394 +0000 UTC m=+0.238487466 container attach bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_herschel, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:02:25 compute-0 awesome_herschel[496313]: 167 167
Oct  3 11:02:25 compute-0 systemd[1]: libpod-bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8.scope: Deactivated successfully.
Oct  3 11:02:25 compute-0 podman[496297]: 2025-10-03 11:02:25.587060665 +0000 UTC m=+0.242898717 container died bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_herschel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-28c9588deaa3c3b3d7aa4fc16b0f3fa2b71cf422ee91cc1212381a09b2c18f81-merged.mount: Deactivated successfully.
Oct  3 11:02:25 compute-0 podman[496297]: 2025-10-03 11:02:25.660303336 +0000 UTC m=+0.316141368 container remove bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_herschel, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 11:02:25 compute-0 systemd[1]: libpod-conmon-bf245e923675b256cd14692bbc6108028d9ec153e6526fb7af23d384023c37e8.scope: Deactivated successfully.
Oct  3 11:02:25 compute-0 podman[496336]: 2025-10-03 11:02:25.863705306 +0000 UTC m=+0.047310331 container create a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_engelbart, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 11:02:25 compute-0 systemd[1]: Started libpod-conmon-a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6.scope.
Oct  3 11:02:25 compute-0 podman[496336]: 2025-10-03 11:02:25.843780496 +0000 UTC m=+0.027385531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:02:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:02:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66844ba64da10c85c5de88c6018aba6634080d812fa9f3e245b7ab89ddac6815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66844ba64da10c85c5de88c6018aba6634080d812fa9f3e245b7ab89ddac6815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66844ba64da10c85c5de88c6018aba6634080d812fa9f3e245b7ab89ddac6815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66844ba64da10c85c5de88c6018aba6634080d812fa9f3e245b7ab89ddac6815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66844ba64da10c85c5de88c6018aba6634080d812fa9f3e245b7ab89ddac6815/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:26 compute-0 podman[496336]: 2025-10-03 11:02:26.008619047 +0000 UTC m=+0.192224112 container init a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_engelbart, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:02:26 compute-0 podman[496336]: 2025-10-03 11:02:26.022473422 +0000 UTC m=+0.206078437 container start a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:02:26 compute-0 podman[496336]: 2025-10-03 11:02:26.026689657 +0000 UTC m=+0.210294752 container attach a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:02:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2853: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:27 compute-0 interesting_engelbart[496353]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:02:27 compute-0 interesting_engelbart[496353]: --> relative data size: 1.0
Oct  3 11:02:27 compute-0 interesting_engelbart[496353]: --> All data devices are unavailable
Oct  3 11:02:27 compute-0 systemd[1]: libpod-a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6.scope: Deactivated successfully.
Oct  3 11:02:27 compute-0 podman[496336]: 2025-10-03 11:02:27.181794604 +0000 UTC m=+1.365399629 container died a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:02:27 compute-0 systemd[1]: libpod-a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6.scope: Consumed 1.086s CPU time.
Oct  3 11:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-66844ba64da10c85c5de88c6018aba6634080d812fa9f3e245b7ab89ddac6815-merged.mount: Deactivated successfully.
Oct  3 11:02:27 compute-0 podman[496336]: 2025-10-03 11:02:27.261340547 +0000 UTC m=+1.444945562 container remove a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:02:27 compute-0 systemd[1]: libpod-conmon-a147bd542da795641fc78ea42c6cf612b0c343bc1999672ea5db5d566a3c82d6.scope: Deactivated successfully.
Oct  3 11:02:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:28 compute-0 podman[496531]: 2025-10-03 11:02:28.433191662 +0000 UTC m=+0.083442039 container create 4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 11:02:28 compute-0 podman[496531]: 2025-10-03 11:02:28.396770253 +0000 UTC m=+0.047020670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:02:28 compute-0 systemd[1]: Started libpod-conmon-4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7.scope.
Oct  3 11:02:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:02:28 compute-0 podman[496531]: 2025-10-03 11:02:28.582819734 +0000 UTC m=+0.233070081 container init 4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:02:28 compute-0 podman[496531]: 2025-10-03 11:02:28.602757765 +0000 UTC m=+0.253008142 container start 4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:02:28 compute-0 podman[496531]: 2025-10-03 11:02:28.609385578 +0000 UTC m=+0.259635965 container attach 4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:02:28 compute-0 silly_jepsen[496546]: 167 167
Oct  3 11:02:28 compute-0 systemd[1]: libpod-4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7.scope: Deactivated successfully.
Oct  3 11:02:28 compute-0 podman[496531]: 2025-10-03 11:02:28.615334049 +0000 UTC m=+0.265584406 container died 4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0a3361f84649337ce2f725bd8fdb2268d1d190b36a88ef051725a251bbd43fc-merged.mount: Deactivated successfully.
Oct  3 11:02:28 compute-0 podman[496531]: 2025-10-03 11:02:28.681362798 +0000 UTC m=+0.331613135 container remove 4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:02:28 compute-0 systemd[1]: libpod-conmon-4b68c3ee1268d506fcb10228c74decf58ec75cb3e4081c6857f7cb05a84f31a7.scope: Deactivated successfully.
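
The create / init / start / attach / died / remove sequence above is podman's normal lifecycle for a short-lived cephadm helper container. A minimal sketch that watches the same sequence live via `podman events` (the JSON field names and the image-filter match against the digest-pinned quay.io/ceph/ceph reference are assumptions):

    import json
    import subprocess

    # Stream podman events, one JSON object per line, filtered to the Ceph
    # image used by the short-lived cephadm containers in this log.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # Status is the lifecycle phase (create/init/start/attach/died/remove).
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
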
Oct  3 11:02:28 compute-0 nova_compute[351685]: 2025-10-03 11:02:28.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:28 compute-0 podman[496569]: 2025-10-03 11:02:28.982495044 +0000 UTC m=+0.081023532 container create 7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:02:29 compute-0 nova_compute[351685]: 2025-10-03 11:02:29.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:29 compute-0 podman[496569]: 2025-10-03 11:02:28.947392787 +0000 UTC m=+0.045921355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:02:29 compute-0 systemd[1]: Started libpod-conmon-7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01.scope.
Oct  3 11:02:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ff86d9b7a0a15b3ebd380c72a04e048427c71cd552c86766b5c6e915d97859/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ff86d9b7a0a15b3ebd380c72a04e048427c71cd552c86766b5c6e915d97859/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ff86d9b7a0a15b3ebd380c72a04e048427c71cd552c86766b5c6e915d97859/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ff86d9b7a0a15b3ebd380c72a04e048427c71cd552c86766b5c6e915d97859/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:29 compute-0 podman[496569]: 2025-10-03 11:02:29.13064956 +0000 UTC m=+0.229178038 container init 7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 11:02:29 compute-0 podman[496569]: 2025-10-03 11:02:29.146954353 +0000 UTC m=+0.245482871 container start 7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_visvesvaraya, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:02:29 compute-0 podman[496569]: 2025-10-03 11:02:29.153764041 +0000 UTC m=+0.252292519 container attach 7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_visvesvaraya, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:02:29 compute-0 podman[157165]: time="2025-10-03T11:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:02:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47842 "" "Go-http-client/1.1"
Oct  3 11:02:29 compute-0 nova_compute[351685]: 2025-10-03 11:02:29.799 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:02:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9513 "" "Go-http-client/1.1"
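
The two GET requests above are clients hitting podman's libpod REST API on its local socket. A stdlib-only sketch that issues the same containers/json query; the socket path /run/podman/podman.sock is an assumption (the usual rootful default):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection carried over a local UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))
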
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]: {
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:    "0": [
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:        {
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "devices": [
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "/dev/loop3"
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            ],
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "lv_name": "ceph_lv0",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "lv_size": "21470642176",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "name": "ceph_lv0",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:            "tags": {
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "ceph.cluster_name": "ceph",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "ceph.crush_device_class": "",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "ceph.encrypted": "0",
Oct  3 11:02:29 compute-0 amazing_visvesvaraya[496585]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.osd_id": "0",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.type": "block",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.vdo": "0"
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            },
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "type": "block",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "vg_name": "ceph_vg0"
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:        }
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:    ],
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:    "1": [
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:        {
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "devices": [
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "/dev/loop4"
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            ],
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_name": "ceph_lv1",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_size": "21470642176",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "name": "ceph_lv1",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "tags": {
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.cluster_name": "ceph",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.crush_device_class": "",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.encrypted": "0",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.osd_id": "1",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.type": "block",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.vdo": "0"
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            },
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "type": "block",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "vg_name": "ceph_vg1"
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:        }
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:    ],
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:    "2": [
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:        {
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "devices": [
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "/dev/loop5"
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            ],
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_name": "ceph_lv2",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_size": "21470642176",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "name": "ceph_lv2",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "tags": {
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.cluster_name": "ceph",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.crush_device_class": "",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.encrypted": "0",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.osd_id": "2",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.type": "block",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:                "ceph.vdo": "0"
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            },
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "type": "block",
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:            "vg_name": "ceph_vg2"
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:        }
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]:    ]
Oct  3 11:02:30 compute-0 amazing_visvesvaraya[496585]: }
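
The JSON block emitted by amazing_visvesvaraya has the shape of `ceph-volume lvm list --format json` output: OSD ids as keys, one LV record per OSD, and the ceph.* metadata duplicated in lv_tags and tags. A small sketch that reduces that structure to one line per OSD (assumes ceph-volume is on PATH, e.g. inside the ceph container):

    import json
    import subprocess

    # Same structure as logged above: {"0": [{"lv_path": ..., "tags": {...}}], ...}
    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"], text=True)
    for osd_id, lvs in sorted(json.loads(out).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" devices={','.join(lv['devices'])}"
                  f" osd_fsid={tags.get('ceph.osd_fsid', '?')}")
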
Oct  3 11:02:30 compute-0 systemd[1]: libpod-7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01.scope: Deactivated successfully.
Oct  3 11:02:30 compute-0 podman[496569]: 2025-10-03 11:02:30.038364485 +0000 UTC m=+1.136893003 container died 7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_visvesvaraya, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-08ff86d9b7a0a15b3ebd380c72a04e048427c71cd552c86766b5c6e915d97859-merged.mount: Deactivated successfully.
Oct  3 11:02:30 compute-0 podman[496569]: 2025-10-03 11:02:30.152627914 +0000 UTC m=+1.251156432 container remove 7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_visvesvaraya, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:02:30 compute-0 systemd[1]: libpod-conmon-7fda744b298268eb781c378ea4ec2f52fc574a15165503b0b54554fbb632aa01.scope: Deactivated successfully.
Oct  3 11:02:30 compute-0 nova_compute[351685]: 2025-10-03 11:02:30.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:02:30 compute-0 nova_compute[351685]: 2025-10-03 11:02:30.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:02:30 compute-0 nova_compute[351685]: 2025-10-03 11:02:30.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:02:30 compute-0 nova_compute[351685]: 2025-10-03 11:02:30.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:02:30 compute-0 nova_compute[351685]: 2025-10-03 11:02:30.761 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:02:30 compute-0 nova_compute[351685]: 2025-10-03 11:02:30.761 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:02:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:02:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185494619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.294 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
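
nova-compute shells out to `ceph df` here to audit capacity for its RBD storage backend. A sketch of the same call; the stats key names are assumptions based on the standard `ceph df --format=json` output:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"], text=True)
    stats = json.loads(out)["stats"]
    print(f"total: {stats['total_bytes'] / 2**30:.1f} GiB")
    print(f"avail: {stats['total_avail_bytes'] / 2**30:.1f} GiB")
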
Oct  3 11:02:31 compute-0 podman[496765]: 2025-10-03 11:02:31.311011506 +0000 UTC m=+0.082417457 container create da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:02:31 compute-0 podman[496765]: 2025-10-03 11:02:31.273429289 +0000 UTC m=+0.044835280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:02:31 compute-0 systemd[1]: Started libpod-conmon-da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1.scope.
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.383 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.383 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.384 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:02:31 compute-0 openstack_network_exporter[367524]: ERROR   11:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:02:31 compute-0 openstack_network_exporter[367524]: ERROR   11:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:02:31 compute-0 openstack_network_exporter[367524]: ERROR   11:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:02:31 compute-0 openstack_network_exporter[367524]: ERROR   11:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:02:31 compute-0 openstack_network_exporter[367524]: ERROR   11:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
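
All four exporter errors above reduce to one condition: no ovs/ovn control sockets are visible to the exporter. A quick check for the socket files it is looking for (the rundir paths are assumptions; they are the usual defaults):

    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "none found")
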
Oct  3 11:02:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:02:31 compute-0 podman[496765]: 2025-10-03 11:02:31.439103367 +0000 UTC m=+0.210509368 container init da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:02:31 compute-0 podman[496765]: 2025-10-03 11:02:31.458621714 +0000 UTC m=+0.230027655 container start da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:02:31 compute-0 podman[496765]: 2025-10-03 11:02:31.462650743 +0000 UTC m=+0.234056734 container attach da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:02:31 compute-0 priceless_keller[496783]: 167 167
Oct  3 11:02:31 compute-0 systemd[1]: libpod-da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1.scope: Deactivated successfully.
Oct  3 11:02:31 compute-0 conmon[496783]: conmon da9378ffbafa0bd1c3a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1.scope/container/memory.events
Oct  3 11:02:31 compute-0 podman[496765]: 2025-10-03 11:02:31.470720832 +0000 UTC m=+0.242126783 container died da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:02:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5224d985e0515be81ae0afecf5c694ea1d90ed4c211cb4a7a9213717cd3a0350-merged.mount: Deactivated successfully.
Oct  3 11:02:31 compute-0 podman[496765]: 2025-10-03 11:02:31.529466818 +0000 UTC m=+0.300872749 container remove da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 11:02:31 compute-0 systemd[1]: libpod-conmon-da9378ffbafa0bd1c3a068c35d1338273958792640d1ab533ffd828f3315faf1.scope: Deactivated successfully.
Oct  3 11:02:31 compute-0 podman[496804]: 2025-10-03 11:02:31.778496571 +0000 UTC m=+0.063000933 container create d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.791 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.793 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3776MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.793 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.793 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:02:31 compute-0 systemd[1]: Started libpod-conmon-d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467.scope.
Oct  3 11:02:31 compute-0 podman[496804]: 2025-10-03 11:02:31.755286036 +0000 UTC m=+0.039790378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:02:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad77c6ef05ce893bff5d398e93c5388900eb6a4d4c423cbe7c4cf1c9cd98ff38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad77c6ef05ce893bff5d398e93c5388900eb6a4d4c423cbe7c4cf1c9cd98ff38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad77c6ef05ce893bff5d398e93c5388900eb6a4d4c423cbe7c4cf1c9cd98ff38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad77c6ef05ce893bff5d398e93c5388900eb6a4d4c423cbe7c4cf1c9cd98ff38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.890 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.891 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.891 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:02:31 compute-0 podman[496804]: 2025-10-03 11:02:31.901682245 +0000 UTC m=+0.186186577 container init d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 11:02:31 compute-0 podman[496804]: 2025-10-03 11:02:31.91335677 +0000 UTC m=+0.197861092 container start d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 11:02:31 compute-0 podman[496804]: 2025-10-03 11:02:31.917083219 +0000 UTC m=+0.201587631 container attach d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  3 11:02:31 compute-0 nova_compute[351685]: 2025-10-03 11:02:31.937 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:02:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:02:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079364155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:02:32 compute-0 nova_compute[351685]: 2025-10-03 11:02:32.466 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:02:32 compute-0 nova_compute[351685]: 2025-10-03 11:02:32.479 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:02:32 compute-0 nova_compute[351685]: 2025-10-03 11:02:32.497 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:02:32 compute-0 nova_compute[351685]: 2025-10-03 11:02:32.499 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:02:32 compute-0 nova_compute[351685]: 2025-10-03 11:02:32.500 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
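
The inventory reported to placement at 11:02:32.497 fixes this node's schedulable capacity. A worked sketch of the usual placement formula, (total - reserved) * allocation_ratio, applied to those numbers:

    # Inventory values copied from the log line at 11:02:32.497.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        capacity = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
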
Oct  3 11:02:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]: {
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "osd_id": 1,
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "type": "bluestore"
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:    },
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "osd_id": 2,
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "type": "bluestore"
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:    },
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "osd_id": 0,
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:        "type": "bluestore"
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]:    }
Oct  3 11:02:32 compute-0 mystifying_burnell[496820]: }
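
This second listing is keyed by OSD uuid rather than OSD id, consistent with `ceph-volume raw list` output: the same three bluestore OSDs, now reported by device-mapper path. A sketch that re-sorts it by osd_id (paste the JSON object above on stdin):

    import json
    import sys

    raw = json.load(sys.stdin)  # the uuid-keyed JSON object logged above
    for uuid, osd in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: {osd['device']} ({osd['type']}) uuid={uuid}")
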
Oct  3 11:02:32 compute-0 systemd[1]: libpod-d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467.scope: Deactivated successfully.
Oct  3 11:02:32 compute-0 podman[496804]: 2025-10-03 11:02:32.927020627 +0000 UTC m=+1.211524989 container died d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:02:32 compute-0 systemd[1]: libpod-d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467.scope: Consumed 1.019s CPU time.
Oct  3 11:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad77c6ef05ce893bff5d398e93c5388900eb6a4d4c423cbe7c4cf1c9cd98ff38-merged.mount: Deactivated successfully.
Oct  3 11:02:33 compute-0 podman[496804]: 2025-10-03 11:02:33.017875963 +0000 UTC m=+1.302380295 container remove d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:02:33 compute-0 systemd[1]: libpod-conmon-d5d46bc1d6635fb1212df62ef2b12000bfd0e23850f7fa1fb303d913d1bbb467.scope: Deactivated successfully.
Oct  3 11:02:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:02:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:02:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:02:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:02:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a6f41e16-2cf5-4dba-ab85-57e8fcda9e70 does not exist
Oct  3 11:02:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 10d40a30-24db-4924-bf22-649268d0fe33 does not exist
Oct  3 11:02:33 compute-0 nova_compute[351685]: 2025-10-03 11:02:33.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:34 compute-0 nova_compute[351685]: 2025-10-03 11:02:34.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:02:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:02:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:35 compute-0 nova_compute[351685]: 2025-10-03 11:02:35.500 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:02:35 compute-0 nova_compute[351685]: 2025-10-03 11:02:35.501 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:02:35 compute-0 nova_compute[351685]: 2025-10-03 11:02:35.501 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:02:36 compute-0 nova_compute[351685]: 2025-10-03 11:02:36.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:02:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:37 compute-0 nova_compute[351685]: 2025-10-03 11:02:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:02:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:38 compute-0 nova_compute[351685]: 2025-10-03 11:02:38.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:39 compute-0 nova_compute[351685]: 2025-10-03 11:02:39.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:39 compute-0 nova_compute[351685]: 2025-10-03 11:02:39.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:02:39 compute-0 nova_compute[351685]: 2025-10-03 11:02:39.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:02:39 compute-0 podman[496941]: 2025-10-03 11:02:39.871025132 +0000 UTC m=+0.109885748 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:02:39 compute-0 podman[496938]: 2025-10-03 11:02:39.872002824 +0000 UTC m=+0.114642081 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:02:39 compute-0 podman[496940]: 2025-10-03 11:02:39.882892903 +0000 UTC m=+0.124420525 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  3 11:02:39 compute-0 podman[496959]: 2025-10-03 11:02:39.889699752 +0000 UTC m=+0.102108609 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 11:02:39 compute-0 podman[496942]: 2025-10-03 11:02:39.909119175 +0000 UTC m=+0.139073825 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Oct  3 11:02:39 compute-0 podman[496939]: 2025-10-03 11:02:39.909717914 +0000 UTC m=+0.153491748 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Oct  3 11:02:39 compute-0 podman[496943]: 2025-10-03 11:02:39.93918285 +0000 UTC m=+0.150973707 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
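The six entries above are one round of podman health-check events for the EDPM-managed containers; each payload embeds the container id, image, name and health_status, followed by the full label and config_data dump. A minimal parsing sketch for lines of this exact shape (the field list is an assumption based on the payloads above; the label dump is ignored):

```python
import re

# Matches podman's "container health_status" journal payload, e.g.
#   container health_status <64-hex-id> (image=..., name=..., health_status=healthy, ...)
EVENT_RE = re.compile(r"container health_status (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*)\)$")

def parse_health_event(line):
    """Extract container id, name, image and health_status from one journal line."""
    m = EVENT_RE.search(line)
    if not m:
        return None
    attrs = m.group("attrs")
    out = {"cid": m.group("cid")}
    for key in ("image", "name", "health_status", "health_failing_streak"):
        # First match wins; in these payloads podman emits the event fields
        # before the image label dump, so a naive scan is good enough here.
        km = re.search(rf"\b{key}=([^,)]+)", attrs)
        if km:
            out[key] = km.group(1)
    return out

# e.g. parse_health_event(journal_line)["health_status"] -> "healthy"
```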
Oct  3 11:02:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:02:41.669 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:02:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:02:41.670 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:02:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:02:41.671 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
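The acquire/acquired/released triplet above is the standard DEBUG trace emitted by oslo.concurrency's lockutils; neutron's ProcessMonitor wraps its child-process check in such a lock. A minimal sketch that reproduces the same three-line trace, assuming the oslo.concurrency package is available (the lock name is the real one from the log, the function body is illustrative):

```python
import logging

from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)  # surfaces the acquire/release trace

@lockutils.synchronized('_check_child_processes')
def check_child_processes():
    # Illustrative body; neutron's ProcessMonitor checks and respawns
    # external child processes (e.g. haproxy) under this lock.
    pass

check_child_processes()
```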
Oct  3 11:02:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:43 compute-0 nova_compute[351685]: 2025-10-03 11:02:43.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:44 compute-0 nova_compute[351685]: 2025-10-03 11:02:44.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
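The recurring "[POLLIN] on fd 25" lines are the OVSDB IDL's poll loop waking up whenever the monitor socket has data; ovsdbapp routes the OVS vlog output into standard Python logging, so the module name in the journal line is also the logger name. If the chatter is unwanted, raising that logger's level is enough (a sketch; the logger name is taken verbatim from the lines above):

```python
import logging

# Quiet the per-wakeup DEBUG trace from the OVSDB IDL poll loop.
logging.getLogger('ovsdbapp.backend.ovs_idl.vlog').setLevel(logging.INFO)
```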
Oct  3 11:02:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:02:46
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'vms', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
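This block is one pass of the mgr balancer module: it builds an optimize plan, runs do_upmap across the listed pools, and prepares 0 of a maximum 10 changes because the PGs are already balanced (max misplaced ratio 0.050000). The same state can be inspected from a client; a sketch using the ceph CLI's JSON output (assumes an admin keyring on the node):

```python
import json
import subprocess

# 'ceph balancer status' reports the module's mode and whether it is active.
status = json.loads(subprocess.check_output(
    ['ceph', 'balancer', 'status', '--format', 'json']))
print(status.get('mode'), status.get('active'))
```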
Oct  3 11:02:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:02:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:02:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:48 compute-0 nova_compute[351685]: 2025-10-03 11:02:48.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:49 compute-0 nova_compute[351685]: 2025-10-03 11:02:49.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:50 compute-0 podman[497072]: 2025-10-03 11:02:50.823107836 +0000 UTC m=+0.077972324 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:02:50 compute-0 podman[497074]: 2025-10-03 11:02:50.844806842 +0000 UTC m=+0.089935347 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 11:02:50 compute-0 podman[497073]: 2025-10-03 11:02:50.852971984 +0000 UTC m=+0.096620802 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, distribution-scope=public, vcs-type=git, container_name=kepler)
Oct  3 11:02:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:02:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.2 total, 600.0 interval#012Cumulative writes: 8918 writes, 32K keys, 8918 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 8918 writes, 2332 syncs, 3.82 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
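The RocksDB stats dump arrives as a single long line because the syslog transport octal-escapes embedded newlines as #012 (octal 012 is LF). Undoing the escape restores the original multi-line block; a self-contained sketch using a shortened sample of the payload above:

```python
# '#012' in these journal lines is an octal-escaped newline.
raw = ("rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **"
       "#012Uptime(secs): 5400.2 total, 600.0 interval"
       "#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent")
print(raw.replace('#012', '\n'))
```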
Oct  3 11:02:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:53 compute-0 nova_compute[351685]: 2025-10-03 11:02:53.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:54 compute-0 nova_compute[351685]: 2025-10-03 11:02:54.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:02:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2907022588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:02:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:02:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2907022588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
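The two audit pairs above show the client.openstack credential polling cluster capacity: a "df" mon command followed by a per-pool "osd pool get-quota". The same commands can be issued through the librados Python binding; a sketch assuming a readable ceph.conf and the client.openstack keyring on the node:

```python
import json

import rados  # python3-rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
try:
    # Exactly the two mon commands seen in the audit lines above.
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
        print(cmd["prefix"], '-> rc', ret, len(out), 'bytes of JSON')
finally:
    cluster.shutdown()
```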
Oct  3 11:02:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
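Every pg_autoscaler line above applies the same formula: the pool's share of raw capacity, times its bias, times a cluster-wide PG budget, then quantized to a power of two (subject to minimums, which is why tiny pools stay at 32). The budget works out to 300 in this log, consistent with mon_target_pg_per_osd=100 on 3 OSDs (an assumption, but it matches every line: 0.000551649 x 1.0 x 300 for 'vms', 5.0873e-07 x 4.0 x 300 for 'cephfs.cephfs.meta'). A worked sketch:

```python
# Reproduce the 'vms' and 'cephfs.cephfs.meta' pg targets from the log.
# pg_budget = mon_target_pg_per_osd * num_osds (assumed 100 * 3 here).
PG_BUDGET = 300

def pg_target(usage_ratio, bias, budget=PG_BUDGET):
    return usage_ratio * bias * budget

print(pg_target(0.000551649390343166, 1.0))   # 0.1654948... as logged for 'vms'
print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707... as logged for cephfs.meta
```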
Oct  3 11:02:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:02:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 9704 writes, 35K keys, 9704 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 9704 writes, 2548 syncs, 3.81 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:02:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:02:58 compute-0 nova_compute[351685]: 2025-10-03 11:02:58.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:02:59 compute-0 nova_compute[351685]: 2025-10-03 11:02:59.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:02:59 compute-0 podman[157165]: time="2025-10-03T11:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:02:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:02:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9088 "" "Go-http-client/1.1"
Oct  3 11:03:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:01 compute-0 openstack_network_exporter[367524]: ERROR   11:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:03:01 compute-0 openstack_network_exporter[367524]: ERROR   11:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:03:01 compute-0 openstack_network_exporter[367524]: ERROR   11:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:03:01 compute-0 openstack_network_exporter[367524]: ERROR   11:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:03:01 compute-0 openstack_network_exporter[367524]: ERROR   11:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
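These exporter errors are expected on a compute node: it probes ovs-appctl-style control sockets for ovsdb-server and ovn-northd, but northd only runs on controller nodes, and the dpif-netdev queries fail because there is no userspace datapath here. The probe amounts to looking for <daemon>.<pid>.ctl files in the OVS/OVN run directories; a sketch of the same check (the run-directory paths are the usual defaults, an assumption):

```python
import glob

# ovs-appctl locates a daemon via its <name>.<pid>.ctl control socket.
for rundir, daemon in (('/var/run/openvswitch', 'ovsdb-server'),
                       ('/var/run/ovn', 'ovn-northd')):
    hits = glob.glob(f'{rundir}/{daemon}.*.ctl')
    print(daemon, '->', hits or 'no control socket files found')
```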
Oct  3 11:03:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:03:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 7885 writes, 29K keys, 7885 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 7885 writes, 1901 syncs, 4.15 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:03:03 compute-0 nova_compute[351685]: 2025-10-03 11:03:03.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:03:04 compute-0 nova_compute[351685]: 2025-10-03 11:03:04.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:03:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 11:03:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:08 compute-0 nova_compute[351685]: 2025-10-03 11:03:08.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:03:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:09 compute-0 nova_compute[351685]: 2025-10-03 11:03:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:03:10 compute-0 podman[497134]: 2025-10-03 11:03:10.878282152 +0000 UTC m=+0.103612867 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 11:03:10 compute-0 podman[497132]: 2025-10-03 11:03:10.898138919 +0000 UTC m=+0.137477643 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:03:10 compute-0 podman[497133]: 2025-10-03 11:03:10.901751965 +0000 UTC m=+0.134036373 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Oct  3 11:03:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:10 compute-0 podman[497148]: 2025-10-03 11:03:10.917512371 +0000 UTC m=+0.114406353 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:03:10 compute-0 podman[497135]: 2025-10-03 11:03:10.93432343 +0000 UTC m=+0.153740705 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 11:03:10 compute-0 podman[497142]: 2025-10-03 11:03:10.936888794 +0000 UTC m=+0.144453329 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 11:03:10 compute-0 podman[497136]: 2025-10-03 11:03:10.936997967 +0000 UTC m=+0.162669102 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct  3 11:03:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:13 compute-0 nova_compute[351685]: 2025-10-03 11:03:13.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:03:14 compute-0 nova_compute[351685]: 2025-10-03 11:03:14.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:03:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:03:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:03:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:18 compute-0 nova_compute[351685]: 2025-10-03 11:03:18.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:03:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:19 compute-0 nova_compute[351685]: 2025-10-03 11:03:19.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:03:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:21 compute-0 podman[497269]: 2025-10-03 11:03:21.83730582 +0000 UTC m=+0.090851707 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:03:21 compute-0 podman[497270]: 2025-10-03 11:03:21.862076825 +0000 UTC m=+0.103954358 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9)
Oct  3 11:03:21 compute-0 podman[497271]: 2025-10-03 11:03:21.885052323 +0000 UTC m=+0.120525310 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:03:22 compute-0 nova_compute[351685]: 2025-10-03 11:03:22.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:03:22 compute-0 nova_compute[351685]: 2025-10-03 11:03:22.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:03:22 compute-0 nova_compute[351685]: 2025-10-03 11:03:22.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:03:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:23 compute-0 nova_compute[351685]: 2025-10-03 11:03:23.350 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:03:23 compute-0 nova_compute[351685]: 2025-10-03 11:03:23.351 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:03:23 compute-0 nova_compute[351685]: 2025-10-03 11:03:23.351 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:03:23 compute-0 nova_compute[351685]: 2025-10-03 11:03:23.353 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:03:23 compute-0 nova_compute[351685]: 2025-10-03 11:03:23.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:24 compute-0 nova_compute[351685]: 2025-10-03 11:03:24.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:24 compute-0 nova_compute[351685]: 2025-10-03 11:03:24.711 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:03:24 compute-0 nova_compute[351685]: 2025-10-03 11:03:24.733 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:03:24 compute-0 nova_compute[351685]: 2025-10-03 11:03:24.734 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:03:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:28 compute-0 nova_compute[351685]: 2025-10-03 11:03:28.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:29 compute-0 nova_compute[351685]: 2025-10-03 11:03:29.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:29 compute-0 podman[157165]: time="2025-10-03T11:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:03:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:03:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9092 "" "Go-http-client/1.1"
Oct  3 11:03:30 compute-0 nova_compute[351685]: 2025-10-03 11:03:30.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:03:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:31 compute-0 openstack_network_exporter[367524]: ERROR   11:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:03:31 compute-0 openstack_network_exporter[367524]: ERROR   11:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:03:31 compute-0 openstack_network_exporter[367524]: ERROR   11:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:03:31 compute-0 openstack_network_exporter[367524]: ERROR   11:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:03:31 compute-0 openstack_network_exporter[367524]: ERROR   11:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:03:31 compute-0 nova_compute[351685]: 2025-10-03 11:03:31.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:03:31 compute-0 nova_compute[351685]: 2025-10-03 11:03:31.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:03:31 compute-0 nova_compute[351685]: 2025-10-03 11:03:31.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:03:31 compute-0 nova_compute[351685]: 2025-10-03 11:03:31.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:03:31 compute-0 nova_compute[351685]: 2025-10-03 11:03:31.762 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:03:31 compute-0 nova_compute[351685]: 2025-10-03 11:03:31.764 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:03:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:03:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780951681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.283 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.390 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.391 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.392 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:03:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.881 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.883 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3796MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.884 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.884 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:03:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2886: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.969 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.970 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.970 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:03:32 compute-0 nova_compute[351685]: 2025-10-03 11:03:32.991 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.011 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.012 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.028 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.056 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.095 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:03:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:03:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1765908561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.601 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.610 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.635 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.638 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.638 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:03:33 compute-0 nova_compute[351685]: 2025-10-03 11:03:33.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:34 compute-0 nova_compute[351685]: 2025-10-03 11:03:34.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:03:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:03:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:03:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:03:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:03:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:03:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 228a980d-0beb-454d-b763-bfb7492122b2 does not exist
Oct  3 11:03:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 14fac2a4-f6d0-41f3-b566-0734864d1db0 does not exist
Oct  3 11:03:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b7fb7fc4-ac55-4107-bf93-8924ad7bfdbe does not exist
Oct  3 11:03:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:03:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:03:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:03:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:03:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:03:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:03:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:03:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:03:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:03:35 compute-0 nova_compute[351685]: 2025-10-03 11:03:35.640 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:03:35 compute-0 nova_compute[351685]: 2025-10-03 11:03:35.640 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:03:35 compute-0 nova_compute[351685]: 2025-10-03 11:03:35.640 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:03:35 compute-0 podman[497644]: 2025-10-03 11:03:35.648725457 +0000 UTC m=+0.076626100 container create eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:03:35 compute-0 systemd[1]: Started libpod-conmon-eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4.scope.
Oct  3 11:03:35 compute-0 podman[497644]: 2025-10-03 11:03:35.621366929 +0000 UTC m=+0.049267552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:03:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:03:35 compute-0 podman[497644]: 2025-10-03 11:03:35.791602824 +0000 UTC m=+0.219503547 container init eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_spence, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 11:03:35 compute-0 podman[497644]: 2025-10-03 11:03:35.808066182 +0000 UTC m=+0.235966795 container start eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 11:03:35 compute-0 podman[497644]: 2025-10-03 11:03:35.813472725 +0000 UTC m=+0.241373388 container attach eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_spence, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:03:35 compute-0 awesome_spence[497660]: 167 167
Oct  3 11:03:35 compute-0 systemd[1]: libpod-eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4.scope: Deactivated successfully.
Oct  3 11:03:35 compute-0 podman[497644]: 2025-10-03 11:03:35.816206683 +0000 UTC m=+0.244107296 container died eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_spence, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0b0c8a21bc0fd4d5ceb2cece7bc622515e0828ca173fce3f7db078433575695-merged.mount: Deactivated successfully.
Oct  3 11:03:35 compute-0 podman[497644]: 2025-10-03 11:03:35.879099631 +0000 UTC m=+0.307000274 container remove eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 11:03:35 compute-0 systemd[1]: libpod-conmon-eab6188f2a67bdedc5082329463ff77d2598018cdcb725cb618d3146a4c228c4.scope: Deactivated successfully.
Oct  3 11:03:36 compute-0 podman[497683]: 2025-10-03 11:03:36.152413955 +0000 UTC m=+0.095252598 container create ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 11:03:36 compute-0 podman[497683]: 2025-10-03 11:03:36.116471321 +0000 UTC m=+0.059310074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:03:36 compute-0 systemd[1]: Started libpod-conmon-ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67.scope.
Oct  3 11:03:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9b137d298243c9e42b9e7ed7bd60a41cd88ceb6abbe7758a5ff664ee2da707/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9b137d298243c9e42b9e7ed7bd60a41cd88ceb6abbe7758a5ff664ee2da707/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9b137d298243c9e42b9e7ed7bd60a41cd88ceb6abbe7758a5ff664ee2da707/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9b137d298243c9e42b9e7ed7bd60a41cd88ceb6abbe7758a5ff664ee2da707/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9b137d298243c9e42b9e7ed7bd60a41cd88ceb6abbe7758a5ff664ee2da707/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:36 compute-0 podman[497683]: 2025-10-03 11:03:36.311663417 +0000 UTC m=+0.254502090 container init ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:03:36 compute-0 podman[497683]: 2025-10-03 11:03:36.327811915 +0000 UTC m=+0.270650558 container start ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:03:36 compute-0 podman[497683]: 2025-10-03 11:03:36.333050133 +0000 UTC m=+0.275888846 container attach ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:03:36 compute-0 nova_compute[351685]: 2025-10-03 11:03:36.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:03:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:37 compute-0 lucid_galois[497700]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:03:37 compute-0 lucid_galois[497700]: --> relative data size: 1.0
Oct  3 11:03:37 compute-0 lucid_galois[497700]: --> All data devices are unavailable
Oct  3 11:03:37 compute-0 systemd[1]: libpod-ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67.scope: Deactivated successfully.
Oct  3 11:03:37 compute-0 systemd[1]: libpod-ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67.scope: Consumed 1.292s CPU time.
Oct  3 11:03:37 compute-0 podman[497683]: 2025-10-03 11:03:37.688059436 +0000 UTC m=+1.630898110 container died ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:03:37 compute-0 nova_compute[351685]: 2025-10-03 11:03:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a9b137d298243c9e42b9e7ed7bd60a41cd88ceb6abbe7758a5ff664ee2da707-merged.mount: Deactivated successfully.
Oct  3 11:03:37 compute-0 podman[497683]: 2025-10-03 11:03:37.768523039 +0000 UTC m=+1.711361672 container remove ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:03:37 compute-0 systemd[1]: libpod-conmon-ced03dc2eb1421550af9ba949b8f1c7da0e4e132b61e637f667a53fbd2642d67.scope: Deactivated successfully.
Oct  3 11:03:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:38 compute-0 podman[497880]: 2025-10-03 11:03:38.860244762 +0000 UTC m=+0.093611216 container create 9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:03:38 compute-0 nova_compute[351685]: 2025-10-03 11:03:38.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:38 compute-0 podman[497880]: 2025-10-03 11:03:38.824881327 +0000 UTC m=+0.058247841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:03:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:38 compute-0 systemd[1]: Started libpod-conmon-9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37.scope.
Oct  3 11:03:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:03:38 compute-0 podman[497880]: 2025-10-03 11:03:38.997795557 +0000 UTC m=+0.231162061 container init 9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:03:39 compute-0 podman[497880]: 2025-10-03 11:03:39.014312747 +0000 UTC m=+0.247679171 container start 9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  3 11:03:39 compute-0 podman[497880]: 2025-10-03 11:03:39.018342417 +0000 UTC m=+0.251708851 container attach 9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:03:39 compute-0 quizzical_newton[497895]: 167 167
Oct  3 11:03:39 compute-0 systemd[1]: libpod-9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37.scope: Deactivated successfully.
Oct  3 11:03:39 compute-0 podman[497880]: 2025-10-03 11:03:39.025513517 +0000 UTC m=+0.258879941 container died 9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:03:39 compute-0 nova_compute[351685]: 2025-10-03 11:03:39.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-09f248a5b98cfbdc908880095e4a204390a9784c4a82d229d536febb7783d11d-merged.mount: Deactivated successfully.
Oct  3 11:03:39 compute-0 podman[497880]: 2025-10-03 11:03:39.07762772 +0000 UTC m=+0.310994124 container remove 9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 11:03:39 compute-0 systemd[1]: libpod-conmon-9b950cdcb2d77bf2fa2fe59127429baff9e344f6422ae0c053aef32e1bc53f37.scope: Deactivated successfully.
Oct  3 11:03:39 compute-0 podman[497919]: 2025-10-03 11:03:39.291108143 +0000 UTC m=+0.068424388 container create a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct  3 11:03:39 compute-0 podman[497919]: 2025-10-03 11:03:39.261681358 +0000 UTC m=+0.038997633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:03:39 compute-0 systemd[1]: Started libpod-conmon-a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71.scope.
Oct  3 11:03:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d795328051271c70185a5c148bae8e8629358e50609cbfa5ff351819e2ce5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d795328051271c70185a5c148bae8e8629358e50609cbfa5ff351819e2ce5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d795328051271c70185a5c148bae8e8629358e50609cbfa5ff351819e2ce5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d795328051271c70185a5c148bae8e8629358e50609cbfa5ff351819e2ce5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:39 compute-0 podman[497919]: 2025-10-03 11:03:39.411545077 +0000 UTC m=+0.188861312 container init a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  3 11:03:39 compute-0 podman[497919]: 2025-10-03 11:03:39.429165933 +0000 UTC m=+0.206482158 container start a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:03:39 compute-0 podman[497919]: 2025-10-03 11:03:39.43309662 +0000 UTC m=+0.210412865 container attach a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 11:03:39 compute-0 nova_compute[351685]: 2025-10-03 11:03:39.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]: {
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:    "0": [
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:        {
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "devices": [
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "/dev/loop3"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            ],
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_name": "ceph_lv0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_size": "21470642176",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "name": "ceph_lv0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "tags": {
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cluster_name": "ceph",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.crush_device_class": "",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.encrypted": "0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osd_id": "0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.type": "block",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.vdo": "0"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            },
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "type": "block",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "vg_name": "ceph_vg0"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:        }
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:    ],
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:    "1": [
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:        {
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "devices": [
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "/dev/loop4"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            ],
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_name": "ceph_lv1",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_size": "21470642176",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "name": "ceph_lv1",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "tags": {
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cluster_name": "ceph",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.crush_device_class": "",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.encrypted": "0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osd_id": "1",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.type": "block",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.vdo": "0"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            },
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "type": "block",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "vg_name": "ceph_vg1"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:        }
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:    ],
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:    "2": [
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:        {
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "devices": [
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "/dev/loop5"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            ],
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_name": "ceph_lv2",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_size": "21470642176",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "name": "ceph_lv2",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "tags": {
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.cluster_name": "ceph",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.crush_device_class": "",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.encrypted": "0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osd_id": "2",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.type": "block",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:                "ceph.vdo": "0"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            },
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "type": "block",
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:            "vg_name": "ceph_vg2"
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:        }
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]:    ]
Oct  3 11:03:40 compute-0 nifty_dijkstra[497935]: }
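The JSON block above, printed by the one-shot nifty_dijkstra container, maps each OSD id ("0", "1", "2") to the logical volume backing it; the shape matches what `ceph-volume lvm list --format json` emits. A minimal sketch of consuming such output, assuming the JSON has been captured to a file (the file name and the printed fields are illustrative, not from the log):

    import json

    # Top-level keys are OSD ids; each value is a list of LV records like the
    # ones logged above (lv_path, devices, and a ceph.* tag dictionary).
    with open("ceph_volume_list.json") as f:  # hypothetical capture of the stdout above
        osd_map = json.load(f)

    for osd_id, lvs in sorted(osd_map.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid')} "
                  f"size_bytes={lv['lv_size']}")

Run against the data above, this prints one line per OSD, e.g. osd.0 backed by /dev/loop3 via /dev/ceph_vg0/ceph_lv0.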
Oct  3 11:03:40 compute-0 systemd[1]: libpod-a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71.scope: Deactivated successfully.
Oct  3 11:03:40 compute-0 podman[497919]: 2025-10-03 11:03:40.301800173 +0000 UTC m=+1.079116388 container died a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 11:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d795328051271c70185a5c148bae8e8629358e50609cbfa5ff351819e2ce5e-merged.mount: Deactivated successfully.
Oct  3 11:03:40 compute-0 podman[497919]: 2025-10-03 11:03:40.375781498 +0000 UTC m=+1.153097733 container remove a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_dijkstra, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:03:40 compute-0 systemd[1]: libpod-conmon-a71e8df94dbec8b3014bb7e0c39ee4b3d0d698476bfd8a15eabe7b7120200e71.scope: Deactivated successfully.
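The start, attach, died, and remove events for container a71e8df94dbe (together with systemd deactivating its libpod scope and overlay mount) trace the full lifecycle of a short-lived, auto-removed container: it ran for roughly one second, long enough to print the JSON above. A hedged reconstruction of an equivalent invocation follows; the actual entrypoint and mounts are not recorded in the log, and `ceph-volume lvm list` is inferred from the output it produced:

    import json
    import subprocess

    # Pinned digest taken from the podman events above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm matches the immediate "container remove" event after exit; the
    # mount/privilege flags are assumptions needed for ceph-volume to see LVM.
    result = subprocess.run(
        ["podman", "run", "--rm", "--privileged",
         "-v", "/dev:/dev", "-v", "/run/lvm:/run/lvm",
         IMAGE, "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True)
    osd_map = json.loads(result.stdout)
    print(sorted(osd_map))  # expected: ['0', '1', '2'] on this host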
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.899 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.900 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.900 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.909 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.909 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.910 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:03:40.910312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.916 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.918 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.919 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:03:40.917936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:03:40.919218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.952 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.952 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.953 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.953 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.954 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:40.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:03:40.954066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.020 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:03:41.020796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:03:41.023433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.025 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.025 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.025 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.026 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.027 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:03:41.025892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:03:41.027898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.029 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.029 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.031 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.031 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:03:41.029800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.031 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:03:41.031766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.071 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.073 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:03:41.073410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.075 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:03:41.076184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.077 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.077 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.079 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.080 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:03:41.079290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.081 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.082 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:03:41.081598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.083 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.084 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:03:41.082936) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:03:41.084415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.086 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.087 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.088 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 85500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:03:41.086340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.089 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.089 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:03:41.088182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.090 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:03:41.089771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.091 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.092 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.093 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:03:41.091793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.094 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.094 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:03:41.093931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.095 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.095 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.095 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.096 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:03:41.095920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.097 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.098 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.098 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
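The cycle above carries three variants of the same interface counter: the raw cumulative reading (network.incoming.bytes, here 2856), a per-interval delta (network.incoming.bytes.delta), and a rate meter that is skipped when no new resources appear. The sketch below shows how a delta and a rate follow from two successive cumulative readings; the function name and the second reading are illustrative, not ceilometer's actual transformation code.

def delta_and_rate(prev_bytes, prev_ts, cur_bytes, cur_ts):
    """Derive per-interval delta and per-second rate from two
    cumulative byte counters (illustrative, not ceilometer code)."""
    delta = max(cur_bytes - prev_bytes, 0)  # guard against counter reset
    rate = delta / (cur_ts - prev_ts)       # bytes per second
    return delta, rate

# First reading taken from the log above (volume: 2856); the second
# reading and the 10 s interval are assumed for the example.
print(delta_and_rate(2856, 0.0, 4856, 10.0))  # -> (2000, 200.0)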
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.099 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.100 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:03:41.098071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:03:41.099935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:03:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:03:41.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
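The block above is one complete polling cycle for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a: for each meter the manager runs local_instances discovery, checks whether the pollster belongs to a source that needs hashring coordination (none do here, hence the [None] hashrings), records a heartbeat, converts the hypervisor stats into samples via _stats_to_sample, and finally logs the per-pollster completion seen in the "Finished processing pollster" burst. The following is a minimal sketch of that control flow under those assumptions; the class and method names are illustrative stand-ins, not ceilometer's real API.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Pollster:
    name: str                                # e.g. "disk.device.write.requests"
    coordination_group: Optional[str] = None # None -> no hashring consulted

    def get_samples(self, resources):
        # Stand-in for _stats_to_sample(): one numeric volume per resource.
        return [(res, self.name, 0) for res in resources]

@dataclass
class AgentManager:
    heartbeats: dict = field(default_factory=dict)

    def discover(self):
        # Stand-in for the "local_instances" discovery method.
        return ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"]

    def poll(self, pollster):
        resources = self.discover()
        if not resources:
            print(f"Skip pollster {pollster.name}, no new resources")
            return []
        # Mirrors "Checking if we need coordination": uncoordinated
        # sources poll directly, without a hashring membership test.
        self.heartbeats[pollster.name] = datetime.now(timezone.utc)
        samples = pollster.get_samples(resources)
        print(f"Finished polling pollster {pollster.name}")
        return samples

manager = AgentManager()
for meter in ("cpu", "memory.usage", "network.incoming.packets"):
    manager.poll(Pollster(meter))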
Oct  3 11:03:41 compute-0 podman[498092]: 2025-10-03 11:03:41.434102239 +0000 UTC m=+0.064419729 container create 04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:03:41 compute-0 podman[498092]: 2025-10-03 11:03:41.410959736 +0000 UTC m=+0.041277296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:03:41 compute-0 systemd[1]: Started libpod-conmon-04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7.scope.
Oct  3 11:03:41 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:03:41 compute-0 podman[498092]: 2025-10-03 11:03:41.591173771 +0000 UTC m=+0.221491261 container init 04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 11:03:41 compute-0 podman[498092]: 2025-10-03 11:03:41.601504502 +0000 UTC m=+0.231821972 container start 04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:03:41 compute-0 podman[498092]: 2025-10-03 11:03:41.60610049 +0000 UTC m=+0.236417960 container attach 04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:03:41 compute-0 cool_feynman[498147]: 167 167
Oct  3 11:03:41 compute-0 systemd[1]: libpod-04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7.scope: Deactivated successfully.
Oct  3 11:03:41 compute-0 podman[498092]: 2025-10-03 11:03:41.611060809 +0000 UTC m=+0.241378289 container died 04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c8bedb3111658d0f99fbe862ed091d935550d6a89ef38fd1c9cd3c83c104c12-merged.mount: Deactivated successfully.
Oct  3 11:03:41 compute-0 podman[498111]: 2025-10-03 11:03:41.652474438 +0000 UTC m=+0.114289519 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Oct  3 11:03:41 compute-0 podman[498110]: 2025-10-03 11:03:41.670116225 +0000 UTC m=+0.158220160 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  3 11:03:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:03:41.671 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:03:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:03:41.672 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:03:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:03:41.672 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:03:41 compute-0 podman[498092]: 2025-10-03 11:03:41.674898488 +0000 UTC m=+0.305215958 container remove 04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
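Interleaved with the health checks, podman[498092] traces a complete one-shot container lifecycle: image pull, create, init, start, attach, exit ("died") and remove, all within roughly 0.3 s. That sequence is the signature of a podman run --rm invocation. The container's single line of output, "167 167", is consistent with a cephadm-style probe printing the ceph uid/gid; that interpretation, and the stat probe and path below, are assumptions. A sketch of reproducing such a run from Python:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# "--rm" removes the container as soon as its command exits, producing
# the create/start/died/remove burst seen in the journal. The stat
# probe is an assumed stand-in for whatever printed "167 167".
result = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=False,
)
print(result.stdout.strip())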
Oct  3 11:03:41 compute-0 podman[498112]: 2025-10-03 11:03:41.678046979 +0000 UTC m=+0.145297564 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:03:41 compute-0 podman[498106]: 2025-10-03 11:03:41.678187984 +0000 UTC m=+0.164454609 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:03:41 compute-0 podman[498109]: 2025-10-03 11:03:41.680988354 +0000 UTC m=+0.166271219 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 11:03:41 compute-0 systemd[1]: libpod-conmon-04b283b45393f55344729b106f962498b76cd3edca1d9ee01b9c4d32f45f06a7.scope: Deactivated successfully.
Oct  3 11:03:41 compute-0 podman[498128]: 2025-10-03 11:03:41.687827573 +0000 UTC m=+0.140811930 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:03:41 compute-0 podman[498129]: 2025-10-03 11:03:41.702339689 +0000 UTC m=+0.162981163 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:03:41 compute-0 nova_compute[351685]: 2025-10-03 11:03:41.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
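The podman health_status events above embed each container's full config_data, but for monitoring only a handful of key=value fields matter. A minimal parsing sketch (assuming journald lines in exactly the format shown here; the field names are taken verbatim from the events above):

import re

FIELDS = ("name", "health_status", "health_failing_streak")

def parse_health_event(line):
    # Return the interesting fields of a "container health_status" journal
    # line, or None if the line is some other podman event.
    if " container health_status " not in line:
        return None
    return {k: m.group(1)
            for k in FIELDS
            if (m := re.search(r"\b%s=([^,)]+)" % k, line))}

sample = ("podman[498106]: ... container health_status 343e2afd... ("
          "image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, "
          "health_status=healthy, health_failing_streak=0, health_log=, ...)")
print(parse_health_event(sample))
# {'name': 'node_exporter', 'health_status': 'healthy', 'health_failing_streak': '0'}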
Oct  3 11:03:41 compute-0 podman[498264]: 2025-10-03 11:03:41.895619253 +0000 UTC m=+0.089621218 container create 1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:03:41 compute-0 podman[498264]: 2025-10-03 11:03:41.853883873 +0000 UTC m=+0.047885898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:03:41 compute-0 systemd[1]: Started libpod-conmon-1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea.scope.
Oct  3 11:03:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e72b9753ae758a137da2f1052fa50b84718d8e6b4d92330367103850cbcd100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e72b9753ae758a137da2f1052fa50b84718d8e6b4d92330367103850cbcd100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e72b9753ae758a137da2f1052fa50b84718d8e6b4d92330367103850cbcd100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e72b9753ae758a137da2f1052fa50b84718d8e6b4d92330367103850cbcd100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
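The repeated xfs warnings mark inodes that still use 32-bit timestamps, which run out at 0x7fffffff seconds after the Unix epoch; the 2038 cutoff quoted by the kernel can be verified directly:

from datetime import datetime, timezone

limit = 0x7fffffff  # largest value a signed 32-bit time_t can hold
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00 -- the "supports timestamps until 2038" cutoff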
Oct  3 11:03:42 compute-0 podman[498264]: 2025-10-03 11:03:42.035462082 +0000 UTC m=+0.229464097 container init 1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 11:03:42 compute-0 podman[498264]: 2025-10-03 11:03:42.053391077 +0000 UTC m=+0.247393042 container start 1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 11:03:42 compute-0 podman[498264]: 2025-10-03 11:03:42.060410022 +0000 UTC m=+0.254411967 container attach 1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:03:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
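The mon's _set_new_cache_sizes lines come from its cache autotuner, which splits one memory budget between the incremental-osdmap, full-osdmap and rocksdb caches (that reading of the three fields is an inference; the arithmetic below only checks the logged numbers against each other):

cache_size = 1020054731  # autotuner budget, bytes (~0.95 GiB)
inc_alloc  = 348127232   # 332 MiB
full_alloc = 348127232   # 332 MiB
kv_alloc   = 318767104   # 304 MiB

total = inc_alloc + full_alloc + kv_alloc
print(total, cache_size - total)            # 1015021568 used, ~4.8 MiB headroom
print(inc_alloc / 2**20, kv_alloc / 2**20)  # 332.0 304.0 (MiB)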
Oct  3 11:03:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]: {
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "osd_id": 1,
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "type": "bluestore"
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:    },
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "osd_id": 2,
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "type": "bluestore"
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:    },
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "osd_id": 0,
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:        "type": "bluestore"
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]:    }
Oct  3 11:03:43 compute-0 stupefied_margulis[498280]: }
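The JSON printed by the one-shot stupefied_margulis container is an OSD inventory matching the shape of `ceph-volume raw list --format json` output, one object per OSD uuid (presumably cephadm refreshing its device cache, given the config-key set that follows). A minimal sketch reducing it to an osd_id -> device map, using two of the entries shown above:

import json

inventory = json.loads("""
{
  "16cef594-0067-4499-9298-5d83edf70190": {
    "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
    "device": "/dev/mapper/ceph_vg1-ceph_lv1",
    "osd_id": 1,
    "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
    "type": "bluestore"
  },
  "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
    "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0,
    "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
    "type": "bluestore"
  }
}
""")

osd_to_device = {osd["osd_id"]: osd["device"] for osd in inventory.values()}
print(osd_to_device)
# {1: '/dev/mapper/ceph_vg1-ceph_lv1', 0: '/dev/mapper/ceph_vg0-ceph_lv0'}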
Oct  3 11:03:43 compute-0 systemd[1]: libpod-1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea.scope: Deactivated successfully.
Oct  3 11:03:43 compute-0 systemd[1]: libpod-1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea.scope: Consumed 1.176s CPU time.
Oct  3 11:03:43 compute-0 podman[498264]: 2025-10-03 11:03:43.235368997 +0000 UTC m=+1.429370962 container died 1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_margulis, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e72b9753ae758a137da2f1052fa50b84718d8e6b4d92330367103850cbcd100-merged.mount: Deactivated successfully.
Oct  3 11:03:43 compute-0 podman[498264]: 2025-10-03 11:03:43.339978314 +0000 UTC m=+1.533980239 container remove 1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:03:43 compute-0 systemd[1]: libpod-conmon-1fb9ce7f945472c86e0535e1b36370b1c06c307500bc44aa0171c10f6d0eecea.scope: Deactivated successfully.
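Podman's m=+N.NNN values are offsets from the start of that podman invocation, so the create (m=+0.089) and died (m=+1.429) events bracket the container's wall time, and systemd's "Consumed 1.176s CPU time" gives its CPU share over that window (reading m=+ as a monotonic offset is the usual podman convention, inferred here):

created = 0.089621218   # "container create" at m=+0.089621218
died    = 1.429370962   # "container died"   at m=+1.429370962
cpu     = 1.176         # systemd: "Consumed 1.176s CPU time"

wall = died - created
print(round(wall, 3))        # 1.340 s of wall time
print(round(cpu / wall, 2))  # ~0.88 CPUs busy on average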
Oct  3 11:03:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:03:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:03:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:03:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:03:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 14b9333b-c8f3-41fd-af95-57f8b385eb50 does not exist
Oct  3 11:03:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f2f77119-96cd-4c4e-b9a8-b4d4e96d771e does not exist
Oct  3 11:03:43 compute-0 nova_compute[351685]: 2025-10-03 11:03:43.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:44 compute-0 nova_compute[351685]: 2025-10-03 11:03:44.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:03:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:03:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2892: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:03:46
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
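The balancer's "max misplaced 0.050000" is the ceiling on the fraction of PGs a plan may leave misplaced at once (target_max_misplaced_ratio); against this cluster's 321 PGs that allows roughly 16 PGs in flight, though here do_upmap found nothing to move (prepared 0/10 changes). A one-line check:

pgs, max_misplaced = 321, 0.05
print(int(pgs * max_misplaced))  # 16 -- upper bound on concurrently misplaced PGs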
Oct  3 11:03:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:03:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:03:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:48 compute-0 nova_compute[351685]: 2025-10-03 11:03:48.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:49 compute-0 nova_compute[351685]: 2025-10-03 11:03:49.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:52 compute-0 podman[498377]: 2025-10-03 11:03:52.870463028 +0000 UTC m=+0.107677108 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:03:52 compute-0 podman[498379]: 2025-10-03 11:03:52.880967655 +0000 UTC m=+0.118671730 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Oct  3 11:03:52 compute-0 podman[498378]: 2025-10-03 11:03:52.886614706 +0000 UTC m=+0.123298038 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, release-0.7.12=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.tags=base rhel9)
Oct  3 11:03:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:53 compute-0 nova_compute[351685]: 2025-10-03 11:03:53.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:54 compute-0 nova_compute[351685]: 2025-10-03 11:03:54.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:03:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398107569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:03:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:03:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/398107569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
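The two audit entries show client.openstack polling the mons with plain JSON commands ({"prefix":"df"} and {"prefix":"osd pool get-quota"}), which is how the Cinder/Nova RBD drivers read pool capacity. The same calls can be reproduced with the python3-rados binding; a sketch assuming a readable /etc/ceph/ceph.conf and a keyring for client.openstack:

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    for cmd in (
        {"prefix": "df", "format": "json"},
        {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"},
    ):
        # mon_command() sends the JSON command plus an (empty) input buffer
        # and returns (retcode, output bytes, status string).
        ret, out, status = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, out[:80])
finally:
    cluster.shutdown()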
Oct  3 11:03:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
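Each pg_autoscaler pair of lines is the same per-pool computation: pg target = usage ratio x bias x PG budget, quantized to a power of two no lower than the pool's pg_num_min. Dividing any logged target by its ratio and bias yields a budget of 300, consistent with 3 OSDs x the default 100 PGs per OSD (an inference from the figures; the budget is not logged directly). A sketch reproducing two of the lines above, with pg_num_min floors read off the quantized values (1 for .mgr, 16 for the cephfs metadata pool):

def pg_target(usage_ratio, bias, budget=300, pg_num_min=32):
    # Raw target, then rounded up to the next power of two at or above
    # the pool's floor (the real autoscaler also damps small changes).
    raw = usage_ratio * bias * budget
    q = pg_num_min
    while q < raw:
        q *= 2
    return raw, q

print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))
# (0.0021557249951162337, 1)   -- the '.mgr' line
print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))
# (0.0006104707950771635, 16)  -- the 'cephfs.cephfs.meta' line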
Oct  3 11:03:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:03:58 compute-0 nova_compute[351685]: 2025-10-03 11:03:58.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:03:59 compute-0 nova_compute[351685]: 2025-10-03 11:03:59.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:03:59 compute-0 podman[157165]: time="2025-10-03T11:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:03:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:03:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9100 "" "Go-http-client/1.1"
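The GET lines are the podman system service answering libpod REST calls over its unix socket (the podman_exporter container above is configured with CONTAINER_HOST=unix:///run/podman/podman.sock). The same query can be issued from the Python stdlib; a sketch assuming access to that socket:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # http.client over an AF_UNIX socket, no third-party dependencies.
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])  # one name list per container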
Oct  3 11:04:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:01 compute-0 openstack_network_exporter[367524]: ERROR   11:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:04:01 compute-0 openstack_network_exporter[367524]: ERROR   11:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:04:01 compute-0 openstack_network_exporter[367524]: ERROR   11:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:04:01 compute-0 openstack_network_exporter[367524]: ERROR   11:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:04:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:04:01 compute-0 openstack_network_exporter[367524]: ERROR   11:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:04:01 compute-0 openstack_network_exporter[367524]: 
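openstack_network_exporter locates OVS/OVN daemons through their appctl control sockets, so on a compute node that runs no ovn-northd and no userspace (netdev) datapath these errors are expected noise rather than a fault. A sketch of the same discovery step, assuming the conventional <daemon>.<pid>.ctl naming and the run directories that the exporter's volumes above map in:

import glob

checks = {
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "ovn-northd":   "/var/lib/openvswitch/ovn/ovn-northd.*.ctl",
}
for daemon, pattern in checks.items():
    hits = glob.glob(pattern)
    print(daemon, "->", hits or "no control socket files found")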
Oct  3 11:04:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:03 compute-0 nova_compute[351685]: 2025-10-03 11:04:03.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:04 compute-0 nova_compute[351685]: 2025-10-03 11:04:04.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:04 compute-0 nova_compute[351685]: 2025-10-03 11:04:04.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.877872) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489447877908, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1189, "num_deletes": 250, "total_data_size": 1824394, "memory_usage": 1853200, "flush_reason": "Manual Compaction"}
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489447889711, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 1073812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57844, "largest_seqno": 59032, "table_properties": {"data_size": 1069432, "index_size": 1903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11465, "raw_average_key_size": 20, "raw_value_size": 1059941, "raw_average_value_size": 1913, "num_data_blocks": 87, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759489326, "oldest_key_time": 1759489326, "file_creation_time": 1759489447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 11905 microseconds, and 7324 cpu microseconds.
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.889776) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 1073812 bytes OK
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.889797) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.892304) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.892321) EVENT_LOG_v1 {"time_micros": 1759489447892315, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.892340) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 1818965, prev total WAL file size 1818965, number of live WAL files 2.
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.894192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353034' seq:72057594037927935, type:22 .. '6D6772737461740032373535' seq:0, type:0; will stop at (end)
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(1048KB)], [140(9135KB)]
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489447894301, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 10429043, "oldest_snapshot_seqno": -1}
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 6983 keys, 7852406 bytes, temperature: kUnknown
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489447970375, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 7852406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7811475, "index_size": 22386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183419, "raw_average_key_size": 26, "raw_value_size": 7690440, "raw_average_value_size": 1101, "num_data_blocks": 883, "num_entries": 6983, "num_filter_entries": 6983, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759489447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.970606) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 7852406 bytes
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.973064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.0 rd, 103.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(17.0) write-amplify(7.3) OK, records in: 7440, records dropped: 457 output_compression: NoCompression
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.973087) EVENT_LOG_v1 {"time_micros": 1759489447973075, "job": 86, "event": "compaction_finished", "compaction_time_micros": 76135, "compaction_time_cpu_micros": 45871, "output_level": 6, "num_output_files": 1, "total_output_size": 7852406, "num_input_records": 7440, "num_output_records": 6983, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489447973542, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489447976018, "job": 86, "event": "table_file_deletion", "file_number": 140}
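The amplification figures in job 86's summary follow directly from the byte counts in the surrounding EVENT_LOG entries: the compaction read one 1,073,812-byte L0 table plus a 9,355,231-byte L6 table (input_data_size 10,429,043) and wrote a single 7,852,406-byte L6 table in 76,135 microseconds. Checking the logged "MB/sec: 137.0 rd, 103.1 wr", "read-write-amplify(17.0)" and "write-amplify(7.3)":

l0_in  = 1_073_812   # table #142, the freshly flushed L0 file
l6_in  = 9_355_231   # table #140  (input_data_size - l0_in)
out    = 7_852_406   # table #143, the compaction output
micros = 76_135      # compaction_time_micros

print(round((l0_in + l6_in) / micros, 1))       # 137.0 MB/s read
print(round(out / micros, 1))                   # 103.1 MB/s written
print(round(out / l0_in, 1))                    # 7.3   write-amplify
print(round((l0_in + l6_in + out) / l0_in, 1))  # 17.0  read-write-amplify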
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.893862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.976316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.976326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.976329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.976332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:04:07 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:04:07.976335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:04:08 compute-0 nova_compute[351685]: 2025-10-03 11:04:08.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:09 compute-0 nova_compute[351685]: 2025-10-03 11:04:09.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:11 compute-0 podman[498435]: 2025-10-03 11:04:11.874760361 +0000 UTC m=+0.121260034 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:04:11 compute-0 podman[498438]: 2025-10-03 11:04:11.884430921 +0000 UTC m=+0.103723031 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 11:04:11 compute-0 podman[498437]: 2025-10-03 11:04:11.894818605 +0000 UTC m=+0.146068220 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 11:04:11 compute-0 podman[498436]: 2025-10-03 11:04:11.893703099 +0000 UTC m=+0.137501335 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal)
Oct  3 11:04:11 compute-0 podman[498439]: 2025-10-03 11:04:11.900179107 +0000 UTC m=+0.140022006 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 11:04:11 compute-0 podman[498449]: 2025-10-03 11:04:11.903675188 +0000 UTC m=+0.125774648 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 11:04:11 compute-0 podman[498440]: 2025-10-03 11:04:11.925613853 +0000 UTC m=+0.151700550 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  3 11:04:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:13 compute-0 nova_compute[351685]: 2025-10-03 11:04:13.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:14 compute-0 nova_compute[351685]: 2025-10-03 11:04:14.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:04:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:04:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:18 compute-0 nova_compute[351685]: 2025-10-03 11:04:18.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:19 compute-0 nova_compute[351685]: 2025-10-03 11:04:19.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:22 compute-0 nova_compute[351685]: 2025-10-03 11:04:22.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:22 compute-0 nova_compute[351685]: 2025-10-03 11:04:22.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:04:22 compute-0 nova_compute[351685]: 2025-10-03 11:04:22.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:04:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:23 compute-0 nova_compute[351685]: 2025-10-03 11:04:23.361 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:04:23 compute-0 nova_compute[351685]: 2025-10-03 11:04:23.361 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:04:23 compute-0 nova_compute[351685]: 2025-10-03 11:04:23.362 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:04:23 compute-0 nova_compute[351685]: 2025-10-03 11:04:23.363 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:04:23 compute-0 podman[498567]: 2025-10-03 11:04:23.863741829 +0000 UTC m=+0.109890018 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:04:23 compute-0 podman[498569]: 2025-10-03 11:04:23.886026124 +0000 UTC m=+0.126897323 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 11:04:23 compute-0 podman[498568]: 2025-10-03 11:04:23.905272952 +0000 UTC m=+0.141416330 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, distribution-scope=public, io.openshift.expose-services=, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm)
Oct  3 11:04:23 compute-0 nova_compute[351685]: 2025-10-03 11:04:23.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:24 compute-0 nova_compute[351685]: 2025-10-03 11:04:24.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:24 compute-0 nova_compute[351685]: 2025-10-03 11:04:24.616 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:04:24 compute-0 nova_compute[351685]: 2025-10-03 11:04:24.632 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:04:24 compute-0 nova_compute[351685]: 2025-10-03 11:04:24.632 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:04:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:28 compute-0 nova_compute[351685]: 2025-10-03 11:04:28.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:29 compute-0 nova_compute[351685]: 2025-10-03 11:04:29.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:29 compute-0 podman[157165]: time="2025-10-03T11:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:04:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:04:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9097 "" "Go-http-client/1.1"
Oct  3 11:04:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:31 compute-0 openstack_network_exporter[367524]: ERROR   11:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:04:31 compute-0 openstack_network_exporter[367524]: ERROR   11:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:04:31 compute-0 openstack_network_exporter[367524]: ERROR   11:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:04:31 compute-0 openstack_network_exporter[367524]: ERROR   11:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:04:31 compute-0 openstack_network_exporter[367524]: ERROR   11:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:04:31 compute-0 nova_compute[351685]: 2025-10-03 11:04:31.628 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:32 compute-0 nova_compute[351685]: 2025-10-03 11:04:32.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:32 compute-0 nova_compute[351685]: 2025-10-03 11:04:32.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:04:32 compute-0 nova_compute[351685]: 2025-10-03 11:04:32.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:04:32 compute-0 nova_compute[351685]: 2025-10-03 11:04:32.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:04:32 compute-0 nova_compute[351685]: 2025-10-03 11:04:32.757 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:04:32 compute-0 nova_compute[351685]: 2025-10-03 11:04:32.757 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:04:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:04:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1312619554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.284 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.376 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.376 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.377 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.778 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.779 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3808MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.779 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.779 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.911 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.911 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.912 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:04:33 compute-0 nova_compute[351685]: 2025-10-03 11:04:33.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:34 compute-0 nova_compute[351685]: 2025-10-03 11:04:34.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:34 compute-0 nova_compute[351685]: 2025-10-03 11:04:34.203 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:04:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:04:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870657179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:04:34 compute-0 nova_compute[351685]: 2025-10-03 11:04:34.694 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:04:34 compute-0 nova_compute[351685]: 2025-10-03 11:04:34.703 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:04:34 compute-0 nova_compute[351685]: 2025-10-03 11:04:34.725 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:04:34 compute-0 nova_compute[351685]: 2025-10-03 11:04:34.728 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:04:34 compute-0 nova_compute[351685]: 2025-10-03 11:04:34.729 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:04:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Oct  3 11:04:36 compute-0 nova_compute[351685]: 2025-10-03 11:04:36.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:36 compute-0 nova_compute[351685]: 2025-10-03 11:04:36.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:36 compute-0 nova_compute[351685]: 2025-10-03 11:04:36.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:04:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Oct  3 11:04:37 compute-0 nova_compute[351685]: 2025-10-03 11:04:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:37 compute-0 nova_compute[351685]: 2025-10-03 11:04:37.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:38 compute-0 nova_compute[351685]: 2025-10-03 11:04:38.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Oct  3 11:04:39 compute-0 nova_compute[351685]: 2025-10-03 11:04:39.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 11:04:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:04:41.672 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:04:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:04:41.672 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:04:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:04:41.673 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:04:41 compute-0 nova_compute[351685]: 2025-10-03 11:04:41.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:42 compute-0 podman[498671]: 2025-10-03 11:04:42.883992063 +0000 UTC m=+0.108791533 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible)
Oct  3 11:04:42 compute-0 podman[498672]: 2025-10-03 11:04:42.888749697 +0000 UTC m=+0.128295660 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:04:42 compute-0 podman[498669]: 2025-10-03 11:04:42.89916073 +0000 UTC m=+0.136567684 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:04:42 compute-0 podman[498670]: 2025-10-03 11:04:42.900911966 +0000 UTC m=+0.136961007 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 11:04:42 compute-0 podman[498668]: 2025-10-03 11:04:42.909687738 +0000 UTC m=+0.154646015 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:04:42 compute-0 podman[498674]: 2025-10-03 11:04:42.919866704 +0000 UTC m=+0.148478335 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:04:42 compute-0 podman[498673]: 2025-10-03 11:04:42.953219716 +0000 UTC m=+0.174335377 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  3 11:04:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 11:04:43 compute-0 nova_compute[351685]: 2025-10-03 11:04:43.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:04:43 compute-0 nova_compute[351685]: 2025-10-03 11:04:43.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:44 compute-0 nova_compute[351685]: 2025-10-03 11:04:44.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 11:04:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:04:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:04:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:04:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:04:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:04:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:04:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:04:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e28dd710-f8c1-45a4-b50e-022f504c33d0 does not exist
Oct  3 11:04:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 25951df4-0a71-4d13-8b85-3118a8c64ef8 does not exist
Oct  3 11:04:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev dbf44459-33da-4b0e-99ba-05ead701f939 does not exist
Oct  3 11:04:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:04:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:04:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:04:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:04:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:04:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:04:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:04:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:04:46
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', 'cephfs.cephfs.meta']
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:04:46 compute-0 podman[499191]: 2025-10-03 11:04:46.843596613 +0000 UTC m=+0.089949629 container create f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:04:46 compute-0 podman[499191]: 2025-10-03 11:04:46.810850272 +0000 UTC m=+0.057203298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:04:46 compute-0 systemd[1]: Started libpod-conmon-f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3.scope.
Oct  3 11:04:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:04:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Oct  3 11:04:46 compute-0 podman[499191]: 2025-10-03 11:04:46.98369929 +0000 UTC m=+0.230052286 container init f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 11:04:46 compute-0 podman[499191]: 2025-10-03 11:04:46.992600856 +0000 UTC m=+0.238953852 container start f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:04:46 compute-0 podman[499191]: 2025-10-03 11:04:46.997396799 +0000 UTC m=+0.243749805 container attach f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 11:04:47 compute-0 brave_cray[499206]: 167 167
Oct  3 11:04:47 compute-0 systemd[1]: libpod-f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3.scope: Deactivated successfully.
Oct  3 11:04:47 compute-0 podman[499191]: 2025-10-03 11:04:47.002836644 +0000 UTC m=+0.249189640 container died f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3933b6ee39c39cee633094683c930ad8ae1339458dd482ab6cde7ea853399d16-merged.mount: Deactivated successfully.
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:04:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
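[annotation] The rbd_support module's two handlers reload their trash-purge and mirror-snapshot schedules for each RBD pool (vms, volumes, backups, images), which is why each pool appears twice above. A sketch listing the same schedules from the CLI (assumes the rbd client is installed; pool names are taken from the log):

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        for kind in ("trash purge", "mirror snapshot"):
            cmd = ["rbd", *kind.split(), "schedule", "ls",
                   "--pool", pool, "--recursive"]
            try:
                out = subprocess.check_output(cmd, text=True)
                print(pool, kind, "->", out.strip() or "(none)")
            except subprocess.CalledProcessError as e:
                print(pool, kind, "failed:", e.returncode)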
Oct  3 11:04:47 compute-0 podman[499191]: 2025-10-03 11:04:47.056442185 +0000 UTC m=+0.302795161 container remove f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 11:04:47 compute-0 systemd[1]: libpod-conmon-f53abf954f439013ae2d308216f81f604e6380fef0cce2fcb23fd9b99ae3f2e3.scope: Deactivated successfully.
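[annotation] The create -> init -> start -> attach -> died -> remove sequence above is a short-lived probe container: cephadm launches the ceph image with podman run --rm to execute one command and capture its stdout. The "167 167" printed by brave_cray is consistent with cephadm probing the ceph uid/gid inside the image (167 is the ceph user in the official images). A minimal sketch of the same pattern; the stat target is an assumption inferred from the output, not read from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Throwaway container: run one command, capture output, auto-remove.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        text=True)
    uid, gid = out.split()
    print(f"ceph runs as uid={uid} gid={gid}")  # expect "167 167" per the log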
Oct  3 11:04:47 compute-0 podman[499230]: 2025-10-03 11:04:47.323158026 +0000 UTC m=+0.081198787 container create cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 11:04:47 compute-0 podman[499230]: 2025-10-03 11:04:47.293980489 +0000 UTC m=+0.052021260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:04:47 compute-0 systemd[1]: Started libpod-conmon-cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa.scope.
Oct  3 11:04:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c356b671c51ac7ff084c5cf9374fc4aa43d55f3a4d65616352ed8ffd6a4ce805/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c356b671c51ac7ff084c5cf9374fc4aa43d55f3a4d65616352ed8ffd6a4ce805/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c356b671c51ac7ff084c5cf9374fc4aa43d55f3a4d65616352ed8ffd6a4ce805/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c356b671c51ac7ff084c5cf9374fc4aa43d55f3a4d65616352ed8ffd6a4ce805/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c356b671c51ac7ff084c5cf9374fc4aa43d55f3a4d65616352ed8ffd6a4ce805/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
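[annotation] The xfs warnings above are the kernel noting, on each bind remount into the container, that the filesystem uses 32-bit inode timestamps (typically an xfs created without the bigtime feature), which overflow at 0x7fffffff seconds after the Unix epoch. A one-liner confirming the date behind that constant:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch: the 32-bit time_t limit (y2038).
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00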
Oct  3 11:04:47 compute-0 podman[499230]: 2025-10-03 11:04:47.493637009 +0000 UTC m=+0.251677810 container init cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mayer, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:04:47 compute-0 podman[499230]: 2025-10-03 11:04:47.514214298 +0000 UTC m=+0.272255059 container start cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mayer, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:04:47 compute-0 podman[499230]: 2025-10-03 11:04:47.520591163 +0000 UTC m=+0.278631894 container attach cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 11:04:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:48 compute-0 jolly_mayer[499246]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:04:48 compute-0 jolly_mayer[499246]: --> relative data size: 1.0
Oct  3 11:04:48 compute-0 jolly_mayer[499246]: --> All data devices are unavailable
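[annotation] jolly_mayer's output is a ceph-volume lvm batch dry run: it was handed 3 LVM data devices, and "All data devices are unavailable" means each is already consumed by an existing OSD, so there is nothing new to prepare. A sketch producing the same report from the host; the cephadm shell wrapper and the three LV paths (which appear later in this log) are assumptions:

    import json
    import subprocess

    cmd = ["cephadm", "shell", "--", "ceph-volume", "lvm", "batch",
           "--report", "--format", "json",
           "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    report = json.loads(subprocess.check_output(cmd, text=True))
    print(report)   # expect an empty plan: all devices already carry OSDs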
Oct  3 11:04:48 compute-0 systemd[1]: libpod-cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa.scope: Deactivated successfully.
Oct  3 11:04:48 compute-0 systemd[1]: libpod-cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa.scope: Consumed 1.145s CPU time.
Oct  3 11:04:48 compute-0 podman[499275]: 2025-10-03 11:04:48.819479086 +0000 UTC m=+0.050802672 container died cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mayer, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c356b671c51ac7ff084c5cf9374fc4aa43d55f3a4d65616352ed8ffd6a4ce805-merged.mount: Deactivated successfully.
Oct  3 11:04:48 compute-0 nova_compute[351685]: 2025-10-03 11:04:48.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2924: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  3 11:04:49 compute-0 podman[499275]: 2025-10-03 11:04:49.045115059 +0000 UTC m=+0.276438585 container remove cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mayer, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:04:49 compute-0 systemd[1]: libpod-conmon-cc8cf905208c2c322b7179b0d3b3114dea4818598c0ee32d877c14b5335283fa.scope: Deactivated successfully.
Oct  3 11:04:49 compute-0 nova_compute[351685]: 2025-10-03 11:04:49.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:50 compute-0 podman[499428]: 2025-10-03 11:04:50.040765387 +0000 UTC m=+0.039109666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:04:50 compute-0 podman[499428]: 2025-10-03 11:04:50.152806954 +0000 UTC m=+0.151151223 container create 6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 11:04:50 compute-0 systemd[1]: Started libpod-conmon-6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e.scope.
Oct  3 11:04:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:04:50 compute-0 podman[499428]: 2025-10-03 11:04:50.388569292 +0000 UTC m=+0.386913591 container init 6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 11:04:50 compute-0 podman[499428]: 2025-10-03 11:04:50.406668142 +0000 UTC m=+0.405012441 container start 6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 11:04:50 compute-0 silly_buck[499443]: 167 167
Oct  3 11:04:50 compute-0 systemd[1]: libpod-6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e.scope: Deactivated successfully.
Oct  3 11:04:50 compute-0 podman[499428]: 2025-10-03 11:04:50.435685054 +0000 UTC m=+0.434029363 container attach 6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_buck, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 11:04:50 compute-0 podman[499428]: 2025-10-03 11:04:50.436139089 +0000 UTC m=+0.434483358 container died 6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-999ea4caae9c63e328bb4a0130924db7b872c94e37049c2629b16604b6f12585-merged.mount: Deactivated successfully.
Oct  3 11:04:50 compute-0 podman[499428]: 2025-10-03 11:04:50.615526097 +0000 UTC m=+0.613870396 container remove 6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_buck, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:04:50 compute-0 systemd[1]: libpod-conmon-6f5b7e8211ddcff8b6154c9fc654eac98471d7481e5915949189ae03be59d13e.scope: Deactivated successfully.
Oct  3 11:04:50 compute-0 podman[499467]: 2025-10-03 11:04:50.900830645 +0000 UTC m=+0.090285649 container create c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:04:50 compute-0 podman[499467]: 2025-10-03 11:04:50.863995282 +0000 UTC m=+0.053450366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:04:50 compute-0 systemd[1]: Started libpod-conmon-c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068.scope.
Oct  3 11:04:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct  3 11:04:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed457e86791307ddbb89f41f46561faf18d910f5f7c6f979b29a99bd3164293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed457e86791307ddbb89f41f46561faf18d910f5f7c6f979b29a99bd3164293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed457e86791307ddbb89f41f46561faf18d910f5f7c6f979b29a99bd3164293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed457e86791307ddbb89f41f46561faf18d910f5f7c6f979b29a99bd3164293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:51 compute-0 podman[499467]: 2025-10-03 11:04:51.044705902 +0000 UTC m=+0.234160966 container init c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:04:51 compute-0 podman[499467]: 2025-10-03 11:04:51.07296324 +0000 UTC m=+0.262418264 container start c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:04:51 compute-0 podman[499467]: 2025-10-03 11:04:51.079880992 +0000 UTC m=+0.269335996 container attach c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:04:51 compute-0 gifted_meitner[499482]: {
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:    "0": [
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:        {
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "devices": [
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "/dev/loop3"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            ],
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_name": "ceph_lv0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_size": "21470642176",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "name": "ceph_lv0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "tags": {
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cluster_name": "ceph",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.crush_device_class": "",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.encrypted": "0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osd_id": "0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.type": "block",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.vdo": "0"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            },
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "type": "block",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "vg_name": "ceph_vg0"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:        }
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:    ],
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:    "1": [
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:        {
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "devices": [
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "/dev/loop4"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            ],
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_name": "ceph_lv1",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_size": "21470642176",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "name": "ceph_lv1",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "tags": {
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cluster_name": "ceph",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.crush_device_class": "",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.encrypted": "0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osd_id": "1",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.type": "block",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.vdo": "0"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            },
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "type": "block",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "vg_name": "ceph_vg1"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:        }
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:    ],
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:    "2": [
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:        {
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "devices": [
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "/dev/loop5"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            ],
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_name": "ceph_lv2",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_size": "21470642176",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "name": "ceph_lv2",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "tags": {
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.cluster_name": "ceph",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.crush_device_class": "",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.encrypted": "0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osd_id": "2",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.type": "block",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:                "ceph.vdo": "0"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            },
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "type": "block",
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:            "vg_name": "ceph_vg2"
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:        }
Oct  3 11:04:51 compute-0 gifted_meitner[499482]:    ]
Oct  3 11:04:51 compute-0 gifted_meitner[499482]: }
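[annotation] gifted_meitner's JSON is ceph-volume lvm list --format json output: a map of OSD id ("0", "1", "2") to the logical volume backing it, carrying the ceph.* LV tags that let ceph-volume reassemble each OSD at activation time. A small parser over that structure (the blob is read from a file here purely for illustration):

    import json

    with open("lvm_list.json") as f:     # the id-keyed blob above
        lvm_list = json.load(f)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"device={lv['devices'][0]} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")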
Oct  3 11:04:51 compute-0 systemd[1]: libpod-c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068.scope: Deactivated successfully.
Oct  3 11:04:51 compute-0 podman[499467]: 2025-10-03 11:04:51.914186812 +0000 UTC m=+1.103641826 container died c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ed457e86791307ddbb89f41f46561faf18d910f5f7c6f979b29a99bd3164293-merged.mount: Deactivated successfully.
Oct  3 11:04:51 compute-0 podman[499467]: 2025-10-03 11:04:51.998048934 +0000 UTC m=+1.187503938 container remove c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:04:52 compute-0 systemd[1]: libpod-conmon-c37005bd3342c109d890a68c0a28d79d93e2c990d16f7886d2be078240888068.scope: Deactivated successfully.
Oct  3 11:04:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
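[annotation] The recurring _set_new_cache_sizes lines are the mon's periodic cache autotuner splitting a ~1 GiB budget between its OSDMap caches (inc/full) and the RocksDB block cache (kv). Quick arithmetic on the logged values:

    cache_size = 1020054731            # total budget from the log line
    inc_alloc = full_alloc = 348127232
    kv_alloc = 318767104

    for name, val in [("inc", inc_alloc), ("full", full_alloc), ("kv", kv_alloc)]:
        print(f"{name}_alloc = {val/2**20:7.1f} MiB ({val/cache_size:5.1%})")
    # -> inc/full = 332.0 MiB (34.1%) each, kv = 304.0 MiB (31.3%)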
Oct  3 11:04:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2926: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:53 compute-0 podman[499641]: 2025-10-03 11:04:53.06370224 +0000 UTC m=+0.090701842 container create 9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:04:53 compute-0 podman[499641]: 2025-10-03 11:04:53.031970921 +0000 UTC m=+0.058970563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:04:53 compute-0 systemd[1]: Started libpod-conmon-9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3.scope.
Oct  3 11:04:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:04:53 compute-0 podman[499641]: 2025-10-03 11:04:53.216967569 +0000 UTC m=+0.243967211 container init 9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:04:53 compute-0 podman[499641]: 2025-10-03 11:04:53.232959573 +0000 UTC m=+0.259959175 container start 9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:04:53 compute-0 competent_maxwell[499656]: 167 167
Oct  3 11:04:53 compute-0 systemd[1]: libpod-9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3.scope: Deactivated successfully.
Oct  3 11:04:53 compute-0 podman[499641]: 2025-10-03 11:04:53.248060637 +0000 UTC m=+0.275060239 container attach 9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 11:04:53 compute-0 podman[499641]: 2025-10-03 11:04:53.248581124 +0000 UTC m=+0.275580726 container died 9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 11:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c912aeb936801808fa892479cb8aad057cdb5e825b4f11a6d30476cafc03098-merged.mount: Deactivated successfully.
Oct  3 11:04:53 compute-0 podman[499641]: 2025-10-03 11:04:53.329995137 +0000 UTC m=+0.356994729 container remove 9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_maxwell, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:04:53 compute-0 systemd[1]: libpod-conmon-9c774fdea2c5850daad5867569f986c625bb37c5c9f34e90d7a3219d0a1ebbd3.scope: Deactivated successfully.
Oct  3 11:04:53 compute-0 podman[499679]: 2025-10-03 11:04:53.563962237 +0000 UTC m=+0.068147868 container create dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_aryabhata, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:04:53 compute-0 systemd[1]: Started libpod-conmon-dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4.scope.
Oct  3 11:04:53 compute-0 podman[499679]: 2025-10-03 11:04:53.540857756 +0000 UTC m=+0.045043407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:04:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7f9709ea55a3bee0ee19758d451c352a1fc425f5e2ae9e51febcf8ce41484d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7f9709ea55a3bee0ee19758d451c352a1fc425f5e2ae9e51febcf8ce41484d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7f9709ea55a3bee0ee19758d451c352a1fc425f5e2ae9e51febcf8ce41484d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7f9709ea55a3bee0ee19758d451c352a1fc425f5e2ae9e51febcf8ce41484d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:04:53 compute-0 podman[499679]: 2025-10-03 11:04:53.761368474 +0000 UTC m=+0.265554155 container init dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_aryabhata, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:04:53 compute-0 podman[499679]: 2025-10-03 11:04:53.771914712 +0000 UTC m=+0.276100333 container start dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_aryabhata, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 11:04:53 compute-0 podman[499679]: 2025-10-03 11:04:53.775847359 +0000 UTC m=+0.280033020 container attach dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:04:53 compute-0 nova_compute[351685]: 2025-10-03 11:04:53.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:04:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3349194492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:04:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:04:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3349194492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
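[annotation] The two audited commands from client.openstack are the capacity poll the OpenStack Ceph drivers perform: a cluster df plus a per-pool quota check. The same pair via python-rados (assumes a keyring for an "openstack" client analogous to the one in the log; client.admin would also work):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")   # assumption: matching keyring
    cluster.connect()
    try:
        for prefix, extra in [("df", {}),
                              ("osd pool get-quota", {"pool": "volumes"})]:
            cmd = json.dumps({"prefix": prefix, "format": "json", **extra})
            ret, out, errs = cluster.mon_command(cmd, b"")
            print(prefix, "->",
                  json.loads(out) if ret == 0 else f"error {ret}: {errs}")
    finally:
        cluster.shutdown()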
Oct  3 11:04:54 compute-0 nova_compute[351685]: 2025-10-03 11:04:54.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:04:54 compute-0 podman[499718]: 2025-10-03 11:04:54.849354246 +0000 UTC m=+0.100322971 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=)
Oct  3 11:04:54 compute-0 podman[499717]: 2025-10-03 11:04:54.860564396 +0000 UTC m=+0.108718640 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:04:54 compute-0 podman[499719]: 2025-10-03 11:04:54.9045877 +0000 UTC m=+0.145176832 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
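[annotation] The three health_status lines are podman's periodic healthchecks for the kepler, podman_exporter and ceilometer_agent_ipmi containers; each runs the test configured in config_data (an /openstack/healthcheck script mounted into the container) and records the healthy/failing streak. A sketch that triggers the same checks on demand (container names taken from the log):

    import subprocess

    for name in ("kepler", "podman_exporter", "ceilometer_agent_ipmi"):
        res = subprocess.run(["podman", "healthcheck", "run", name],
                             capture_output=True, text=True)
        # Exit status 0 means healthy; non-zero means the check failed.
        print(name, "healthy" if res.returncode == 0 else
              f"unhealthy ({res.returncode}): {res.stderr.strip()}")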
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]: {
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "osd_id": 1,
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "type": "bluestore"
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:    },
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "osd_id": 2,
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "type": "bluestore"
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:    },
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "osd_id": 0,
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:        "type": "bluestore"
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]:    }
Oct  3 11:04:54 compute-0 unruffled_aryabhata[499695]: }
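The JSON block above was printed by a short-lived ceph container (podman's auto-generated name unruffled_aryabhata), most likely a ceph-volume style inventory run launched by cephadm; it maps each OSD uuid to its bluestore logical volume. A minimal sketch of consuming such output, assuming it has been captured to a file:

    import json

    # Summarise the OSD inventory JSON above into an osd_id -> device table;
    # the keys match the logged output. 'osd_inventory.json' is a
    # hypothetical capture of that JSON.
    with open("osd_inventory.json") as f:
        inventory = json.load(f)

    for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
        print("osd.%d  %s  type=%s  ceph_fsid=%s"
              % (osd["osd_id"], osd["device"], osd["type"], osd["ceph_fsid"]))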
Oct  3 11:04:54 compute-0 systemd[1]: libpod-dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4.scope: Deactivated successfully.
Oct  3 11:04:54 compute-0 podman[499679]: 2025-10-03 11:04:54.944366926 +0000 UTC m=+1.448552587 container died dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_aryabhata, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:04:54 compute-0 systemd[1]: libpod-dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4.scope: Consumed 1.175s CPU time.
Oct  3 11:04:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2927: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c7f9709ea55a3bee0ee19758d451c352a1fc425f5e2ae9e51febcf8ce41484d-merged.mount: Deactivated successfully.
Oct  3 11:04:55 compute-0 podman[499679]: 2025-10-03 11:04:55.019738245 +0000 UTC m=+1.523923866 container remove dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 11:04:55 compute-0 systemd[1]: libpod-conmon-dc9cff0b3569c378303ed2db27d6114be7f8dbe858ed75b9c8a9f4be51792da4.scope: Deactivated successfully.
Oct  3 11:04:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:04:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:04:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4293a0c2-196b-4b5d-a81a-31d6234c8710 does not exist
Oct  3 11:04:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e2ec24f2-814a-46be-b900-c04157ba387a does not exist
Oct  3 11:04:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
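The pg_autoscaler pass above can be reproduced from the logged numbers: each pool's raw PG target is usage_ratio × bias × (OSD count × mon_target_pg_per_osd, default 100), i.e. a factor of 300 on this 3-OSD cluster. For 'vms' that is 0.000551649 × 1.0 × 300 ≈ 0.16549, and for 'cephfs.cephfs.meta' 5.0873e-07 × 4.0 × 300 ≈ 0.00061047, both matching the log before the result is quantized to a power of two. A sketch of that arithmetic (the real module also honours target_size_ratio, target_size_bytes and pg_num_min, none of which bite here):

    # Reproduce the pg_autoscaler arithmetic visible in the log.
    # Assumptions: 3 OSDs and the default mon_target_pg_per_osd = 100.
    MON_TARGET_PG_PER_OSD = 100
    NUM_OSDS = 3

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * NUM_OSDS * MON_TARGET_PG_PER_OSD

    print(raw_pg_target(0.000551649390343166, 1.0))   # 'vms' -> 0.1654948... as logged
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta' -> 0.00061047...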
Oct  3 11:04:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:04:58 compute-0 nova_compute[351685]: 2025-10-03 11:04:58.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:04:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:04:59 compute-0 nova_compute[351685]: 2025-10-03 11:04:59.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:04:59 compute-0 podman[157165]: time="2025-10-03T11:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:04:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:04:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9097 "" "Go-http-client/1.1"
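podman[157165] here is the podman API service answering on /run/podman/podman.sock; the Go-http-client user agent is prometheus-podman-exporter listing containers and pulling stats through the libpod REST API every 30 seconds. The same list call can be issued by hand over the unix socket, sketched below (root access to the socket assumed, and the Names/State field names are taken from the libpod list-containers schema):

    import json
    import socket

    # Issue the libpod list call seen above over the podman API socket.
    # HTTP/1.0 keeps the response framing trivial (server closes when done).
    request = ("GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
               "Host: d\r\n\r\n")
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/podman/podman.sock")
        s.sendall(request.encode())
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk

    body = raw.partition(b"\r\n\r\n")[2]
    for ctr in json.loads(body):
        print(ctr["Names"], ctr["State"])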
Oct  3 11:05:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:01 compute-0 openstack_network_exporter[367524]: ERROR   11:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:05:01 compute-0 openstack_network_exporter[367524]: ERROR   11:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:05:01 compute-0 openstack_network_exporter[367524]: ERROR   11:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:05:01 compute-0 openstack_network_exporter[367524]: ERROR   11:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:05:01 compute-0 openstack_network_exporter[367524]: ERROR   11:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
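These exporter errors recur every 30 seconds and are expected on this node: openstack_network_exporter probes appctl control sockets for ovsdb-server and ovn-northd and queries dpif-netdev PMD statistics, but a compute node runs neither ovn-northd nor a userspace (dpif-netdev) datapath, so the lookups fail. A quick way to see which control sockets actually exist (conventional default runtime paths assumed; they may differ per deployment):

    import glob

    # List the ovs/ovn appctl control sockets present on this node.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")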
Oct  3 11:05:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2931: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:03 compute-0 nova_compute[351685]: 2025-10-03 11:05:03.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:04 compute-0 nova_compute[351685]: 2025-10-03 11:05:04.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:08 compute-0 nova_compute[351685]: 2025-10-03 11:05:08.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:09 compute-0 nova_compute[351685]: 2025-10-03 11:05:09.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:13 compute-0 podman[499851]: 2025-10-03 11:05:13.877347944 +0000 UTC m=+0.114506248 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  3 11:05:13 compute-0 podman[499852]: 2025-10-03 11:05:13.883901183 +0000 UTC m=+0.111944525 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:05:13 compute-0 podman[499853]: 2025-10-03 11:05:13.889422759 +0000 UTC m=+0.129825765 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 11:05:13 compute-0 podman[499861]: 2025-10-03 11:05:13.896042771 +0000 UTC m=+0.128075540 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:05:13 compute-0 podman[499850]: 2025-10-03 11:05:13.896502826 +0000 UTC m=+0.148568695 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350)
Oct  3 11:05:13 compute-0 podman[499849]: 2025-10-03 11:05:13.902772565 +0000 UTC m=+0.144698971 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:05:13 compute-0 podman[499855]: 2025-10-03 11:05:13.935129109 +0000 UTC m=+0.160198596 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:05:13 compute-0 nova_compute[351685]: 2025-10-03 11:05:13.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:14 compute-0 nova_compute[351685]: 2025-10-03 11:05:14.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:05:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:05:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:18 compute-0 nova_compute[351685]: 2025-10-03 11:05:18.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:19 compute-0 nova_compute[351685]: 2025-10-03 11:05:19.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:23 compute-0 nova_compute[351685]: 2025-10-03 11:05:23.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:24 compute-0 nova_compute[351685]: 2025-10-03 11:05:24.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:24 compute-0 nova_compute[351685]: 2025-10-03 11:05:24.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:05:24 compute-0 nova_compute[351685]: 2025-10-03 11:05:24.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:05:24 compute-0 nova_compute[351685]: 2025-10-03 11:05:24.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:05:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:25 compute-0 nova_compute[351685]: 2025-10-03 11:05:25.427 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:05:25 compute-0 nova_compute[351685]: 2025-10-03 11:05:25.428 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:05:25 compute-0 nova_compute[351685]: 2025-10-03 11:05:25.429 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:05:25 compute-0 nova_compute[351685]: 2025-10-03 11:05:25.429 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:05:25 compute-0 podman[499987]: 2025-10-03 11:05:25.838149476 +0000 UTC m=+0.090324825 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:05:25 compute-0 podman[499989]: 2025-10-03 11:05:25.865714266 +0000 UTC m=+0.105573821 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:05:25 compute-0 podman[499988]: 2025-10-03 11:05:25.919133112 +0000 UTC m=+0.158527172 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct  3 11:05:26 compute-0 nova_compute[351685]: 2025-10-03 11:05:26.923 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:05:26 compute-0 nova_compute[351685]: 2025-10-03 11:05:26.951 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:05:26 compute-0 nova_compute[351685]: 2025-10-03 11:05:26.952 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
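This _heal_instance_info_cache pass refreshed the neutron cache for the single instance on the host: one OVN-bound OVS port on the 'private' network with fixed IP 192.168.0.158 and floating IP 192.168.122.250. Extracting the addresses from a network_info entry shaped like the logged JSON is straightforward; a sketch trimmed to the fields used here, with values taken from the log line above:

    network_info = [{
        "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.158",
                     "floating_ips": [{"address": "192.168.122.250"}]}],
        }]},
    }]

    # Walk VIF -> subnet -> IP, printing fixed and floating addresses.
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("  floating:", fip["address"])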
Oct  3 11:05:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:28 compute-0 nova_compute[351685]: 2025-10-03 11:05:28.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:29 compute-0 nova_compute[351685]: 2025-10-03 11:05:29.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:29 compute-0 podman[157165]: time="2025-10-03T11:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:05:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:05:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9088 "" "Go-http-client/1.1"
Oct  3 11:05:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:31 compute-0 openstack_network_exporter[367524]: ERROR   11:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:05:31 compute-0 openstack_network_exporter[367524]: ERROR   11:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:05:31 compute-0 openstack_network_exporter[367524]: ERROR   11:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:05:31 compute-0 openstack_network_exporter[367524]: ERROR   11:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:05:31 compute-0 openstack_network_exporter[367524]: ERROR   11:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:05:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:33 compute-0 nova_compute[351685]: 2025-10-03 11:05:33.947 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:05:33 compute-0 nova_compute[351685]: 2025-10-03 11:05:33.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:34 compute-0 nova_compute[351685]: 2025-10-03 11:05:34.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:05:34 compute-0 nova_compute[351685]: 2025-10-03 11:05:34.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:05:34 compute-0 nova_compute[351685]: 2025-10-03 11:05:34.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:05:34 compute-0 nova_compute[351685]: 2025-10-03 11:05:34.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:05:34 compute-0 nova_compute[351685]: 2025-10-03 11:05:34.769 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:05:34 compute-0 nova_compute[351685]: 2025-10-03 11:05:34.769 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:05:34 compute-0 nova_compute[351685]: 2025-10-03 11:05:34.769 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:05:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:05:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401335697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:05:35 compute-0 nova_compute[351685]: 2025-10-03 11:05:35.283 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
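With RBD-backed ephemeral storage, the resource tracker's disk accounting shells out to the ceph CLI, which is why the mon's audit channel logs a df dispatch from client.openstack at the same moment. The same probe as a standalone sketch (field names follow the 'ceph df --format=json' output schema):

    import json
    import subprocess

    # Run the probe nova runs above and report raw cluster capacity.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)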
Oct  3 11:05:35 compute-0 nova_compute[351685]: 2025-10-03 11:05:35.390 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:05:35 compute-0 nova_compute[351685]: 2025-10-03 11:05:35.390 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:05:35 compute-0 nova_compute[351685]: 2025-10-03 11:05:35.391 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:05:35 compute-0 nova_compute[351685]: 2025-10-03 11:05:35.934 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:05:35 compute-0 nova_compute[351685]: 2025-10-03 11:05:35.935 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3813MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:05:35 compute-0 nova_compute[351685]: 2025-10-03 11:05:35.935 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:05:35 compute-0 nova_compute[351685]: 2025-10-03 11:05:35.935 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.036 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.037 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.037 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.086 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:05:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:05:36 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/434418445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.624 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.636 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.655 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.658 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:05:36 compute-0 nova_compute[351685]: 2025-10-03 11:05:36.658 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
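The placement inventory logged just above determines schedulable capacity as (total - reserved) × allocation_ratio per resource class: VCPU (8 - 0) × 4.0 = 32, MEMORY_MB (7679 - 512) × 1.0 = 7167, DISK_GB (59 - 1) × 0.9 = 52.2. A worked check:

    # Schedulable capacity per resource class, from the inventory above:
    # (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable =", cap)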
Oct  3 11:05:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:37 compute-0 nova_compute[351685]: 2025-10-03 11:05:37.659 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:05:37 compute-0 nova_compute[351685]: 2025-10-03 11:05:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:05:37 compute-0 nova_compute[351685]: 2025-10-03 11:05:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:05:37 compute-0 nova_compute[351685]: 2025-10-03 11:05:37.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
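
_reclaim_queued_deletes is one of the oslo.service periodic tasks listed above, and it bails out immediately because reclaim_instance_interval is not positive. A hedged sketch of that guard pattern; the option default and spacing are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class ComputeManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                # Produces the "skipping..." debug line seen above.
                return
            # ... purge SOFT_DELETED instances older than the interval ...
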
Oct  3 11:05:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:38 compute-0 nova_compute[351685]: 2025-10-03 11:05:38.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:39 compute-0 nova_compute[351685]: 2025-10-03 11:05:39.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:39 compute-0 nova_compute[351685]: 2025-10-03 11:05:39.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.900 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.900 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
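The two manager messages mean the polling task has more pollsters than executor threads ([1] here), so pollsters simply queue. The stdlib behavior being described, in miniature:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster run
        return name

    # max_workers=1 mirrors the "[1] threads" log line: three tasks are
    # accepted at once but execute one after another (~0.3s total).
    with ThreadPoolExecutor(max_workers=1) as executor:
        for future in [executor.submit(poll, n) for n in 'abc']:
            print(future.result())
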
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.901 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.910 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
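
The discovery payload above is what every pollster below receives per instance; its flavor (m1.small: 1 vCPU, 512 MB RAM, 1 GB root, 1 GB ephemeral) explains the disk figures that follow. A small sketch pulling out those fields; the dict values are copied from the log line, the formatting helper is invented:

    instance = {
        'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a',
        'name': 'test_0',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,
                   'disk': 1, 'ephemeral': 1, 'swap': 0},
        'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001',
    }
    f = instance['flavor']
    print(f"{instance['name']}: {f['vcpus']} vCPU, {f['ram']} MB RAM, "
          f"{f['disk']} GB root, {f['ephemeral']} GB ephemeral")
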
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.911 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:05:40.911929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.917 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.918 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.918 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.919 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.919 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:05:40.918828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.920 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:05:40.920223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.955 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.956 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.957 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.957 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
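
The three capacity samples line up with the flavor: 1073741824 bytes is exactly 1 GiB for the root and ephemeral disks, and the 485376-byte device is presumably the config drive. A quick unit check; the device names are assumptions, not taken from the log:

    GiB = 1024 ** 3
    # Byte counts copied from the three capacity samples above; 'vda',
    # 'vdb', and 'config' are assumed device roles.
    for dev, capacity in [('vda', 1073741824), ('vdb', 1073741824),
                          ('config', 485376)]:
        print(f"{dev}: {capacity} B = {capacity / GiB:.6f} GiB")
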
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.958 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.959 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:40.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:05:40.959214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.009 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:05:41.009634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.012 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:05:41.012925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
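
disk.device.read.latency is a cumulative nanosecond counter and disk.device.read.requests the matching operation count, so dividing the pairs above gives a mean per-read latency per device (assuming both pollsters emit samples in the same device order):

    # Counter pairs copied from the latency and requests samples above
    # (nanoseconds, operations); the division yields mean ms per read.
    for latency_ns, requests in [(1351272306, 840),
                                 (240576853, 173),
                                 (113683071, 109)]:
        print(f"{latency_ns / requests / 1e6:.2f} ms/read")
    # ~1.61, ~1.39, ~1.04 ms across the three devices
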
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.016 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.016 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:05:41.016687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:05:41.020021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.023 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.023 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.024 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.025 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.026 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:05:41.023458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:05:41.027620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:05:41.050550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.052 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.054 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:05:41.052805) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:05:41.054881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.056 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.057 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.058 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:05:41.056941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:05:41.058178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:05:41.059539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:05:41.060874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:05:41.062914) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 87450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
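[editor's aside] The cpu sample above (volume: 87450000000) is Ceilometer's cumulative meter of guest CPU time in nanoseconds, i.e. roughly 87.45 s consumed so far by instance b43db93c-a4fe-46e9-8418-eedf4f5c135a. A minimal sketch, assuming an illustrative polling interval, vCPU count, and previous sample (none of which appear in the log), of how a utilization percentage is derived from two consecutive cumulative samples:

NS_PER_S = 1e9

def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
    # Delta of the cumulative counter, normalized by wall time and vCPUs.
    return 100.0 * ((curr_ns - prev_ns) / NS_PER_S) / (interval_s * vcpus)

print(87450000000 / NS_PER_S)                                  # 87.45 s total, as logged
print(cpu_util_percent(87_150_000_000, 87_450_000_000, 30.0))  # ~1.0 % over a hypothetical 30 s poll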
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.066 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:05:41.065140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:05:41.066729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:05:41.067991) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.069 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.070 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:05:41.069941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.072 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:05:41.072171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.073 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:05:41.073916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:05:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:05:41.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
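[editor's aside] The burst of manager messages above is one complete polling cycle: for each pollster the agent runs discovery, skips the pollster when discovery returns nothing ("Skip pollster ..., no new resources found this cycle"), otherwise checks whether coordination is needed, polls, records a heartbeat, and logs completion. A condensed sketch of that control flow; the function and parameter names are hypothetical, not Ceilometer's actual API:

def run_polling_cycle(pollsters, discover, heartbeat, publish):
    for pollster in pollsters:
        resources = discover(pollster)      # "Executing discovery process for pollsters ..."
        if not resources:
            continue                        # "Skip pollster ..., no new resources found this cycle"
        samples = pollster.get_samples(resources)
        heartbeat(pollster.name)            # "Pollster heartbeat update: ..." / "Updated heartbeat for ..."
        publish(samples)                    # "Finished polling pollster ... in the context of pollsters"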
Oct  3 11:05:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:05:41.672 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:05:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:05:41.672 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:05:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:05:41.673 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
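[editor's aside] The three lockutils lines above are a single acquire/release pair around ProcessMonitor._check_child_processes. A minimal sketch of how such a guard is typically declared with oslo.concurrency; the Acquiring/acquired/released DEBUG trio is emitted by the library itself, not by the decorated function:

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs under an in-process lock named "_check_child_processes";
    # body elided, this only illustrates the locking pattern logged above.
    pass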
Oct  3 11:05:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:43 compute-0 nova_compute[351685]: 2025-10-03 11:05:43.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:05:43 compute-0 nova_compute[351685]: 2025-10-03 11:05:43.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:44 compute-0 nova_compute[351685]: 2025-10-03 11:05:44.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:44 compute-0 nova_compute[351685]: 2025-10-03 11:05:44.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:05:44 compute-0 podman[500096]: 2025-10-03 11:05:44.837957331 +0000 UTC m=+0.112264996 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:05:44 compute-0 podman[500095]: 2025-10-03 11:05:44.85422143 +0000 UTC m=+0.144352141 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 11:05:44 compute-0 podman[500117]: 2025-10-03 11:05:44.85956356 +0000 UTC m=+0.113554667 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid)
Oct  3 11:05:44 compute-0 podman[500108]: 2025-10-03 11:05:44.865070656 +0000 UTC m=+0.121643675 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 11:05:44 compute-0 podman[500094]: 2025-10-03 11:05:44.880783007 +0000 UTC m=+0.173806150 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:05:44 compute-0 podman[500109]: 2025-10-03 11:05:44.889825836 +0000 UTC m=+0.134310329 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 11:05:44 compute-0 podman[500104]: 2025-10-03 11:05:44.900036602 +0000 UTC m=+0.161050133 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
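[editor's aside] Each podman health_status record above is the result of executing that container's configured healthcheck: the 'test' command in config_data, backed by the /var/lib/openstack/healthchecks/<name> mount. The same check can be triggered on demand; a sketch using subprocess, with the container name taken from the log:

import subprocess

def is_healthy(container):
    # "podman healthcheck run" executes the container's configured test
    # and exits 0 on success -- the source of health_status=healthy above.
    return subprocess.run(["podman", "healthcheck", "run", container]).returncode == 0

print(is_healthy("ovn_metadata_agent"))  # True while the check passes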
Oct  3 11:05:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:05:46
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'vms', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'backups']
Oct  3 11:05:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
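[editor's aside] The balancer pass above ran in upmap mode against the listed pools and prepared 0 of a possible 10 changes, i.e. the PG distribution was already balanced; the 0.05 max-misplaced figure matches the mgr's default target_max_misplaced_ratio. A sketch of inspecting the same state from the ceph CLI (assumes admin credentials on the node):

import subprocess

# "ceph balancer status" reports mode ("upmap"), whether the balancer is
# active, and the result of the last optimize pass, matching the log lines.
print(subprocess.run(["ceph", "balancer", "status"],
                     capture_output=True, text=True, check=True).stdout)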
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:05:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
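[editor's aside] The rbd_support messages show the mgr module reloading trash-purge and mirror-snapshot schedules for each RBD pool; every load starts from an empty start_after cursor. The equivalent schedules can be listed per pool from the rbd CLI; a sketch, with the pool names taken from the log:

import subprocess

for pool in ("vms", "volumes", "backups", "images"):
    # List any trash purge and mirror snapshot schedules defined on the pool.
    subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--pool", pool], check=False)
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool], check=False)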
Oct  3 11:05:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:48 compute-0 nova_compute[351685]: 2025-10-03 11:05:48.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:49 compute-0 nova_compute[351685]: 2025-10-03 11:05:49.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:53 compute-0 nova_compute[351685]: 2025-10-03 11:05:53.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:05:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2878403329' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:05:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:05:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2878403329' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
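[editor's aside] The two audit entries show client.openstack (from 192.168.122.10) dispatching df and osd pool get-quota mon commands against the volumes pool, a typical capacity poll by an OpenStack storage service. The same commands can be issued programmatically through python-rados; a sketch, where the conffile path is a deployment-specific assumption:

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    # Mirrors the audited {"prefix":"df","format":"json"} command.
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    df = json.loads(out)
    # Mirrors {"prefix":"osd pool get-quota","pool":"volumes","format":"json"}.
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}), b"")
    quota = json.loads(out)
finally:
    cluster.shutdown()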
Oct  3 11:05:54 compute-0 nova_compute[351685]: 2025-10-03 11:05:54.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:05:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
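[editor's note] The pg_autoscaler figures above are internally consistent: every logged "pg target" equals usage × bias × 300, which is consistent with the default mon_target_pg_per_osd of 100 across the 3 OSDs behind this 60 GiB cluster. A sketch reproducing the arithmetic; the multiplier, the per-pool pg_num_min values (1 for .mgr, 16 for the CephFS metadata pool, 32 elsewhere), and the 3.0 change threshold are assumptions inferred from the logged results, not read from this log:

```python
import math

def nearest_power_of_two(n: float) -> int:
    # Quantize a fractional PG target to the nearest power of two (>= 1).
    if n <= 1:
        return 1
    hi = 1 << math.ceil(math.log2(n))
    lo = hi >> 1
    return lo if (n - lo) < (hi - n) else hi

def pool_pg_target(usage_ratio, bias, pg_num_min,
                   target_pg_per_osd=100, num_osds=3):
    # "pg target" as logged = share of capacity * bias * (100 PGs/OSD * 3 OSDs).
    raw = usage_ratio * bias * target_pg_per_osd * num_osds
    return raw, max(pg_num_min, nearest_power_of_two(raw))

print(pool_pg_target(0.000551649390343166, 1.0, pg_num_min=32))
# -> (0.1654948171029498, 32)    matches pool 'vms': "quantized to 32"
print(pool_pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))
# -> (0.0006104707950771635, 16) matches 'cephfs.cephfs.meta': "quantized to 16"
# The autoscaler only acts when quantized target and current pg_num differ by
# more than a threshold factor (default 3.0): 32/16 == 2, so the metadata pool
# is left at 32, hence "(current 32)" with no pg_num change.
```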
Oct  3 11:05:56 compute-0 podman[500389]: 2025-10-03 11:05:56.705843821 +0000 UTC m=+0.096713059 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0, vcs-type=git, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.component=ubi9-container, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 11:05:56 compute-0 podman[500390]: 2025-10-03 11:05:56.723142013 +0000 UTC m=+0.125513088 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:05:56 compute-0 podman[500388]: 2025-10-03 11:05:56.746480999 +0000 UTC m=+0.140729865 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:05:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:57 compute-0 podman[500559]: 2025-10-03 11:05:57.443837704 +0000 UTC m=+0.081703570 container create 6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:05:57 compute-0 systemd[1]: Started libpod-conmon-6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16.scope.
Oct  3 11:05:57 compute-0 podman[500559]: 2025-10-03 11:05:57.411074077 +0000 UTC m=+0.048940013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:05:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:05:57 compute-0 podman[500559]: 2025-10-03 11:05:57.581402686 +0000 UTC m=+0.219268582 container init 6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:05:57 compute-0 podman[500559]: 2025-10-03 11:05:57.598702537 +0000 UTC m=+0.236568383 container start 6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:05:57 compute-0 podman[500559]: 2025-10-03 11:05:57.603414689 +0000 UTC m=+0.241280595 container attach 6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_stonebraker, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 11:05:57 compute-0 unruffled_stonebraker[500574]: 167 167
Oct  3 11:05:57 compute-0 systemd[1]: libpod-6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16.scope: Deactivated successfully.
Oct  3 11:05:57 compute-0 podman[500559]: 2025-10-03 11:05:57.611383293 +0000 UTC m=+0.249249169 container died 6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_stonebraker, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-349325d7b5d52b14493e63135c64663f62ef30055f53258fffae091a7ef47235-merged.mount: Deactivated successfully.
Oct  3 11:05:57 compute-0 podman[500559]: 2025-10-03 11:05:57.675575502 +0000 UTC m=+0.313441368 container remove 6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_stonebraker, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 11:05:57 compute-0 systemd[1]: libpod-conmon-6ba31e6d17942bf5b451d1d62bfc91529439a3513f66cd9ab523dca2753b6a16.scope: Deactivated successfully.
Oct  3 11:05:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:05:57 compute-0 podman[500597]: 2025-10-03 11:05:57.922372542 +0000 UTC m=+0.103820126 container create 8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:05:57 compute-0 podman[500597]: 2025-10-03 11:05:57.861971873 +0000 UTC m=+0.043419477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:05:58 compute-0 systemd[1]: Started libpod-conmon-8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e.scope.
Oct  3 11:05:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f13a5c576ea710048e851772c303b0e06d92fe5255285447c01f1bf34a2720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f13a5c576ea710048e851772c303b0e06d92fe5255285447c01f1bf34a2720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f13a5c576ea710048e851772c303b0e06d92fe5255285447c01f1bf34a2720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04f13a5c576ea710048e851772c303b0e06d92fe5255285447c01f1bf34a2720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:05:58 compute-0 podman[500597]: 2025-10-03 11:05:58.10742081 +0000 UTC m=+0.288868474 container init 8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 11:05:58 compute-0 podman[500597]: 2025-10-03 11:05:58.128462112 +0000 UTC m=+0.309909696 container start 8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:05:58 compute-0 podman[500597]: 2025-10-03 11:05:58.1387672 +0000 UTC m=+0.320214794 container attach 8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.154498) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489558154616, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1108, "num_deletes": 251, "total_data_size": 1655486, "memory_usage": 1677392, "flush_reason": "Manual Compaction"}
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489558172422, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1628892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59033, "largest_seqno": 60140, "table_properties": {"data_size": 1623495, "index_size": 2855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11396, "raw_average_key_size": 19, "raw_value_size": 1612728, "raw_average_value_size": 2790, "num_data_blocks": 128, "num_entries": 578, "num_filter_entries": 578, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759489448, "oldest_key_time": 1759489448, "file_creation_time": 1759489558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 17966 microseconds, and 8820 cpu microseconds.
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.172479) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1628892 bytes OK
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.172498) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.174789) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.174803) EVENT_LOG_v1 {"time_micros": 1759489558174798, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.174820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1650359, prev total WAL file size 1650359, number of live WAL files 2.
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.176054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1590KB)], [143(7668KB)]
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489558176172, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 9481298, "oldest_snapshot_seqno": -1}
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7047 keys, 7748891 bytes, temperature: kUnknown
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489558219146, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 7748891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7707619, "index_size": 22535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 185388, "raw_average_key_size": 26, "raw_value_size": 7585493, "raw_average_value_size": 1076, "num_data_blocks": 884, "num_entries": 7047, "num_filter_entries": 7047, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759489558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.219518) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 7748891 bytes
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.221413) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.9 rd, 179.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(10.6) write-amplify(4.8) OK, records in: 7561, records dropped: 514 output_compression: NoCompression
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.221434) EVENT_LOG_v1 {"time_micros": 1759489558221422, "job": 88, "event": "compaction_finished", "compaction_time_micros": 43110, "compaction_time_cpu_micros": 22608, "output_level": 6, "num_output_files": 1, "total_output_size": 7748891, "num_input_records": 7561, "num_output_records": 7047, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489558221843, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489558223122, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.175720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.223393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.223402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.223405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.223408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:05:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:05:58.223411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
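[editor's note] The "compacted to" summary for JOB 88 above is reproducible from the EVENT_LOG_v1 byte counts: write amplification is output bytes over the newly flushed L0 bytes, read-write amplification adds the bytes read, and the rd/wr rates are bytes over compaction_time_micros (bytes/µs, i.e. MB/s). A worked check, with every input taken from the log:

```python
# All figures from the JOB 87/88 records above.
new_data   = 1_628_892   # table #145, the freshly flushed L0 input
read_total = 9_481_298   # input_data_size: #145 (L0) + #143 (L6)
written    = 7_748_891   # output table #146
micros     = 43_110      # compaction_time_micros

print(round(written / new_data, 1))                 # 4.8  -> write-amplify(4.8)
print(round((read_total + written) / new_data, 1))  # 10.6 -> read-write-amplify(10.6)
print(round(read_total / micros, 1))                # 219.9 -> "MB/sec: 219.9 rd"
print(round(written / micros, 1))                   # 179.7 -> "179.7 wr"
```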
Oct  3 11:05:58 compute-0 nova_compute[351685]: 2025-10-03 11:05:58.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:05:59 compute-0 nova_compute[351685]: 2025-10-03 11:05:59.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:05:59 compute-0 podman[157165]: time="2025-10-03T11:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:05:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47863 "" "Go-http-client/1.1"
Oct  3 11:05:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9517 "" "Go-http-client/1.1"
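[editor's note] The two HTTP requests above are the libpod REST API being scraped over podman's Unix socket (the podman_exporter configuration logged earlier points at unix:///run/podman/podman.sock). A hedged sketch of making the same containers/json call from Python; the socket path and the /v4.9.3 API prefix come from this log, while the UnixHTTPConnection helper is an assumption of this sketch, not part of any podman client library:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that speaks HTTP over a Unix domain socket."""
    def __init__(self, path: str):
        super().__init__('localhost')  # host is unused; the socket path matters
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

# Socket path and API version as seen in this log.
conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
body = conn.getresponse().read()
for c in json.loads(body):
    print(c['Id'][:12], c.get('State'))
conn.close()
```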
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]: [
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:    {
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        "available": false,
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        "ceph_device": false,
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        "lsm_data": {},
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        "lvs": [],
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        "path": "/dev/sr0",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        "rejected_reasons": [
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "Insufficient space (<5GB)",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "Has a FileSystem"
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        ],
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        "sys_api": {
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "actuators": null,
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "device_nodes": "sr0",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "devname": "sr0",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "human_readable_size": "482.00 KB",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "id_bus": "ata",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "model": "QEMU DVD-ROM",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "nr_requests": "2",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "parent": "/dev/sr0",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "partitions": {},
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "path": "/dev/sr0",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "removable": "1",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "rev": "2.5+",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "ro": "0",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "rotational": "0",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "sas_address": "",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "sas_device_handle": "",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "scheduler_mode": "mq-deadline",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "sectors": 0,
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "sectorsize": "2048",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "size": 493568.0,
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "support_discard": "2048",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "type": "disk",
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:            "vendor": "QEMU"
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:        }
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]:    }
Oct  3 11:06:00 compute-0 hopeful_torvalds[500613]: ]
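[editor's note] The JSON array above is ceph-volume inventory output; the short-lived quay.io/ceph/ceph containers around it are cephadm collecting it (ceph-volume inventory --format json emits this shape). A minimal sketch of consuming such a record, using a trimmed copy of the /dev/sr0 entry; the rejection reasons here are why the "All data devices are unavailable" message appears a few seconds later:

```python
import json

# Trimmed copy of the /dev/sr0 record logged above.
raw = '''[
  {"available": false, "path": "/dev/sr0",
   "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]}
]'''

for dev in json.loads(raw):
    status = ('usable' if dev['available']
              else 'rejected: ' + '; '.join(dev['rejected_reasons']))
    print(dev['path'], '->', status)
# /dev/sr0 -> rejected: Insufficient space (<5GB); Has a FileSystem
```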
Oct  3 11:06:00 compute-0 systemd[1]: libpod-8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e.scope: Deactivated successfully.
Oct  3 11:06:00 compute-0 systemd[1]: libpod-8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e.scope: Consumed 2.331s CPU time.
Oct  3 11:06:00 compute-0 conmon[500613]: conmon 8c85da380f71c0581f6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e.scope/container/memory.events
Oct  3 11:06:00 compute-0 podman[500597]: 2025-10-03 11:06:00.397633781 +0000 UTC m=+2.579081335 container died 8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-04f13a5c576ea710048e851772c303b0e06d92fe5255285447c01f1bf34a2720-merged.mount: Deactivated successfully.
Oct  3 11:06:00 compute-0 podman[500597]: 2025-10-03 11:06:00.544126378 +0000 UTC m=+2.725573932 container remove 8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:06:00 compute-0 systemd[1]: libpod-conmon-8c85da380f71c0581f6bd9bd404767d8b8d864b84802e7f256691a893d02852e.scope: Deactivated successfully.
Oct  3 11:06:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:06:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:06:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:06:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:06:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:06:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:06:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:06:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 201beab5-618e-430b-aa0f-da7936bb6978 does not exist
Oct  3 11:06:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b4af31b0-4f4b-4fb2-81cb-dac393f44c55 does not exist
Oct  3 11:06:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9fd108a2-c42d-4253-9b66-632a2eba111a does not exist
Oct  3 11:06:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:06:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:06:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:06:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:06:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:06:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:06:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:01 compute-0 openstack_network_exporter[367524]: ERROR   11:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:06:01 compute-0 openstack_network_exporter[367524]: ERROR   11:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:06:01 compute-0 openstack_network_exporter[367524]: ERROR   11:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:06:01 compute-0 openstack_network_exporter[367524]: ERROR   11:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:06:01 compute-0 openstack_network_exporter[367524]: ERROR   11:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:06:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:06:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:06:01 compute-0 podman[502950]: 2025-10-03 11:06:01.772508518 +0000 UTC m=+0.089967124 container create 2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_maxwell, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 11:06:01 compute-0 podman[502950]: 2025-10-03 11:06:01.74845879 +0000 UTC m=+0.065917456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:06:01 compute-0 systemd[1]: Started libpod-conmon-2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b.scope.
Oct  3 11:06:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:06:01 compute-0 podman[502950]: 2025-10-03 11:06:01.908222391 +0000 UTC m=+0.225681017 container init 2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_maxwell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:06:01 compute-0 podman[502950]: 2025-10-03 11:06:01.924953545 +0000 UTC m=+0.242412161 container start 2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_maxwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:06:01 compute-0 loving_maxwell[502965]: 167 167
Oct  3 11:06:01 compute-0 systemd[1]: libpod-2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b.scope: Deactivated successfully.
Oct  3 11:06:01 compute-0 podman[502950]: 2025-10-03 11:06:01.930571174 +0000 UTC m=+0.248029760 container attach 2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 11:06:01 compute-0 conmon[502965]: conmon 2a0bf65fd14392c8d148 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b.scope/container/memory.events
Oct  3 11:06:01 compute-0 podman[502950]: 2025-10-03 11:06:01.931958669 +0000 UTC m=+0.249417245 container died 2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 11:06:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd14a0d7f2af07e6fdead7ffaaff788114ed46bef8b736771a2fc785331f5da0-merged.mount: Deactivated successfully.
Oct  3 11:06:01 compute-0 podman[502950]: 2025-10-03 11:06:01.985764566 +0000 UTC m=+0.303223142 container remove 2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 11:06:02 compute-0 systemd[1]: libpod-conmon-2a0bf65fd14392c8d148f75cd53784e2d9ee107501cb81bcf3f28c00b8669d9b.scope: Deactivated successfully.
Oct  3 11:06:02 compute-0 podman[502989]: 2025-10-03 11:06:02.229564 +0000 UTC m=+0.096908125 container create bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 11:06:02 compute-0 podman[502989]: 2025-10-03 11:06:02.189873483 +0000 UTC m=+0.057217658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:06:02 compute-0 systemd[1]: Started libpod-conmon-bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c.scope.
Oct  3 11:06:02 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0b654fbba2d18666f4791efcc8389cf07e02f37e9d819b368b0ae25c4d573c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0b654fbba2d18666f4791efcc8389cf07e02f37e9d819b368b0ae25c4d573c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0b654fbba2d18666f4791efcc8389cf07e02f37e9d819b368b0ae25c4d573c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0b654fbba2d18666f4791efcc8389cf07e02f37e9d819b368b0ae25c4d573c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0b654fbba2d18666f4791efcc8389cf07e02f37e9d819b368b0ae25c4d573c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:02 compute-0 podman[502989]: 2025-10-03 11:06:02.388140914 +0000 UTC m=+0.255485109 container init bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:06:02 compute-0 podman[502989]: 2025-10-03 11:06:02.410658443 +0000 UTC m=+0.278002568 container start bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:06:02 compute-0 podman[502989]: 2025-10-03 11:06:02.417749719 +0000 UTC m=+0.285093854 container attach bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 11:06:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:03 compute-0 sweet_chaum[503005]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:06:03 compute-0 sweet_chaum[503005]: --> relative data size: 1.0
Oct  3 11:06:03 compute-0 sweet_chaum[503005]: --> All data devices are unavailable
Oct  3 11:06:03 compute-0 systemd[1]: libpod-bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c.scope: Deactivated successfully.
Oct  3 11:06:03 compute-0 systemd[1]: libpod-bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c.scope: Consumed 1.038s CPU time.
Oct  3 11:06:03 compute-0 podman[502989]: 2025-10-03 11:06:03.518661168 +0000 UTC m=+1.386005273 container died bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:06:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea0b654fbba2d18666f4791efcc8389cf07e02f37e9d819b368b0ae25c4d573c-merged.mount: Deactivated successfully.
Oct  3 11:06:03 compute-0 podman[502989]: 2025-10-03 11:06:03.721176833 +0000 UTC m=+1.588520938 container remove bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 11:06:03 compute-0 systemd[1]: libpod-conmon-bd0fcd40e538a6aad9a0bb7f73cf68bf7b1bd45786a1d34b3383ab7042b1716c.scope: Deactivated successfully.
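The create → attach → died → remove events logged for container bd0fcd40… are podman's normal lifecycle for a short-lived cephadm helper: sweet_chaum runs a ceph-volume device report ("0 physical, 3 LVM" data devices, all already consumed) and exits about a second later. The same sequence can be watched live with `podman events`; below is a minimal Python sketch, assuming a podman recent enough to support `--format json` (the Status/ID/Name field names follow podman's JSON event schema and are accessed defensively with `.get`):

```python
import json
import subprocess

# Stream podman lifecycle events as one JSON object per line, mirroring
# the create/init/start/attach/died/remove entries in the journal above.
proc = subprocess.Popen(
    ["podman", "events", "--format", "json"],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Name", ""))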
Oct  3 11:06:04 compute-0 nova_compute[351685]: 2025-10-03 11:06:04.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:04 compute-0 nova_compute[351685]: 2025-10-03 11:06:04.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:04 compute-0 podman[503185]: 2025-10-03 11:06:04.608481094 +0000 UTC m=+0.059188311 container create 0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 11:06:04 compute-0 systemd[1]: Started libpod-conmon-0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3.scope.
Oct  3 11:06:04 compute-0 podman[503185]: 2025-10-03 11:06:04.585737898 +0000 UTC m=+0.036445195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:06:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:06:04 compute-0 podman[503185]: 2025-10-03 11:06:04.743114592 +0000 UTC m=+0.193821849 container init 0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wu, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:06:04 compute-0 podman[503185]: 2025-10-03 11:06:04.75559169 +0000 UTC m=+0.206298947 container start 0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:06:04 compute-0 funny_wu[503201]: 167 167
Oct  3 11:06:04 compute-0 systemd[1]: libpod-0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3.scope: Deactivated successfully.
Oct  3 11:06:04 compute-0 podman[503185]: 2025-10-03 11:06:04.792752237 +0000 UTC m=+0.243459474 container attach 0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:06:04 compute-0 podman[503185]: 2025-10-03 11:06:04.79315312 +0000 UTC m=+0.243860337 container died 0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d554989fde797b84ebfe6ca927cbb6e14ea2da5682845a0ebaddd4933cc02c6-merged.mount: Deactivated successfully.
Oct  3 11:06:04 compute-0 podman[503185]: 2025-10-03 11:06:04.935636199 +0000 UTC m=+0.386343416 container remove 0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 11:06:04 compute-0 systemd[1]: libpod-conmon-0aeecc7d3ffb0f20f0b2ea1607f475ea6c29113b1829824e6cb6fcc7855440d3.scope: Deactivated successfully.
Oct  3 11:06:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:05 compute-0 podman[503225]: 2025-10-03 11:06:05.213382596 +0000 UTC m=+0.082438632 container create c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 11:06:05 compute-0 podman[503225]: 2025-10-03 11:06:05.179803104 +0000 UTC m=+0.048859140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:06:05 compute-0 systemd[1]: Started libpod-conmon-c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0.scope.
Oct  3 11:06:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f587cd66f7e48f20e1c15277a507722a7d15d5628e19fe9f8514fe486f093699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f587cd66f7e48f20e1c15277a507722a7d15d5628e19fe9f8514fe486f093699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f587cd66f7e48f20e1c15277a507722a7d15d5628e19fe9f8514fe486f093699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f587cd66f7e48f20e1c15277a507722a7d15d5628e19fe9f8514fe486f093699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:05 compute-0 podman[503225]: 2025-10-03 11:06:05.352199299 +0000 UTC m=+0.221255335 container init c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 11:06:05 compute-0 podman[503225]: 2025-10-03 11:06:05.369726639 +0000 UTC m=+0.238782675 container start c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 11:06:05 compute-0 podman[503225]: 2025-10-03 11:06:05.378532589 +0000 UTC m=+0.247588595 container attach c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:06:06 compute-0 elegant_williamson[503241]: {
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:    "0": [
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:        {
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "devices": [
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "/dev/loop3"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            ],
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_name": "ceph_lv0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_size": "21470642176",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "name": "ceph_lv0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "tags": {
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cluster_name": "ceph",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.crush_device_class": "",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.encrypted": "0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osd_id": "0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.type": "block",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.vdo": "0"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            },
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "type": "block",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "vg_name": "ceph_vg0"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:        }
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:    ],
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:    "1": [
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:        {
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "devices": [
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "/dev/loop4"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            ],
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_name": "ceph_lv1",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_size": "21470642176",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "name": "ceph_lv1",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "tags": {
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cluster_name": "ceph",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.crush_device_class": "",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.encrypted": "0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osd_id": "1",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.type": "block",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.vdo": "0"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            },
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "type": "block",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "vg_name": "ceph_vg1"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:        }
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:    ],
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:    "2": [
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:        {
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "devices": [
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "/dev/loop5"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            ],
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_name": "ceph_lv2",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_size": "21470642176",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "name": "ceph_lv2",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "tags": {
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.cluster_name": "ceph",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.crush_device_class": "",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.encrypted": "0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osd_id": "2",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.type": "block",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:                "ceph.vdo": "0"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            },
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "type": "block",
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:            "vg_name": "ceph_vg2"
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:        }
Oct  3 11:06:06 compute-0 elegant_williamson[503241]:    ]
Oct  3 11:06:06 compute-0 elegant_williamson[503241]: }
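The JSON printed by elegant_williamson has the shape of `ceph-volume lvm list --format json`: a map from OSD id to a list of LV records carrying the ceph.* tags. A minimal sketch for summarizing such a capture (the filename lvm_list.json is hypothetical; the field names are the ones visible in the log):

```python
import json

# Summarize a captured `ceph-volume lvm list --format json` report
# (filename hypothetical; structure mirrors the journal output above).
with open("lvm_list.json") as fh:
    lvm_list = json.load(fh)

# One line per OSD: logical volume path, backing devices, OSD fsid.
for osd_id, records in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
    for rec in records:
        tags = rec.get("tags", {})
        print(f"osd.{osd_id}: lv={rec['lv_path']} "
              f"devices={','.join(rec['devices'])} "
              f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")
```

Run against the capture above, this prints one line per OSD, e.g. `osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0`.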
Oct  3 11:06:06 compute-0 systemd[1]: libpod-c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0.scope: Deactivated successfully.
Oct  3 11:06:06 compute-0 podman[503225]: 2025-10-03 11:06:06.227120312 +0000 UTC m=+1.096176318 container died c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f587cd66f7e48f20e1c15277a507722a7d15d5628e19fe9f8514fe486f093699-merged.mount: Deactivated successfully.
Oct  3 11:06:06 compute-0 podman[503225]: 2025-10-03 11:06:06.937008938 +0000 UTC m=+1.806064954 container remove c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:06:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:07 compute-0 systemd[1]: libpod-conmon-c2c7112783bd9398d5e95c7e4196d2b572b432f857690160b8f46d9fb03a36b0.scope: Deactivated successfully.
Oct  3 11:06:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:07 compute-0 podman[503402]: 2025-10-03 11:06:07.924204896 +0000 UTC m=+0.102335027 container create 07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:06:07 compute-0 podman[503402]: 2025-10-03 11:06:07.865582555 +0000 UTC m=+0.043712726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:06:08 compute-0 systemd[1]: Started libpod-conmon-07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e.scope.
Oct  3 11:06:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:06:08 compute-0 podman[503402]: 2025-10-03 11:06:08.122030192 +0000 UTC m=+0.300160343 container init 07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:06:08 compute-0 podman[503402]: 2025-10-03 11:06:08.130820293 +0000 UTC m=+0.308950414 container start 07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:06:08 compute-0 loving_blackburn[503417]: 167 167
Oct  3 11:06:08 compute-0 systemd[1]: libpod-07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e.scope: Deactivated successfully.
Oct  3 11:06:08 compute-0 podman[503402]: 2025-10-03 11:06:08.144563272 +0000 UTC m=+0.322693403 container attach 07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:06:08 compute-0 podman[503402]: 2025-10-03 11:06:08.145586815 +0000 UTC m=+0.323716976 container died 07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 11:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-358d3e67fd65eccc891c6ea4fc7f0207ed42c216c5100cd6ca48a48b2bfe5309-merged.mount: Deactivated successfully.
Oct  3 11:06:08 compute-0 podman[503402]: 2025-10-03 11:06:08.406868097 +0000 UTC m=+0.584998258 container remove 07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 11:06:08 compute-0 systemd[1]: libpod-conmon-07f455f4491aa9ceec22f1ac9204acd6e13d5afa2267a4e66271f8f2edb1460e.scope: Deactivated successfully.
Oct  3 11:06:08 compute-0 podman[503441]: 2025-10-03 11:06:08.741627515 +0000 UTC m=+0.104908570 container create 46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldwasser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:06:08 compute-0 podman[503441]: 2025-10-03 11:06:08.692966302 +0000 UTC m=+0.056247407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:06:08 compute-0 systemd[1]: Started libpod-conmon-46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232.scope.
Oct  3 11:06:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92767f1f0bc1a162f0065f6764213e761e398e90900adee583e774f140287cb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92767f1f0bc1a162f0065f6764213e761e398e90900adee583e774f140287cb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92767f1f0bc1a162f0065f6764213e761e398e90900adee583e774f140287cb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92767f1f0bc1a162f0065f6764213e761e398e90900adee583e774f140287cb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:06:08 compute-0 podman[503441]: 2025-10-03 11:06:08.926334342 +0000 UTC m=+0.289615457 container init 46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 11:06:08 compute-0 podman[503441]: 2025-10-03 11:06:08.943060766 +0000 UTC m=+0.306341791 container start 46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:06:08 compute-0 podman[503441]: 2025-10-03 11:06:08.957334622 +0000 UTC m=+0.320615697 container attach 46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldwasser, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:06:09 compute-0 nova_compute[351685]: 2025-10-03 11:06:09.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:09 compute-0 nova_compute[351685]: 2025-10-03 11:06:09.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:09 compute-0 nova_compute[351685]: 2025-10-03 11:06:09.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]: {
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "osd_id": 1,
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "type": "bluestore"
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:    },
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "osd_id": 2,
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "type": "bluestore"
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:    },
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "osd_id": 0,
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:        "type": "bluestore"
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]:    }
Oct  3 11:06:09 compute-0 unruffled_goldwasser[503457]: }
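The second JSON blob, keyed by OSD fsid with `"type": "bluestore"` and the device-mapper path, matches the shape of `ceph-volume raw list` output. A sketch inverting it into an osd_id → device table (the filename raw_list.json is again hypothetical):

```python
import json

# Invert a raw-list-shaped report (keyed by osd_uuid, as in the journal
# above) into an osd_id -> device mapping.
with open("raw_list.json") as fh:
    raw = json.load(fh)

by_osd_id = {
    entry["osd_id"]: entry["device"]
    for entry in raw.values()
    if entry.get("type") == "bluestore"
}
for osd_id in sorted(by_osd_id):
    print(f"osd.{osd_id} -> {by_osd_id[osd_id]}")
```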
Oct  3 11:06:10 compute-0 systemd[1]: libpod-46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232.scope: Deactivated successfully.
Oct  3 11:06:10 compute-0 systemd[1]: libpod-46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232.scope: Consumed 1.080s CPU time.
Oct  3 11:06:10 compute-0 podman[503441]: 2025-10-03 11:06:10.022439059 +0000 UTC m=+1.385720134 container died 46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:06:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-92767f1f0bc1a162f0065f6764213e761e398e90900adee583e774f140287cb0-merged.mount: Deactivated successfully.
Oct  3 11:06:10 compute-0 podman[503441]: 2025-10-03 11:06:10.16344691 +0000 UTC m=+1.526727955 container remove 46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:06:10 compute-0 systemd[1]: libpod-conmon-46839e2a463c9a7b1bf731373fec43e56395e418494635eb0b89b76a10aba232.scope: Deactivated successfully.
Oct  3 11:06:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:06:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:06:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d1fbc4bd-7e9b-4722-9053-508836c77d94 does not exist
Oct  3 11:06:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 399e008b-f907-4a62-9e01-cee8a3a8ccd8 does not exist
Oct  3 11:06:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:06:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
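The two mon_command entries show the cephadm mgr module persisting the refreshed host inventory under config-key names like `mgr/cephadm/host.compute-0.devices.0`. The stored blob can be read back with `ceph config-key get`; a sketch follows (the key is copied from the log; the value's layout is cephadm-internal and may change between releases):

```python
import json
import subprocess

# Read back the device inventory cephadm stored via the mon_command above.
key = "mgr/cephadm/host.compute-0.devices.0"
out = subprocess.run(
    ["ceph", "config-key", "get", key],
    capture_output=True,
    text=True,
    check=True,
).stdout

# cephadm stores JSON; pretty-print it for inspection.
print(json.dumps(json.loads(out), indent=2))
```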
Oct  3 11:06:10 compute-0 auditd[710]: Audit daemon rotating log files
Oct  3 11:06:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:14 compute-0 nova_compute[351685]: 2025-10-03 11:06:14.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:14 compute-0 nova_compute[351685]: 2025-10-03 11:06:14.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:15 compute-0 podman[503562]: 2025-10-03 11:06:15.858571942 +0000 UTC m=+0.098909989 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true)
Oct  3 11:06:15 compute-0 podman[503553]: 2025-10-03 11:06:15.872948171 +0000 UTC m=+0.130317982 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:06:15 compute-0 podman[503554]: 2025-10-03 11:06:15.876165834 +0000 UTC m=+0.129662191 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, config_id=edpm, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:06:15 compute-0 podman[503555]: 2025-10-03 11:06:15.884528851 +0000 UTC m=+0.116775990 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:06:15 compute-0 podman[503556]: 2025-10-03 11:06:15.885218893 +0000 UTC m=+0.118022249 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Oct  3 11:06:15 compute-0 podman[503568]: 2025-10-03 11:06:15.89515944 +0000 UTC m=+0.112012057 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  3 11:06:15 compute-0 podman[503567]: 2025-10-03 11:06:15.922967458 +0000 UTC m=+0.143139761 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
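The three entries above are periodic podman health-check events: each container's configured test command ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<name>) is executed inside the container, and podman logs the resulting health_status together with the container's labels and config_data. The same check can be triggered by hand; a minimal sketch, assuming podman is on PATH and the container name from the log:

    import subprocess

    # Run the configured healthcheck for the multipathd container once.
    # Exit status 0 means healthy, matching health_status=healthy above.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else "unhealthy")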
Oct  3 11:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:06:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:06:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:19 compute-0 nova_compute[351685]: 2025-10-03 11:06:19.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:19 compute-0 nova_compute[351685]: 2025-10-03 11:06:19.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:24 compute-0 nova_compute[351685]: 2025-10-03 11:06:24.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:24 compute-0 nova_compute[351685]: 2025-10-03 11:06:24.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:24 compute-0 nova_compute[351685]: 2025-10-03 11:06:24.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:24 compute-0 nova_compute[351685]: 2025-10-03 11:06:24.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:06:24 compute-0 nova_compute[351685]: 2025-10-03 11:06:24.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:06:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:25 compute-0 nova_compute[351685]: 2025-10-03 11:06:25.432 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:06:25 compute-0 nova_compute[351685]: 2025-10-03 11:06:25.432 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:06:25 compute-0 nova_compute[351685]: 2025-10-03 11:06:25.432 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:06:25 compute-0 nova_compute[351685]: 2025-10-03 11:06:25.433 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:06:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:27 compute-0 nova_compute[351685]: 2025-10-03 11:06:27.515 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:06:27 compute-0 nova_compute[351685]: 2025-10-03 11:06:27.548 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:06:27 compute-0 nova_compute[351685]: 2025-10-03 11:06:27.549 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
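The heal cycle above is the standard oslo.concurrency pattern: take a named lock scoped to the instance ("refresh_cache-<uuid>"), refresh the network info cache from Neutron, then release on exit. A minimal sketch of that pattern, assuming oslo.concurrency is installed; refresh_from_neutron is a hypothetical stand-in for the actual refresh call:

    from oslo_concurrency import lockutils

    def heal_info_cache(instance_uuid):
        # Same named-lock pattern as the Acquiring/Acquired/Releasing
        # lines in the log; the context manager releases the lock on exit.
        with lockutils.lock("refresh_cache-%s" % instance_uuid):
            refresh_from_neutron(instance_uuid)  # hypothetical helper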
Oct  3 11:06:27 compute-0 podman[503694]: 2025-10-03 11:06:27.847823432 +0000 UTC m=+0.097482553 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:06:27 compute-0 podman[503695]: 2025-10-03 11:06:27.851266362 +0000 UTC m=+0.094558740 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release=1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct  3 11:06:27 compute-0 podman[503698]: 2025-10-03 11:06:27.875136465 +0000 UTC m=+0.107810694 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:06:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:29 compute-0 nova_compute[351685]: 2025-10-03 11:06:29.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:29 compute-0 nova_compute[351685]: 2025-10-03 11:06:29.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:29 compute-0 podman[157165]: time="2025-10-03T11:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:06:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:06:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9096 "" "Go-http-client/1.1"
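The two HTTP lines above are the podman system service answering libpod REST calls (a container list, then per-container stats) over its unix socket; the client is the prometheus-podman-exporter configured earlier with CONTAINER_HOST=unix:///run/podman/podman.sock. A sketch of issuing the same list call with only the standard library, assuming that socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket; path taken from CONTAINER_HOST above."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")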
Oct  3 11:06:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:31 compute-0 openstack_network_exporter[367524]: ERROR   11:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:06:31 compute-0 openstack_network_exporter[367524]: ERROR   11:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:06:31 compute-0 openstack_network_exporter[367524]: ERROR   11:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:06:31 compute-0 openstack_network_exporter[367524]: ERROR   11:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:06:31 compute-0 openstack_network_exporter[367524]: ERROR   11:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
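The openstack_network_exporter errors above come from its appctl probes: it looks for daemon control sockets (ovsdb-server, ovn-northd) and queries the userspace datapath (dpif-netdev), none of which respond here, likely because those daemons either run elsewhere or expose sockets in a directory the exporter does not check, and this node uses the kernel OVS datapath rather than dpif-netdev. A quick check of which control sockets actually exist, assuming the runtime directories from the container mounts logged earlier:

    import glob

    # ovs-vswitchd/ovsdb-server sockets typically live under
    # /var/run/openvswitch; on this deployment OVN sockets are mounted
    # from /var/lib/openvswitch/ovn (see the ovn_controller volumes).
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))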
Oct  3 11:06:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:34 compute-0 nova_compute[351685]: 2025-10-03 11:06:34.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:34 compute-0 nova_compute[351685]: 2025-10-03 11:06:34.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:34 compute-0 nova_compute[351685]: 2025-10-03 11:06:34.544 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:35 compute-0 nova_compute[351685]: 2025-10-03 11:06:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:36 compute-0 nova_compute[351685]: 2025-10-03 11:06:36.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:36 compute-0 nova_compute[351685]: 2025-10-03 11:06:36.766 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:06:36 compute-0 nova_compute[351685]: 2025-10-03 11:06:36.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:06:36 compute-0 nova_compute[351685]: 2025-10-03 11:06:36.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:06:36 compute-0 nova_compute[351685]: 2025-10-03 11:06:36.768 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:06:36 compute-0 nova_compute[351685]: 2025-10-03 11:06:36.768 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:06:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:06:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472173572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:06:37 compute-0 nova_compute[351685]: 2025-10-03 11:06:37.286 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
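The resource tracker sizes its RBD-backed storage by shelling out to `ceph df`, exactly as logged above (dispatched on the mon side as the audited client.openstack "df" command). A minimal sketch of the same call with the standard library, assuming the current JSON layout with a top-level "stats" object and the same client id/conf as the log:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024**3)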
Oct  3 11:06:37 compute-0 nova_compute[351685]: 2025-10-03 11:06:37.385 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:06:37 compute-0 nova_compute[351685]: 2025-10-03 11:06:37.386 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:06:37 compute-0 nova_compute[351685]: 2025-10-03 11:06:37.387 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:06:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:37 compute-0 nova_compute[351685]: 2025-10-03 11:06:37.954 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:06:37 compute-0 nova_compute[351685]: 2025-10-03 11:06:37.956 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3815MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:06:37 compute-0 nova_compute[351685]: 2025-10-03 11:06:37.957 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:06:37 compute-0 nova_compute[351685]: 2025-10-03 11:06:37.957 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.060 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.060 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.061 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.145 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:06:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:06:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516235848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.633 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.646 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.668 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
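The inventory reported to placement determines schedulable capacity per resource class as (total - reserved) * allocation_ratio. A quick check against the logged values:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # capacity = (total - reserved) * allocation_ratio
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])

So the scheduler sees 32 VCPUs (8 physical cores overcommitted 4x), 7167 MB of RAM, and about 52 GB of disk on this host.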
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.672 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:06:38 compute-0 nova_compute[351685]: 2025-10-03 11:06:38.673 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:06:39 compute-0 nova_compute[351685]: 2025-10-03 11:06:39.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:39 compute-0 nova_compute[351685]: 2025-10-03 11:06:39.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:39 compute-0 nova_compute[351685]: 2025-10-03 11:06:39.672 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:39 compute-0 nova_compute[351685]: 2025-10-03 11:06:39.674 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:06:39 compute-0 nova_compute[351685]: 2025-10-03 11:06:39.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:39 compute-0 nova_compute[351685]: 2025-10-03 11:06:39.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:06:41.674 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:06:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:06:41.675 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:06:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:06:41.676 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:06:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:44 compute-0 nova_compute[351685]: 2025-10-03 11:06:44.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:44 compute-0 nova_compute[351685]: 2025-10-03 11:06:44.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:44 compute-0 nova_compute[351685]: 2025-10-03 11:06:44.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:44 compute-0 nova_compute[351685]: 2025-10-03 11:06:44.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:06:46
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.log']
Oct  3 11:06:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
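This balancer pass ran in upmap mode with a 5% max-misplaced budget and prepared no remapping changes across the listed pools; the "0/10" denominator matches the usual cap on optimizations per pass (an assumption based on the default upmap_max_optimizations of 10). The same state can be read back from the mgr; a rough sketch, assuming admin CLI access:

    import json
    import subprocess

    # "ceph balancer status" reports mode, active flag, and the last
    # optimize run, mirroring the balancer lines in this log.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status["mode"], status["active"])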
Oct  3 11:06:46 compute-0 nova_compute[351685]: 2025-10-03 11:06:46.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:46 compute-0 podman[503800]: 2025-10-03 11:06:46.893203739 +0000 UTC m=+0.112760291 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct  3 11:06:46 compute-0 podman[503797]: 2025-10-03 11:06:46.904279893 +0000 UTC m=+0.146217509 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:06:46 compute-0 podman[503804]: 2025-10-03 11:06:46.904861231 +0000 UTC m=+0.123083611 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Oct  3 11:06:46 compute-0 podman[503798]: 2025-10-03 11:06:46.910371347 +0000 UTC m=+0.141579441 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:06:46 compute-0 podman[503799]: 2025-10-03 11:06:46.914472178 +0000 UTC m=+0.146391415 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:06:46 compute-0 podman[503825]: 2025-10-03 11:06:46.922731692 +0000 UTC m=+0.122719159 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:06:46 compute-0 podman[503809]: 2025-10-03 11:06:46.958054849 +0000 UTC m=+0.165375531 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller)
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:06:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:06:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:49 compute-0 nova_compute[351685]: 2025-10-03 11:06:49.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:49 compute-0 nova_compute[351685]: 2025-10-03 11:06:49.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:54 compute-0 nova_compute[351685]: 2025-10-03 11:06:54.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:06:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2235902889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:06:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:06:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2235902889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
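Besides `df`, this client.openstack consumer (from 192.168.122.10, a different host than the nova calls above; likely a cinder capacity report, though that attribution is an inference) also asks the mon for the per-pool quota of "volumes". A sketch of the equivalent CLI call, reusing the client id and conf from the nova lines:

    import json
    import subprocess

    # Same query the audit log records: quota for the "volumes" pool.
    quota = json.loads(subprocess.run(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(quota.get("quota_max_bytes"), quota.get("quota_max_objects"))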
Oct  3 11:06:54 compute-0 nova_compute[351685]: 2025-10-03 11:06:54.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:06:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
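Each _maybe_adjust pass above applies the same arithmetic per pool: pg target = capacity ratio x bias x PG budget. The logged numbers are exactly consistent with a budget of 300 PGs (for example 3 OSDs x mon_target_pg_per_osd=100 on this 60 GiB cluster, the 64411926528 bytes in the effective_target_ratio lines). A sketch of that step, with the budget inferred from the log rather than read from configuration:

    # Reproduce "Pool 'X' ... using R of space, bias B, pg target T": T = R * B * budget.
    PG_BUDGET = 300  # inferred: e.g. 3 OSDs x mon_target_pg_per_osd=100

    def raw_pg_target(capacity_ratio, bias, budget=PG_BUDGET):
        return capacity_ratio * bias * budget

    print(raw_pg_target(0.000551649390343166, 1.0))   # 'vms' -> 0.1654948171029498
    print(raw_pg_target(5.087256625643029e-07, 4.0))  # 'cephfs.cephfs.meta' -> 0.0006104707950...

The autoscaler then quantizes the raw target to a power of two and applies per-pool minimums and hysteresis, which is why targets far below 1 still show "quantized to" 16 or 32 above.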
Oct  3 11:06:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2988: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:57 compute-0 nova_compute[351685]: 2025-10-03 11:06:57.749 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:06:57 compute-0 nova_compute[351685]: 2025-10-03 11:06:57.750 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 11:06:57 compute-0 nova_compute[351685]: 2025-10-03 11:06:57.774 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 11:06:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:06:58 compute-0 podman[503932]: 2025-10-03 11:06:58.869135266 +0000 UTC m=+0.117653317 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:06:58 compute-0 podman[503934]: 2025-10-03 11:06:58.885510499 +0000 UTC m=+0.125919631 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:06:58 compute-0 podman[503933]: 2025-10-03 11:06:58.9087129 +0000 UTC m=+0.154083050 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.buildah.version=1.29.0, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.openshift.tags=base rhel9)
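Each of these health_status=healthy events is the per-container healthcheck timer firing the 'healthcheck' test from the config_data shown (e.g. /openstack/healthcheck podman_exporter). The same check can be run on demand; a sketch via the podman CLI from Python, with the container name taken from the log:

    import subprocess

    # Exit status 0 means the configured healthcheck passed; non-zero means unhealthy.
    r = subprocess.run(['podman', 'healthcheck', 'run', 'podman_exporter'])
    print('healthy' if r.returncode == 0 else 'unhealthy (rc=%d)' % r.returncode)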
Oct  3 11:06:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:06:59 compute-0 nova_compute[351685]: 2025-10-03 11:06:59.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:59 compute-0 nova_compute[351685]: 2025-10-03 11:06:59.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:06:59 compute-0 podman[157165]: time="2025-10-03T11:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:06:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:06:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9102 "" "Go-http-client/1.1"
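These two GETs are the libpod REST API being scraped over the unix socket (the prometheus-podman-exporter container above mounts /run/podman/podman.sock and points CONTAINER_HOST at it). A standard-library sketch of the same calls, with the socket path and /v4.9.3 prefix copied from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix-domain socket instead of TCP."""
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    for path in ('/v4.9.3/libpod/containers/json?all=true',
                 '/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false'):
        conn = UnixHTTPConnection('/run/podman/podman.sock')
        conn.request('GET', path)
        body = json.loads(conn.getresponse().read())
        conn.close()
        print(path.split('?')[0], '->', len(body) if isinstance(body, list) else body)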
Oct  3 11:07:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:01 compute-0 openstack_network_exporter[367524]: ERROR   11:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:07:01 compute-0 openstack_network_exporter[367524]: ERROR   11:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:07:01 compute-0 openstack_network_exporter[367524]: ERROR   11:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:07:01 compute-0 openstack_network_exporter[367524]: ERROR   11:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:07:01 compute-0 openstack_network_exporter[367524]: ERROR   11:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
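These exporter errors recur on every scrape and are likely expected on this node rather than a fault: openstack_network_exporter drives ovs-appctl-style control sockets, but it cannot find one for ovsdb-server under its configured run directory, ovn-northd runs on the controllers rather than on a compute node (which runs ovn-controller), and the dpif-netdev/pmd-* commands only answer on a userspace (DPDK) datapath, which a kernel-datapath host does not have. A quick check of which control sockets actually exist, using the run directories the exporter container mounts:

    import glob

    # /run/openvswitch and /run/ovn are the mounts from the exporter's config_data.
    for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'none')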
Oct  3 11:07:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2991: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:04 compute-0 nova_compute[351685]: 2025-10-03 11:07:04.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:04 compute-0 nova_compute[351685]: 2025-10-03 11:07:04.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:09 compute-0 nova_compute[351685]: 2025-10-03 11:07:09.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:09 compute-0 nova_compute[351685]: 2025-10-03 11:07:09.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:07:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:07:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:07:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:07:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:07:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:07:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2501b149-8e8a-465f-9eca-121d1e67166f does not exist
Oct  3 11:07:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev fc25b171-4bcf-4058-99f9-eb89d137a007 does not exist
Oct  3 11:07:11 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a79c7780-fb0b-46ea-8567-0ff459b59cf9 does not exist
Oct  3 11:07:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:07:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:07:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:07:11 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:07:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:07:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:07:12 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:07:12 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:07:12 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
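This burst from mgr.compute-0.vtkhde is the cephadm mgr module's periodic reconcile: it regenerates a minimal client configuration, fetches the client.admin and client.bootstrap-osd keys it distributes to managed hosts, checks the OSD tree for destroyed OSDs, and persists its removal queue under the mgr/cephadm/osd_remove_queue config-key. The minimal conf can be reproduced by hand; a sketch via the CLI (expect a short [global] stanza with fsid and mon_host):

    import subprocess

    out = subprocess.run(['ceph', 'config', 'generate-minimal-conf'],
                         check=True, capture_output=True, text=True).stdout
    print(out)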
Oct  3 11:07:12 compute-0 podman[504261]: 2025-10-03 11:07:12.703551462 +0000 UTC m=+0.048791628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:07:12 compute-0 podman[504261]: 2025-10-03 11:07:12.817544371 +0000 UTC m=+0.162784447 container create 468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:07:12 compute-0 systemd[1]: Started libpod-conmon-468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf.scope.
Oct  3 11:07:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:07:12 compute-0 podman[504261]: 2025-10-03 11:07:12.962554202 +0000 UTC m=+0.307794298 container init 468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:07:12 compute-0 podman[504261]: 2025-10-03 11:07:12.980550576 +0000 UTC m=+0.325790652 container start 468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 11:07:12 compute-0 adoring_banzai[504276]: 167 167
Oct  3 11:07:12 compute-0 systemd[1]: libpod-468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf.scope: Deactivated successfully.
Oct  3 11:07:13 compute-0 podman[504261]: 2025-10-03 11:07:13.042305878 +0000 UTC m=+0.387546014 container attach 468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banzai, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  3 11:07:13 compute-0 podman[504261]: 2025-10-03 11:07:13.043977952 +0000 UTC m=+0.389218038 container died 468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banzai, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 11:07:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-691a0d1ce72ff3fcbdbde5a4426bd78180f69ca7b7129f907a1e9b9163454683-merged.mount: Deactivated successfully.
Oct  3 11:07:13 compute-0 podman[504261]: 2025-10-03 11:07:13.295020767 +0000 UTC m=+0.640260883 container remove 468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:07:13 compute-0 systemd[1]: libpod-conmon-468b2e0840d4ac8383abc71ad5dac8ce6819fa40ddec5f5101c46098062549cf.scope: Deactivated successfully.
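The whole create/init/start/attach/died/remove sequence for adoring_banzai spans well under a second: cephadm probes the host with throwaway containers from the pinned ceph image, and the container's only output, "167 167", is the uid/gid of the ceph user baked into that image. A rough equivalent of the probe (the stat target is an assumption for illustration; the exact entrypoint cephadm used is not shown in the log):

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    # --rm yields the same create/start/died/remove lifecycle seen in the journal.
    r = subprocess.run(['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
                        '-c', '%u %g', '/var/lib/ceph'],
                       capture_output=True, text=True)
    print(r.stdout.strip())  # expected: "167 167"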
Oct  3 11:07:13 compute-0 podman[504301]: 2025-10-03 11:07:13.580395528 +0000 UTC m=+0.083845298 container create f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:07:13 compute-0 podman[504301]: 2025-10-03 11:07:13.544597595 +0000 UTC m=+0.048047425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:07:13 compute-0 systemd[1]: Started libpod-conmon-f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a.scope.
Oct  3 11:07:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc16208a32732f96e9feaa5938ff0fc51b815510fbdf9ae4c4f06a78456c0c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc16208a32732f96e9feaa5938ff0fc51b815510fbdf9ae4c4f06a78456c0c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc16208a32732f96e9feaa5938ff0fc51b815510fbdf9ae4c4f06a78456c0c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc16208a32732f96e9feaa5938ff0fc51b815510fbdf9ae4c4f06a78456c0c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfc16208a32732f96e9feaa5938ff0fc51b815510fbdf9ae4c4f06a78456c0c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:13 compute-0 podman[504301]: 2025-10-03 11:07:13.768790633 +0000 UTC m=+0.272240383 container init f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct  3 11:07:13 compute-0 podman[504301]: 2025-10-03 11:07:13.779514606 +0000 UTC m=+0.282964336 container start f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kirch, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:07:13 compute-0 podman[504301]: 2025-10-03 11:07:13.785013241 +0000 UTC m=+0.288462971 container attach f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kirch, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:07:14 compute-0 nova_compute[351685]: 2025-10-03 11:07:14.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:14 compute-0 nova_compute[351685]: 2025-10-03 11:07:14.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:14 compute-0 nova_compute[351685]: 2025-10-03 11:07:14.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:07:14 compute-0 nova_compute[351685]: 2025-10-03 11:07:14.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 11:07:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2997: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:15 compute-0 loving_kirch[504317]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:07:15 compute-0 loving_kirch[504317]: --> relative data size: 1.0
Oct  3 11:07:15 compute-0 loving_kirch[504317]: --> All data devices are unavailable
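This is ceph-volume's drive-group report inside the loving_kirch container: the spec passed 3 LVM data devices and zero physical ones, and all are reported unavailable, which is the normal outcome when the logical volumes already carry OSDs, so there is nothing new to deploy. The JSON that laughing_mcclintock prints further down has the shape of ceph-volume lvm list --format json; a sketch that parses that output to show which OSD each LV backs (key names taken from the listing below):

    import json
    import subprocess

    listing = json.loads(subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout)
    for osd_id, lvs in listing.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")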
Oct  3 11:07:15 compute-0 systemd[1]: libpod-f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a.scope: Deactivated successfully.
Oct  3 11:07:15 compute-0 podman[504301]: 2025-10-03 11:07:15.114217169 +0000 UTC m=+1.617666949 container died f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kirch, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 11:07:15 compute-0 systemd[1]: libpod-f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a.scope: Consumed 1.278s CPU time.
Oct  3 11:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfc16208a32732f96e9feaa5938ff0fc51b815510fbdf9ae4c4f06a78456c0c5-merged.mount: Deactivated successfully.
Oct  3 11:07:15 compute-0 podman[504301]: 2025-10-03 11:07:15.297667757 +0000 UTC m=+1.801117507 container remove f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  3 11:07:15 compute-0 systemd[1]: libpod-conmon-f78f3a5e920bf1e6fe80c6e609e89c956119ad2dca6e661f3109eaf9a4863f9a.scope: Deactivated successfully.
Oct  3 11:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:07:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:07:16 compute-0 podman[504496]: 2025-10-03 11:07:16.395875069 +0000 UTC m=+0.081474112 container create 00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 11:07:16 compute-0 podman[504496]: 2025-10-03 11:07:16.36987375 +0000 UTC m=+0.055472813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:07:16 compute-0 systemd[1]: Started libpod-conmon-00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d.scope.
Oct  3 11:07:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:07:16 compute-0 podman[504496]: 2025-10-03 11:07:16.524995912 +0000 UTC m=+0.210595025 container init 00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 11:07:16 compute-0 podman[504496]: 2025-10-03 11:07:16.534715503 +0000 UTC m=+0.220314516 container start 00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 11:07:16 compute-0 podman[504496]: 2025-10-03 11:07:16.539387962 +0000 UTC m=+0.224987055 container attach 00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_northcutt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 11:07:16 compute-0 hungry_northcutt[504512]: 167 167
Oct  3 11:07:16 compute-0 systemd[1]: libpod-00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d.scope: Deactivated successfully.
Oct  3 11:07:16 compute-0 podman[504496]: 2025-10-03 11:07:16.544302068 +0000 UTC m=+0.229901081 container died 00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 11:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a5e85ade98bf72eb5286d9dc831d7407637065fe675f7f06d5543a13e876c0-merged.mount: Deactivated successfully.
Oct  3 11:07:16 compute-0 podman[504496]: 2025-10-03 11:07:16.596048561 +0000 UTC m=+0.281647574 container remove 00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:07:16 compute-0 systemd[1]: libpod-conmon-00f805f69c0ecc92412ea13a7fece00df663adb0f9ecb5eab51ace405928244d.scope: Deactivated successfully.
Oct  3 11:07:16 compute-0 podman[504535]: 2025-10-03 11:07:16.870667409 +0000 UTC m=+0.101164551 container create 793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:07:16 compute-0 podman[504535]: 2025-10-03 11:07:16.824705321 +0000 UTC m=+0.055202523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:07:16 compute-0 systemd[1]: Started libpod-conmon-793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2.scope.
Oct  3 11:07:17 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:07:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8d4af92955ee483e736fcac5ff8ef23993293d9d019f50722060ea41f7fe1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8d4af92955ee483e736fcac5ff8ef23993293d9d019f50722060ea41f7fe1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8d4af92955ee483e736fcac5ff8ef23993293d9d019f50722060ea41f7fe1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8d4af92955ee483e736fcac5ff8ef23993293d9d019f50722060ea41f7fe1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:17 compute-0 podman[504535]: 2025-10-03 11:07:17.065991545 +0000 UTC m=+0.296488667 container init 793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:07:17 compute-0 podman[504535]: 2025-10-03 11:07:17.079866608 +0000 UTC m=+0.310363730 container start 793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:07:17 compute-0 podman[504535]: 2025-10-03 11:07:17.083884787 +0000 UTC m=+0.314381929 container attach 793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:07:17 compute-0 podman[504557]: 2025-10-03 11:07:17.144852613 +0000 UTC m=+0.118487285 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 11:07:17 compute-0 podman[504566]: 2025-10-03 11:07:17.14539796 +0000 UTC m=+0.119463355 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:07:17 compute-0 podman[504554]: 2025-10-03 11:07:17.156791503 +0000 UTC m=+0.157402975 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:07:17 compute-0 podman[504553]: 2025-10-03 11:07:17.169539901 +0000 UTC m=+0.158093959 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm)
Oct  3 11:07:17 compute-0 podman[504551]: 2025-10-03 11:07:17.172223397 +0000 UTC m=+0.170330280 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:07:17 compute-0 podman[504556]: 2025-10-03 11:07:17.176843064 +0000 UTC m=+0.157735587 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct  3 11:07:17 compute-0 podman[504578]: 2025-10-03 11:07:17.209756575 +0000 UTC m=+0.161456826 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]: {
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:    "0": [
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:        {
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "devices": [
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "/dev/loop3"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            ],
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_name": "ceph_lv0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_size": "21470642176",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "name": "ceph_lv0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "tags": {
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cluster_name": "ceph",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.crush_device_class": "",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.encrypted": "0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osd_id": "0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.type": "block",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.vdo": "0"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            },
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "type": "block",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "vg_name": "ceph_vg0"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:        }
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:    ],
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:    "1": [
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:        {
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "devices": [
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "/dev/loop4"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            ],
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_name": "ceph_lv1",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_size": "21470642176",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "name": "ceph_lv1",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "tags": {
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cluster_name": "ceph",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.crush_device_class": "",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.encrypted": "0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osd_id": "1",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.type": "block",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.vdo": "0"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            },
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "type": "block",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "vg_name": "ceph_vg1"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:        }
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:    ],
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:    "2": [
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:        {
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "devices": [
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "/dev/loop5"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            ],
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_name": "ceph_lv2",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_size": "21470642176",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "name": "ceph_lv2",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "tags": {
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.cluster_name": "ceph",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.crush_device_class": "",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.encrypted": "0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osd_id": "2",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.type": "block",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:                "ceph.vdo": "0"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            },
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "type": "block",
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:            "vg_name": "ceph_vg2"
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:        }
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]:    ]
Oct  3 11:07:17 compute-0 laughing_mcclintock[504550]: }
Oct  3 11:07:17 compute-0 systemd[1]: libpod-793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2.scope: Deactivated successfully.
Oct  3 11:07:17 compute-0 podman[504535]: 2025-10-03 11:07:17.892109351 +0000 UTC m=+1.122606463 container died 793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcclintock, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 11:07:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab8d4af92955ee483e736fcac5ff8ef23993293d9d019f50722060ea41f7fe1e-merged.mount: Deactivated successfully.
Oct  3 11:07:18 compute-0 podman[504535]: 2025-10-03 11:07:18.007680611 +0000 UTC m=+1.238177793 container remove 793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcclintock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:07:18 compute-0 systemd[1]: libpod-conmon-793ae3fb6a30615f571457416dbd3733b5c3183020691f0b992bdbf332bddcf2.scope: Deactivated successfully.
Oct  3 11:07:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:19 compute-0 nova_compute[351685]: 2025-10-03 11:07:19.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:19 compute-0 podman[504840]: 2025-10-03 11:07:19.141614504 +0000 UTC m=+0.117894884 container create 7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcclintock, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 11:07:19 compute-0 podman[504840]: 2025-10-03 11:07:19.077307541 +0000 UTC m=+0.053587901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:07:19 compute-0 nova_compute[351685]: 2025-10-03 11:07:19.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:19 compute-0 systemd[1]: Started libpod-conmon-7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2.scope.
Oct  3 11:07:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:07:19 compute-0 podman[504840]: 2025-10-03 11:07:19.418391521 +0000 UTC m=+0.394671941 container init 7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 11:07:19 compute-0 podman[504840]: 2025-10-03 11:07:19.437062207 +0000 UTC m=+0.413342557 container start 7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 11:07:19 compute-0 confident_mcclintock[504856]: 167 167
Oct  3 11:07:19 compute-0 systemd[1]: libpod-7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2.scope: Deactivated successfully.
Oct  3 11:07:19 compute-0 podman[504840]: 2025-10-03 11:07:19.49041845 +0000 UTC m=+0.466698890 container attach 7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 11:07:19 compute-0 podman[504840]: 2025-10-03 11:07:19.491185685 +0000 UTC m=+0.467466075 container died 7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcclintock, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-83617ff7c2bd37d737da0d56a6c18ec88cab639aad4a6850e41aad1478adbcf8-merged.mount: Deactivated successfully.
Oct  3 11:07:19 compute-0 podman[504840]: 2025-10-03 11:07:19.791534114 +0000 UTC m=+0.767814484 container remove 7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:07:19 compute-0 systemd[1]: libpod-conmon-7c54d5a30f7b4f7f6199a25c4d8d0c7854418b0eb0015919332725a5482cffe2.scope: Deactivated successfully.
Oct  3 11:07:20 compute-0 podman[504880]: 2025-10-03 11:07:20.090380076 +0000 UTC m=+0.058249122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:07:20 compute-0 podman[504880]: 2025-10-03 11:07:20.21862685 +0000 UTC m=+0.186495876 container create a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct  3 11:07:20 compute-0 systemd[1]: Started libpod-conmon-a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527.scope.
Oct  3 11:07:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a8c0d5677a7d13c9b8fe6d3eef45bc9e51b827e149a85ced075c3c1697dd95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a8c0d5677a7d13c9b8fe6d3eef45bc9e51b827e149a85ced075c3c1697dd95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a8c0d5677a7d13c9b8fe6d3eef45bc9e51b827e149a85ced075c3c1697dd95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a8c0d5677a7d13c9b8fe6d3eef45bc9e51b827e149a85ced075c3c1697dd95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:07:20 compute-0 podman[504880]: 2025-10-03 11:07:20.413310645 +0000 UTC m=+0.381179731 container init a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:07:20 compute-0 podman[504880]: 2025-10-03 11:07:20.430818625 +0000 UTC m=+0.398687651 container start a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 11:07:20 compute-0 podman[504880]: 2025-10-03 11:07:20.436365202 +0000 UTC m=+0.404234238 container attach a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 11:07:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:21 compute-0 admiring_tu[504894]: {
Oct  3 11:07:21 compute-0 admiring_tu[504894]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "osd_id": 1,
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "type": "bluestore"
Oct  3 11:07:21 compute-0 admiring_tu[504894]:    },
Oct  3 11:07:21 compute-0 admiring_tu[504894]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "osd_id": 2,
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "type": "bluestore"
Oct  3 11:07:21 compute-0 admiring_tu[504894]:    },
Oct  3 11:07:21 compute-0 admiring_tu[504894]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "osd_id": 0,
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:07:21 compute-0 admiring_tu[504894]:        "type": "bluestore"
Oct  3 11:07:21 compute-0 admiring_tu[504894]:    }
Oct  3 11:07:21 compute-0 admiring_tu[504894]: }
Oct  3 11:07:21 compute-0 systemd[1]: libpod-a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527.scope: Deactivated successfully.
Oct  3 11:07:21 compute-0 systemd[1]: libpod-a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527.scope: Consumed 1.160s CPU time.
Oct  3 11:07:21 compute-0 podman[504880]: 2025-10-03 11:07:21.59610342 +0000 UTC m=+1.563972486 container died a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 11:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-83a8c0d5677a7d13c9b8fe6d3eef45bc9e51b827e149a85ced075c3c1697dd95-merged.mount: Deactivated successfully.
Oct  3 11:07:21 compute-0 podman[504880]: 2025-10-03 11:07:21.861433311 +0000 UTC m=+1.829302317 container remove a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_tu, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:07:21 compute-0 systemd[1]: libpod-conmon-a254b080dd881c39fa3e9871729199cc4e7bda002a4cf5ca382118ae93746527.scope: Deactivated successfully.
Oct  3 11:07:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:07:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:07:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:07:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:07:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b69cbc9f-bb64-4eb7-9d49-1984081e49d6 does not exist
Oct  3 11:07:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b623a103-435a-4c9c-94a5-682ae40aa249 does not exist
Oct  3 11:07:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:07:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:07:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:24 compute-0 nova_compute[351685]: 2025-10-03 11:07:24.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:24 compute-0 nova_compute[351685]: 2025-10-03 11:07:24.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:26 compute-0 nova_compute[351685]: 2025-10-03 11:07:26.771 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:07:26 compute-0 nova_compute[351685]: 2025-10-03 11:07:26.771 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:07:26 compute-0 nova_compute[351685]: 2025-10-03 11:07:26.772 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:07:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3003: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:27 compute-0 nova_compute[351685]: 2025-10-03 11:07:27.480 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:07:27 compute-0 nova_compute[351685]: 2025-10-03 11:07:27.481 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:07:27 compute-0 nova_compute[351685]: 2025-10-03 11:07:27.481 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:07:27 compute-0 nova_compute[351685]: 2025-10-03 11:07:27.482 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:07:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:28 compute-0 nova_compute[351685]: 2025-10-03 11:07:28.663 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:07:28 compute-0 nova_compute[351685]: 2025-10-03 11:07:28.707 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:07:28 compute-0 nova_compute[351685]: 2025-10-03 11:07:28.707 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 11:07:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:29 compute-0 nova_compute[351685]: 2025-10-03 11:07:29.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:29 compute-0 nova_compute[351685]: 2025-10-03 11:07:29.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:29 compute-0 podman[157165]: time="2025-10-03T11:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:07:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:07:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9096 "" "Go-http-client/1.1"
Oct  3 11:07:29 compute-0 podman[504993]: 2025-10-03 11:07:29.892198801 +0000 UTC m=+0.133136552 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:07:29 compute-0 podman[504995]: 2025-10-03 11:07:29.897833791 +0000 UTC m=+0.124706703 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Oct  3 11:07:29 compute-0 podman[504994]: 2025-10-03 11:07:29.901510528 +0000 UTC m=+0.143418970 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, version=9.4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 11:07:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:31 compute-0 openstack_network_exporter[367524]: ERROR   11:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:07:31 compute-0 openstack_network_exporter[367524]: ERROR   11:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:07:31 compute-0 openstack_network_exporter[367524]: ERROR   11:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:07:31 compute-0 openstack_network_exporter[367524]: ERROR   11:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:07:31 compute-0 openstack_network_exporter[367524]: ERROR   11:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:07:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:33 compute-0 nova_compute[351685]: 2025-10-03 11:07:33.661 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:07:34 compute-0 nova_compute[351685]: 2025-10-03 11:07:34.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:34 compute-0 nova_compute[351685]: 2025-10-03 11:07:34.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:35 compute-0 nova_compute[351685]: 2025-10-03 11:07:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:07:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:38 compute-0 nova_compute[351685]: 2025-10-03 11:07:38.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:07:38 compute-0 nova_compute[351685]: 2025-10-03 11:07:38.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:07:38 compute-0 nova_compute[351685]: 2025-10-03 11:07:38.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:07:38 compute-0 nova_compute[351685]: 2025-10-03 11:07:38.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:07:38 compute-0 nova_compute[351685]: 2025-10-03 11:07:38.770 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:07:38 compute-0 nova_compute[351685]: 2025-10-03 11:07:38.771 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:07:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:07:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:07:39 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274627029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.297 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.409 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.410 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.411 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.840 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.842 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3816MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.842 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.842 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.924 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.925 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.925 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:07:39 compute-0 nova_compute[351685]: 2025-10-03 11:07:39.972 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:07:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:07:40 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979995787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:07:40 compute-0 nova_compute[351685]: 2025-10-03 11:07:40.465 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
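
The ceph df round trip above took 0.493s: Nova shells out to the ceph CLI and parses the JSON it returns to size the RBD-backed disk pool. A minimal sketch of that call, assuming plain subprocess rather than oslo_concurrency.processutils; ceph_free_bytes is a hypothetical helper, while "stats" and "total_avail_bytes" are fields of ceph df's JSON output:

    import json
    import subprocess

    def ceph_free_bytes(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command line as logged above, captured and parsed as JSON.
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["stats"]["total_avail_bytes"]
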
Oct  3 11:07:40 compute-0 nova_compute[351685]: 2025-10-03 11:07:40.474 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:07:40 compute-0 nova_compute[351685]: 2025-10-03 11:07:40.488 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
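
Placement treats each inventory record as capacity = (total - reserved) * allocation_ratio, so the figures above imply 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk. A quick check of that arithmetic against the logged inventory:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # Placement's capacity formula for each resource class.
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2
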
Oct  3 11:07:40 compute-0 nova_compute[351685]: 2025-10-03 11:07:40.490 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:07:40 compute-0 nova_compute[351685]: 2025-10-03 11:07:40.490 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
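
The two lockutils lines bracket the whole update: "compute_resources" was acquired at 11:07:39.842 and released 0.648s later, after the placement sync. A minimal sketch of the oslo.concurrency pattern those messages come from, where the function is only a stand-in for ResourceTracker._update_available_resource, not Nova's actual code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Everything between the "acquired" and "released" log lines runs
        # while this in-process lock is held.
        ...
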
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.900 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.901 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
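
With only one worker thread for a larger set of pollsters, submissions queue up and the cycle runs essentially serially, which is what the warning above anticipates. A minimal sketch of that executor-based dispatch, assuming hypothetical pollster objects with a poll() method (ceilometer's real manager layers caching, heartbeats, and coordination checks on top of this):

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, threads=1):
        with ThreadPoolExecutor(max_workers=threads) as executor:
            futures = [executor.submit(p.poll) for p in pollsters]
            # With threads < len(pollsters), later futures wait their turn,
            # so the whole cycle takes longer than one polling pass.
            return [f.result() for f in futures]
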
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.914 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
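
Every pollster below reuses this discovery payload to attribute its samples; the keys shown come straight from the logged dict. A hedged sketch of that mapping, where sample_identity is a hypothetical helper rather than ceilometer's API:

    def sample_identity(instance):
        # All keys appear verbatim in the discovery payload above.
        return {
            "resource_id": instance["id"],            # b43db93c-a4fe-...
            "project_id": instance["tenant_id"],
            "user_id": instance["user_id"],
            "libvirt_domain": instance["OS-EXT-SRV-ATTR:instance_name"],
        }
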
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.915 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.916 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:07:40.916407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.926 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.927 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.928 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.928 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:07:40.928610) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:07:40.930843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.960 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.962 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.962 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
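
The three capacity samples line up with the instance's block devices as described by the flavor in the discovery payload above (disk: 1, ephemeral: 1): two 1 GiB volumes plus a small third device, plausibly a config drive. Checking the units:

    >>> 1073741824 / 2**30
    1.0        # 1 GiB root disk and 1 GiB ephemeral disk
    >>> 485376 / 2**10
    474.0      # the third device is 474 KiB
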
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.963 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.963 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:40.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:07:40.963860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.032 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.033 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.034 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.035 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.036 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.036 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:07:41.035964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.037 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.039 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.039 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.041 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:07:41.039635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.042 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:07:41.043081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.046 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:07:41.046515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.049 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.052 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:07:41.050173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:07:41.053683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.093 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
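
The power.state volume of 1 reported above is libvirt's domain state code for a running guest; the table below is libvirt's documented virDomainState numbering:

    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    assert VIR_DOMAIN_STATE[1] == "running"  # matches power.state volume: 1
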
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.094 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.095 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:07:41.095646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.096 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.097 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.097 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.098 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.100 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:07:41.099735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.100 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.100 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.101 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.101 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.102 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.102 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:07:41.102815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.103 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
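
The .delta meter reports the change in a cumulative interface counter since the previous cycle (0 bytes here), while the .rate pollster is skipped because discovery returned no new resources this cycle. A minimal sketch of the delta computation, with a plain dict standing in for the per-pollster history caches seen in the registration lines:

    _previous = {}  # hypothetical per-resource cache of the last counter value

    def bytes_delta(resource_id, current_total):
        last = _previous.get(resource_id, current_total)
        _previous[resource_id] = current_total
        # The first observation yields 0, matching the sample logged above.
        return current_total - last
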
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.104 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.105 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.107 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.107 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:07:41.105382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.109 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:07:41.107094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.109 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.110 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:07:41.109533) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.111 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.111 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:07:41.111441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.112 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.113 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 89310000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:07:41.113481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
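The cpu sample above (volume: 89310000000) is cumulative guest CPU time in nanoseconds, so a single reading is not a utilization figure; that takes two readings and a delta. A minimal sketch of the arithmetic only, with the follow-up sample value, interval, and vCPU count invented for illustration; the actual rate transform ceilometer applies is defined in its pipeline configuration:

    NS_PER_S = 1e9

    def cpu_util_pct(prev_ns, cur_ns, interval_s, vcpus):
        # 89310000000 ns = 89.31 s of accumulated CPU time for this guest.
        used_s = (cur_ns - prev_ns) / NS_PER_S
        return 100.0 * used_s / (interval_s * vcpus)

    # Hypothetical second sample taken 30 s later:
    print(cpu_util_pct(89_310_000_000, 89_610_000_000, 30.0, 1))  # -> 1.0 (%)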
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.115 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.115 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.115 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.117 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:07:41.115552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.117 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:07:41.117819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.118 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.119 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.120 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.121 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:07:41.119875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.122 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.122 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:07:41.121970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.124 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.124 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.126 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:07:41.124426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:07:41.127796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.128 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:07:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:07:41.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
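The ceilometer block above is one complete polling cycle: for each pollster the manager runs discovery, checks whether the source needs hashring coordination, polls, records a heartbeat, and emits per-resource samples ("<uuid>/<meter> volume: <n>") before the final "Finished processing pollster" sweep. A minimal sketch of that control flow; the names (update_heartbeat, run_pollster) are illustrative stand-ins, not ceilometer's actual code in polling/manager.py:

    import datetime

    heartbeats = {}

    def update_heartbeat(meter):
        # Mirrors the "Updated heartbeat for <meter> (<timestamp>)" lines.
        heartbeats[meter] = datetime.datetime.now(datetime.timezone.utc).isoformat()

    def run_pollster(meter, discover, poll, hashrings=None):
        resources = discover()      # "Executing discovery process for pollsters ..."
        if not resources:
            print(f"Skip pollster {meter}, no new resources found this cycle")
            return []
        if hashrings is None:
            # "... is not configured in a source for polling that requires
            # coordination. The current hashrings are the following [None]."
            pass                    # no coordination needed: poll everything locally
        print(f"Polling pollster {meter} in the context of pollsters")
        update_heartbeat(meter)
        samples = [(r, poll(r)) for r in resources]   # "<uuid>/<meter> volume: <n>"
        print(f"Finished polling pollster {meter} in the context of pollsters")
        return samples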
Oct  3 11:07:41 compute-0 nova_compute[351685]: 2025-10-03 11:07:41.492 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:07:41 compute-0 nova_compute[351685]: 2025-10-03 11:07:41.493 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:07:41 compute-0 nova_compute[351685]: 2025-10-03 11:07:41.493 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:07:41 compute-0 nova_compute[351685]: 2025-10-03 11:07:41.494 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
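The four nova_compute lines above come from oslo.service's periodic task runner; _reclaim_queued_deletes short-circuits because reclaim_instance_interval is not set to a positive value. A minimal registration example assuming the real oslo.service API, with a made-up manager class and task body rather than nova's:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            # run_periodic_tasks logs "Running periodic task <Class>.<name>"
            # before dispatching to this method.
            pass

    mgr = DemoManager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)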
Oct  3 11:07:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:07:41.676 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:07:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:07:41.677 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:07:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:07:41.678 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
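The ovn_metadata_agent lines above show oslo.concurrency's named-lock pattern: acquire, run, release, with the wait and hold durations logged around the protected call. A minimal usage sketch assuming the real oslo.concurrency decorator; the function body here is a placeholder, not neutron's ProcessMonitor logic:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the in-process lock held; lockutils logs the
        # 'acquired :: waited Ns' / 'released :: held Ns' pair around it.
        pass

    check_child_processes()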
Oct  3 11:07:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:44 compute-0 nova_compute[351685]: 2025-10-03 11:07:44.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:44 compute-0 nova_compute[351685]: 2025-10-03 11:07:44.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:44 compute-0 nova_compute[351685]: 2025-10-03 11:07:44.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:07:44 compute-0 nova_compute[351685]: 2025-10-03 11:07:44.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:07:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:07:46
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Oct  3 11:07:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:07:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:07:47 compute-0 podman[505097]: 2025-10-03 11:07:47.894873984 +0000 UTC m=+0.129004600 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:07:47 compute-0 podman[505117]: 2025-10-03 11:07:47.901903797 +0000 UTC m=+0.103815765 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:07:47 compute-0 podman[505098]: 2025-10-03 11:07:47.917661421 +0000 UTC m=+0.155693292 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 11:07:47 compute-0 podman[505104]: 2025-10-03 11:07:47.922440963 +0000 UTC m=+0.142259382 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20250930, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:07:47 compute-0 podman[505099]: 2025-10-03 11:07:47.927093812 +0000 UTC m=+0.152639924 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  3 11:07:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:47 compute-0 podman[505100]: 2025-10-03 11:07:47.931692399 +0000 UTC m=+0.145114824 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 11:07:47 compute-0 podman[505112]: 2025-10-03 11:07:47.960171089 +0000 UTC m=+0.162400666 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
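Each podman record above is a periodic health_status event for an edpm-managed container (all healthy, failing streak 0), with the container's full config_data echoed into the event. To spot-check the same state by hand, a container's health can be read back through podman inspect; a small sketch assuming the podman CLI is on PATH and these container names exist as logged:

    import subprocess

    def health_status(container: str) -> str:
        # .State.Health.Status is "healthy", "unhealthy", or "starting".
        fmt = "{{.State.Health.Status}}"
        out = subprocess.check_output(
            ["podman", "inspect", "--format", fmt, container], text=True)
        return out.strip()

    for name in ("node_exporter", "iscsid", "openstack_network_exporter",
                 "ceilometer_agent_compute", "ovn_metadata_agent",
                 "multipathd", "ovn_controller"):
        print(name, health_status(name))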
Oct  3 11:07:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:49 compute-0 nova_compute[351685]: 2025-10-03 11:07:49.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:49 compute-0 nova_compute[351685]: 2025-10-03 11:07:49.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:54 compute-0 nova_compute[351685]: 2025-10-03 11:07:54.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:07:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3014524516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:07:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:07:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3014524516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:07:54 compute-0 nova_compute[351685]: 2025-10-03 11:07:54.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:07:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
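The pg_autoscaler run above is a straight proportion: each pool's "pg target" is its share of raw capacity, weighted by its bias, times the cluster-wide PG budget. A minimal sketch of that arithmetic, assuming the default mon_target_pg_per_osd of 100 and 3 OSDs behind the 60 GiB cluster (both assumed, neither is logged here), reproduces the logged figures exactly:

# Sketch of the pg_autoscaler "pg target" arithmetic, not the actual
# ceph-mgr code. TARGET_PG_PER_OSD and NUM_OSDS are assumptions.
TARGET_PG_PER_OSD = 100  # assumed ceph default mon_target_pg_per_osd
NUM_OSDS = 3             # assumed from the 60 GiB (3 x 20 GiB) cluster

def pg_target(capacity_ratio: float, bias: float) -> float:
    """Pool's share of raw capacity, weighted by bias, times the PG budget."""
    return capacity_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

print(pg_target(0.000551649390343166, 1.0))   # 0.16549... as logged for 'vms'
print(pg_target(5.087256625643029e-07, 4.0))  # 0.00061... as logged for 'cephfs.cephfs.meta'

The "quantized to" value then rounds to a power of two and applies floors and change thresholds, which is why a fractional target can still map to 16 or 32.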
Oct  3 11:07:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:07:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:07:59 compute-0 nova_compute[351685]: 2025-10-03 11:07:59.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:59 compute-0 nova_compute[351685]: 2025-10-03 11:07:59.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:07:59 compute-0 podman[157165]: time="2025-10-03T11:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:07:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:07:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9093 "" "Go-http-client/1.1"
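The two GET lines are the libpod REST API answering over the podman socket; per the podman_exporter config logged just below, the client reaches it via CONTAINER_HOST=unix:///run/podman/podman.sock. A minimal sketch of issuing the same containers/json query from Python (the UnixHTTPConnection helper is mine; the socket path and API version are taken from the log):

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials a unix socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__("localhost")  # host only feeds the Host: header
        self._path = path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self._path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")  # socket path from the log
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")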
Oct  3 11:08:00 compute-0 podman[505229]: 2025-10-03 11:08:00.886336935 +0000 UTC m=+0.137482211 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:08:00 compute-0 podman[505230]: 2025-10-03 11:08:00.892645016 +0000 UTC m=+0.126731467 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, version=9.4, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct  3 11:08:00 compute-0 podman[505231]: 2025-10-03 11:08:00.907647426 +0000 UTC m=+0.140491227 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:08:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:01 compute-0 openstack_network_exporter[367524]: ERROR   11:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:08:01 compute-0 openstack_network_exporter[367524]: ERROR   11:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:08:01 compute-0 openstack_network_exporter[367524]: ERROR   11:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:08:01 compute-0 openstack_network_exporter[367524]: ERROR   11:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:08:01 compute-0 openstack_network_exporter[367524]: ERROR   11:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
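These exporter errors mean it found none of the *.ctl control sockets that appctl-style calls are addressed to; OVS and OVN daemons publish them in their run directories as <daemon>.<pid>.ctl, and ovn-northd normally runs on the control plane rather than on a compute node, so its socket is expected to be absent here. A minimal sketch of the same existence check (the run-directory paths are conventional defaults, not taken from this log):

import glob
import os

RUNDIRS = {  # conventional run directories; deployments may differ
    "ovs-vswitchd": "/var/run/openvswitch",
    "ovsdb-server": "/var/run/openvswitch",
    "ovn-northd": "/var/run/ovn",
}

for daemon, rundir in RUNDIRS.items():
    pattern = os.path.join(rundir, f"{daemon}.*.ctl")
    sockets = glob.glob(pattern)
    print(daemon, "->", sockets or "no control socket files found")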
Oct  3 11:08:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:04 compute-0 nova_compute[351685]: 2025-10-03 11:08:04.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:04 compute-0 nova_compute[351685]: 2025-10-03 11:08:04.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:09 compute-0 nova_compute[351685]: 2025-10-03 11:08:09.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:09 compute-0 nova_compute[351685]: 2025-10-03 11:08:09.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:12 compute-0 nova_compute[351685]: 2025-10-03 11:08:12.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:08:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:14 compute-0 nova_compute[351685]: 2025-10-03 11:08:14.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:14 compute-0 nova_compute[351685]: 2025-10-03 11:08:14.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:08:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:08:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:18 compute-0 podman[505290]: 2025-10-03 11:08:18.866972685 +0000 UTC m=+0.114526637 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:08:18 compute-0 podman[505291]: 2025-10-03 11:08:18.886339014 +0000 UTC m=+0.113276719 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:08:18 compute-0 podman[505302]: 2025-10-03 11:08:18.887176901 +0000 UTC m=+0.115428817 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:08:18 compute-0 podman[505303]: 2025-10-03 11:08:18.896903941 +0000 UTC m=+0.106724059 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct  3 11:08:18 compute-0 podman[505289]: 2025-10-03 11:08:18.902560861 +0000 UTC m=+0.152856461 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:08:18 compute-0 podman[505321]: 2025-10-03 11:08:18.923896382 +0000 UTC m=+0.122263144 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  3 11:08:18 compute-0 podman[505304]: 2025-10-03 11:08:18.942071973 +0000 UTC m=+0.133545715 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  3 11:08:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:19 compute-0 nova_compute[351685]: 2025-10-03 11:08:19.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:19 compute-0 nova_compute[351685]: 2025-10-03 11:08:19.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:08:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:08:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:24 compute-0 nova_compute[351685]: 2025-10-03 11:08:24.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:24 compute-0 nova_compute[351685]: 2025-10-03 11:08:24.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:08:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:08:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:08:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:08:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:08:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6caac027-ca4e-4f4a-bee9-6f682760170f does not exist
Oct  3 11:08:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 64921708-d231-4a0d-b55a-2025da8e52d7 does not exist
Oct  3 11:08:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d5db517b-d7cc-4de8-9355-97c0ae471021 does not exist
Oct  3 11:08:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:08:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:08:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:08:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:08:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:08:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:08:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:08:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:25 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:08:25 compute-0 podman[505814]: 2025-10-03 11:08:25.446519315 +0000 UTC m=+0.089750147 container create 6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:08:25 compute-0 podman[505814]: 2025-10-03 11:08:25.392321564 +0000 UTC m=+0.035552436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:08:25 compute-0 systemd[1]: Started libpod-conmon-6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f.scope.
Oct  3 11:08:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:08:25 compute-0 podman[505814]: 2025-10-03 11:08:25.610383036 +0000 UTC m=+0.253613958 container init 6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 11:08:25 compute-0 podman[505814]: 2025-10-03 11:08:25.62743503 +0000 UTC m=+0.270665862 container start 6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:08:25 compute-0 intelligent_kilby[505828]: 167 167
Oct  3 11:08:25 compute-0 systemd[1]: libpod-6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f.scope: Deactivated successfully.
Oct  3 11:08:25 compute-0 podman[505814]: 2025-10-03 11:08:25.644156304 +0000 UTC m=+0.287387176 container attach 6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  3 11:08:25 compute-0 podman[505814]: 2025-10-03 11:08:25.645068024 +0000 UTC m=+0.288298886 container died 6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-55b3f1532f276799604058e7e932439fc72f9db15379b6f06ec5a58f6d862a48-merged.mount: Deactivated successfully.
Oct  3 11:08:25 compute-0 podman[505814]: 2025-10-03 11:08:25.814370119 +0000 UTC m=+0.457600971 container remove 6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kilby, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:08:25 compute-0 systemd[1]: libpod-conmon-6df2eddbcc38ee1b8700c9ab7a8caccb496e0bee12ec46df2045b7d364ab429f.scope: Deactivated successfully.
Oct  3 11:08:26 compute-0 podman[505851]: 2025-10-03 11:08:26.073879414 +0000 UTC m=+0.061781813 container create f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 11:08:26 compute-0 systemd[1]: Started libpod-conmon-f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8.scope.
Oct  3 11:08:26 compute-0 podman[505851]: 2025-10-03 11:08:26.049567928 +0000 UTC m=+0.037470377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:08:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e6c8f49615fca42365058b5bd846773b9681a00face92761482c4b39c81b76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e6c8f49615fca42365058b5bd846773b9681a00face92761482c4b39c81b76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e6c8f49615fca42365058b5bd846773b9681a00face92761482c4b39c81b76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e6c8f49615fca42365058b5bd846773b9681a00face92761482c4b39c81b76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87e6c8f49615fca42365058b5bd846773b9681a00face92761482c4b39c81b76/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
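The xfs "timestamps until 2038 (0x7fffffff)" warnings flag inodes whose timestamps are 32-bit signed epoch seconds; 0x7fffffff is 2**31 - 1. Converting the limit quoted by the kernel to a date (standard library only):

import datetime

LIMIT = 0x7FFFFFFF  # 2**31 - 1 seconds, as quoted in the kernel message
print(datetime.datetime.fromtimestamp(LIMIT, tz=datetime.timezone.utc))
# 2038-01-19 03:14:07+00:00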
Oct  3 11:08:26 compute-0 podman[505851]: 2025-10-03 11:08:26.233030476 +0000 UTC m=+0.220932965 container init f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 11:08:26 compute-0 podman[505851]: 2025-10-03 11:08:26.250451212 +0000 UTC m=+0.238353641 container start f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 11:08:26 compute-0 podman[505851]: 2025-10-03 11:08:26.256919198 +0000 UTC m=+0.244821657 container attach f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:08:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:27 compute-0 elegant_payne[505866]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:08:27 compute-0 elegant_payne[505866]: --> relative data size: 1.0
Oct  3 11:08:27 compute-0 elegant_payne[505866]: --> All data devices are unavailable
Oct  3 11:08:27 compute-0 systemd[1]: libpod-f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8.scope: Deactivated successfully.
Oct  3 11:08:27 compute-0 systemd[1]: libpod-f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8.scope: Consumed 1.121s CPU time.
Oct  3 11:08:27 compute-0 podman[505851]: 2025-10-03 11:08:27.43711836 +0000 UTC m=+1.425020789 container died f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e6c8f49615fca42365058b5bd846773b9681a00face92761482c4b39c81b76-merged.mount: Deactivated successfully.
Oct  3 11:08:27 compute-0 podman[505851]: 2025-10-03 11:08:27.510873104 +0000 UTC m=+1.498775513 container remove f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:08:27 compute-0 systemd[1]: libpod-conmon-f2b0dc953fd3f110f3c394d11c628892f91193b893ab03e798f1472aecf848e8.scope: Deactivated successfully.
Oct  3 11:08:27 compute-0 nova_compute[351685]: 2025-10-03 11:08:27.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:08:27 compute-0 nova_compute[351685]: 2025-10-03 11:08:27.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:08:27 compute-0 nova_compute[351685]: 2025-10-03 11:08:27.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:08:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:28 compute-0 nova_compute[351685]: 2025-10-03 11:08:28.179 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:08:28 compute-0 nova_compute[351685]: 2025-10-03 11:08:28.180 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:08:28 compute-0 nova_compute[351685]: 2025-10-03 11:08:28.180 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:08:28 compute-0 nova_compute[351685]: 2025-10-03 11:08:28.180 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:08:28 compute-0 podman[506048]: 2025-10-03 11:08:28.572878952 +0000 UTC m=+0.068202559 container create dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 11:08:28 compute-0 podman[506048]: 2025-10-03 11:08:28.541089297 +0000 UTC m=+0.036412944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:08:28 compute-0 systemd[1]: Started libpod-conmon-dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315.scope.
Oct  3 11:08:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:08:28 compute-0 podman[506048]: 2025-10-03 11:08:28.741168125 +0000 UTC m=+0.236491812 container init dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:08:28 compute-0 podman[506048]: 2025-10-03 11:08:28.753084215 +0000 UTC m=+0.248407842 container start dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 11:08:28 compute-0 silly_chebyshev[506063]: 167 167
Oct  3 11:08:28 compute-0 systemd[1]: libpod-dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315.scope: Deactivated successfully.
Oct  3 11:08:28 compute-0 podman[506048]: 2025-10-03 11:08:28.777441293 +0000 UTC m=+0.272764940 container attach dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:08:28 compute-0 podman[506048]: 2025-10-03 11:08:28.778221517 +0000 UTC m=+0.273545154 container died dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 11:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a773a25a4365a2d3140c5f17ca3d766d5a75163f213b3c670850068e49e62aa-merged.mount: Deactivated successfully.
Oct  3 11:08:28 compute-0 podman[506048]: 2025-10-03 11:08:28.869400769 +0000 UTC m=+0.364724376 container remove dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 11:08:28 compute-0 systemd[1]: libpod-conmon-dde96cc7dae9ce890f171df5ffba8b7d422cf70f07c16320a66ed25b377ad315.scope: Deactivated successfully.
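
The thirteen lines above are one complete cephadm probe: podman creates a throwaway, randomly named container (silly_chebyshev), it prints its one-line result ("167 167", consistent with a uid/gid probe of the ceph user), and --rm cleanup removes it within ~300 ms. A minimal sketch of the same pattern, assuming podman and the image digest from the log are available; the stat probe is an illustrative guess, not necessarily cephadm's exact command:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm produces the same create/init/start/attach/died/remove event train
    # that journald records above for silly_chebyshev.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # "167 167" would match the log, the ceph uid/gid
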
Oct  3 11:08:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:29 compute-0 nova_compute[351685]: 2025-10-03 11:08:29.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:08:29 compute-0 podman[506085]: 2025-10-03 11:08:29.12816638 +0000 UTC m=+0.076056448 container create dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:08:29 compute-0 podman[506085]: 2025-10-03 11:08:29.087921746 +0000 UTC m=+0.035811864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:08:29 compute-0 systemd[1]: Started libpod-conmon-dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4.scope.
Oct  3 11:08:29 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29532c67028bf750ee12e9951b94d7a5565a8f99c420ccc7ac312db9d9eae9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29532c67028bf750ee12e9951b94d7a5565a8f99c420ccc7ac312db9d9eae9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29532c67028bf750ee12e9951b94d7a5565a8f99c420ccc7ac312db9d9eae9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29532c67028bf750ee12e9951b94d7a5565a8f99c420ccc7ac312db9d9eae9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:29 compute-0 nova_compute[351685]: 2025-10-03 11:08:29.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:08:29 compute-0 podman[506085]: 2025-10-03 11:08:29.257118487 +0000 UTC m=+0.205008585 container init dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:08:29 compute-0 podman[506085]: 2025-10-03 11:08:29.287603752 +0000 UTC m=+0.235493790 container start dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 11:08:29 compute-0 podman[506085]: 2025-10-03 11:08:29.295535085 +0000 UTC m=+0.243425203 container attach dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:08:29 compute-0 nova_compute[351685]: 2025-10-03 11:08:29.704 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:08:29 compute-0 nova_compute[351685]: 2025-10-03 11:08:29.727 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:08:29 compute-0 nova_compute[351685]: 2025-10-03 11:08:29.727 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
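
The _heal_instance_info_cache entry above embeds the instance's full network_info JSON. A sketch, assuming the structure matches the logged cache entry, of pulling the fixed and floating addresses out of such a list:

    # Trimmed from the info_cache blob above; only the fields used here.
    network_info = [{
        "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.158", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.250",
                                       "type": "floating"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])
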
Oct  3 11:08:29 compute-0 podman[157165]: time="2025-10-03T11:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:08:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47834 "" "Go-http-client/1.1"
Oct  3 11:08:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9514 "" "Go-http-client/1.1"
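
The two GET lines above are the libpod REST API being polled over podman's unix socket (the exporter config later in this log points at unix:///run/podman/podman.sock). A stdlib-only sketch of the same containers/json call, assuming that socket path and sufficient privileges:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
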
Oct  3 11:08:30 compute-0 elegant_moore[506100]: {
Oct  3 11:08:30 compute-0 elegant_moore[506100]:    "0": [
Oct  3 11:08:30 compute-0 elegant_moore[506100]:        {
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "devices": [
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "/dev/loop3"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            ],
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_name": "ceph_lv0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_size": "21470642176",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "name": "ceph_lv0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "tags": {
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cluster_name": "ceph",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.crush_device_class": "",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.encrypted": "0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osd_id": "0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.type": "block",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.vdo": "0"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            },
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "type": "block",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "vg_name": "ceph_vg0"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:        }
Oct  3 11:08:30 compute-0 elegant_moore[506100]:    ],
Oct  3 11:08:30 compute-0 elegant_moore[506100]:    "1": [
Oct  3 11:08:30 compute-0 elegant_moore[506100]:        {
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "devices": [
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "/dev/loop4"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            ],
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_name": "ceph_lv1",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_size": "21470642176",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "name": "ceph_lv1",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "tags": {
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cluster_name": "ceph",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.crush_device_class": "",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.encrypted": "0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osd_id": "1",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.type": "block",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.vdo": "0"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            },
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "type": "block",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "vg_name": "ceph_vg1"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:        }
Oct  3 11:08:30 compute-0 elegant_moore[506100]:    ],
Oct  3 11:08:30 compute-0 elegant_moore[506100]:    "2": [
Oct  3 11:08:30 compute-0 elegant_moore[506100]:        {
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "devices": [
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "/dev/loop5"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            ],
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_name": "ceph_lv2",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_size": "21470642176",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "name": "ceph_lv2",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "tags": {
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.cluster_name": "ceph",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.crush_device_class": "",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.encrypted": "0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osd_id": "2",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.type": "block",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:                "ceph.vdo": "0"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            },
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "type": "block",
Oct  3 11:08:30 compute-0 elegant_moore[506100]:            "vg_name": "ceph_vg2"
Oct  3 11:08:30 compute-0 elegant_moore[506100]:        }
Oct  3 11:08:30 compute-0 elegant_moore[506100]:    ]
Oct  3 11:08:30 compute-0 elegant_moore[506100]: }
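
The JSON that elegant_moore just printed is a ceph-volume lvm list-style inventory keyed by OSD id. A sketch that indexes it into osd -> LV path and fsid; the literal below is a trimmed copy of the payload above (one OSD, two fields):

    import json

    raw_json = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                          "tags": {"ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
                                   "ceph.osd_id": "0"}}]}'''

    for osd_id, lvs in sorted(json.loads(raw_json).items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")
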
Oct  3 11:08:30 compute-0 systemd[1]: libpod-dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4.scope: Deactivated successfully.
Oct  3 11:08:30 compute-0 podman[506085]: 2025-10-03 11:08:30.152939669 +0000 UTC m=+1.100829767 container died dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct  3 11:08:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c29532c67028bf750ee12e9951b94d7a5565a8f99c420ccc7ac312db9d9eae9e-merged.mount: Deactivated successfully.
Oct  3 11:08:30 compute-0 podman[506085]: 2025-10-03 11:08:30.242087876 +0000 UTC m=+1.189977914 container remove dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:08:30 compute-0 systemd[1]: libpod-conmon-dca131b933eb03cc8ac7bd86c29ccaf84f649e1f5e028934c3bd91b743e1a8c4.scope: Deactivated successfully.
Oct  3 11:08:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:31 compute-0 openstack_network_exporter[367524]: ERROR   11:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:08:31 compute-0 openstack_network_exporter[367524]: ERROR   11:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:08:31 compute-0 openstack_network_exporter[367524]: ERROR   11:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:08:31 compute-0 openstack_network_exporter[367524]: ERROR   11:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:08:31 compute-0 openstack_network_exporter[367524]: ERROR   11:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:08:31 compute-0 podman[506257]: 2025-10-03 11:08:31.421861993 +0000 UTC m=+0.079413926 container create f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_diffie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  3 11:08:31 compute-0 systemd[1]: Started libpod-conmon-f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8.scope.
Oct  3 11:08:31 compute-0 podman[506257]: 2025-10-03 11:08:31.38949371 +0000 UTC m=+0.047045673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:08:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:08:31 compute-0 podman[506257]: 2025-10-03 11:08:31.527216577 +0000 UTC m=+0.184768540 container init f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_diffie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 11:08:31 compute-0 podman[506257]: 2025-10-03 11:08:31.538707823 +0000 UTC m=+0.196259746 container start f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:08:31 compute-0 tender_diffie[506274]: 167 167
Oct  3 11:08:31 compute-0 podman[506257]: 2025-10-03 11:08:31.544408685 +0000 UTC m=+0.201960658 container attach f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_diffie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:08:31 compute-0 systemd[1]: libpod-f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8.scope: Deactivated successfully.
Oct  3 11:08:31 compute-0 conmon[506274]: conmon f7912d00aa8c8a9b049e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8.scope/container/memory.events
Oct  3 11:08:31 compute-0 podman[506257]: 2025-10-03 11:08:31.548141445 +0000 UTC m=+0.205693378 container died f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_diffie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:08:31 compute-0 podman[506273]: 2025-10-03 11:08:31.564684703 +0000 UTC m=+0.089490958 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dc92790e64c0ff4a272c96acc1a27f5abed33ef7a818ab8c3a171b1fe6f068a-merged.mount: Deactivated successfully.
Oct  3 11:08:31 compute-0 podman[506257]: 2025-10-03 11:08:31.602349506 +0000 UTC m=+0.259901429 container remove f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_diffie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:08:31 compute-0 podman[506272]: 2025-10-03 11:08:31.607096957 +0000 UTC m=+0.131934853 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:08:31 compute-0 podman[506269]: 2025-10-03 11:08:31.607344775 +0000 UTC m=+0.132046017 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
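
The three health_status=healthy events above come from podman running each container's declared healthcheck (the 'healthcheck' stanza inside config_data). A hypothetical equivalent at podman run time, using the podman_exporter entry as the template; the 30s interval is an assumption, not taken from the log, and the real container mounts more volumes than shown:

    import subprocess

    # Flags mirror the config_data stanza: 'test' -> --health-cmd,
    # the healthcheck mount -> -v. Interval is assumed.
    subprocess.run([
        "podman", "run", "-d", "--net", "host",
        "--health-cmd", "/openstack/healthcheck podman_exporter",
        "--health-interval", "30s",
        "-v", "/var/lib/openstack/config/telemetry/podman_exporter.yaml"
             ":/etc/podman_exporter/podman_exporter.yaml:z",
        "-v", "/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z",
        "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
        "--web.config.file=/etc/podman_exporter/podman_exporter.yaml",
    ], check=True)
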
Oct  3 11:08:31 compute-0 systemd[1]: libpod-conmon-f7912d00aa8c8a9b049e501a9dc69a3c582e6badb1b76a3059674c4b467f26f8.scope: Deactivated successfully.
Oct  3 11:08:31 compute-0 podman[506356]: 2025-10-03 11:08:31.841221402 +0000 UTC m=+0.086722280 container create 7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 11:08:31 compute-0 systemd[1]: Started libpod-conmon-7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04.scope.
Oct  3 11:08:31 compute-0 podman[506356]: 2025-10-03 11:08:31.815077568 +0000 UTC m=+0.060578536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:08:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f600584fbd23d117c81dbbf1e29d4a462f333649f8dd6b982868f054b10c5a9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f600584fbd23d117c81dbbf1e29d4a462f333649f8dd6b982868f054b10c5a9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f600584fbd23d117c81dbbf1e29d4a462f333649f8dd6b982868f054b10c5a9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f600584fbd23d117c81dbbf1e29d4a462f333649f8dd6b982868f054b10c5a9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:08:31 compute-0 podman[506356]: 2025-10-03 11:08:31.972970208 +0000 UTC m=+0.218471146 container init 7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:08:31 compute-0 podman[506356]: 2025-10-03 11:08:31.997225173 +0000 UTC m=+0.242726091 container start 7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 11:08:32 compute-0 podman[506356]: 2025-10-03 11:08:32.004314719 +0000 UTC m=+0.249815637 container attach 7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:08:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:33 compute-0 condescending_haslett[506372]: {
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "osd_id": 1,
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "type": "bluestore"
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:    },
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "osd_id": 2,
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "type": "bluestore"
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:    },
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "osd_id": 0,
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:        "type": "bluestore"
Oct  3 11:08:33 compute-0 condescending_haslett[506372]:    }
Oct  3 11:08:33 compute-0 condescending_haslett[506372]: }
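
condescending_haslett's output is the complementary view: bluestore OSDs keyed by osd_uuid rather than by id. A sketch that inverts it to osd_id -> device (literal trimmed to one entry from the payload above):

    import json

    raw_json = '''{"25b10821-47d4-4e0b-9b6d-d16a0463c4d0":
                   {"device": "/dev/mapper/ceph_vg0-ceph_lv0",
                    "osd_id": 0, "type": "bluestore"}}'''

    by_osd_id = {v["osd_id"]: v["device"] for v in json.loads(raw_json).values()}
    print(by_osd_id)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
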
Oct  3 11:08:33 compute-0 systemd[1]: libpod-7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04.scope: Deactivated successfully.
Oct  3 11:08:33 compute-0 systemd[1]: libpod-7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04.scope: Consumed 1.253s CPU time.
Oct  3 11:08:33 compute-0 podman[506356]: 2025-10-03 11:08:33.254514896 +0000 UTC m=+1.500015844 container died 7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:08:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f600584fbd23d117c81dbbf1e29d4a462f333649f8dd6b982868f054b10c5a9f-merged.mount: Deactivated successfully.
Oct  3 11:08:33 compute-0 podman[506356]: 2025-10-03 11:08:33.353553197 +0000 UTC m=+1.599054085 container remove 7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_haslett, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 11:08:33 compute-0 systemd[1]: libpod-conmon-7007a9a2698a19ac044b591fd61c9845d543a5f1182f90fbd85aee4a18af7c04.scope: Deactivated successfully.
Oct  3 11:08:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:08:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:08:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2c0fde6f-5eba-48bb-b0aa-181aae2706e5 does not exist
Oct  3 11:08:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2da3d688-a3a3-4bc9-af68-db923102fc07 does not exist
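
The two config-key set commands above are cephadm caching the device inventory it just collected under mgr/cephadm/host.compute-0*. A sketch reading one key back, assuming admin credentials on this host; the ".0" suffix suggests cephadm's chunking of large values, so the chunk may be only part of the JSON document:

    import subprocess

    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    )
    # Large values are split across ...devices.0, ...devices.1, etc.
    print(out.stdout[:200])
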
Oct  3 11:08:34 compute-0 nova_compute[351685]: 2025-10-03 11:08:34.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:08:34 compute-0 nova_compute[351685]: 2025-10-03 11:08:34.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:08:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:08:34 compute-0 nova_compute[351685]: 2025-10-03 11:08:34.723 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:08:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:37 compute-0 nova_compute[351685]: 2025-10-03 11:08:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:08:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:39 compute-0 nova_compute[351685]: 2025-10-03 11:08:39.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:08:39 compute-0 nova_compute[351685]: 2025-10-03 11:08:39.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:08:40 compute-0 nova_compute[351685]: 2025-10-03 11:08:40.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:08:40 compute-0 nova_compute[351685]: 2025-10-03 11:08:40.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:08:40 compute-0 nova_compute[351685]: 2025-10-03 11:08:40.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:08:40 compute-0 nova_compute[351685]: 2025-10-03 11:08:40.787 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:08:40 compute-0 nova_compute[351685]: 2025-10-03 11:08:40.788 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:08:40 compute-0 nova_compute[351685]: 2025-10-03 11:08:40.788 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
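
The Acquiring/acquired/released triplet above, complete with waited/held times, is the trace emitted by oslo.concurrency's synchronized wrapper (the "inner ... lockutils.py" suffix on each line). A minimal sketch of the pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section; acquire/wait/held times are logged around it

    clean_compute_node_cache()
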
Oct  3 11:08:40 compute-0 nova_compute[351685]: 2025-10-03 11:08:40.789 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:08:40 compute-0 nova_compute[351685]: 2025-10-03 11:08:40.789 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:08:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:08:41 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384777780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.268 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
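
For the disk audit, nova shells out to the exact command logged above. A sketch that runs it and pulls the cluster totals the resource tracker needs; the key names follow ceph df --format=json output:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(out.stdout)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
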
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.350 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.351 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.351 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:08:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:08:41.678 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:08:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:08:41.678 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:08:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:08:41.679 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.760 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.761 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3789MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.846 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.846 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.846 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.865 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.878 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.879 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
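Placement turns each inventory record above into schedulable capacity, in effect (total - reserved) * allocation_ratio. A quick check of that usual placement behaviour against the logged figures, restated here purely for illustration:

    # Inventory exactly as logged for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # effective capacity placement will schedule against
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
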
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.894 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.912 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 11:08:41 compute-0 nova_compute[351685]: 2025-10-03 11:08:41.945 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:08:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:08:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558462958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:08:42 compute-0 nova_compute[351685]: 2025-10-03 11:08:42.437 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
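The resource tracker's Ceph-backed disk stats come from exactly the command logged above. A standalone sketch of the same call, parsing the JSON summary it returns (key names as in recent Ceph releases; the client.openstack keyring must be readable):

    import json
    import subprocess

    # Same invocation as the processutils lines above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
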
Oct  3 11:08:42 compute-0 nova_compute[351685]: 2025-10-03 11:08:42.446 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:08:42 compute-0 nova_compute[351685]: 2025-10-03 11:08:42.469 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:08:42 compute-0 nova_compute[351685]: 2025-10-03 11:08:42.470 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:08:42 compute-0 nova_compute[351685]: 2025-10-03 11:08:42.470 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:08:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:43 compute-0 nova_compute[351685]: 2025-10-03 11:08:43.470 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:08:43 compute-0 nova_compute[351685]: 2025-10-03 11:08:43.471 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:08:44 compute-0 nova_compute[351685]: 2025-10-03 11:08:44.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:44 compute-0 nova_compute[351685]: 2025-10-03 11:08:44.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:45 compute-0 nova_compute[351685]: 2025-10-03 11:08:45.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:08:45 compute-0 nova_compute[351685]: 2025-10-03 11:08:45.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:08:46
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['images', 'vms', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.log', '.mgr', 'default.rgw.meta', '.rgw.root']
Oct  3 11:08:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
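The rbd_support handlers above are reloading per-pool trash-purge and mirror-snapshot schedules, all empty on this cluster. For reference, the same schedule state can be listed from the rbd CLI; a sketch with the pool name taken from the log:

    import subprocess

    # Both lists correspond to the load_schedules lines above and come back
    # empty here.
    for args in (["rbd", "trash", "purge", "schedule", "ls", "--pool", "vms"],
                 ["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", "vms"]):
        proc = subprocess.run(args, capture_output=True, text=True)
        print(" ".join(args), "->", proc.stdout.strip() or "(none)")
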
Oct  3 11:08:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:49 compute-0 nova_compute[351685]: 2025-10-03 11:08:49.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:49 compute-0 nova_compute[351685]: 2025-10-03 11:08:49.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:49 compute-0 podman[506510]: 2025-10-03 11:08:49.897336447 +0000 UTC m=+0.133618537 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:08:49 compute-0 podman[506513]: 2025-10-03 11:08:49.897951027 +0000 UTC m=+0.118092552 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  3 11:08:49 compute-0 podman[506517]: 2025-10-03 11:08:49.908924947 +0000 UTC m=+0.103740443 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:08:49 compute-0 podman[506512]: 2025-10-03 11:08:49.909550427 +0000 UTC m=+0.134923419 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 11:08:49 compute-0 podman[506525]: 2025-10-03 11:08:49.920421854 +0000 UTC m=+0.133550075 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:08:49 compute-0 podman[506511]: 2025-10-03 11:08:49.920676212 +0000 UTC m=+0.159793492 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Oct  3 11:08:49 compute-0 podman[506530]: 2025-10-03 11:08:49.931154387 +0000 UTC m=+0.132046777 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
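Each podman health_status=healthy event above is podman's healthcheck timer executing the container's configured 'test' command. The same check can be triggered by hand; exit status 0 corresponds to healthy (container name taken from the log, the rest a minimal sketch):

    import subprocess

    # Runs the configured healthcheck once, like the periodic events above.
    result = subprocess.run(["podman", "healthcheck", "run", "node_exporter"])
    print("healthy" if result.returncode == 0 else "unhealthy")
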
Oct  3 11:08:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:08:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231541597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:08:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:08:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231541597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
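The same remote client (client.openstack at 192.168.122.10) follows its df call with a per-pool quota query. The CLI equivalent of the mon command dispatched above, as a sketch:

    import json
    import subprocess

    # Mirrors {"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}.
    out = subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    # Returns quota_max_bytes / quota_max_objects for the pool.
    print(json.loads(out))
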
Oct  3 11:08:54 compute-0 nova_compute[351685]: 2025-10-03 11:08:54.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:54 compute-0 nova_compute[351685]: 2025-10-03 11:08:54.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3047: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:08:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
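The pg_autoscaler targets above are reproducible from the logged inputs: raw usage ratio times pool bias times the cluster PG budget, then quantized to a power of two. The budget of 300 is an assumption (mon_target_pg_per_osd=100 times 3 OSDs) that matches every line:

    # usage_ratio and bias copied from the pg_autoscaler lines above.
    PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.000551649390343166, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * PG_BUDGET)
    # -> 0.0021557..., 0.16549..., 0.00061047... as logged before quantization
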
Oct  3 11:08:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:08:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3049: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:08:59 compute-0 nova_compute[351685]: 2025-10-03 11:08:59.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:59 compute-0 nova_compute[351685]: 2025-10-03 11:08:59.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:08:59 compute-0 podman[157165]: time="2025-10-03T11:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:08:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:08:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9099 "" "Go-http-client/1.1"
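These GET lines are podman's REST service answering a local client over its Unix socket (the podman_exporter container configured further down points at unix:///run/podman/podman.sock). A stdlib-only sketch of the same containers/json query:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over the libpod Unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")
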
Oct  3 11:09:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3050: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:01 compute-0 openstack_network_exporter[367524]: ERROR   11:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:09:01 compute-0 openstack_network_exporter[367524]: ERROR   11:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:09:01 compute-0 openstack_network_exporter[367524]: ERROR   11:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:09:01 compute-0 openstack_network_exporter[367524]: ERROR   11:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:09:01 compute-0 openstack_network_exporter[367524]: ERROR   11:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
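These exporter errors are consistent with a compute node: ovn-northd normally runs on the control plane, so its control socket is absent here, and the dpif-netdev/* appctl commands only apply to the userspace (DPDK) datapath, which this host does not use. The failing call can be reproduced directly; a sketch assuming ovs-vswitchd is local:

    import subprocess

    # Same appctl target the exporter polls; with no netdev datapath,
    # ovs-vswitchd answers with the "please specify an existing datapath"
    # error seen above.
    proc = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                          capture_output=True, text=True)
    print(proc.returncode, (proc.stdout or proc.stderr).strip())
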
Oct  3 11:09:01 compute-0 podman[506649]: 2025-10-03 11:09:01.85722095 +0000 UTC m=+0.114789266 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, version=9.4, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:09:01 compute-0 podman[506650]: 2025-10-03 11:09:01.861805806 +0000 UTC m=+0.105710286 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:09:01 compute-0 podman[506648]: 2025-10-03 11:09:01.866227017 +0000 UTC m=+0.115602282 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:09:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:02.956372) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489742956411, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1729, "num_deletes": 255, "total_data_size": 2835502, "memory_usage": 2881320, "flush_reason": "Manual Compaction"}
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489742979658, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2764459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60141, "largest_seqno": 61869, "table_properties": {"data_size": 2756500, "index_size": 4837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15950, "raw_average_key_size": 19, "raw_value_size": 2740622, "raw_average_value_size": 3379, "num_data_blocks": 216, "num_entries": 811, "num_filter_entries": 811, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759489559, "oldest_key_time": 1759489559, "file_creation_time": 1759489742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 23419 microseconds, and 9602 cpu microseconds.
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:02.979789) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2764459 bytes OK
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:02.979817) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:02.982616) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:02.982639) EVENT_LOG_v1 {"time_micros": 1759489742982632, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:02.982666) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2828088, prev total WAL file size 2828088, number of live WAL files 2.
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:02.984452) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2699KB)], [146(7567KB)]
Oct  3 11:09:02 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489742984556, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 10513350, "oldest_snapshot_seqno": -1}
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 7336 keys, 10414761 bytes, temperature: kUnknown
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489743061844, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 10414761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10368260, "index_size": 27060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 192228, "raw_average_key_size": 26, "raw_value_size": 10237696, "raw_average_value_size": 1395, "num_data_blocks": 1078, "num_entries": 7336, "num_filter_entries": 7336, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759489742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:03.062475) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 10414761 bytes
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:03.064114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.9 rd, 134.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(7.6) write-amplify(3.8) OK, records in: 7858, records dropped: 522 output_compression: NoCompression
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:03.064133) EVENT_LOG_v1 {"time_micros": 1759489743064124, "job": 90, "event": "compaction_finished", "compaction_time_micros": 77354, "compaction_time_cpu_micros": 49007, "output_level": 6, "num_output_files": 1, "total_output_size": 10414761, "num_input_records": 7858, "num_output_records": 7336, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
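The amplification figures in the compaction summary can be re-derived from the event data above: write-amplify is output bytes over the freshly flushed L0 input, and read-write-amplify counts everything read plus everything written over the same base:

    # Byte counts from the JOB 90 events above.
    l0_input = 2764459        # table #148, the L0 flush being compacted
    total_input = 10513350    # input_data_size (L0 #148 + L6 #146)
    output = 10414761         # total_output_size (new table #149)
    print(round(output / l0_input, 1))                  # 3.8 -> write-amplify(3.8)
    print(round((total_input + output) / l0_input, 1))  # 7.6 -> read-write-amplify(7.6)
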
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489743064948, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489743066510, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:02.984034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:03.066766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:03.066772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:03.066775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:03.066778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:03.066781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:04 compute-0 nova_compute[351685]: 2025-10-03 11:09:04.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:04 compute-0 nova_compute[351685]: 2025-10-03 11:09:04.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3054: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:09 compute-0 nova_compute[351685]: 2025-10-03 11:09:09.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:09 compute-0 nova_compute[351685]: 2025-10-03 11:09:09.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3056: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:14 compute-0 nova_compute[351685]: 2025-10-03 11:09:14.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:14 compute-0 nova_compute[351685]: 2025-10-03 11:09:14.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3057: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:09:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:09:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:19 compute-0 nova_compute[351685]: 2025-10-03 11:09:19.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:19 compute-0 nova_compute[351685]: 2025-10-03 11:09:19.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:20 compute-0 podman[506709]: 2025-10-03 11:09:20.883862357 +0000 UTC m=+0.119020661 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Oct  3 11:09:20 compute-0 podman[506711]: 2025-10-03 11:09:20.885634424 +0000 UTC m=+0.111501691 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:09:20 compute-0 podman[506708]: 2025-10-03 11:09:20.889680273 +0000 UTC m=+0.132205631 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:09:20 compute-0 podman[506710]: 2025-10-03 11:09:20.900989284 +0000 UTC m=+0.125027493 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:09:20 compute-0 podman[506724]: 2025-10-03 11:09:20.902546794 +0000 UTC m=+0.115655804 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 11:09:20 compute-0 podman[506719]: 2025-10-03 11:09:20.917048717 +0000 UTC m=+0.133956338 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:09:20 compute-0 podman[506717]: 2025-10-03 11:09:20.924705092 +0000 UTC m=+0.131966935 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 11:09:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:24 compute-0 nova_compute[351685]: 2025-10-03 11:09:24.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:24 compute-0 nova_compute[351685]: 2025-10-03 11:09:24.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3063: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:28 compute-0 nova_compute[351685]: 2025-10-03 11:09:28.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:28 compute-0 nova_compute[351685]: 2025-10-03 11:09:28.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:09:28 compute-0 nova_compute[351685]: 2025-10-03 11:09:28.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:09:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:29 compute-0 nova_compute[351685]: 2025-10-03 11:09:29.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:29 compute-0 nova_compute[351685]: 2025-10-03 11:09:29.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:29 compute-0 nova_compute[351685]: 2025-10-03 11:09:29.476 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:09:29 compute-0 nova_compute[351685]: 2025-10-03 11:09:29.476 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:09:29 compute-0 nova_compute[351685]: 2025-10-03 11:09:29.476 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:09:29 compute-0 nova_compute[351685]: 2025-10-03 11:09:29.476 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:09:29 compute-0 podman[157165]: time="2025-10-03T11:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:09:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:09:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9100 "" "Go-http-client/1.1"
Oct  3 11:09:30 compute-0 nova_compute[351685]: 2025-10-03 11:09:30.620 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:09:30 compute-0 nova_compute[351685]: 2025-10-03 11:09:30.634 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:09:30 compute-0 nova_compute[351685]: 2025-10-03 11:09:30.635 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:09:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:31 compute-0 openstack_network_exporter[367524]: ERROR   11:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:09:31 compute-0 openstack_network_exporter[367524]: ERROR   11:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:09:31 compute-0 openstack_network_exporter[367524]: ERROR   11:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:09:31 compute-0 openstack_network_exporter[367524]: ERROR   11:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:09:31 compute-0 openstack_network_exporter[367524]: ERROR   11:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:09:32 compute-0 podman[506844]: 2025-10-03 11:09:32.865452759 +0000 UTC m=+0.098375182 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:09:32 compute-0 podman[506846]: 2025-10-03 11:09:32.896667056 +0000 UTC m=+0.114944341 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Oct  3 11:09:32 compute-0 podman[506845]: 2025-10-03 11:09:32.899150965 +0000 UTC m=+0.124554908 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, config_id=edpm, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vcs-type=git, distribution-scope=public, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 11:09:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:34 compute-0 nova_compute[351685]: 2025-10-03 11:09:34.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:34 compute-0 nova_compute[351685]: 2025-10-03 11:09:34.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:34 compute-0 nova_compute[351685]: 2025-10-03 11:09:34.629 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 11:09:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 11:09:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:09:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:09:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:09:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:09:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:09:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:09:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 18d10949-bd42-4c73-bfc7-8fdc20fe09b8 does not exist
Oct  3 11:09:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3db5989d-afe0-4a8d-8a99-338b330a7a8f does not exist
Oct  3 11:09:34 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3f685704-99b4-4b9b-a317-6d4cf58b8a79 does not exist
Oct  3 11:09:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:09:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:09:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:09:34 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:09:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:09:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:09:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 11:09:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:09:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:09:35 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:09:36 compute-0 podman[507174]: 2025-10-03 11:09:36.083335858 +0000 UTC m=+0.095697227 container create 860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcclintock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:09:36 compute-0 podman[507174]: 2025-10-03 11:09:36.047175363 +0000 UTC m=+0.059536782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:09:36 compute-0 systemd[1]: Started libpod-conmon-860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa.scope.
Oct  3 11:09:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:09:36 compute-0 podman[507174]: 2025-10-03 11:09:36.21367214 +0000 UTC m=+0.226033559 container init 860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 11:09:36 compute-0 podman[507174]: 2025-10-03 11:09:36.225869068 +0000 UTC m=+0.238230437 container start 860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 11:09:36 compute-0 podman[507174]: 2025-10-03 11:09:36.232290704 +0000 UTC m=+0.244652073 container attach 860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 11:09:36 compute-0 cranky_mcclintock[507191]: 167 167
Oct  3 11:09:36 compute-0 systemd[1]: libpod-860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa.scope: Deactivated successfully.
Oct  3 11:09:36 compute-0 podman[507174]: 2025-10-03 11:09:36.240701122 +0000 UTC m=+0.253062491 container died 860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:09:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-26032f10a40ee6ee6915cb4feceb39f4d343eac8f7ebaf5045e5472155ff3fd3-merged.mount: Deactivated successfully.
Oct  3 11:09:36 compute-0 podman[507174]: 2025-10-03 11:09:36.307224336 +0000 UTC m=+0.319585675 container remove 860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:09:36 compute-0 systemd[1]: libpod-conmon-860458f5b39d009507a81d8e74a59a2a9d1d02ac22ffcb5decef0599efe6dfaa.scope: Deactivated successfully.
Oct  3 11:09:36 compute-0 podman[507213]: 2025-10-03 11:09:36.555432981 +0000 UTC m=+0.082342210 container create 68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:09:36 compute-0 podman[507213]: 2025-10-03 11:09:36.519540124 +0000 UTC m=+0.046449403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:09:36 compute-0 systemd[1]: Started libpod-conmon-68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002.scope.
Oct  3 11:09:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cdd36fe15411961e5be3999aede025602feda0a10a7266fdfd335db23f45b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cdd36fe15411961e5be3999aede025602feda0a10a7266fdfd335db23f45b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cdd36fe15411961e5be3999aede025602feda0a10a7266fdfd335db23f45b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cdd36fe15411961e5be3999aede025602feda0a10a7266fdfd335db23f45b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cdd36fe15411961e5be3999aede025602feda0a10a7266fdfd335db23f45b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:36 compute-0 podman[507213]: 2025-10-03 11:09:36.727924728 +0000 UTC m=+0.254833947 container init 68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:09:36 compute-0 podman[507213]: 2025-10-03 11:09:36.745418037 +0000 UTC m=+0.272327236 container start 68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:09:36 compute-0 podman[507213]: 2025-10-03 11:09:36.751727249 +0000 UTC m=+0.278636538 container attach 68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:09:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:37 compute-0 cool_villani[507229]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:09:37 compute-0 cool_villani[507229]: --> relative data size: 1.0
Oct  3 11:09:37 compute-0 cool_villani[507229]: --> All data devices are unavailable
Oct  3 11:09:37 compute-0 systemd[1]: libpod-68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002.scope: Deactivated successfully.
Oct  3 11:09:37 compute-0 systemd[1]: libpod-68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002.scope: Consumed 1.088s CPU time.
Oct  3 11:09:37 compute-0 podman[507213]: 2025-10-03 11:09:37.886990904 +0000 UTC m=+1.413900103 container died 68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-02cdd36fe15411961e5be3999aede025602feda0a10a7266fdfd335db23f45b2-merged.mount: Deactivated successfully.
Oct  3 11:09:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:37 compute-0 podman[507213]: 2025-10-03 11:09:37.971054228 +0000 UTC m=+1.497963427 container remove 68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 11:09:37 compute-0 systemd[1]: libpod-conmon-68bea3a15abc92a81a8e6d57e6f2ab65df662dcba996796669114a2609112002.scope: Deactivated successfully.
Oct  3 11:09:39 compute-0 podman[507407]: 2025-10-03 11:09:39.012488359 +0000 UTC m=+0.049218612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:09:39 compute-0 podman[507407]: 2025-10-03 11:09:39.118207824 +0000 UTC m=+0.154938037 container create a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_buck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:09:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:39 compute-0 nova_compute[351685]: 2025-10-03 11:09:39.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:39 compute-0 systemd[1]: Started libpod-conmon-a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88.scope.
Oct  3 11:09:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:09:39 compute-0 nova_compute[351685]: 2025-10-03 11:09:39.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:39 compute-0 podman[507407]: 2025-10-03 11:09:39.307664664 +0000 UTC m=+0.344394927 container init a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_buck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:09:39 compute-0 podman[507407]: 2025-10-03 11:09:39.325001467 +0000 UTC m=+0.361731640 container start a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:09:39 compute-0 nice_buck[507421]: 167 167
Oct  3 11:09:39 compute-0 systemd[1]: libpod-a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88.scope: Deactivated successfully.
Oct  3 11:09:39 compute-0 podman[507407]: 2025-10-03 11:09:39.345360947 +0000 UTC m=+0.382091120 container attach a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:09:39 compute-0 podman[507407]: 2025-10-03 11:09:39.350512931 +0000 UTC m=+0.387243134 container died a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_buck, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 11:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af5a4e7c85133614e0f74ea04d71756c6efc44649015163ecb010a413121349-merged.mount: Deactivated successfully.
Oct  3 11:09:39 compute-0 podman[507407]: 2025-10-03 11:09:39.486404141 +0000 UTC m=+0.523134294 container remove a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:09:39 compute-0 systemd[1]: libpod-conmon-a460c6ef77a2cdfbbab2e3adcdf79cc9f30b766709b3e6b8f2550cbbb174ec88.scope: Deactivated successfully.
Oct  3 11:09:39 compute-0 nova_compute[351685]: 2025-10-03 11:09:39.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:39 compute-0 podman[507444]: 2025-10-03 11:09:39.733907402 +0000 UTC m=+0.078458766 container create 6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:09:39 compute-0 podman[507444]: 2025-10-03 11:09:39.696169018 +0000 UTC m=+0.040720422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:09:39 compute-0 systemd[1]: Started libpod-conmon-6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc.scope.
Oct  3 11:09:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b6272937474331bcf84e55e4c21ec3985cf261c67884d2ee6f14ad5b71fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b6272937474331bcf84e55e4c21ec3985cf261c67884d2ee6f14ad5b71fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b6272937474331bcf84e55e4c21ec3985cf261c67884d2ee6f14ad5b71fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe63b6272937474331bcf84e55e4c21ec3985cf261c67884d2ee6f14ad5b71fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:39 compute-0 podman[507444]: 2025-10-03 11:09:39.915928614 +0000 UTC m=+0.260479978 container init 6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:09:39 compute-0 podman[507444]: 2025-10-03 11:09:39.937166142 +0000 UTC m=+0.281717506 container start 6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:09:39 compute-0 podman[507444]: 2025-10-03 11:09:39.949823056 +0000 UTC m=+0.294374400 container attach 6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:09:40 compute-0 unruffled_galois[507461]: {
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:    "0": [
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:        {
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "devices": [
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "/dev/loop3"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            ],
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_name": "ceph_lv0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_size": "21470642176",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "name": "ceph_lv0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "tags": {
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cluster_name": "ceph",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.crush_device_class": "",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.encrypted": "0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osd_id": "0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.type": "block",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.vdo": "0"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            },
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "type": "block",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "vg_name": "ceph_vg0"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:        }
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:    ],
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:    "1": [
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:        {
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "devices": [
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "/dev/loop4"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            ],
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_name": "ceph_lv1",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_size": "21470642176",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "name": "ceph_lv1",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "tags": {
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cluster_name": "ceph",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.crush_device_class": "",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.encrypted": "0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osd_id": "1",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.type": "block",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.vdo": "0"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            },
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "type": "block",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "vg_name": "ceph_vg1"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:        }
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:    ],
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:    "2": [
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:        {
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "devices": [
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "/dev/loop5"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            ],
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_name": "ceph_lv2",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_size": "21470642176",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "name": "ceph_lv2",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "tags": {
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.cluster_name": "ceph",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.crush_device_class": "",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.encrypted": "0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osd_id": "2",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.type": "block",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:                "ceph.vdo": "0"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            },
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "type": "block",
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:            "vg_name": "ceph_vg2"
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:        }
Oct  3 11:09:40 compute-0 unruffled_galois[507461]:    ]
Oct  3 11:09:40 compute-0 unruffled_galois[507461]: }
Oct  3 11:09:40 compute-0 nova_compute[351685]: 2025-10-03 11:09:40.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:40 compute-0 nova_compute[351685]: 2025-10-03 11:09:40.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:09:40 compute-0 nova_compute[351685]: 2025-10-03 11:09:40.769 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:09:40 compute-0 nova_compute[351685]: 2025-10-03 11:09:40.769 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:09:40 compute-0 systemd[1]: libpod-6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc.scope: Deactivated successfully.
Oct  3 11:09:40 compute-0 podman[507444]: 2025-10-03 11:09:40.771511831 +0000 UTC m=+1.116063225 container died 6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:09:40 compute-0 nova_compute[351685]: 2025-10-03 11:09:40.770 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:09:40 compute-0 nova_compute[351685]: 2025-10-03 11:09:40.773 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.901 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.902 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe63b6272937474331bcf84e55e4c21ec3985cf261c67884d2ee6f14ad5b71fc-merged.mount: Deactivated successfully.
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.915 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.920 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.920 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.920 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:09:40.920027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.926 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.927 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.927 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.928 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.928 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:09:40.928152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:09:40.930357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.949 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.950 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.950 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.951 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.951 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.951 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:09:40.951474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.997 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.998 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:09:40.997208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.999 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.000 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 podman[507444]: 2025-10-03 11:09:41.000914215 +0000 UTC m=+1.345465579 container remove 6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_galois, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.002 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.002 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.002 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.003 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.003 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:09:40.999296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.004 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.004 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.005 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:09:41.002699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.005 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:09:41.005170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:09:41.007188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:09:41.009335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 systemd[1]: libpod-conmon-6cf02adc043504525dc3519929241544995a8998441f1a730715ca69699e96cc.scope: Deactivated successfully.
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
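The power.state sample above carries volume 1. Ceilometer reports the instance power state using Nova's numeric enumeration, where 1 means RUNNING; a small lookup (values as defined in nova.compute.power_state, restated here for illustration rather than imported) decodes it:

    # Decode a ceilometer power.state volume; constants mirror
    # nova.compute.power_state.
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    print(POWER_STATES.get(1, "UNKNOWN"))  # volume 1 from the log -> RUNNING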
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.049 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:09:41.050087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.052 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:09:41.054105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:09:41.057554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:09:41.060585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:09:41.062992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:09:41.065544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:09:41.067725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.069 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.070 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 91170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:09:41.070002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
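The cpu meter is cumulative CPU time in nanoseconds, so the volume above corresponds to roughly 91.17 s consumed by the guest so far. A sketch of the conversion, plus the utilisation rate a downstream consumer might derive between two polls (the earlier sample and the 30 s interval below are made-up):

    NS_PER_S = 1_000_000_000

    cpu_ns = 91_170_000_000            # volume from the log line above
    print(cpu_ns / NS_PER_S)           # -> 91.17 seconds of CPU time

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        # Fraction of available CPU time consumed between two polls.
        return 100.0 * (curr_ns - prev_ns) / (interval_s * vcpus * NS_PER_S)

    # Hypothetical previous sample taken 30 s earlier:
    print(cpu_util_percent(90_000_000_000, cpu_ns, interval_s=30))  # -> 3.9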
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.072 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.073 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:09:41.072158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.074 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:09:41.074365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:09:41.076682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.078 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:09:41.078966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.079 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
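memory.usage is reported in MB, so the instance is using about 48.8 MB of the 512 MB its flavor allocates (MEMORY_MB: 512 in the placement allocation further down), roughly 9.5%:

    used_mb = 48.81640625     # memory.usage volume from the log
    flavor_mb = 512           # MEMORY_MB from the placement allocation below
    print(f"{100 * used_mb / flavor_mb:.1f}% of allocated RAM")  # -> 9.5%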
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.081 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.082 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:09:41.082385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:09:41.084725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.085 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:09:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:09:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:09:41 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2057580684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:09:41 compute-0 nova_compute[351685]: 2025-10-03 11:09:41.339 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:09:41 compute-0 nova_compute[351685]: 2025-10-03 11:09:41.421 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:09:41 compute-0 nova_compute[351685]: 2025-10-03 11:09:41.421 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:09:41 compute-0 nova_compute[351685]: 2025-10-03 11:09:41.421 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
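Nova gathers pool capacity by shelling out to the exact ceph command logged above. A standalone equivalent, assuming the ceph CLI, /etc/ceph/ceph.conf and the client.openstack keyring are present on the host (field names per recent Ceph releases):

    import json
    import subprocess

    # Same invocation oslo_concurrency.processutils reported above.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])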
Oct  3 11:09:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:09:41.679 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:09:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:09:41.679 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:09:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:09:41.680 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:09:41 compute-0 nova_compute[351685]: 2025-10-03 11:09:41.877 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:09:41 compute-0 nova_compute[351685]: 2025-10-03 11:09:41.879 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3816MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:09:41 compute-0 nova_compute[351685]: 2025-10-03 11:09:41.879 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:09:41 compute-0 nova_compute[351685]: 2025-10-03 11:09:41.879 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:09:41 compute-0 podman[507644]: 2025-10-03 11:09:41.934500192 +0000 UTC m=+0.061406032 container create 92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_satoshi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:09:41 compute-0 podman[507644]: 2025-10-03 11:09:41.904948289 +0000 UTC m=+0.031854189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:09:42 compute-0 systemd[1]: Started libpod-conmon-92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42.scope.
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.027 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.027 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.028 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:09:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:09:42 compute-0 podman[507644]: 2025-10-03 11:09:42.069446331 +0000 UTC m=+0.196352231 container init 92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:09:42 compute-0 podman[507644]: 2025-10-03 11:09:42.086298219 +0000 UTC m=+0.213204029 container start 92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 11:09:42 compute-0 podman[507644]: 2025-10-03 11:09:42.091755363 +0000 UTC m=+0.218661273 container attach 92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 11:09:42 compute-0 peaceful_satoshi[507659]: 167 167
Oct  3 11:09:42 compute-0 systemd[1]: libpod-92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42.scope: Deactivated successfully.
Oct  3 11:09:42 compute-0 podman[507644]: 2025-10-03 11:09:42.098658764 +0000 UTC m=+0.225564614 container died 92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.137 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
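A minimal sketch of the ceph df probe launched above, using the same oslo_concurrency.processutils helper nova calls; the command, client id and conf path come from the log line, and the JSON keys are standard `ceph df --format=json` fields:

    import json
    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr); a non-zero exit raises
    # ProcessExecutionError unless told otherwise.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
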
Oct  3 11:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-563a1de9c4d2b3e76f631c8740ab946c7131634717b2d39090841a6dd729da52-merged.mount: Deactivated successfully.
Oct  3 11:09:42 compute-0 podman[507644]: 2025-10-03 11:09:42.167802431 +0000 UTC m=+0.294708251 container remove 92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_satoshi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:09:42 compute-0 systemd[1]: libpod-conmon-92771115ac676b52fdecb720abd03147ea5541fb64559762ee63d8663cfa3e42.scope: Deactivated successfully.
Oct  3 11:09:42 compute-0 podman[507701]: 2025-10-03 11:09:42.432974417 +0000 UTC m=+0.063976203 container create e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 11:09:42 compute-0 systemd[1]: Started libpod-conmon-e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e.scope.
Oct  3 11:09:42 compute-0 podman[507701]: 2025-10-03 11:09:42.411219443 +0000 UTC m=+0.042221209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:09:42 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a9457042ea9cf722130630040e110283aacdc875fb0421c9f33763f5c0d462/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a9457042ea9cf722130630040e110283aacdc875fb0421c9f33763f5c0d462/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a9457042ea9cf722130630040e110283aacdc875fb0421c9f33763f5c0d462/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2a9457042ea9cf722130630040e110283aacdc875fb0421c9f33763f5c0d462/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
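The 0x7fffffff cap in the xfs messages above is the signed 32-bit time_t maximum, i.e. the "year 2038" limit; a quick check:

    from datetime import datetime, timezone

    # 2**31 - 1 seconds after the Unix epoch:
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
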
Oct  3 11:09:42 compute-0 podman[507701]: 2025-10-03 11:09:42.575686864 +0000 UTC m=+0.206688710 container init e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 11:09:42 compute-0 podman[507701]: 2025-10-03 11:09:42.591155658 +0000 UTC m=+0.222157434 container start e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:09:42 compute-0 podman[507701]: 2025-10-03 11:09:42.594937608 +0000 UTC m=+0.225939444 container attach e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:09:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:09:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/669647616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.641 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.651 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.675 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.678 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:09:42 compute-0 nova_compute[351685]: 2025-10-03 11:09:42.679 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
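A worked check of the inventory reported to placement above: placement computes schedulable capacity per resource class as (total - reserved) * allocation_ratio, so with the logged figures this node advertises:

    # Values copied from the inventory data in the log lines above.
    vcpus  = (8    - 0)   * 4.0   # -> 32.0 schedulable vCPUs
    ram_mb = (7679 - 512) * 1.0   # -> 7167.0 MB
    disk_g = (59   - 1)   * 0.9   # -> 52.2 GB
    print(vcpus, ram_mb, disk_g)

This also lines up with the "Final resource view" above: the single instance's allocation (1 VCPU, 512 MB, 2 GB) together with nova's 512 MB reserved host memory accounts for used_ram=1024MB, used_disk=2GB and used_vcpus=1.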
Oct  3 11:09:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.552434) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489783552478, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 575, "num_deletes": 251, "total_data_size": 608268, "memory_usage": 619944, "flush_reason": "Manual Compaction"}
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489783564950, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 602591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61870, "largest_seqno": 62444, "table_properties": {"data_size": 599464, "index_size": 1098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7281, "raw_average_key_size": 19, "raw_value_size": 593192, "raw_average_value_size": 1556, "num_data_blocks": 50, "num_entries": 381, "num_filter_entries": 381, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759489743, "oldest_key_time": 1759489743, "file_creation_time": 1759489783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 12613 microseconds, and 5681 cpu microseconds.
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.565042) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 602591 bytes OK
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.565068) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.569091) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.569115) EVENT_LOG_v1 {"time_micros": 1759489783569107, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.569142) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 605072, prev total WAL file size 605072, number of live WAL files 2.
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.570573) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(588KB)], [149(10170KB)]
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489783570616, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11017352, "oldest_snapshot_seqno": -1}
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 7205 keys, 9284055 bytes, temperature: kUnknown
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489783653482, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9284055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9239438, "index_size": 25496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 190190, "raw_average_key_size": 26, "raw_value_size": 9112138, "raw_average_value_size": 1264, "num_data_blocks": 1004, "num_entries": 7205, "num_filter_entries": 7205, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759489783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.653774) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9284055 bytes
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.656853) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.8 rd, 111.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.9 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(33.7) write-amplify(15.4) OK, records in: 7717, records dropped: 512 output_compression: NoCompression
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.656893) EVENT_LOG_v1 {"time_micros": 1759489783656876, "job": 92, "event": "compaction_finished", "compaction_time_micros": 82950, "compaction_time_cpu_micros": 50732, "output_level": 6, "num_output_files": 1, "total_output_size": 9284055, "num_input_records": 7717, "num_output_records": 7205, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489783657717, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489783662045, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.570329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.662355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.662364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.662367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.662370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:43 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:09:43.662374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]: {
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "osd_id": 1,
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "type": "bluestore"
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:    },
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "osd_id": 2,
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "type": "bluestore"
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:    },
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "osd_id": 0,
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:        "type": "bluestore"
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]:    }
Oct  3 11:09:43 compute-0 eager_visvesvaraya[507717]: }
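The JSON printed by the eager_visvesvaraya container above is an OSD inventory keyed by OSD UUID (it matches the shape of `ceph-volume raw list` output). A small sketch that turns such a capture into an osd_id -> device table; the file name is assumed:

    import json

    with open('osd_inventory.json') as f:  # hypothetical capture of the output
        inventory = json.load(f)
    for osd in sorted(inventory.values(), key=lambda o: o['osd_id']):
        print(osd['osd_id'], osd['device'], osd['type'], osd['ceph_fsid'])
    # -> 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore 9b4e8c9a-...
    #    1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore ...
    #    2 /dev/mapper/ceph_vg2-ceph_lv2 bluestore ...
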
Oct  3 11:09:43 compute-0 nova_compute[351685]: 2025-10-03 11:09:43.680 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:43 compute-0 nova_compute[351685]: 2025-10-03 11:09:43.681 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:43 compute-0 nova_compute[351685]: 2025-10-03 11:09:43.681 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:09:43 compute-0 systemd[1]: libpod-e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e.scope: Deactivated successfully.
Oct  3 11:09:43 compute-0 systemd[1]: libpod-e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e.scope: Consumed 1.119s CPU time.
Oct  3 11:09:43 compute-0 podman[507701]: 2025-10-03 11:09:43.715068721 +0000 UTC m=+1.346070557 container died e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:09:43 compute-0 nova_compute[351685]: 2025-10-03 11:09:43.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2a9457042ea9cf722130630040e110283aacdc875fb0421c9f33763f5c0d462-merged.mount: Deactivated successfully.
Oct  3 11:09:43 compute-0 podman[507701]: 2025-10-03 11:09:43.823637367 +0000 UTC m=+1.454639173 container remove e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_visvesvaraya, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:09:43 compute-0 systemd[1]: libpod-conmon-e1a3bfd3cf55dca5faad1aa6e36da1e15c724574613ec069062a30315801ae2e.scope: Deactivated successfully.
Oct  3 11:09:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:09:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:09:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:09:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:09:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 33015d9a-dce6-494e-b64d-24b69dd51bde does not exist
Oct  3 11:09:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 52c5b0cb-07f2-410b-a7d1-0b6964544e94 does not exist
Oct  3 11:09:44 compute-0 nova_compute[351685]: 2025-10-03 11:09:44.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:44 compute-0 nova_compute[351685]: 2025-10-03 11:09:44.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:09:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:09:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:45 compute-0 nova_compute[351685]: 2025-10-03 11:09:45.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:09:46
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Oct  3 11:09:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:09:46 compute-0 nova_compute[351685]: 2025-10-03 11:09:46.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:09:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:49 compute-0 nova_compute[351685]: 2025-10-03 11:09:49.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:49 compute-0 nova_compute[351685]: 2025-10-03 11:09:49.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:51 compute-0 podman[507819]: 2025-10-03 11:09:51.848862132 +0000 UTC m=+0.089944633 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:09:51 compute-0 podman[507818]: 2025-10-03 11:09:51.869832921 +0000 UTC m=+0.118498124 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 11:09:51 compute-0 podman[507825]: 2025-10-03 11:09:51.871867507 +0000 UTC m=+0.100265793 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, tcib_managed=true)
Oct  3 11:09:51 compute-0 podman[507816]: 2025-10-03 11:09:51.872108504 +0000 UTC m=+0.127647837 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:09:51 compute-0 podman[507817]: 2025-10-03 11:09:51.873054064 +0000 UTC m=+0.123672130 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Oct  3 11:09:51 compute-0 podman[507836]: 2025-10-03 11:09:51.885416319 +0000 UTC m=+0.119658551 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Oct  3 11:09:51 compute-0 podman[507842]: 2025-10-03 11:09:51.889817949 +0000 UTC m=+0.104122695 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:09:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:09:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2659241468' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:09:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:09:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2659241468' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:09:54 compute-0 nova_compute[351685]: 2025-10-03 11:09:54.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:54 compute-0 nova_compute[351685]: 2025-10-03 11:09:54.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:09:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:09:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
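The pg_autoscaler targets above follow pg_target = usage_ratio * bias * multiplier, and the multiplier works out to exactly 300 here, consistent with the three OSDs listed earlier times a target of 100 PGs per OSD (an inference; the per-OSD target itself is not logged). The result is then quantized to a power of two, subject to per-pool floors:

    # Reproducing two of the logged "pg target" values:
    for pool, ratio, bias in [
            ('vms',                0.000551649390343166,  1.0),
            ('cephfs.cephfs.meta', 5.087256625643029e-07, 4.0)]:
        print(pool, ratio * bias * 300)
    # -> vms 0.1654948171029498            (log: 0.1654948171029498)
    #    cephfs.cephfs.meta 0.000610470... (log: 0.0006104707950771635)
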
Oct  3 11:09:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:09:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:09:59 compute-0 nova_compute[351685]: 2025-10-03 11:09:59.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:09:59 compute-0 nova_compute[351685]: 2025-10-03 11:09:59.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:09:59 compute-0 podman[157165]: time="2025-10-03T11:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:09:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:09:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9111 "" "Go-http-client/1.1"
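
The two podman[157165] requests above are the podman exporter polling the libpod REST API over the rootful socket (the podman_exporter config later in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). The same listing call can be reproduced from Python by pointing http.client at that Unix socket; a hedged sketch, not exporter code:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # plain HTTP over an AF_UNIX socket; the host name is a placeholder
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())))  # container count
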
Oct  3 11:10:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:01 compute-0 openstack_network_exporter[367524]: ERROR   11:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:10:01 compute-0 openstack_network_exporter[367524]: ERROR   11:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:10:01 compute-0 openstack_network_exporter[367524]: ERROR   11:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:10:01 compute-0 openstack_network_exporter[367524]: ERROR   11:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:10:01 compute-0 openstack_network_exporter[367524]: ERROR   11:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:10:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:03 compute-0 podman[507956]: 2025-10-03 11:10:03.863037871 +0000 UTC m=+0.123501694 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:10:03 compute-0 podman[507958]: 2025-10-03 11:10:03.878830526 +0000 UTC m=+0.117362588 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:10:03 compute-0 podman[507957]: 2025-10-03 11:10:03.885526539 +0000 UTC m=+0.137827761 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, version=9.4, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=edpm, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Oct  3 11:10:04 compute-0 nova_compute[351685]: 2025-10-03 11:10:04.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:04 compute-0 nova_compute[351685]: 2025-10-03 11:10:04.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:09 compute-0 nova_compute[351685]: 2025-10-03 11:10:09.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:09 compute-0 nova_compute[351685]: 2025-10-03 11:10:09.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:12 compute-0 nova_compute[351685]: 2025-10-03 11:10:12.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:10:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:14 compute-0 nova_compute[351685]: 2025-10-03 11:10:14.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:14 compute-0 nova_compute[351685]: 2025-10-03 11:10:14.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:10:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:10:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:19 compute-0 nova_compute[351685]: 2025-10-03 11:10:19.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:19 compute-0 nova_compute[351685]: 2025-10-03 11:10:19.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:22 compute-0 podman[508019]: 2025-10-03 11:10:22.568610698 +0000 UTC m=+0.130023903 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:10:22 compute-0 podman[508020]: 2025-10-03 11:10:22.585071913 +0000 UTC m=+0.140630821 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Oct  3 11:10:22 compute-0 podman[508021]: 2025-10-03 11:10:22.590944231 +0000 UTC m=+0.133314058 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct  3 11:10:22 compute-0 podman[508022]: 2025-10-03 11:10:22.595450184 +0000 UTC m=+0.120771107 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  3 11:10:22 compute-0 podman[508036]: 2025-10-03 11:10:22.602126988 +0000 UTC m=+0.131549771 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 11:10:22 compute-0 podman[508028]: 2025-10-03 11:10:22.604100991 +0000 UTC m=+0.129227707 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Oct  3 11:10:22 compute-0 podman[508035]: 2025-10-03 11:10:22.64102022 +0000 UTC m=+0.170236186 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller)
Oct  3 11:10:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3091: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:24 compute-0 nova_compute[351685]: 2025-10-03 11:10:24.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:24 compute-0 nova_compute[351685]: 2025-10-03 11:10:24.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3092: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:28 compute-0 nova_compute[351685]: 2025-10-03 11:10:28.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:10:28 compute-0 nova_compute[351685]: 2025-10-03 11:10:28.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:10:28 compute-0 nova_compute[351685]: 2025-10-03 11:10:28.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:10:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3094: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:29 compute-0 nova_compute[351685]: 2025-10-03 11:10:29.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:29 compute-0 nova_compute[351685]: 2025-10-03 11:10:29.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:29 compute-0 nova_compute[351685]: 2025-10-03 11:10:29.542 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:10:29 compute-0 nova_compute[351685]: 2025-10-03 11:10:29.543 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:10:29 compute-0 nova_compute[351685]: 2025-10-03 11:10:29.544 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:10:29 compute-0 nova_compute[351685]: 2025-10-03 11:10:29.545 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:10:29 compute-0 podman[157165]: time="2025-10-03T11:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:10:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:10:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9113 "" "Go-http-client/1.1"
Oct  3 11:10:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:31 compute-0 openstack_network_exporter[367524]: ERROR   11:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:10:31 compute-0 openstack_network_exporter[367524]: ERROR   11:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:10:31 compute-0 openstack_network_exporter[367524]: ERROR   11:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:10:31 compute-0 openstack_network_exporter[367524]: ERROR   11:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:10:31 compute-0 openstack_network_exporter[367524]: ERROR   11:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:10:31 compute-0 nova_compute[351685]: 2025-10-03 11:10:31.587 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:10:31 compute-0 nova_compute[351685]: 2025-10-03 11:10:31.606 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:10:31 compute-0 nova_compute[351685]: 2025-10-03 11:10:31.607 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
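
The Acquiring/Acquired/Releasing trio above is oslo.concurrency's standard lock logging: the periodic _heal_instance_info_cache task serializes on a per-instance lock named after the instance UUID while it refreshes the network info cache from Neutron. A minimal sketch of that pattern (the lock name is copied from the log; the body is illustrative):

    from oslo_concurrency import lockutils

    uuid = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"
    with lockutils.lock("refresh_cache-" + uuid):
        # fetch fresh network_info from Neutron and store it on the
        # instance's info_cache, as the surrounding log lines show
        pass
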
Oct  3 11:10:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:34 compute-0 nova_compute[351685]: 2025-10-03 11:10:34.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:34 compute-0 nova_compute[351685]: 2025-10-03 11:10:34.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:34 compute-0 podman[508154]: 2025-10-03 11:10:34.870154371 +0000 UTC m=+0.106655287 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.expose-services=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0)
Oct  3 11:10:34 compute-0 podman[508155]: 2025-10-03 11:10:34.897571166 +0000 UTC m=+0.121009335 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:10:34 compute-0 podman[508153]: 2025-10-03 11:10:34.906949976 +0000 UTC m=+0.148991158 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:10:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:36 compute-0 nova_compute[351685]: 2025-10-03 11:10:36.604 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:10:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3098: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:39 compute-0 nova_compute[351685]: 2025-10-03 11:10:39.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:39 compute-0 nova_compute[351685]: 2025-10-03 11:10:39.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:10:40 compute-0 nova_compute[351685]: 2025-10-03 11:10:40.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:10:40 compute-0 nova_compute[351685]: 2025-10-03 11:10:40.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:10:40 compute-0 nova_compute[351685]: 2025-10-03 11:10:40.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:10:40 compute-0 nova_compute[351685]: 2025-10-03 11:10:40.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:10:40 compute-0 nova_compute[351685]: 2025-10-03 11:10:40.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:10:40 compute-0 nova_compute[351685]: 2025-10-03 11:10:40.768 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:10:40 compute-0 nova_compute[351685]: 2025-10-03 11:10:40.769 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:10:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3100: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:10:41 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659131797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:10:41 compute-0 nova_compute[351685]: 2025-10-03 11:10:41.307 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
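
The Running cmd / CMD returned pair above is oslo.concurrency's processutils wrapper timing a shell-out: the resource audit sizes its RBD-backed storage by running ceph df as client.openstack and parsing the JSON (the ceph-mon audit lines show the matching dispatch). A sketch of the same call:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)  # e.g. stats["stats"]["total_bytes"]
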
Oct  3 11:10:41 compute-0 nova_compute[351685]: 2025-10-03 11:10:41.411 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:10:41 compute-0 nova_compute[351685]: 2025-10-03 11:10:41.411 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:10:41 compute-0 nova_compute[351685]: 2025-10-03 11:10:41.412 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:10:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:10:41.680 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:10:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:10:41.682 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:10:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:10:41.683 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:10:41 compute-0 nova_compute[351685]: 2025-10-03 11:10:41.922 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:10:41 compute-0 nova_compute[351685]: 2025-10-03 11:10:41.924 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3813MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:10:41 compute-0 nova_compute[351685]: 2025-10-03 11:10:41.924 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:10:41 compute-0 nova_compute[351685]: 2025-10-03 11:10:41.924 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.036 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.037 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.037 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.092 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:10:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:10:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542964264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.591 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.599 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.618 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.620 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:10:42 compute-0 nova_compute[351685]: 2025-10-03 11:10:42.620 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
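
The inventory dict logged just above is what placement turns into schedulable capacity: for each resource class, capacity = (total - reserved) * allocation_ratio. Checking that against the logged values (a worked example, not nova code):

    def capacity(total, reserved, allocation_ratio):
        return int((total - reserved) * allocation_ratio)

    print(capacity(8, 0, 4.0))       # VCPU      -> 32 schedulable vCPUs
    print(capacity(7679, 512, 1.0))  # MEMORY_MB -> 7167 MiB
    print(capacity(59, 1, 0.9))      # DISK_GB   -> 52 GiB
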
Oct  3 11:10:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:44 compute-0 nova_compute[351685]: 2025-10-03 11:10:44.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:10:44 compute-0 nova_compute[351685]: 2025-10-03 11:10:44.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:10:44 compute-0 nova_compute[351685]: 2025-10-03 11:10:44.620 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:10:44 compute-0 nova_compute[351685]: 2025-10-03 11:10:44.621 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:10:44 compute-0 nova_compute[351685]: 2025-10-03 11:10:44.621 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:10:44 compute-0 nova_compute[351685]: 2025-10-03 11:10:44.622 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:10:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:10:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:10:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:10:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:10:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:10:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:10:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c0303857-7d94-4c3c-a60c-134693038991 does not exist
Oct  3 11:10:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b4018023-24b0-48f9-ac15-949f8d652445 does not exist
Oct  3 11:10:45 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e9fd7ee6-3303-49b8-903a-67ec9af266f6 does not exist
Oct  3 11:10:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:10:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:10:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:10:45 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:10:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:10:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
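[annotation] The mgr's cephadm module is refreshing its deployment state here: it asks the mon for a minimal client config and fetches the client.admin and client.bootstrap-osd keys before inspecting OSDs. A sketch of issuing the same "config generate-minimal-conf" mon command from a shell client, assuming client.admin authentication is available on the node:

    # Sketch: the same mon command mgr.compute-0.vtkhde dispatches above.
    import subprocess

    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(minimal_conf)  # a [global] stanza with fsid and mon_host, enough for clients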
Oct  3 11:10:45 compute-0 nova_compute[351685]: 2025-10-03 11:10:45.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:10:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:10:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:10:46 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:10:46
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'backups', 'default.rgw.control']
Oct  3 11:10:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
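[annotation] The balancer round above evaluates all eleven pools in upmap mode with a 5% misplaced ceiling and prepares 0 of the 10 changes it is allowed per round, i.e. the PG distribution is already optimal. A sketch of checking the same state from a client; "ceph balancer status" is a real command, and the JSON field names used below are an assumption based on its usual output:

    # Sketch: query the balancer state the module logs above.
    import json
    import subprocess

    status = json.loads(
        subprocess.run(
            ["ceph", "balancer", "status", "--format=json"],
            capture_output=True, text=True, check=True,
        ).stdout
    )
    print(status.get("active"), status.get("mode"))  # expected here: True, "upmap"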
Oct  3 11:10:46 compute-0 podman[508525]: 2025-10-03 11:10:46.433137035 +0000 UTC m=+0.081627147 container create 593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:10:46 compute-0 podman[508525]: 2025-10-03 11:10:46.396754273 +0000 UTC m=+0.045244465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:10:46 compute-0 systemd[1]: Started libpod-conmon-593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399.scope.
Oct  3 11:10:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:10:46 compute-0 podman[508525]: 2025-10-03 11:10:46.573097194 +0000 UTC m=+0.221587356 container init 593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hopper, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:10:46 compute-0 podman[508525]: 2025-10-03 11:10:46.593904308 +0000 UTC m=+0.242394450 container start 593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hopper, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:10:46 compute-0 podman[508525]: 2025-10-03 11:10:46.601554332 +0000 UTC m=+0.250044524 container attach 593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:10:46 compute-0 interesting_hopper[508541]: 167 167
Oct  3 11:10:46 compute-0 systemd[1]: libpod-593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399.scope: Deactivated successfully.
Oct  3 11:10:46 compute-0 podman[508546]: 2025-10-03 11:10:46.700358497 +0000 UTC m=+0.066934898 container died 593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hopper, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 11:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-23c9aca1062494e2003bd56e00d245670ffa5833d271d8ebdc1736858be80d5a-merged.mount: Deactivated successfully.
Oct  3 11:10:46 compute-0 podman[508546]: 2025-10-03 11:10:46.772290063 +0000 UTC m=+0.138866414 container remove 593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:10:46 compute-0 systemd[1]: libpod-conmon-593b5d9c7cebee8559b3024d78175aba1a3f5039d024516a1b3e6842f1a4f399.scope: Deactivated successfully.
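[annotation] The interesting_hopper container above is one of cephadm's short-lived helper runs: created, started, its single output line captured, then removed, all within one second. The "167 167" it prints is consistent with the ceph UID and GID baked into the Ceph container image. The exact command cephadm ran is not shown in the log; a hypothetical reproduction of such a uid/gid probe:

    # Hypothetical reproduction: run the same image once and print the
    # owner uid/gid of the ceph state directory (expected "167 167").
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)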
Oct  3 11:10:47 compute-0 podman[508568]: 2025-10-03 11:10:47.046578701 +0000 UTC m=+0.064038126 container create d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 11:10:47 compute-0 podman[508568]: 2025-10-03 11:10:47.023184644 +0000 UTC m=+0.040644139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:10:47 compute-0 systemd[1]: Started libpod-conmon-d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c.scope.
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:10:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e290723c68af9a49bfd88486f49d4eccb6999c1ef14a9e1a2671c45633ffccab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e290723c68af9a49bfd88486f49d4eccb6999c1ef14a9e1a2671c45633ffccab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e290723c68af9a49bfd88486f49d4eccb6999c1ef14a9e1a2671c45633ffccab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e290723c68af9a49bfd88486f49d4eccb6999c1ef14a9e1a2671c45633ffccab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e290723c68af9a49bfd88486f49d4eccb6999c1ef14a9e1a2671c45633ffccab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3103: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:47 compute-0 podman[508568]: 2025-10-03 11:10:47.209539424 +0000 UTC m=+0.226998869 container init d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:10:47 compute-0 podman[508568]: 2025-10-03 11:10:47.233360825 +0000 UTC m=+0.250820230 container start d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 11:10:47 compute-0 podman[508568]: 2025-10-03 11:10:47.238691235 +0000 UTC m=+0.256150690 container attach d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 11:10:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:48 compute-0 ecstatic_johnson[508584]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:10:48 compute-0 ecstatic_johnson[508584]: --> relative data size: 1.0
Oct  3 11:10:48 compute-0 ecstatic_johnson[508584]: --> All data devices are unavailable
Oct  3 11:10:48 compute-0 systemd[1]: libpod-d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c.scope: Deactivated successfully.
Oct  3 11:10:48 compute-0 systemd[1]: libpod-d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c.scope: Consumed 1.296s CPU time.
Oct  3 11:10:48 compute-0 podman[508568]: 2025-10-03 11:10:48.607034973 +0000 UTC m=+1.624494418 container died d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:10:48 compute-0 nova_compute[351685]: 2025-10-03 11:10:48.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e290723c68af9a49bfd88486f49d4eccb6999c1ef14a9e1a2671c45633ffccab-merged.mount: Deactivated successfully.
Oct  3 11:10:48 compute-0 podman[508568]: 2025-10-03 11:10:48.938586429 +0000 UTC m=+1.956045874 container remove d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:10:49 compute-0 systemd[1]: libpod-conmon-d29668f54c0b483d4fd8f3024f346906b90be68fc2c881c0c430b4c29a55957c.scope: Deactivated successfully.
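[annotation] The ecstatic_johnson run is cephadm applying the OSD drive group: "passed data devices: 0 physical, 3 LVM" and "All data devices are unavailable" mean the three LVs in the spec are already consumed by existing OSDs, so the apply pass creates nothing. A sketch of the equivalent dry run, assuming ceph-volume's --report flag (which prints the plan without changing anything) and using the LV paths shown later in the log:

    # Sketch: dry-run the same three LVM data devices; --report makes no changes.
    import subprocess

    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True,
    )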
Oct  3 11:10:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:49 compute-0 nova_compute[351685]: 2025-10-03 11:10:49.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:10:49 compute-0 nova_compute[351685]: 2025-10-03 11:10:49.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:10:50 compute-0 podman[508761]: 2025-10-03 11:10:50.173468955 +0000 UTC m=+0.090669516 container create a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 11:10:50 compute-0 podman[508761]: 2025-10-03 11:10:50.133577242 +0000 UTC m=+0.050777853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:10:50 compute-0 systemd[1]: Started libpod-conmon-a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc.scope.
Oct  3 11:10:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:10:50 compute-0 podman[508761]: 2025-10-03 11:10:50.313062812 +0000 UTC m=+0.230263353 container init a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:10:50 compute-0 podman[508761]: 2025-10-03 11:10:50.327572435 +0000 UTC m=+0.244772996 container start a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:10:50 compute-0 podman[508761]: 2025-10-03 11:10:50.335133177 +0000 UTC m=+0.252333788 container attach a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:10:50 compute-0 great_murdock[508775]: 167 167
Oct  3 11:10:50 compute-0 systemd[1]: libpod-a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc.scope: Deactivated successfully.
Oct  3 11:10:50 compute-0 podman[508761]: 2025-10-03 11:10:50.338131423 +0000 UTC m=+0.255332014 container died a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 11:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae19e94c7736eea643d531ba0253b5cf210bf69df4302395b46f578239bb51f-merged.mount: Deactivated successfully.
Oct  3 11:10:50 compute-0 podman[508761]: 2025-10-03 11:10:50.408609852 +0000 UTC m=+0.325810383 container remove a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:10:50 compute-0 systemd[1]: libpod-conmon-a00400c43d42cc58db2ee9183407668be9f870c07319de5aca6cbeb8796b58dc.scope: Deactivated successfully.
Oct  3 11:10:50 compute-0 podman[508799]: 2025-10-03 11:10:50.681112383 +0000 UTC m=+0.099248779 container create 088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bouman, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:10:50 compute-0 podman[508799]: 2025-10-03 11:10:50.640457015 +0000 UTC m=+0.058593461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:10:50 compute-0 systemd[1]: Started libpod-conmon-088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7.scope.
Oct  3 11:10:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15610e66f4161adb8a95e9f48e08f4ffd7ec56b37a0d601c647ebdb78bb2b8b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15610e66f4161adb8a95e9f48e08f4ffd7ec56b37a0d601c647ebdb78bb2b8b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15610e66f4161adb8a95e9f48e08f4ffd7ec56b37a0d601c647ebdb78bb2b8b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15610e66f4161adb8a95e9f48e08f4ffd7ec56b37a0d601c647ebdb78bb2b8b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:50 compute-0 podman[508799]: 2025-10-03 11:10:50.782993426 +0000 UTC m=+0.201129822 container init 088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:10:50 compute-0 podman[508799]: 2025-10-03 11:10:50.796917791 +0000 UTC m=+0.215054187 container start 088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:10:50 compute-0 podman[508799]: 2025-10-03 11:10:50.802652454 +0000 UTC m=+0.220788860 container attach 088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 11:10:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:51 compute-0 recursing_bouman[508815]: {
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:    "0": [
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:        {
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "devices": [
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "/dev/loop3"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            ],
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_name": "ceph_lv0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_size": "21470642176",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "name": "ceph_lv0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "tags": {
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cluster_name": "ceph",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.crush_device_class": "",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.encrypted": "0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osd_id": "0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.type": "block",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.vdo": "0"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            },
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "type": "block",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "vg_name": "ceph_vg0"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:        }
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:    ],
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:    "1": [
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:        {
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "devices": [
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "/dev/loop4"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            ],
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_name": "ceph_lv1",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_size": "21470642176",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "name": "ceph_lv1",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "tags": {
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cluster_name": "ceph",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.crush_device_class": "",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.encrypted": "0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osd_id": "1",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.type": "block",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.vdo": "0"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            },
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "type": "block",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "vg_name": "ceph_vg1"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:        }
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:    ],
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:    "2": [
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:        {
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "devices": [
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "/dev/loop5"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            ],
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_name": "ceph_lv2",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_size": "21470642176",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "name": "ceph_lv2",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "tags": {
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.cluster_name": "ceph",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.crush_device_class": "",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.encrypted": "0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osd_id": "2",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.type": "block",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:                "ceph.vdo": "0"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            },
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "type": "block",
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:            "vg_name": "ceph_vg2"
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:        }
Oct  3 11:10:51 compute-0 recursing_bouman[508815]:    ]
Oct  3 11:10:51 compute-0 recursing_bouman[508815]: }
Oct  3 11:10:51 compute-0 systemd[1]: libpod-088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7.scope: Deactivated successfully.
Oct  3 11:10:51 compute-0 podman[508799]: 2025-10-03 11:10:51.695292054 +0000 UTC m=+1.113428420 container died 088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bouman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-15610e66f4161adb8a95e9f48e08f4ffd7ec56b37a0d601c647ebdb78bb2b8b8-merged.mount: Deactivated successfully.
Oct  3 11:10:51 compute-0 podman[508799]: 2025-10-03 11:10:51.878939486 +0000 UTC m=+1.297075842 container remove 088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct  3 11:10:51 compute-0 systemd[1]: libpod-conmon-088e7d23ef4b91706c935f4ac9af314f39c5ab25f93a61b372fc7e203f8ff1f7.scope: Deactivated successfully.
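[annotation] The recursing_bouman output above is consistent with "ceph-volume lvm list --format json": a report keyed by OSD id, which cephadm uses to map existing OSDs to their backing logical volumes. A minimal sketch of reducing that report to an osd_id -> device mapping; all field names are taken from the JSON as logged:

    # Sketch: reduce the ceph-volume JSON above to osd_id -> backing device.
    import json
    import subprocess

    report = json.loads(
        subprocess.run(
            ["ceph-volume", "lvm", "list", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
    )
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])
    # e.g. 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] 25b10821-47d4-4e0b-9b6d-d16a0463c4d0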
Oct  3 11:10:52 compute-0 podman[509018]: 2025-10-03 11:10:52.886104344 +0000 UTC m=+0.073940643 container create 5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 11:10:52 compute-0 podman[508974]: 2025-10-03 11:10:52.889907944 +0000 UTC m=+0.123480963 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 11:10:52 compute-0 podman[508968]: 2025-10-03 11:10:52.908822958 +0000 UTC m=+0.125657763 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:10:52 compute-0 podman[508975]: 2025-10-03 11:10:52.923555539 +0000 UTC m=+0.142169510 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct  3 11:10:52 compute-0 podman[508981]: 2025-10-03 11:10:52.927594598 +0000 UTC m=+0.137738499 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:10:52 compute-0 systemd[1]: Started libpod-conmon-5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1.scope.
Oct  3 11:10:52 compute-0 podman[508973]: 2025-10-03 11:10:52.933150896 +0000 UTC m=+0.172112637 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 11:10:52 compute-0 podman[508988]: 2025-10-03 11:10:52.935503221 +0000 UTC m=+0.141842930 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 11:10:52 compute-0 podman[509018]: 2025-10-03 11:10:52.850030271 +0000 UTC m=+0.037866590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:10:52 compute-0 podman[508989]: 2025-10-03 11:10:52.95114773 +0000 UTC m=+0.160697682 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  3 11:10:52 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:10:52 compute-0 podman[509018]: 2025-10-03 11:10:52.981880891 +0000 UTC m=+0.169717180 container init 5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:10:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
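This recurring mon line is the monitor's cache autotuner dividing its memory target between the incremental-osdmap, full-osdmap, and RocksDB key/value caches. Converting the byte counts (copied verbatim from the line above) shows the split; note that the 304 MiB kv_alloc matches the RocksDB block-cache capacity reported in the stats dump later in this log:

    # Byte figures copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 318767104

    def mib(n: int) -> float:
        return n / 2**20

    print(f"cache_size ~ {mib(cache_size):.0f} MiB")  # ~973 MiB
    print(f"inc_alloc  = {mib(inc_alloc):.0f} MiB")   # 332 MiB
    print(f"full_alloc = {mib(full_alloc):.0f} MiB")  # 332 MiB
    print(f"kv_alloc   = {mib(kv_alloc):.0f} MiB")    # 304 MiB (RocksDB block cache)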
Oct  3 11:10:52 compute-0 podman[509018]: 2025-10-03 11:10:52.994026639 +0000 UTC m=+0.181862938 container start 5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:10:52 compute-0 podman[509018]: 2025-10-03 11:10:52.998611105 +0000 UTC m=+0.186447444 container attach 5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 11:10:53 compute-0 happy_lumiere[509121]: 167 167
Oct  3 11:10:53 compute-0 systemd[1]: libpod-5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1.scope: Deactivated successfully.
Oct  3 11:10:53 compute-0 podman[509018]: 2025-10-03 11:10:53.002443518 +0000 UTC m=+0.190279837 container died 5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 11:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e2396a4789e61b6d6b2cc1276356c18f1c4fb5bd719af7b9f28e27926e4232-merged.mount: Deactivated successfully.
Oct  3 11:10:53 compute-0 podman[509018]: 2025-10-03 11:10:53.056229675 +0000 UTC m=+0.244065974 container remove 5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lumiere, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 11:10:53 compute-0 systemd[1]: libpod-conmon-5facf6a2aeb5abd90f807c06b1338e0eac25646d27702c12f84be36ff991f4f1.scope: Deactivated successfully.
Oct  3 11:10:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3106: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:53 compute-0 podman[509148]: 2025-10-03 11:10:53.271921552 +0000 UTC m=+0.073902881 container create a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dhawan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:10:53 compute-0 podman[509148]: 2025-10-03 11:10:53.239412383 +0000 UTC m=+0.041393722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:10:53 compute-0 systemd[1]: Started libpod-conmon-a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b.scope.
Oct  3 11:10:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9eb31ab03cace516f8c5aee97d87e12a9ecdc6c3ba3f00b02dd606e77265103/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9eb31ab03cace516f8c5aee97d87e12a9ecdc6c3ba3f00b02dd606e77265103/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9eb31ab03cace516f8c5aee97d87e12a9ecdc6c3ba3f00b02dd606e77265103/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:10:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9eb31ab03cace516f8c5aee97d87e12a9ecdc6c3ba3f00b02dd606e77265103/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
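These xfs messages are informational: the overlay layers sit on a filesystem whose inodes store 32-bit signed timestamps, so the last representable second is 0x7fffffff. A one-liner confirms why the kernel says 2038:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, as printed by the kernel.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00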
Oct  3 11:10:53 compute-0 podman[509148]: 2025-10-03 11:10:53.421748335 +0000 UTC m=+0.223729654 container init a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:10:53 compute-0 podman[509148]: 2025-10-03 11:10:53.459004965 +0000 UTC m=+0.260986264 container start a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dhawan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 11:10:53 compute-0 podman[509148]: 2025-10-03 11:10:53.46510974 +0000 UTC m=+0.267091139 container attach a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 11:10:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:10:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372862764' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:10:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:10:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372862764' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
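These audit lines show client.openstack (the OpenStack Ceph client at 192.168.122.10) polling capacity: a cluster-wide df followed by a per-pool quota read on 'volumes'. The same mon commands can be reproduced with the ceph CLI; a sketch assuming a reachable cluster and a keyring with client privileges:

    import json
    import subprocess

    def mon_command(args):
        # Thin wrapper over the ceph CLI with JSON output (sketch only).
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    df = mon_command(["df"])                                      # {"prefix":"df"}
    quota = mon_command(["osd", "pool", "get-quota", "volumes"])  # per-pool quota
    print(df["stats"]["total_avail_bytes"])
    print(quota)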
Oct  3 11:10:54 compute-0 nova_compute[351685]: 2025-10-03 11:10:54.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:10:54 compute-0 nova_compute[351685]: 2025-10-03 11:10:54.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:10:54 compute-0 determined_dhawan[509165]: {
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "osd_id": 1,
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "type": "bluestore"
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:    },
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "osd_id": 2,
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "type": "bluestore"
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:    },
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "osd_id": 0,
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:        "type": "bluestore"
Oct  3 11:10:54 compute-0 determined_dhawan[509165]:    }
Oct  3 11:10:54 compute-0 determined_dhawan[509165]: }
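The JSON block above is the stdout of the short-lived cephadm helper container (determined_dhawan): an inventory of the host's BlueStore OSDs keyed by OSD UUID, in the style of ceph-volume's list output. Consuming it is straightforward; the snippet below abbreviates the map to one entry:

    import json

    # One entry of the OSD inventory printed by the container above.
    raw = '''
    {
      "16cef594-0067-4499-9298-5d83edf70190": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
        "type": "bluestore"
      }
    }
    '''
    for uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")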
Oct  3 11:10:54 compute-0 systemd[1]: libpod-a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b.scope: Deactivated successfully.
Oct  3 11:10:54 compute-0 podman[509148]: 2025-10-03 11:10:54.619710473 +0000 UTC m=+1.421691812 container died a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 11:10:54 compute-0 systemd[1]: libpod-a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b.scope: Consumed 1.155s CPU time.
Oct  3 11:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9eb31ab03cace516f8c5aee97d87e12a9ecdc6c3ba3f00b02dd606e77265103-merged.mount: Deactivated successfully.
Oct  3 11:10:54 compute-0 podman[509148]: 2025-10-03 11:10:54.704980076 +0000 UTC m=+1.506961365 container remove a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:10:54 compute-0 systemd[1]: libpod-conmon-a0c7041e82c73f78bb9ab24ae84a2a65940668bfb017fb5ad06cd2f13c43625b.scope: Deactivated successfully.
Oct  3 11:10:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:10:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:10:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:10:54 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:10:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cdd1e2bf-cfea-4534-8ede-30a8ebca0e68 does not exist
Oct  3 11:10:54 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 069ecf76-7ec6-4719-831f-cfe7e23ca380 does not exist
Oct  3 11:10:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3107: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:10:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.0 total, 600.0 interval
Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.01 MB/s
Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1318 writes, 6234 keys, 1318 commit groups, 1.0 writes per commit group, ingest: 8.68 MB, 0.01 MB/s
Interval WAL: 1318 writes, 1318 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     27.3      2.89              0.33        46    0.063       0      0       0.0       0.0
  L6      1/0    8.85 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.5     88.4     73.9      4.85              1.33        45    0.108    269K    24K       0.0       0.0
 Sum      1/0    8.85 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.5     55.3     56.5      7.74              1.66        91    0.085    269K    24K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.8     68.1     69.7      0.88              0.28        12    0.073     45K   3041       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     88.4     73.9      4.85              1.33        45    0.108    269K    24K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     27.4      2.88              0.33        45    0.064       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.0 total, 600.0 interval
Flush(GB): cumulative 0.077, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.43 GB write, 0.07 MB/s write, 0.42 GB read, 0.07 MB/s read, 7.7 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 51.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000362 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3286,50.07 MB,16.471%) FilterBlock(92,764.73 KB,0.245661%) IndexBlock(92,1.15 MB,0.379487%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
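RocksDB emits the stats dump above as one multi-line message, and rsyslog stores such output as a single record with control characters escaped as '#' plus three octal digits: #012 is newline and #033 is ESC, the start of the ANSI color sequences some services emit. A small helper, assuming that default escaping, restores the original text:

    import re

    def unescape_syslog(line: str) -> str:
        # Turn rsyslog's #ooo octal control-character escapes back into characters.
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

    sample = "** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval"
    print(unescape_syslog(sample))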
Oct  3 11:10:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:10:55 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:10:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
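Each _maybe_adjust pass computes, per pool, pg_target = capacity_ratio * bias * root PG budget, then quantizes to a power of two subject to per-pool minimums. The logged numbers are consistent with a budget of 300 PGs, i.e. three OSDs times the default mon_target_pg_per_osd of 100; that budget is an assumption, since neither value is printed. Reproducing the 'vms' and 'cephfs.cephfs.meta' lines:

    # Assumed root PG budget: 3 OSDs * mon_target_pg_per_osd (default 100).
    ROOT_PG_BUDGET = 3 * 100

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * ROOT_PG_BUDGET

    # capacity_ratio and bias copied from the pg_autoscaler lines above.
    print(pg_target(0.000551649390343166, 1.0))   # 0.16549... -> quantized to 32
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.00061047... -> quantized to 16

The 64411926528 bytes in the effective_target_ratio lines is the raw capacity, about 60 GiB, matching the pgmap totals reported throughout this log.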
Oct  3 11:10:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:10:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3109: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:10:59 compute-0 nova_compute[351685]: 2025-10-03 11:10:59.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:10:59 compute-0 nova_compute[351685]: 2025-10-03 11:10:59.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:10:59 compute-0 podman[157165]: time="2025-10-03T11:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:10:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:10:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9109 "" "Go-http-client/1.1"
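The two GETs are the podman_exporter scraping container state through the podman API service on /run/podman/podman.sock (the CONTAINER_HOST in its config_data above). The same libpod endpoint can be queried with nothing but the standard library; a sketch, with the socket path taken from that config:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix socket; podman ignores the placeholder host name."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])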
Oct  3 11:11:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:01 compute-0 openstack_network_exporter[367524]: ERROR   11:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:11:01 compute-0 openstack_network_exporter[367524]: ERROR   11:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:11:01 compute-0 openstack_network_exporter[367524]: ERROR   11:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:11:01 compute-0 openstack_network_exporter[367524]: ERROR   11:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:11:01 compute-0 openstack_network_exporter[367524]: ERROR   11:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
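The exporter errors are expected on a compute node: ovn-northd only runs on control-plane hosts, and the dpif-netdev/* appctl calls apply to the userspace (DPDK) datapath, which this kernel-datapath host does not have. appctl reachability comes down to the presence of <daemon>.<pid>.ctl sockets in the run directories; a quick probe:

    import glob
    import os

    # Control sockets follow the <daemon>.<pid>.ctl naming convention; the
    # second directory is where this host mounts the OVN run dir (see the
    # ovn_controller volumes above).
    for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        socks = glob.glob(os.path.join(rundir, "*.ctl"))
        print(rundir, "->", [os.path.basename(s) for s in socks] or "none")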
Oct  3 11:11:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3111: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:04 compute-0 nova_compute[351685]: 2025-10-03 11:11:04.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:04 compute-0 nova_compute[351685]: 2025-10-03 11:11:04.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:05 compute-0 podman[509262]: 2025-10-03 11:11:05.894053941 +0000 UTC m=+0.138840664 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:11:05 compute-0 podman[509264]: 2025-10-03 11:11:05.916791907 +0000 UTC m=+0.148229714 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:11:05 compute-0 podman[509263]: 2025-10-03 11:11:05.938102187 +0000 UTC m=+0.175359079 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, maintainer=Red Hat, Inc., container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9)
Oct  3 11:11:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:09 compute-0 nova_compute[351685]: 2025-10-03 11:11:09.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:09 compute-0 nova_compute[351685]: 2025-10-03 11:11:09.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3116: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:14 compute-0 nova_compute[351685]: 2025-10-03 11:11:14.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:14 compute-0 nova_compute[351685]: 2025-10-03 11:11:14.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:11:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:11:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:19 compute-0 nova_compute[351685]: 2025-10-03 11:11:19.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:19 compute-0 nova_compute[351685]: 2025-10-03 11:11:19.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:22 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:23 compute-0 podman[509328]: 2025-10-03 11:11:23.897042705 +0000 UTC m=+0.115251199 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Oct  3 11:11:23 compute-0 podman[509324]: 2025-10-03 11:11:23.898027117 +0000 UTC m=+0.135377113 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:11:23 compute-0 podman[509335]: 2025-10-03 11:11:23.898021147 +0000 UTC m=+0.102390400 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  3 11:11:23 compute-0 podman[509326]: 2025-10-03 11:11:23.915473294 +0000 UTC m=+0.142866072 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:11:23 compute-0 podman[509325]: 2025-10-03 11:11:23.919150042 +0000 UTC m=+0.144179005 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Oct  3 11:11:23 compute-0 podman[509327]: 2025-10-03 11:11:23.921960502 +0000 UTC m=+0.144466974 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:11:23 compute-0 podman[509329]: 2025-10-03 11:11:23.933157839 +0000 UTC m=+0.139339520 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:11:24 compute-0 nova_compute[351685]: 2025-10-03 11:11:24.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:24 compute-0 nova_compute[351685]: 2025-10-03 11:11:24.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:27 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:29 compute-0 nova_compute[351685]: 2025-10-03 11:11:29.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:29 compute-0 nova_compute[351685]: 2025-10-03 11:11:29.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:29 compute-0 nova_compute[351685]: 2025-10-03 11:11:29.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:29 compute-0 nova_compute[351685]: 2025-10-03 11:11:29.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:11:29 compute-0 nova_compute[351685]: 2025-10-03 11:11:29.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:11:29 compute-0 podman[157165]: time="2025-10-03T11:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:11:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:11:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9109 "" "Go-http-client/1.1"
Oct  3 11:11:30 compute-0 nova_compute[351685]: 2025-10-03 11:11:30.645 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:11:30 compute-0 nova_compute[351685]: 2025-10-03 11:11:30.646 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:11:30 compute-0 nova_compute[351685]: 2025-10-03 11:11:30.647 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:11:30 compute-0 nova_compute[351685]: 2025-10-03 11:11:30.648 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:11:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:31 compute-0 openstack_network_exporter[367524]: ERROR   11:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:11:31 compute-0 openstack_network_exporter[367524]: ERROR   11:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:11:31 compute-0 openstack_network_exporter[367524]: ERROR   11:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:11:31 compute-0 openstack_network_exporter[367524]: ERROR   11:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:11:31 compute-0 openstack_network_exporter[367524]: ERROR   11:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
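The four ERROR lines above are the openstack_network_exporter probing daemons through their ovs-appctl control sockets. On a compute node only ovn-controller and openvswitch run, not ovn-northd or a standalone ovsdb-server, so the socket lookup comes back empty; the dpif-netdev calls likewise fail because there is no userspace (DPDK) datapath here. A sketch of the lookup the message implies; the glob patterns follow the usual <daemon>.<pid>.ctl convention, and the run directories are assumptions:

    import glob

    # Each OVS/OVN daemon creates a per-PID control socket when it starts;
    # an empty glob means the daemon is not running on this host.
    patterns = {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }
    for daemon, pattern in patterns.items():
        matches = glob.glob(pattern)
        print(daemon, "->", matches or "no control socket files found")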
Oct  3 11:11:32 compute-0 nova_compute[351685]: 2025-10-03 11:11:32.650 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:11:32 compute-0 nova_compute[351685]: 2025-10-03 11:11:32.666 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:11:32 compute-0 nova_compute[351685]: 2025-10-03 11:11:32.667 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
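The block above is one pass of nova-compute's _heal_instance_info_cache periodic task: take the refresh_cache-<uuid> lock, rebuild the instance's network_info from Neutron (here a single OVN-bound port with fixed IP 192.168.0.158 and floating IP 192.168.122.250), store it, release the lock. The scheduling side is oslo.service's periodic task machinery, visible in the run_periodic_tasks lines; a minimal sketch of that pattern, with an illustrative 60-second spacing rather than nova's actual configuration:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # nova picks one instance per pass and forcefully refreshes
            # its cached network_info from Neutron.
            print("healing instance info cache")

    # A service loop would call this on a timer; each call runs whatever
    # registered tasks are currently due.
    Manager().run_periodic_tasks(context=None)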
Oct  3 11:11:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:34 compute-0 nova_compute[351685]: 2025-10-03 11:11:34.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:34 compute-0 nova_compute[351685]: 2025-10-03 11:11:34.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:36 compute-0 podman[509463]: 2025-10-03 11:11:36.878038823 +0000 UTC m=+0.110164058 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:11:36 compute-0 podman[509461]: 2025-10-03 11:11:36.880426269 +0000 UTC m=+0.123659899 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:11:36 compute-0 podman[509462]: 2025-10-03 11:11:36.910163309 +0000 UTC m=+0.148037158 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, version=9.4, container_name=kepler, distribution-scope=public, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, name=ubi9)
Oct  3 11:11:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.662 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.731 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.732 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.733 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.734 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.735 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.736 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
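The acquire/acquired/released triples above are oslo.concurrency's lockutils protecting nova's storage registry; the <locals>.do_register_storage_use names show the usual pattern of a synchronized inner function. A minimal sketch of that pattern (function bodies and arguments are illustrative, not nova's exact code):

    from oslo_concurrency import lockutils

    def register_storage_use(storage_path: str, hostname: str) -> None:
        @lockutils.synchronized("storage-registry-lock")
        def do_register_storage_use(path: str, host: str) -> None:
            # Critical section: read-modify-write the shared registry
            # that records which hosts are using this storage path.
            print(f"registering {host} as a user of {path}")

        do_register_storage_use(storage_path, hostname)

    register_storage_use("/var/lib/nova/instances", "compute-0")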
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.764 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.787 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.788 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Image id 37f03e8a-3aed-46a5-8219-fc87e355127e yields fingerprint 8123da205344dbbb79d5d821c9749dc540280b1e _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.788 2 INFO nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] image 37f03e8a-3aed-46a5-8219-fc87e355127e at (/var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e): checking
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.789 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] image 37f03e8a-3aed-46a5-8219-fc87e355127e at (/var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.791 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.792 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] b43db93c-a4fe-46e9-8418-eedf4f5c135a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.792 2 WARNING nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.792 2 INFO nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Active base files: /var/lib/nova/instances/_base/8123da205344dbbb79d5d821c9749dc540280b1e
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.793 2 INFO nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Removable base files: /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.793 2 INFO nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/e7f67a70e606c08bfea45c9da4c170e96d463110
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.793 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.794 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct  3 11:11:38 compute-0 nova_compute[351685]: 2025-10-03 11:11:38.794 2 DEBUG nova.virt.libvirt.imagecache [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
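In the image cache pass above, the "fingerprint" is a SHA-1 over the image ID: the entry logged with an empty image ID yields da39a3ee5e6b4b0d3255bfef95601890afd80709, which is exactly the SHA-1 of the empty string. Base files whose name matches no known image's fingerprint (here e7f67a70...) are flagged as Unknown and Removable but kept while "too young to remove". A sketch reproducing the fingerprints:

    import hashlib

    def cache_fname(image_id: str) -> str:
        # Cached base files are named after the SHA-1 of the image ID.
        return hashlib.sha1(image_id.encode("utf-8")).hexdigest()

    print(cache_fname("37f03e8a-3aed-46a5-8219-fc87e355127e"))
    # -> 8123da205344dbbb79d5d821c9749dc540280b1e, per the log above
    print(cache_fname(""))
    # -> da39a3ee5e6b4b0d3255bfef95601890afd80709, SHA-1 of ""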
Oct  3 11:11:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:39 compute-0 nova_compute[351685]: 2025-10-03 11:11:39.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:39 compute-0 nova_compute[351685]: 2025-10-03 11:11:39.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.902 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.902 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
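The two manager lines above say there are more pollsters in the [pollsters] source than worker threads, so the registrations that follow all queue onto a single-thread executor and run one at a time. A minimal sketch of that scheduling behaviour (meter names taken from this log; timing is illustrative):

    import concurrent.futures
    import time

    def poll(meter: str) -> str:
        time.sleep(0.1)  # stand-in for a libvirt or IPMI round trip
        return f"polled {meter}"

    meters = [
        "network.outgoing.packets.drop",
        "disk.device.capacity",
        "disk.device.read.bytes",
    ]

    # One worker, several pollsters: tasks serialize, which is why the
    # manager warns the polling cycle may take longer than usual.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        for result in pool.map(poll, meters):
            print(result)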
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.902 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e12150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.914 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.914 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:11:40.915201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.924 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.925 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.925 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.926 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.926 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.926 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.926 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.926 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:11:40.926670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.928 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:11:40.930503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.967 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.968 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.969 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.971 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.973 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:40.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:11:40.973510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.057 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:11:41.053708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:11:41.055583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
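
The dozen lines above are one complete pollster cycle: discovery, a coordination check against a (here empty) hashring, a heartbeat, then one DEBUG sample per device. A minimal sketch of that control flow, with all names hypothetical rather than ceilometer's actual API:

    # Illustrative control flow for one polling cycle; every name here is
    # invented for the sketch, not taken from ceilometer.
    from datetime import datetime, timezone

    def poll_one(name, discover, get_stats, heartbeat, emit):
        resources = discover()                    # "Executing discovery process ..."
        if not resources:
            return                                # the "Skip pollster ..." case
        # Coordination is bypassed when no hashring is configured ("[None]").
        heartbeat(name, datetime.now(timezone.utc))
        for instance in resources:
            for device, volume in get_stats(instance):
                emit(instance, name, device, volume)   # "<uuid>/<meter> volume: <n>"
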
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:11:41.057777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:11:41.059903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.061 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:11:41.061843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:11:41.064150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.105 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
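
power.state samples carry the instance power state as an integer; volume 1 above corresponds to a running guest. The mapping below follows nova's power-state constants (nova.compute.power_state) and is included for reference, not taken from this log:

    # nova.compute.power_state constants (reproduced from memory; verify
    # against the installed nova before relying on them).
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",    # matches "power.state volume: 1" above
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    def describe_power_state(volume):
        return POWER_STATES.get(volume, "UNKNOWN(%s)" % volume)
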
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.107 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.108 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.109 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.110 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:11:41.108562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.111 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.112 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.113 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.114 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.115 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.117 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:11:41.113559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.118 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.118 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.119 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
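
The skip line above shows the manager declining to poll a rate meter when discovery turns up nothing it has not already handled this cycle. A hedged sketch of that per-cycle bookkeeping, illustrative only and not ceilometer's actual logic:

    # Hypothetical per-cycle bookkeeping: remember what each pollster has
    # already seen this cycle and poll only the additions.
    def new_resources(seen, pollster, discovered):
        fresh = set(discovered) - seen.setdefault(pollster, set())
        seen[pollster] |= fresh
        return fresh   # empty -> "Skip pollster ..., no new resources found"
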
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.121 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.121 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:11:41.118527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:11:41.121780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:11:41.124442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.124 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.125 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.126 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:11:41.127509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.130 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.130 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:11:41.130031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.132 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.133 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.133 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 93130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
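
The cpu volume is cumulative guest CPU time in nanoseconds, so 93130000000 above is roughly 93.13 s of CPU time consumed since the guest started. Consumers typically derive utilisation from two consecutive samples; a sketch, assuming the vCPU count is known:

    # prev_ns/cur_ns are cumulative CPU nanoseconds from two cpu samples;
    # prev_ts/cur_ts are their datetime timestamps; vcpus is assumed known.
    def cpu_util_percent(prev_ns, prev_ts, cur_ns, cur_ts, vcpus):
        wall_ns = (cur_ts - prev_ts).total_seconds() * 1e9
        if wall_ns <= 0:
            raise ValueError("timestamps must be strictly increasing")
        return 100.0 * (cur_ns - prev_ns) / (wall_ns * vcpus)
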
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:11:41.133068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.135 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.136 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.136 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:11:41.135981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.137 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.138 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.138 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:11:41.138553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.139 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.139 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.140 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.140 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.140 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.141 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.144 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:11:41.140798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.144 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.144 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:11:41.144654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
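
memory.usage is reported in MiB; the fractional volume above (48.81640625) is exactly 49988 KiB / 1024, consistent with libvirt returning guest memory statistics in KiB. The conversion:

    # libvirt memory stats are in KiB; ceilometer reports memory.usage in MiB.
    def kib_to_mib(kib):
        return kib / 1024.0

    assert kib_to_mib(49988) == 48.81640625   # the exact volume logged above
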
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.145 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.146 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.146 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.146 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.146 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:11:41.146651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.147 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.148 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.148 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:11:41.148355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.149 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.149 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.149 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.149 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.150 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.151 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:11:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:11:41.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
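
Two worker ids interleave throughout this cycle: 14 executes the pollsters while 12 records the "Updated heartbeat" status entries. A minimal sketch of such a hand-off, using threads and a plain queue purely for illustration (ceilometer's actual mechanism is not shown in this log):

    # Hypothetical producer/consumer split mirroring the interleaved 14/12
    # worker ids above: one side polls, the other records heartbeats.
    import queue
    import threading
    from datetime import datetime, timezone

    beats = queue.Queue()   # (pollster_name, timestamp) pairs
    status = {}

    def heartbeat_consumer():
        while True:
            name, ts = beats.get()
            status[name] = ts      # "Updated heartbeat for <name> (<ts>)"
            beats.task_done()

    threading.Thread(target=heartbeat_consumer, daemon=True).start()
    beats.put(("disk.device.usage", datetime.now(timezone.utc)))
    beats.join()               # wait until the consumer has recorded it
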
Oct  3 11:11:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:11:41.682 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:11:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:11:41.684 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:11:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:11:41.685 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
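
The acquire/acquired/released trio above, with waited and held timings, is the standard trace emitted by oslo.concurrency's lock helpers (the "inner" wrapper at lockutils.py:404/409/423). The same pattern can be reproduced with the synchronized decorator:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Body runs with the named in-process lock held; oslo emits the
        # Acquiring/acquired/released DEBUG lines seen above around it.
        pass
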
Oct  3 11:11:41 compute-0 nova_compute[351685]: 2025-10-03 11:11:41.795 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:41 compute-0 nova_compute[351685]: 2025-10-03 11:11:41.796 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:41 compute-0 nova_compute[351685]: 2025-10-03 11:11:41.838 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:11:41 compute-0 nova_compute[351685]: 2025-10-03 11:11:41.838 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:11:41 compute-0 nova_compute[351685]: 2025-10-03 11:11:41.839 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:11:41 compute-0 nova_compute[351685]: 2025-10-03 11:11:41.839 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:11:41 compute-0 nova_compute[351685]: 2025-10-03 11:11:41.840 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:11:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:11:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/622131330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:11:42 compute-0 nova_compute[351685]: 2025-10-03 11:11:42.361 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:11:42 compute-0 nova_compute[351685]: 2025-10-03 11:11:42.462 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:11:42 compute-0 nova_compute[351685]: 2025-10-03 11:11:42.463 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:11:42 compute-0 nova_compute[351685]: 2025-10-03 11:11:42.463 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:11:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.025 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.026 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3794MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.026 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.027 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.099 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.100 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.100 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.145 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:11:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:11:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094238580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.694 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.708 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.734 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.738 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:11:43 compute-0 nova_compute[351685]: 2025-10-03 11:11:43.738 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:11:44 compute-0 nova_compute[351685]: 2025-10-03 11:11:44.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:44 compute-0 nova_compute[351685]: 2025-10-03 11:11:44.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:44 compute-0 nova_compute[351685]: 2025-10-03 11:11:44.673 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:44 compute-0 nova_compute[351685]: 2025-10-03 11:11:44.674 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:11:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:45 compute-0 nova_compute[351685]: 2025-10-03 11:11:45.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:45 compute-0 nova_compute[351685]: 2025-10-03 11:11:45.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:45 compute-0 nova_compute[351685]: 2025-10-03 11:11:45.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:11:46
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Oct  3 11:11:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:11:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:48 compute-0 nova_compute[351685]: 2025-10-03 11:11:48.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:49 compute-0 nova_compute[351685]: 2025-10-03 11:11:49.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:49 compute-0 nova_compute[351685]: 2025-10-03 11:11:49.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:11:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/522859436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:11:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:11:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/522859436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:11:54 compute-0 nova_compute[351685]: 2025-10-03 11:11:54.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:54 compute-0 nova_compute[351685]: 2025-10-03 11:11:54.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:54 compute-0 podman[509571]: 2025-10-03 11:11:54.874285826 +0000 UTC m=+0.089784218 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 11:11:54 compute-0 podman[509573]: 2025-10-03 11:11:54.878420567 +0000 UTC m=+0.109982673 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:11:54 compute-0 podman[509570]: 2025-10-03 11:11:54.879593625 +0000 UTC m=+0.109047093 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:11:54 compute-0 podman[509567]: 2025-10-03 11:11:54.883499039 +0000 UTC m=+0.128156963 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:11:54 compute-0 podman[509569]: 2025-10-03 11:11:54.899424938 +0000 UTC m=+0.131717016 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:11:54 compute-0 podman[509568]: 2025-10-03 11:11:54.899074887 +0000 UTC m=+0.128960129 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, config_id=edpm)
Oct  3 11:11:54 compute-0 podman[509572]: 2025-10-03 11:11:54.941808971 +0000 UTC m=+0.164452662 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:11:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:11:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:11:56 compute-0 podman[509867]: 2025-10-03 11:11:56.344781654 +0000 UTC m=+0.145932460 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 11:11:56 compute-0 podman[509867]: 2025-10-03 11:11:56.472608566 +0000 UTC m=+0.273759372 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:11:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:11:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:11:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:11:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:11:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:11:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:11:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:11:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:11:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:11:58 compute-0 nova_compute[351685]: 2025-10-03 11:11:58.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:11:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:11:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:11:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:11:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:11:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 414c087d-c633-4322-88bb-55d4cf7f32c6 does not exist
Oct  3 11:11:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 86ca2ca7-8541-40c8-9cd6-46877f54fd06 does not exist
Oct  3 11:11:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev be41a95f-31ee-4b38-8546-5f615bd4384e does not exist
Oct  3 11:11:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:11:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:11:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:11:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:11:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:11:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:11:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:11:59 compute-0 nova_compute[351685]: 2025-10-03 11:11:59.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:59 compute-0 nova_compute[351685]: 2025-10-03 11:11:59.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:11:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:11:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:11:59 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:11:59 compute-0 podman[157165]: time="2025-10-03T11:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:11:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:11:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9102 "" "Go-http-client/1.1"
Oct  3 11:11:59 compute-0 podman[510284]: 2025-10-03 11:11:59.938329067 +0000 UTC m=+0.092157423 container create 266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:11:59 compute-0 podman[510284]: 2025-10-03 11:11:59.894863119 +0000 UTC m=+0.048691515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:12:00 compute-0 systemd[1]: Started libpod-conmon-266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a.scope.
Oct  3 11:12:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:12:00 compute-0 podman[510284]: 2025-10-03 11:12:00.11319842 +0000 UTC m=+0.267026756 container init 266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 11:12:00 compute-0 podman[510284]: 2025-10-03 11:12:00.131181244 +0000 UTC m=+0.285009570 container start 266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:12:00 compute-0 podman[510284]: 2025-10-03 11:12:00.13728684 +0000 UTC m=+0.291115166 container attach 266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 11:12:00 compute-0 sleepy_kirch[510300]: 167 167
Oct  3 11:12:00 compute-0 systemd[1]: libpod-266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a.scope: Deactivated successfully.
Oct  3 11:12:00 compute-0 podman[510284]: 2025-10-03 11:12:00.141118871 +0000 UTC m=+0.294947187 container died 266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:12:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-48b872ef58941c94c1b31b360f45ec255f002271c890ad5b4d16f312483f38e4-merged.mount: Deactivated successfully.
Oct  3 11:12:00 compute-0 podman[510284]: 2025-10-03 11:12:00.207770619 +0000 UTC m=+0.361598935 container remove 266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_kirch, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:12:00 compute-0 systemd[1]: libpod-conmon-266c7dccc24290010f3dca392cb4a099f45afc180f5f52bdc06ab1a6b70cfd6a.scope: Deactivated successfully.
Oct  3 11:12:00 compute-0 podman[510322]: 2025-10-03 11:12:00.493022768 +0000 UTC m=+0.092656120 container create 9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:12:00 compute-0 podman[510322]: 2025-10-03 11:12:00.460021273 +0000 UTC m=+0.059654615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:12:00 compute-0 systemd[1]: Started libpod-conmon-9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade.scope.
Oct  3 11:12:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efab268d4cf0b9c0d01af59e29046aca6598a0edde4a377e12383e85a5196d03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efab268d4cf0b9c0d01af59e29046aca6598a0edde4a377e12383e85a5196d03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efab268d4cf0b9c0d01af59e29046aca6598a0edde4a377e12383e85a5196d03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efab268d4cf0b9c0d01af59e29046aca6598a0edde4a377e12383e85a5196d03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efab268d4cf0b9c0d01af59e29046aca6598a0edde4a377e12383e85a5196d03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:00 compute-0 podman[510322]: 2025-10-03 11:12:00.679821281 +0000 UTC m=+0.279454603 container init 9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:12:00 compute-0 podman[510322]: 2025-10-03 11:12:00.711323877 +0000 UTC m=+0.310957199 container start 9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 11:12:00 compute-0 podman[510322]: 2025-10-03 11:12:00.716841313 +0000 UTC m=+0.316474645 container attach 9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:12:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:01 compute-0 openstack_network_exporter[367524]: ERROR   11:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:12:01 compute-0 openstack_network_exporter[367524]: ERROR   11:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:12:01 compute-0 openstack_network_exporter[367524]: ERROR   11:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:12:01 compute-0 openstack_network_exporter[367524]: ERROR   11:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:12:01 compute-0 openstack_network_exporter[367524]: ERROR   11:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:12:01 compute-0 pedantic_colden[510338]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:12:01 compute-0 pedantic_colden[510338]: --> relative data size: 1.0
Oct  3 11:12:01 compute-0 pedantic_colden[510338]: --> All data devices are unavailable
Oct  3 11:12:02 compute-0 systemd[1]: libpod-9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade.scope: Deactivated successfully.
Oct  3 11:12:02 compute-0 systemd[1]: libpod-9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade.scope: Consumed 1.254s CPU time.
Oct  3 11:12:02 compute-0 podman[510322]: 2025-10-03 11:12:02.026187077 +0000 UTC m=+1.625820459 container died 9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-efab268d4cf0b9c0d01af59e29046aca6598a0edde4a377e12383e85a5196d03-merged.mount: Deactivated successfully.
Oct  3 11:12:02 compute-0 podman[510322]: 2025-10-03 11:12:02.137986216 +0000 UTC m=+1.737619568 container remove 9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_colden, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:12:02 compute-0 systemd[1]: libpod-conmon-9cd9bfbec20caf013b6f7dc6e81aa8d96f82009c301d10e8503632c229ad3ade.scope: Deactivated successfully.
Oct  3 11:12:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:03 compute-0 podman[510516]: 2025-10-03 11:12:03.300844064 +0000 UTC m=+0.071117862 container create 85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcclintock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:12:03 compute-0 systemd[1]: Started libpod-conmon-85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04.scope.
Oct  3 11:12:03 compute-0 podman[510516]: 2025-10-03 11:12:03.279890634 +0000 UTC m=+0.050164412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:12:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:12:03 compute-0 podman[510516]: 2025-10-03 11:12:03.43041065 +0000 UTC m=+0.200684498 container init 85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:12:03 compute-0 podman[510516]: 2025-10-03 11:12:03.45169958 +0000 UTC m=+0.221973378 container start 85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcclintock, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:12:03 compute-0 bold_mcclintock[510531]: 167 167
Oct  3 11:12:03 compute-0 systemd[1]: libpod-85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04.scope: Deactivated successfully.
Oct  3 11:12:03 compute-0 podman[510516]: 2025-10-03 11:12:03.459943903 +0000 UTC m=+0.230217671 container attach 85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcclintock, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:12:03 compute-0 podman[510516]: 2025-10-03 11:12:03.462230076 +0000 UTC m=+0.232503844 container died 85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 11:12:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ded81ca841da52ffb73605f0d53a931473a3324eeb904605d3e30ea368421fa-merged.mount: Deactivated successfully.
Oct  3 11:12:03 compute-0 podman[510516]: 2025-10-03 11:12:03.538309655 +0000 UTC m=+0.308583413 container remove 85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:12:03 compute-0 systemd[1]: libpod-conmon-85d81855683e935f48feba4294a665db7a841bc0fadc41b2192950765f758d04.scope: Deactivated successfully.
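bold_mcclintock's only output is "167 167": the numeric uid and gid of the ceph user inside the image. cephadm starts one-shot containers like this to discover which ids to use when chowning host-side data directories; cranky_nightingale below repeats the same probe. A sketch of an equivalent manual check (the stat-based entrypoint is an assumption about what the container actually ran):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Print the uid/gid owning /var/lib/ceph inside the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # expected: 167 167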
Oct  3 11:12:03 compute-0 podman[510555]: 2025-10-03 11:12:03.805462544 +0000 UTC m=+0.081102410 container create c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:12:03 compute-0 podman[510555]: 2025-10-03 11:12:03.769079042 +0000 UTC m=+0.044718968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:12:03 compute-0 systemd[1]: Started libpod-conmon-c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776.scope.
Oct  3 11:12:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087c890246be92441a6c8fcac25257ba4e12e452f82130fa2b0e87314736befb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087c890246be92441a6c8fcac25257ba4e12e452f82130fa2b0e87314736befb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087c890246be92441a6c8fcac25257ba4e12e452f82130fa2b0e87314736befb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/087c890246be92441a6c8fcac25257ba4e12e452f82130fa2b0e87314736befb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
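The xfs "supports timestamps until 2038" messages are informational: the filesystem backing these overlay bind mounts was created without the XFS bigtime feature, so its inode timestamps are 32-bit and cap at 2038-01-19 (0x7fffffff). Nothing is failing; the kernel notes it once per remount. Whether bigtime is enabled can be read from the filesystem geometry, e.g. (a sketch; the mount point is an assumption):

    import subprocess

    # xfs_info prints filesystem geometry; recent xfsprogs report a
    # bigtime=0/1 flag in the meta-data section.
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],  # assumed mount point
        check=True, capture_output=True, text=True,
    ).stdout
    print("bigtime enabled:", "bigtime=1" in info)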
Oct  3 11:12:03 compute-0 podman[510555]: 2025-10-03 11:12:03.966190806 +0000 UTC m=+0.241830702 container init c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:12:03 compute-0 podman[510555]: 2025-10-03 11:12:03.998531528 +0000 UTC m=+0.274171404 container start c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curran, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:12:04 compute-0 podman[510555]: 2025-10-03 11:12:04.007476644 +0000 UTC m=+0.283116550 container attach c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curran, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 11:12:04 compute-0 nova_compute[351685]: 2025-10-03 11:12:04.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:04 compute-0 nova_compute[351685]: 2025-10-03 11:12:04.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:04 compute-0 interesting_curran[510571]: {
Oct  3 11:12:04 compute-0 interesting_curran[510571]:    "0": [
Oct  3 11:12:04 compute-0 interesting_curran[510571]:        {
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "devices": [
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "/dev/loop3"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            ],
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_name": "ceph_lv0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_size": "21470642176",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "name": "ceph_lv0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "tags": {
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cluster_name": "ceph",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.crush_device_class": "",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.encrypted": "0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osd_id": "0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.type": "block",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.vdo": "0"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            },
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "type": "block",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "vg_name": "ceph_vg0"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:        }
Oct  3 11:12:04 compute-0 interesting_curran[510571]:    ],
Oct  3 11:12:04 compute-0 interesting_curran[510571]:    "1": [
Oct  3 11:12:04 compute-0 interesting_curran[510571]:        {
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "devices": [
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "/dev/loop4"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            ],
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_name": "ceph_lv1",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_size": "21470642176",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "name": "ceph_lv1",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "tags": {
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cluster_name": "ceph",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.crush_device_class": "",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.encrypted": "0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:12:04 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osd_id": "1",
Oct  3 11:12:04 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.type": "block",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.vdo": "0"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            },
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "type": "block",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "vg_name": "ceph_vg1"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:        }
Oct  3 11:12:04 compute-0 interesting_curran[510571]:    ],
Oct  3 11:12:04 compute-0 interesting_curran[510571]:    "2": [
Oct  3 11:12:04 compute-0 interesting_curran[510571]:        {
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "devices": [
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "/dev/loop5"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            ],
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_name": "ceph_lv2",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_size": "21470642176",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "name": "ceph_lv2",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "tags": {
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.cluster_name": "ceph",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.crush_device_class": "",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.encrypted": "0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osd_id": "2",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.type": "block",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:                "ceph.vdo": "0"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            },
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "type": "block",
Oct  3 11:12:04 compute-0 interesting_curran[510571]:            "vg_name": "ceph_vg2"
Oct  3 11:12:04 compute-0 interesting_curran[510571]:        }
Oct  3 11:12:04 compute-0 interesting_curran[510571]:    ]
Oct  3 11:12:04 compute-0 interesting_curran[510571]: }
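The JSON printed by interesting_curran has the shape of ceph-volume lvm list --format json: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags present both as the raw lv_tags string and as the parsed tags object. cephadm uses this report to reconcile on-disk state with what the orchestrator believes is deployed. A minimal sketch for summarising such a report into an osd -> device table, assuming the output has been captured to a file (the filename is an assumption):

    import json

    # Parse a captured `ceph-volume lvm list --format json` report.
    with open("lvm_list.json") as f:  # assumed capture of the output above
        report = json.load(f)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv.get('devices', []))} "
                  f"fsid={tags.get('ceph.osd_fsid', '?')}")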
Oct  3 11:12:04 compute-0 systemd[1]: libpod-c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776.scope: Deactivated successfully.
Oct  3 11:12:04 compute-0 podman[510555]: 2025-10-03 11:12:04.889881557 +0000 UTC m=+1.165521443 container died c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curran, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:12:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-087c890246be92441a6c8fcac25257ba4e12e452f82130fa2b0e87314736befb-merged.mount: Deactivated successfully.
Oct  3 11:12:05 compute-0 podman[510555]: 2025-10-03 11:12:05.485631848 +0000 UTC m=+1.761271754 container remove c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 11:12:05 compute-0 systemd[1]: libpod-conmon-c5499116e3c73970b15b1ca979701a81a643f34f79500a0ac36d9a50fd4b0776.scope: Deactivated successfully.
Oct  3 11:12:06 compute-0 podman[510729]: 2025-10-03 11:12:06.558690779 +0000 UTC m=+0.079283343 container create 56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:12:06 compute-0 podman[510729]: 2025-10-03 11:12:06.526546722 +0000 UTC m=+0.047139336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:12:06 compute-0 systemd[1]: Started libpod-conmon-56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33.scope.
Oct  3 11:12:06 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:12:06 compute-0 podman[510729]: 2025-10-03 11:12:06.692990816 +0000 UTC m=+0.213583360 container init 56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 11:12:06 compute-0 podman[510729]: 2025-10-03 11:12:06.709223344 +0000 UTC m=+0.229815898 container start 56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:12:06 compute-0 podman[510729]: 2025-10-03 11:12:06.716482106 +0000 UTC m=+0.237074620 container attach 56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 11:12:06 compute-0 cranky_nightingale[510745]: 167 167
Oct  3 11:12:06 compute-0 systemd[1]: libpod-56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33.scope: Deactivated successfully.
Oct  3 11:12:06 compute-0 podman[510729]: 2025-10-03 11:12:06.720506585 +0000 UTC m=+0.241099149 container died 56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a96f6a99cbb2cbc82fe5d60e5c5c7df5049bb6c6fa0bcbcda9be01dbfd49efa-merged.mount: Deactivated successfully.
Oct  3 11:12:06 compute-0 podman[510729]: 2025-10-03 11:12:06.798525206 +0000 UTC m=+0.319117770 container remove 56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nightingale, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:12:06 compute-0 systemd[1]: libpod-conmon-56b945a901510dbf7c1ed4c8083cb9b3956cfc9a39a065d3a9416d19bb636a33.scope: Deactivated successfully.
Oct  3 11:12:07 compute-0 podman[510767]: 2025-10-03 11:12:07.111733975 +0000 UTC m=+0.087888496 container create cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:12:07 compute-0 systemd[1]: Started libpod-conmon-cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d.scope.
Oct  3 11:12:07 compute-0 podman[510767]: 2025-10-03 11:12:07.078739892 +0000 UTC m=+0.054894413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:12:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d0a121a5fae422551b3809fc968c9117a6e5f5b67568531185992986e44d74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d0a121a5fae422551b3809fc968c9117a6e5f5b67568531185992986e44d74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d0a121a5fae422551b3809fc968c9117a6e5f5b67568531185992986e44d74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d0a121a5fae422551b3809fc968c9117a6e5f5b67568531185992986e44d74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:12:07 compute-0 podman[510767]: 2025-10-03 11:12:07.231875891 +0000 UTC m=+0.208030402 container init cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:12:07 compute-0 podman[510767]: 2025-10-03 11:12:07.245401384 +0000 UTC m=+0.221555855 container start cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 11:12:07 compute-0 podman[510767]: 2025-10-03 11:12:07.250048972 +0000 UTC m=+0.226203483 container attach cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:12:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:07 compute-0 podman[510783]: 2025-10-03 11:12:07.280835965 +0000 UTC m=+0.101912295 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=)
Oct  3 11:12:07 compute-0 podman[510781]: 2025-10-03 11:12:07.282624232 +0000 UTC m=+0.109127836 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:12:07 compute-0 podman[510785]: 2025-10-03 11:12:07.301736932 +0000 UTC m=+0.128347449 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=edpm)
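The three health_status events at 11:12:07 are podman's periodic healthchecks for the kepler, podman_exporter, and ceilometer_agent_ipmi containers; each runs the /openstack/healthcheck script bind-mounted into the container (see the healthcheck entries in config_data) and reports health_status=healthy with a failing streak of 0. The same check can be triggered on demand, e.g. (a sketch):

    import subprocess

    # Run each container's configured healthcheck once, by hand.
    for name in ["kepler", "podman_exporter", "ceilometer_agent_ipmi"]:
        r = subprocess.run(["podman", "healthcheck", "run", name])
        # Exit status 0 means healthy; non-zero means the check failed
        # or could not be run.
        print(name, "healthy" if r.returncode == 0 else "unhealthy")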
Oct  3 11:12:07 compute-0 nova_compute[351685]: 2025-10-03 11:12:07.748 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:07 compute-0 nova_compute[351685]: 2025-10-03 11:12:07.749 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 11:12:07 compute-0 nova_compute[351685]: 2025-10-03 11:12:07.772 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 11:12:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:08 compute-0 charming_ride[510796]: {
Oct  3 11:12:08 compute-0 charming_ride[510796]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "osd_id": 1,
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "type": "bluestore"
Oct  3 11:12:08 compute-0 charming_ride[510796]:    },
Oct  3 11:12:08 compute-0 charming_ride[510796]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "osd_id": 2,
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "type": "bluestore"
Oct  3 11:12:08 compute-0 charming_ride[510796]:    },
Oct  3 11:12:08 compute-0 charming_ride[510796]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "osd_id": 0,
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:12:08 compute-0 charming_ride[510796]:        "type": "bluestore"
Oct  3 11:12:08 compute-0 charming_ride[510796]:    }
Oct  3 11:12:08 compute-0 charming_ride[510796]: }
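charming_ride prints a second report keyed by OSD uuid rather than OSD id, giving the bluestore type and the device-mapper path of each volume; this matches the shape of ceph-volume raw list. It describes the same three OSDs as the lvm report above, so the two can be cross-checked by fsid. A sketch joining them on osd_fsid/osd_uuid (the filenames are assumptions for captured copies of the two reports):

    import json

    with open("lvm_list.json") as f:  # report keyed by OSD id, above
        by_osd_id = json.load(f)
    with open("raw_list.json") as f:  # report keyed by OSD uuid, above
        by_uuid = json.load(f)

    # Index the LVM report by its ceph.osd_fsid tag, then join.
    lvm_by_fsid = {
        lv["tags"]["ceph.osd_fsid"]: lv
        for lvs in by_osd_id.values()
        for lv in lvs
    }
    for uuid, raw in by_uuid.items():
        lv = lvm_by_fsid.get(uuid, {})
        print(f"osd.{raw['osd_id']} uuid={uuid} "
              f"raw_device={raw['device']} lv={lv.get('lv_path')}")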
Oct  3 11:12:08 compute-0 systemd[1]: libpod-cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d.scope: Deactivated successfully.
Oct  3 11:12:08 compute-0 systemd[1]: libpod-cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d.scope: Consumed 1.152s CPU time.
Oct  3 11:12:08 compute-0 conmon[510796]: conmon cd6f6a95eea39c284e2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d.scope/container/memory.events
Oct  3 11:12:08 compute-0 podman[510767]: 2025-10-03 11:12:08.414083337 +0000 UTC m=+1.390237838 container died cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0d0a121a5fae422551b3809fc968c9117a6e5f5b67568531185992986e44d74-merged.mount: Deactivated successfully.
Oct  3 11:12:08 compute-0 podman[510767]: 2025-10-03 11:12:08.507410877 +0000 UTC m=+1.483565348 container remove cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ride, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:12:08 compute-0 systemd[1]: libpod-conmon-cd6f6a95eea39c284e2f4ae3871e22fa317a29234492de5e1771fe579af2312d.scope: Deactivated successfully.
Oct  3 11:12:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:12:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:12:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:12:08 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:12:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c3fc7eb9-cf48-4ace-9f2a-def02adc684f does not exist
Oct  3 11:12:08 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 84b58a32-e979-48a9-bee3-50e20a312e08 does not exist
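Once the scan finishes, the mgr persists the refreshed inventory into the monitor's config-key store; that is what the mon_command lines with key=mgr/cephadm/host.compute-0.devices.0 record, and it is how cephadm caches per-host state between refreshes. The two "progress ... does not exist" warnings are the progress module being asked to complete events it never registered, and are harmless. The cached inventory can be read back directly, e.g. (a sketch; the key name is taken from the log lines above, and the stored value is assumed to be JSON, which is how cephadm writes it):

    import json
    import subprocess

    # Read cephadm's cached device inventory for this host from the
    # mon config-key store.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=2))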
Oct  3 11:12:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:09 compute-0 nova_compute[351685]: 2025-10-03 11:12:09.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:09 compute-0 nova_compute[351685]: 2025-10-03 11:12:09.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:12:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:12:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:13 compute-0 nova_compute[351685]: 2025-10-03 11:12:13.748 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:14 compute-0 nova_compute[351685]: 2025-10-03 11:12:14.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:14 compute-0 nova_compute[351685]: 2025-10-03 11:12:14.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:12:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:12:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:19 compute-0 nova_compute[351685]: 2025-10-03 11:12:19.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:19 compute-0 nova_compute[351685]: 2025-10-03 11:12:19.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3150: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:24 compute-0 nova_compute[351685]: 2025-10-03 11:12:24.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:24 compute-0 nova_compute[351685]: 2025-10-03 11:12:24.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3152: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:25 compute-0 podman[510942]: 2025-10-03 11:12:25.846434565 +0000 UTC m=+0.091130980 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 11:12:25 compute-0 podman[510941]: 2025-10-03 11:12:25.854832373 +0000 UTC m=+0.102940737 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 11:12:25 compute-0 podman[510944]: 2025-10-03 11:12:25.854949017 +0000 UTC m=+0.094562440 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute)
Oct  3 11:12:25 compute-0 podman[510943]: 2025-10-03 11:12:25.86474732 +0000 UTC m=+0.099651313 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 11:12:25 compute-0 podman[510940]: 2025-10-03 11:12:25.872921381 +0000 UTC m=+0.114315071 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:12:25 compute-0 podman[510951]: 2025-10-03 11:12:25.877967662 +0000 UTC m=+0.109553429 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid)
Oct  3 11:12:25 compute-0 podman[510945]: 2025-10-03 11:12:25.887220938 +0000 UTC m=+0.119178347 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 11:12:26 compute-0 nova_compute[351685]: 2025-10-03 11:12:26.058 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:26 compute-0 nova_compute[351685]: 2025-10-03 11:12:26.083 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 11:12:26 compute-0 nova_compute[351685]: 2025-10-03 11:12:26.084 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:12:26 compute-0 nova_compute[351685]: 2025-10-03 11:12:26.085 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:12:26 compute-0 nova_compute[351685]: 2025-10-03 11:12:26.125 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:12:26 compute-0 nova_compute[351685]: 2025-10-03 11:12:26.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:26 compute-0 nova_compute[351685]: 2025-10-03 11:12:26.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 11:12:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3153: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.017094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489948017138, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1531, "num_deletes": 250, "total_data_size": 2505182, "memory_usage": 2543704, "flush_reason": "Manual Compaction"}
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489948034640, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 1450422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62445, "largest_seqno": 63975, "table_properties": {"data_size": 1445185, "index_size": 2504, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13569, "raw_average_key_size": 20, "raw_value_size": 1433669, "raw_average_value_size": 2182, "num_data_blocks": 115, "num_entries": 657, "num_filter_entries": 657, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759489783, "oldest_key_time": 1759489783, "file_creation_time": 1759489948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 17678 microseconds, and 8652 cpu microseconds.
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.034766) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 1450422 bytes OK
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.034800) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.037710) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.037741) EVENT_LOG_v1 {"time_micros": 1759489948037730, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.037768) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2498511, prev total WAL file size 2498511, number of live WAL files 2.
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.040067) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373534' seq:72057594037927935, type:22 .. '6D6772737461740033303035' seq:0, type:0; will stop at (end)
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(1416KB)], [152(9066KB)]
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489948040109, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 10734477, "oldest_snapshot_seqno": -1}
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 7426 keys, 8433434 bytes, temperature: kUnknown
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489948105158, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 8433434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8389872, "index_size": 23865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 194886, "raw_average_key_size": 26, "raw_value_size": 8261220, "raw_average_value_size": 1112, "num_data_blocks": 942, "num_entries": 7426, "num_filter_entries": 7426, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759489948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.106056) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 8433434 bytes
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.109918) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.4 rd, 129.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.9 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(13.2) write-amplify(5.8) OK, records in: 7862, records dropped: 436 output_compression: NoCompression
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.109958) EVENT_LOG_v1 {"time_micros": 1759489948109940, "job": 94, "event": "compaction_finished", "compaction_time_micros": 65289, "compaction_time_cpu_micros": 45830, "output_level": 6, "num_output_files": 1, "total_output_size": 8433434, "num_input_records": 7862, "num_output_records": 7426, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489948111351, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759489948116741, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.039755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.117005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.117012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.117015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.117018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:12:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:12:28.117021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:12:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:29 compute-0 nova_compute[351685]: 2025-10-03 11:12:29.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:29 compute-0 nova_compute[351685]: 2025-10-03 11:12:29.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:29 compute-0 podman[157165]: time="2025-10-03T11:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:12:29 compute-0 nova_compute[351685]: 2025-10-03 11:12:29.755 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:29 compute-0 nova_compute[351685]: 2025-10-03 11:12:29.756 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:12:29 compute-0 nova_compute[351685]: 2025-10-03 11:12:29.756 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:12:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:12:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9113 "" "Go-http-client/1.1"
Oct  3 11:12:30 compute-0 nova_compute[351685]: 2025-10-03 11:12:30.654 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:12:30 compute-0 nova_compute[351685]: 2025-10-03 11:12:30.655 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:12:30 compute-0 nova_compute[351685]: 2025-10-03 11:12:30.655 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:12:30 compute-0 nova_compute[351685]: 2025-10-03 11:12:30.656 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:12:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:31 compute-0 openstack_network_exporter[367524]: ERROR   11:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:12:31 compute-0 openstack_network_exporter[367524]: ERROR   11:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:12:31 compute-0 openstack_network_exporter[367524]: ERROR   11:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:12:31 compute-0 openstack_network_exporter[367524]: ERROR   11:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:12:31 compute-0 openstack_network_exporter[367524]: ERROR   11:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:12:32 compute-0 nova_compute[351685]: 2025-10-03 11:12:32.610 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:12:32 compute-0 nova_compute[351685]: 2025-10-03 11:12:32.631 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:12:32 compute-0 nova_compute[351685]: 2025-10-03 11:12:32.631 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:12:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:34 compute-0 nova_compute[351685]: 2025-10-03 11:12:34.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:34 compute-0 nova_compute[351685]: 2025-10-03 11:12:34.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3157: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:37 compute-0 podman[511085]: 2025-10-03 11:12:37.86763477 +0000 UTC m=+0.117675819 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:12:37 compute-0 podman[511087]: 2025-10-03 11:12:37.901910314 +0000 UTC m=+0.144869426 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 11:12:37 compute-0 podman[511086]: 2025-10-03 11:12:37.902805993 +0000 UTC m=+0.149062330 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release-0.7.12=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 11:12:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:39 compute-0 nova_compute[351685]: 2025-10-03 11:12:39.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:39 compute-0 nova_compute[351685]: 2025-10-03 11:12:39.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:39 compute-0 nova_compute[351685]: 2025-10-03 11:12:39.601 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:12:41.684 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:12:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:12:41.685 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:12:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:12:41.686 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:12:41 compute-0 nova_compute[351685]: 2025-10-03 11:12:41.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:41 compute-0 nova_compute[351685]: 2025-10-03 11:12:41.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:12:41 compute-0 nova_compute[351685]: 2025-10-03 11:12:41.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:12:41 compute-0 nova_compute[351685]: 2025-10-03 11:12:41.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:12:41 compute-0 nova_compute[351685]: 2025-10-03 11:12:41.764 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:12:41 compute-0 nova_compute[351685]: 2025-10-03 11:12:41.765 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:12:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:12:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1556847305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.258 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.341 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.341 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.341 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.750 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.752 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3798MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.752 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.753 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.836 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.837 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.838 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:12:42 compute-0 nova_compute[351685]: 2025-10-03 11:12:42.895 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:12:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:12:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520755264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:12:43 compute-0 nova_compute[351685]: 2025-10-03 11:12:43.396 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:12:43 compute-0 nova_compute[351685]: 2025-10-03 11:12:43.411 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:12:43 compute-0 nova_compute[351685]: 2025-10-03 11:12:43.435 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:12:43 compute-0 nova_compute[351685]: 2025-10-03 11:12:43.439 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:12:43 compute-0 nova_compute[351685]: 2025-10-03 11:12:43.439 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:12:44 compute-0 nova_compute[351685]: 2025-10-03 11:12:44.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:44 compute-0 nova_compute[351685]: 2025-10-03 11:12:44.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:44 compute-0 nova_compute[351685]: 2025-10-03 11:12:44.440 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:44 compute-0 nova_compute[351685]: 2025-10-03 11:12:44.441 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:44 compute-0 nova_compute[351685]: 2025-10-03 11:12:44.442 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:12:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:45 compute-0 nova_compute[351685]: 2025-10-03 11:12:45.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:45 compute-0 nova_compute[351685]: 2025-10-03 11:12:45.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:12:46
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'vms', '.mgr', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.control', 'backups']
Oct  3 11:12:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:12:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:47 compute-0 nova_compute[351685]: 2025-10-03 11:12:47.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:48 compute-0 nova_compute[351685]: 2025-10-03 11:12:48.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:12:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:49 compute-0 nova_compute[351685]: 2025-10-03 11:12:49.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:49 compute-0 nova_compute[351685]: 2025-10-03 11:12:49.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:12:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 6000.2 total, 600.0 interval
    Cumulative writes: 9130 writes, 33K keys, 9130 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
    Cumulative WAL: 9130 writes, 2438 syncs, 3.74 writes per sync, written: 0.03 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
    Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
     Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.2 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x561d1bcb0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **

    ** Compaction Stats [m-0] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [m-0] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.2 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x561d1bcb0dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [m-0] **

    ** Compaction Stats [m-1] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [m-1] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.2 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Oct  3 11:12:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:12:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3624551023' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:12:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:12:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3624551023' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:12:54 compute-0 nova_compute[351685]: 2025-10-03 11:12:54.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:54 compute-0 nova_compute[351685]: 2025-10-03 11:12:54.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:12:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:12:56 compute-0 podman[511200]: 2025-10-03 11:12:56.900151257 +0000 UTC m=+0.110123668 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Oct  3 11:12:56 compute-0 podman[511191]: 2025-10-03 11:12:56.904120224 +0000 UTC m=+0.156016543 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:12:56 compute-0 podman[511193]: 2025-10-03 11:12:56.908110821 +0000 UTC m=+0.135661452 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:12:56 compute-0 podman[511199]: 2025-10-03 11:12:56.917565673 +0000 UTC m=+0.140961402 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  3 11:12:56 compute-0 podman[511192]: 2025-10-03 11:12:56.920117604 +0000 UTC m=+0.155439114 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Oct  3 11:12:56 compute-0 podman[511205]: 2025-10-03 11:12:56.932074256 +0000 UTC m=+0.147417778 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 11:12:56 compute-0 podman[511207]: 2025-10-03 11:12:56.934226065 +0000 UTC m=+0.145711564 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  3 11:12:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:12:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 6000.1 total, 600.0 interval
    Cumulative writes: 9884 writes, 35K keys, 9884 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
    Cumulative WAL: 9884 writes, 2638 syncs, 3.75 writes per sync, written: 0.02 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
     Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.1 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **

    ** Compaction Stats [m-0] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [m-0] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.1 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [m-0] **

    ** Compaction Stats [m-1] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [m-1] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.1 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Oct  3 11:12:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:12:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:12:59 compute-0 nova_compute[351685]: 2025-10-03 11:12:59.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:59 compute-0 nova_compute[351685]: 2025-10-03 11:12:59.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:12:59 compute-0 podman[157165]: time="2025-10-03T11:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:12:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:12:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9117 "" "Go-http-client/1.1"
Oct  3 11:13:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:01 compute-0 openstack_network_exporter[367524]: ERROR   11:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:13:01 compute-0 openstack_network_exporter[367524]: ERROR   11:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:13:01 compute-0 openstack_network_exporter[367524]: ERROR   11:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:13:01 compute-0 openstack_network_exporter[367524]: ERROR   11:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:13:01 compute-0 openstack_network_exporter[367524]: ERROR   11:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:13:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:13:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 6000.1 total, 600.0 interval
    Cumulative writes: 8065 writes, 29K keys, 8065 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
    Cumulative WAL: 8065 writes, 1991 syncs, 4.05 writes per sync, written: 0.02 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
     Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.1 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **

    ** Compaction Stats [m-0] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [m-0] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.1 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [m-0] **

    ** Compaction Stats [m-1] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

    ** Compaction Stats [m-1] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 6000.1 total, 4800.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Oct  3 11:13:04 compute-0 nova_compute[351685]: 2025-10-03 11:13:04.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:04 compute-0 nova_compute[351685]: 2025-10-03 11:13:04.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 11:13:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:08 compute-0 podman[511326]: 2025-10-03 11:13:08.838958309 +0000 UTC m=+0.094540451 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:13:08 compute-0 podman[511327]: 2025-10-03 11:13:08.874644248 +0000 UTC m=+0.113974431 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-type=git, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc.)
Oct  3 11:13:08 compute-0 podman[511328]: 2025-10-03 11:13:08.880371831 +0000 UTC m=+0.116037357 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  3 11:13:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:09 compute-0 nova_compute[351685]: 2025-10-03 11:13:09.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:09 compute-0 nova_compute[351685]: 2025-10-03 11:13:09.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:13:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:13:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:13:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:13:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:13:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:13:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 63128c49-95bf-460c-b6ab-558bccc7020f does not exist
Oct  3 11:13:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 09aaaa46-40ef-4ee5-a02f-ba89580f54a3 does not exist
Oct  3 11:13:10 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e610ac57-1296-4b73-82ec-f32cf797f1bc does not exist
Oct  3 11:13:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:13:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:13:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:13:10 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:13:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:13:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:13:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:13:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:13:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
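The mon_command/audit burst above is the cephadm mgr module gathering what it needs before probing the host's disks: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and the set of destroyed OSDs. The same queries can be replayed by hand with the standard ceph CLI; a sketch, assuming an admin keyring is available on this host:

    import subprocess

    def ceph(*args):
        # Thin wrapper over the ceph CLI; raises if the command fails.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    minimal_conf  = ceph("config", "generate-minimal-conf")
    admin_key     = ceph("auth", "get", "client.admin")
    bootstrap_key = ceph("auth", "get", "client.bootstrap-osd")
    destroyed     = ceph("osd", "tree", "destroyed", "--format", "json")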
Oct  3 11:13:11 compute-0 podman[511654]: 2025-10-03 11:13:11.222588682 +0000 UTC m=+0.086439411 container create 2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bartik, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 11:13:11 compute-0 podman[511654]: 2025-10-03 11:13:11.193703299 +0000 UTC m=+0.057554078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:13:11 compute-0 systemd[1]: Started libpod-conmon-2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0.scope.
Oct  3 11:13:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:13:11 compute-0 podman[511654]: 2025-10-03 11:13:11.381572317 +0000 UTC m=+0.245423136 container init 2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:13:11 compute-0 podman[511654]: 2025-10-03 11:13:11.401584997 +0000 UTC m=+0.265435766 container start 2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:13:11 compute-0 podman[511654]: 2025-10-03 11:13:11.408614042 +0000 UTC m=+0.272464881 container attach 2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bartik, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 11:13:11 compute-0 wonderful_bartik[511669]: 167 167
Oct  3 11:13:11 compute-0 systemd[1]: libpod-2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0.scope: Deactivated successfully.
Oct  3 11:13:11 compute-0 podman[511654]: 2025-10-03 11:13:11.415606094 +0000 UTC m=+0.279456863 container died 2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bartik, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ad50d73d572bbb3a24a3a34b8dd5e8e94cb4c83d550d0c41a30e9d3a97f632d-merged.mount: Deactivated successfully.
Oct  3 11:13:11 compute-0 podman[511654]: 2025-10-03 11:13:11.500935169 +0000 UTC m=+0.364785928 container remove 2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bartik, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:13:11 compute-0 systemd[1]: libpod-conmon-2a1e85853c11fcf9b92385efeb1f1b1c8b06c52ac5d0f6129f0b876ea2a9b1a0.scope: Deactivated successfully.
Oct  3 11:13:11 compute-0 podman[511691]: 2025-10-03 11:13:11.76688007 +0000 UTC m=+0.053362765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:13:11 compute-0 podman[511691]: 2025-10-03 11:13:11.863643659 +0000 UTC m=+0.150126334 container create 68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 11:13:11 compute-0 systemd[1]: Started libpod-conmon-68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f.scope.
Oct  3 11:13:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20432a1abbc106b3175de2f482f686c4a43533c67f373dc046a3f1d630dac67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20432a1abbc106b3175de2f482f686c4a43533c67f373dc046a3f1d630dac67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20432a1abbc106b3175de2f482f686c4a43533c67f373dc046a3f1d630dac67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20432a1abbc106b3175de2f482f686c4a43533c67f373dc046a3f1d630dac67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20432a1abbc106b3175de2f482f686c4a43533c67f373dc046a3f1d630dac67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:12 compute-0 podman[511691]: 2025-10-03 11:13:12.054349058 +0000 UTC m=+0.340831733 container init 68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 11:13:12 compute-0 podman[511691]: 2025-10-03 11:13:12.075534154 +0000 UTC m=+0.362016809 container start 68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_snyder, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:13:12 compute-0 podman[511691]: 2025-10-03 11:13:12.080923986 +0000 UTC m=+0.367406681 container attach 68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_snyder, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:13:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:13 compute-0 suspicious_snyder[511706]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:13:13 compute-0 suspicious_snyder[511706]: --> relative data size: 1.0
Oct  3 11:13:13 compute-0 suspicious_snyder[511706]: --> All data devices are unavailable
Oct  3 11:13:13 compute-0 systemd[1]: libpod-68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f.scope: Deactivated successfully.
Oct  3 11:13:13 compute-0 systemd[1]: libpod-68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f.scope: Consumed 1.362s CPU time.
Oct  3 11:13:13 compute-0 podman[511691]: 2025-10-03 11:13:13.503737353 +0000 UTC m=+1.790219978 container died 68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_snyder, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 11:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20432a1abbc106b3175de2f482f686c4a43533c67f373dc046a3f1d630dac67-merged.mount: Deactivated successfully.
Oct  3 11:13:13 compute-0 podman[511691]: 2025-10-03 11:13:13.599366777 +0000 UTC m=+1.885849412 container remove 68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_snyder, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 11:13:13 compute-0 systemd[1]: libpod-conmon-68a62223e3b67cddfab2b3ab8c332362c9510ba3ec2e2f7ceff63653cea09a7f.scope: Deactivated successfully.
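The suspicious_snyder container consumed about 1.4 s of CPU and printed ceph-volume batch-style output: 0 physical and 3 LVM data devices passed in, relative data size 1.0, and "All data devices are unavailable", meaning all three LVs are already consumed by existing OSDs, so there is nothing new to provision. Under that reading, an equivalent dry run can be requested directly; the invocation below is an assumption built from ceph-volume's documented --report mode and the LV paths listed later in this log:

    import subprocess

    # Dry-run plan only; already-prepared LVs are rejected from the
    # plan, which is what "All data devices are unavailable" reflects.
    report = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        capture_output=True, text=True)
    print(report.stdout or report.stderr)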
Oct  3 11:13:14 compute-0 nova_compute[351685]: 2025-10-03 11:13:14.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:14 compute-0 nova_compute[351685]: 2025-10-03 11:13:14.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:14 compute-0 podman[511886]: 2025-10-03 11:13:14.896079738 +0000 UTC m=+0.076785413 container create 335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:13:14 compute-0 podman[511886]: 2025-10-03 11:13:14.864617053 +0000 UTC m=+0.045322768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:13:14 compute-0 systemd[1]: Started libpod-conmon-335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e.scope.
Oct  3 11:13:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:13:15 compute-0 podman[511886]: 2025-10-03 11:13:15.078614386 +0000 UTC m=+0.259320121 container init 335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 11:13:15 compute-0 podman[511886]: 2025-10-03 11:13:15.098220112 +0000 UTC m=+0.278925757 container start 335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 11:13:15 compute-0 podman[511886]: 2025-10-03 11:13:15.105132032 +0000 UTC m=+0.285837747 container attach 335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:13:15 compute-0 nice_johnson[511901]: 167 167
Oct  3 11:13:15 compute-0 podman[511886]: 2025-10-03 11:13:15.112452826 +0000 UTC m=+0.293158541 container died 335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:13:15 compute-0 systemd[1]: libpod-335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e.scope: Deactivated successfully.
Oct  3 11:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-71f0073a39b29ecfba38fffe6863589ddd5d21b9cec73411d6a3b3170204d0a5-merged.mount: Deactivated successfully.
Oct  3 11:13:15 compute-0 podman[511886]: 2025-10-03 11:13:15.188964358 +0000 UTC m=+0.369670003 container remove 335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:13:15 compute-0 systemd[1]: libpod-conmon-335409f409ec750b09a8b0517bf97673a2a466117a6cf0c1740fd3c0a824511e.scope: Deactivated successfully.
Oct  3 11:13:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:15 compute-0 podman[511924]: 2025-10-03 11:13:15.441374517 +0000 UTC m=+0.084048354 container create efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 11:13:15 compute-0 podman[511924]: 2025-10-03 11:13:15.408719555 +0000 UTC m=+0.051393442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:13:15 compute-0 systemd[1]: Started libpod-conmon-efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c.scope.
Oct  3 11:13:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a052f7126eb21568cf780b7813b7d4c9c7b455386bd0d0a5643e56bf8c0ab4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a052f7126eb21568cf780b7813b7d4c9c7b455386bd0d0a5643e56bf8c0ab4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a052f7126eb21568cf780b7813b7d4c9c7b455386bd0d0a5643e56bf8c0ab4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a052f7126eb21568cf780b7813b7d4c9c7b455386bd0d0a5643e56bf8c0ab4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:15 compute-0 podman[511924]: 2025-10-03 11:13:15.604515046 +0000 UTC m=+0.247188863 container init efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:13:15 compute-0 podman[511924]: 2025-10-03 11:13:15.621997235 +0000 UTC m=+0.264671052 container start efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 11:13:15 compute-0 podman[511924]: 2025-10-03 11:13:15.62717944 +0000 UTC m=+0.269853277 container attach efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:13:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]: {
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:    "0": [
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:        {
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "devices": [
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "/dev/loop3"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            ],
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_name": "ceph_lv0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_size": "21470642176",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "name": "ceph_lv0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "tags": {
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cluster_name": "ceph",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.crush_device_class": "",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.encrypted": "0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osd_id": "0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.type": "block",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.vdo": "0"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            },
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "type": "block",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "vg_name": "ceph_vg0"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:        }
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:    ],
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:    "1": [
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:        {
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "devices": [
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "/dev/loop4"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            ],
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_name": "ceph_lv1",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_size": "21470642176",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "name": "ceph_lv1",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "tags": {
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cluster_name": "ceph",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.crush_device_class": "",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.encrypted": "0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osd_id": "1",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.type": "block",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.vdo": "0"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            },
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "type": "block",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "vg_name": "ceph_vg1"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:        }
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:    ],
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:    "2": [
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:        {
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "devices": [
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "/dev/loop5"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            ],
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_name": "ceph_lv2",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_size": "21470642176",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "name": "ceph_lv2",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "tags": {
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.cluster_name": "ceph",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.crush_device_class": "",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.encrypted": "0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osd_id": "2",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.type": "block",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:                "ceph.vdo": "0"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            },
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "type": "block",
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:            "vg_name": "ceph_vg2"
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:        }
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]:    ]
Oct  3 11:13:16 compute-0 wonderful_heyrovsky[511941]: }
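The JSON that wonderful_heyrovsky printed above is keyed by OSD id and matches the shape of ceph-volume lvm list --format json: one LV record per OSD, with the backing device and the ceph.* LV tags. A short sketch that reduces it to one line per OSD, assuming the block above has been saved to a file named lvm_list.json (the filename is illustrative):

    import json

    with open("lvm_list.json") as f:
        report = json.load(f)

    # One line per OSD: id, LV path, backing device(s), osd fsid.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")

For this host it yields osd.0 through osd.2 on /dev/loop3, /dev/loop4 and /dev/loop5, the same three LVM data devices the batch report counted earlier.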
Oct  3 11:13:16 compute-0 systemd[1]: libpod-efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c.scope: Deactivated successfully.
Oct  3 11:13:16 compute-0 podman[511924]: 2025-10-03 11:13:16.400001364 +0000 UTC m=+1.042675161 container died efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 11:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a052f7126eb21568cf780b7813b7d4c9c7b455386bd0d0a5643e56bf8c0ab4a-merged.mount: Deactivated successfully.
Oct  3 11:13:16 compute-0 podman[511924]: 2025-10-03 11:13:16.476116705 +0000 UTC m=+1.118790502 container remove efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:13:16 compute-0 systemd[1]: libpod-conmon-efbc2b464bc11e15196e7b636f5f7dea7e28438d6f7c5f4fd19198b2ba1d896c.scope: Deactivated successfully.
Oct  3 11:13:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:17 compute-0 podman[512100]: 2025-10-03 11:13:17.557580253 +0000 UTC m=+0.086439621 container create 833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:13:17 compute-0 podman[512100]: 2025-10-03 11:13:17.523850606 +0000 UTC m=+0.052710014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:13:17 compute-0 systemd[1]: Started libpod-conmon-833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019.scope.
Oct  3 11:13:17 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:13:17 compute-0 podman[512100]: 2025-10-03 11:13:17.697988516 +0000 UTC m=+0.226847964 container init 833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:13:17 compute-0 podman[512100]: 2025-10-03 11:13:17.716372753 +0000 UTC m=+0.245232081 container start 833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 11:13:17 compute-0 podman[512100]: 2025-10-03 11:13:17.721809847 +0000 UTC m=+0.250669265 container attach 833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 11:13:17 compute-0 gracious_austin[512115]: 167 167
Oct  3 11:13:17 compute-0 systemd[1]: libpod-833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019.scope: Deactivated successfully.
Oct  3 11:13:17 compute-0 podman[512100]: 2025-10-03 11:13:17.727171308 +0000 UTC m=+0.256030646 container died 833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e3c7cf0daa701e51459d8d7d6888d1d4da9a93e0efe4357d5cf192afccbd5f5-merged.mount: Deactivated successfully.
Oct  3 11:13:17 compute-0 podman[512100]: 2025-10-03 11:13:17.813182064 +0000 UTC m=+0.342041402 container remove 833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:13:17 compute-0 systemd[1]: libpod-conmon-833b03c3a19cbd802f17894d5dd7e7fcea052eb146374224ac4435bc95bd5019.scope: Deactivated successfully.
Oct  3 11:13:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:18 compute-0 podman[512140]: 2025-10-03 11:13:18.088605068 +0000 UTC m=+0.076908017 container create 7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 11:13:18 compute-0 podman[512140]: 2025-10-03 11:13:18.057187754 +0000 UTC m=+0.045490723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:13:18 compute-0 systemd[1]: Started libpod-conmon-7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c.scope.
Oct  3 11:13:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77911ea70a850a1ce50f9210134b283c3dccb06363c18f36265fbdf72668a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77911ea70a850a1ce50f9210134b283c3dccb06363c18f36265fbdf72668a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77911ea70a850a1ce50f9210134b283c3dccb06363c18f36265fbdf72668a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b77911ea70a850a1ce50f9210134b283c3dccb06363c18f36265fbdf72668a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:13:18 compute-0 podman[512140]: 2025-10-03 11:13:18.240814127 +0000 UTC m=+0.229117076 container init 7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:13:18 compute-0 podman[512140]: 2025-10-03 11:13:18.269313887 +0000 UTC m=+0.257616826 container start 7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:13:18 compute-0 podman[512140]: 2025-10-03 11:13:18.275170654 +0000 UTC m=+0.263473643 container attach 7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:13:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:19 compute-0 nova_compute[351685]: 2025-10-03 11:13:19.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:19 compute-0 nova_compute[351685]: 2025-10-03 11:13:19.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:19 compute-0 romantic_wiles[512157]: {
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "osd_id": 1,
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "type": "bluestore"
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:    },
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "osd_id": 2,
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "type": "bluestore"
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:    },
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "osd_id": 0,
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:        "type": "bluestore"
Oct  3 11:13:19 compute-0 romantic_wiles[512157]:    }
Oct  3 11:13:19 compute-0 romantic_wiles[512157]: }
Oct  3 11:13:19 compute-0 systemd[1]: libpod-7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c.scope: Deactivated successfully.
Oct  3 11:13:19 compute-0 systemd[1]: libpod-7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c.scope: Consumed 1.268s CPU time.
Oct  3 11:13:19 compute-0 podman[512140]: 2025-10-03 11:13:19.540913866 +0000 UTC m=+1.529216805 container died 7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 11:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b77911ea70a850a1ce50f9210134b283c3dccb06363c18f36265fbdf72668a9-merged.mount: Deactivated successfully.
Oct  3 11:13:19 compute-0 podman[512140]: 2025-10-03 11:13:19.651770005 +0000 UTC m=+1.640072954 container remove 7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:13:19 compute-0 systemd[1]: libpod-conmon-7932568051e3dcf6aa2ad5fda384d7e75712afcf22f6c78d3f7c278eacaedf7c.scope: Deactivated successfully.
Oct  3 11:13:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:13:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:13:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:13:19 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:13:19 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c0f883a2-d969-42b6-9536-0b4e210a5672 does not exist
Oct  3 11:13:19 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a2634003-759a-4932-84d9-ea4651d0fc75 does not exist
Oct  3 11:13:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:13:20 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:13:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:24 compute-0 nova_compute[351685]: 2025-10-03 11:13:24.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:24 compute-0 nova_compute[351685]: 2025-10-03 11:13:24.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:27 compute-0 podman[512256]: 2025-10-03 11:13:27.891736075 +0000 UTC m=+0.119660872 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 11:13:27 compute-0 podman[512254]: 2025-10-03 11:13:27.902911181 +0000 UTC m=+0.129980771 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 11:13:27 compute-0 podman[512257]: 2025-10-03 11:13:27.903735528 +0000 UTC m=+0.109841508 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Oct  3 11:13:27 compute-0 podman[512272]: 2025-10-03 11:13:27.914921265 +0000 UTC m=+0.119372813 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  3 11:13:27 compute-0 podman[512255]: 2025-10-03 11:13:27.914814982 +0000 UTC m=+0.136156949 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  3 11:13:27 compute-0 podman[512253]: 2025-10-03 11:13:27.920516914 +0000 UTC m=+0.153914315 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:13:27 compute-0 podman[512263]: 2025-10-03 11:13:27.980815459 +0000 UTC m=+0.181239528 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  3 11:13:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:29 compute-0 nova_compute[351685]: 2025-10-03 11:13:29.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:29 compute-0 nova_compute[351685]: 2025-10-03 11:13:29.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:29 compute-0 podman[157165]: time="2025-10-03T11:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:13:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:13:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9117 "" "Go-http-client/1.1"
Oct  3 11:13:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:31 compute-0 openstack_network_exporter[367524]: ERROR   11:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:13:31 compute-0 openstack_network_exporter[367524]: ERROR   11:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:13:31 compute-0 openstack_network_exporter[367524]: ERROR   11:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:13:31 compute-0 openstack_network_exporter[367524]: ERROR   11:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:13:31 compute-0 openstack_network_exporter[367524]: ERROR   11:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:13:31 compute-0 nova_compute[351685]: 2025-10-03 11:13:31.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:31 compute-0 nova_compute[351685]: 2025-10-03 11:13:31.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:13:31 compute-0 nova_compute[351685]: 2025-10-03 11:13:31.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:13:32 compute-0 nova_compute[351685]: 2025-10-03 11:13:32.145 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:13:32 compute-0 nova_compute[351685]: 2025-10-03 11:13:32.145 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:13:32 compute-0 nova_compute[351685]: 2025-10-03 11:13:32.146 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:13:32 compute-0 nova_compute[351685]: 2025-10-03 11:13:32.146 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:13:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:33 compute-0 nova_compute[351685]: 2025-10-03 11:13:33.750 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:13:33 compute-0 nova_compute[351685]: 2025-10-03 11:13:33.776 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:13:33 compute-0 nova_compute[351685]: 2025-10-03 11:13:33.777 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:13:34 compute-0 nova_compute[351685]: 2025-10-03 11:13:34.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:34 compute-0 nova_compute[351685]: 2025-10-03 11:13:34.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.855655) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490015855718, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 765, "num_deletes": 251, "total_data_size": 1013547, "memory_usage": 1027304, "flush_reason": "Manual Compaction"}
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490015865943, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 1004463, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63976, "largest_seqno": 64740, "table_properties": {"data_size": 1000440, "index_size": 1802, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8791, "raw_average_key_size": 19, "raw_value_size": 992475, "raw_average_value_size": 2200, "num_data_blocks": 80, "num_entries": 451, "num_filter_entries": 451, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759489948, "oldest_key_time": 1759489948, "file_creation_time": 1759490015, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 10343 microseconds, and 4269 cpu microseconds.
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.866008) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 1004463 bytes OK
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.866033) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.868323) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.868344) EVENT_LOG_v1 {"time_micros": 1759490015868337, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.868371) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1009669, prev total WAL file size 1009669, number of live WAL files 2.
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.869470) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(980KB)], [155(8235KB)]
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490015869552, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 9437897, "oldest_snapshot_seqno": -1}
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 7364 keys, 7677013 bytes, temperature: kUnknown
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490015941504, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 7677013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7634579, "index_size": 22929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 194274, "raw_average_key_size": 26, "raw_value_size": 7507662, "raw_average_value_size": 1019, "num_data_blocks": 895, "num_entries": 7364, "num_filter_entries": 7364, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490015, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.941932) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 7677013 bytes
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.943724) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.7 rd, 106.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(17.0) write-amplify(7.6) OK, records in: 7877, records dropped: 513 output_compression: NoCompression
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.943869) EVENT_LOG_v1 {"time_micros": 1759490015943732, "job": 96, "event": "compaction_finished", "compaction_time_micros": 72193, "compaction_time_cpu_micros": 41453, "output_level": 6, "num_output_files": 1, "total_output_size": 7677013, "num_input_records": 7877, "num_output_records": 7364, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490015944171, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490015946086, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.869105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.946228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.946255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.946256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.946257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:13:35 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:13:35.946259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:13:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:38 compute-0 nova_compute[351685]: 2025-10-03 11:13:38.772 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:39 compute-0 nova_compute[351685]: 2025-10-03 11:13:39.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:39 compute-0 nova_compute[351685]: 2025-10-03 11:13:39.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:39 compute-0 podman[512390]: 2025-10-03 11:13:39.852560032 +0000 UTC m=+0.101867454 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:13:39 compute-0 podman[512392]: 2025-10-03 11:13:39.876709512 +0000 UTC m=+0.118047000 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 11:13:39 compute-0 podman[512391]: 2025-10-03 11:13:39.888559861 +0000 UTC m=+0.128878236 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.29.0, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.902 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.903 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.903 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.910 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.911 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:13:40.912797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{'inspect_vnics': {}}], pollster history [{'network.outgoing.packets.drop': [<NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.918 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
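
The block above is one complete polling cycle, and every meter below repeats the same shape: the manager registers the pollster, checks whether it needs coordination (none is configured here, hence the [None] hashrings), records a heartbeat on worker thread 14 that thread 12 persists as "Updated heartbeat for ...", and _stats_to_sample emits one sample per resource or device before the "Finished polling" line. The sample lines all follow "<instance uuid>/<meter> volume: <value>", so they are easy to aggregate; a hedged sketch for log mining, not ceilometer code:

    import re
    import sys
    from collections import defaultdict

    SAMPLE = re.compile(r"([0-9a-f-]{36})/([\w.]+) volume: (\d+)")

    samples = defaultdict(list)
    for line in sys.stdin:
        m = SAMPLE.search(line)
        if m:
            uuid, meter, value = m.groups()
            samples[(uuid, meter)].append(int(value))

    # One entry per device/interface, in the order the agent emitted them.
    for (uuid, meter), values in sorted(samples.items()):
        print(f"{uuid} {meter}: {values}")
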
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.919 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.919 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.919 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.920 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.920 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.920 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.920 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:13:40.919669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:13:40.920588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.943 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.943 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.943 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
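
The three capacity samples are one per block device: 1073741824 B = 2^30 B = 1 GiB twice, matching the m1.small flavor's disk: 1 and ephemeral: 1 (GB) in the discovery record, plus a 485376-byte (474 KiB) device, plausibly the config-drive image (an assumption; device names are not shown at this log level). Quick arithmetic check:

    # Values copied from the log lines above.
    assert 1073741824 == 2**30   # 1 GiB: the root and ephemeral disks
    print(485376 / 1024)         # 474.0 KiB: likely a config drive (assumption)
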
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.944 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.944 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.944 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.944 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:13:40.944552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.987 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.988 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.989 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.990 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.990 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:13:40.989455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.990 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.991 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.991 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.992 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:13:40.991710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.992 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.992 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.993 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.994 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:13:40.993910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.995 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.996 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:13:40.996097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.996 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.997 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.998 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:13:40.998578) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:40.999 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.000 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.000 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:13:41.000819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.027 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
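
The power.state volume of 1 is the numeric domain state reported by libvirt: in the virDomainState enum, 1 is VIR_DOMAIN_RUNNING, consistent with 'OS-EXT-STS:vm_state': 'running' in the discovery record. For reference when reading these samples (the enum values are libvirt's; the lookup table is just a reading aid):

    # libvirt virDomainState values.
    VIR_DOMAIN_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        2: "BLOCKED",
        3: "PAUSED",
        4: "SHUTDOWN",
        5: "SHUTOFF",
        6: "CRASHED",
        7: "PMSUSPENDED",
    }
    print(VIR_DOMAIN_STATE[1])  # RUNNING
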
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.028 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.028 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.028 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:13:41.028437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.029 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.030 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.030 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:13:41.030204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.030 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.031 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.032 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.033 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:13:41.033410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.034 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.035 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
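
The .rate meter is skipped because discovery handed the pollster no new resources this cycle. An equivalent per-second rate can still be derived in post-processing from two samples of the cumulative network.incoming.bytes meter. A minimal sketch with illustrative values (not taken from this log):

    from datetime import datetime

    def rate(v1, t1, v2, t2):
        """Per-second rate from two cumulative samples."""
        dt = (t2 - t1).total_seconds()
        return (v2 - v1) / dt if dt > 0 else 0.0

    t1 = datetime(2025, 10, 3, 11, 13, 41)
    t2 = datetime(2025, 10, 3, 11, 14, 41)   # assumed 60 s polling interval
    print(rate(2552, t1, 4096, t2))          # bytes/s
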
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.036 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.036 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:13:41.036784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.038 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.039 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:13:41.039567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.040 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.041 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:13:41.041511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.042 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:13:41.042986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.045 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 95070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
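
The cpu meter is cumulative guest CPU time in nanoseconds, so the 95070000000 above is about 95.07 s of vCPU time since the instance started. Percentage utilisation over a polling interval follows from two samples and the vCPU count (1 for this m1.small flavor); a sketch with an assumed second sample:

    NS_PER_S = 1_000_000_000

    def cpu_util(cpu1_ns, cpu2_ns, interval_s, vcpus):
        """Percent CPU utilisation between two cumulative cpu samples."""
        return 100.0 * (cpu2_ns - cpu1_ns) / (interval_s * vcpus * NS_PER_S)

    print(95_070_000_000 / NS_PER_S)                        # 95.07 s so far
    print(cpu_util(95_070_000_000, 95_670_000_000, 60, 1))  # 1.0 (%), assumed values
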
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.046 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:13:41.044802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.046 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:13:41.046780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.048 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:13:41.048739) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:13:41.050715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.052 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:13:41.052669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.054 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:13:41.054912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:13:41.056916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:13:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:13:41.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
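The ceilometer excerpt above is one complete polling pass: for each meter the agent runs local_instances discovery, checks whether the pollster needs coordination (none is configured, so the hashring is None), updates a heartbeat, and emits the measured volume from _stats_to_sample, before the closing burst of "Finished processing pollster" lines. A minimal, illustrative sketch (assuming only the log format shown above, not any ceilometer internals) that pulls the per-meter volumes out of such an excerpt:

    import re

    # Matches the "_stats_to_sample" DEBUG lines above, e.g.
    #   ... b43db93c-.../memory.usage volume: 48.81640625 _stats_to_sample ...
    SAMPLE_RE = re.compile(
        r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) "
        r"volume: (?P<volume>[\d.]+) _stats_to_sample"
    )

    def extract_samples(lines):
        """Yield (instance_uuid, meter, volume) from ceilometer DEBUG logs."""
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                yield m["instance"], m["meter"], float(m["volume"])

For this pass the sketch would yield cpu=95070000000 (cumulative ns), network.incoming.packets.error=0, network.outgoing.bytes=2552, network.outgoing.bytes.delta=0, memory.usage=48.81640625 (MB), network.incoming.bytes=2856 and network.outgoing.packets=26.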
Oct  3 11:13:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:13:41.686 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:13:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:13:41.687 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:13:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:13:41.687 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:13:41 compute-0 nova_compute[351685]: 2025-10-03 11:13:41.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:41 compute-0 nova_compute[351685]: 2025-10-03 11:13:41.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:13:41 compute-0 nova_compute[351685]: 2025-10-03 11:13:41.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:13:41 compute-0 nova_compute[351685]: 2025-10-03 11:13:41.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:13:41 compute-0 nova_compute[351685]: 2025-10-03 11:13:41.761 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:13:41 compute-0 nova_compute[351685]: 2025-10-03 11:13:41.762 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:13:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:13:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379005081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:13:42 compute-0 nova_compute[351685]: 2025-10-03 11:13:42.291 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
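To size its RBD-backed disk inventory the resource tracker shells out to the same ceph CLI an operator would use; here the call took 0.530s. A standalone reproduction (assumes the client.openstack keyring and /etc/ceph/ceph.conf paths as logged; JSON field names as in current Ceph releases):

    import json
    import subprocess

    # The exact command nova_compute runs above.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    df = json.loads(out)

    # 'stats' holds cluster-wide totals, 'pools' the per-pool usage.
    total = df["stats"]["total_bytes"] / 2**30
    avail = df["stats"]["total_avail_bytes"] / 2**30
    print(f"{avail:.1f} GiB available of {total:.1f} GiB")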
Oct  3 11:13:42 compute-0 nova_compute[351685]: 2025-10-03 11:13:42.393 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:13:42 compute-0 nova_compute[351685]: 2025-10-03 11:13:42.394 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:13:42 compute-0 nova_compute[351685]: 2025-10-03 11:13:42.394 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:13:42 compute-0 nova_compute[351685]: 2025-10-03 11:13:42.962 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:13:42 compute-0 nova_compute[351685]: 2025-10-03 11:13:42.964 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3779MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:13:42 compute-0 nova_compute[351685]: 2025-10-03 11:13:42.964 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:13:42 compute-0 nova_compute[351685]: 2025-10-03 11:13:42.964 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
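The "Hypervisor/Node resource view" line above embeds the tracked PCI devices as a JSON list (on this KVM guest they are all either 8086 chipset functions or 1af4 virtio devices, NUMA node null). Purely for illustration, filtering that payload once it has been cut out of the line (truncated to two entries here):

    import json

    # pci_devices=[...] excerpted from the resource view line above.
    pci_devices = json.loads("""[
      {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0",
       "product_id": "1002", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1002", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3",
       "product_id": "7113", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7113", "dev_type": "type-PCI"}
    ]""")

    virtio = [d["address"] for d in pci_devices if d["vendor_id"] == "1af4"]
    print(len(virtio), "virtio function(s):", virtio)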
Oct  3 11:13:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.051 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.052 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.052 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.078 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.098 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.099 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.121 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.162 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.198 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:13:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:13:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/333919175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.738 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.752 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.790 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
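Placement treats each inventory as (total - reserved) × allocation_ratio of schedulable capacity, so the unchanged inventory above advertises 32 VCPUs, 7167 MB of RAM and 52 GB of disk, against which the single instance's {VCPU: 1, MEMORY_MB: 512, DISK_GB: 2} allocation is counted. The arithmetic, checked:

    # Inventory exactly as logged for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2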
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.794 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:13:43 compute-0 nova_compute[351685]: 2025-10-03 11:13:43.795 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:13:44 compute-0 nova_compute[351685]: 2025-10-03 11:13:44.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:44 compute-0 nova_compute[351685]: 2025-10-03 11:13:44.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:44 compute-0 nova_compute[351685]: 2025-10-03 11:13:44.796 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:44 compute-0 nova_compute[351685]: 2025-10-03 11:13:44.797 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:44 compute-0 nova_compute[351685]: 2025-10-03 11:13:44.798 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:13:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:13:46
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'images']
Oct  3 11:13:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
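The balancer pass above ran in upmap mode with the default 5% max-misplaced budget and prepared 0 of the 10 allowed changes, i.e. the 321 PGs are already evenly placed across the eleven pools listed. The same state can be queried on demand; a sketch (admin credentials assumed, JSON field names per recent Ceph releases):

    import json
    import subprocess

    # 'ceph balancer status' reports the active flag, mode and queued plans.
    status = json.loads(
        subprocess.run(["ceph", "balancer", "status"],
                       check=True, capture_output=True, text=True).stdout
    )
    print(status["mode"], "active:", status["active"])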
Oct  3 11:13:46 compute-0 nova_compute[351685]: 2025-10-03 11:13:46.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:13:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:47 compute-0 nova_compute[351685]: 2025-10-03 11:13:47.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:47 compute-0 nova_compute[351685]: 2025-10-03 11:13:47.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:49 compute-0 nova_compute[351685]: 2025-10-03 11:13:49.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:49 compute-0 nova_compute[351685]: 2025-10-03 11:13:49.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:50 compute-0 nova_compute[351685]: 2025-10-03 11:13:50.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:13:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:13:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3323638169' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:13:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:13:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3323638169' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:13:54 compute-0 nova_compute[351685]: 2025-10-03 11:13:54.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:54 compute-0 nova_compute[351685]: 2025-10-03 11:13:54.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:13:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
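Each pg_autoscaler line above computes a pool's PG target from its share of raw capacity: capacity ratio × bias × a cluster-wide PG budget, then quantizes to a power of two bounded by per-pool minimums (hence the 1, 16 and 32 results above). In this excerpt every logged target equals ratio × bias × 300; in the autoscaler that constant comes from mon_target_pg_per_osd (default 100) scaled by OSD count and replica size, a split the log alone does not reveal. Reproducing the logged numbers under that observation:

    # (capacity_ratio, bias) pairs copied from the pg_autoscaler lines above.
    pools = {
        "vms":                (0.000551649390343166, 1.0),
        "images":             (0.00025334537995702286, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    PG_BUDGET = 300  # empirically, ratio * bias * 300 matches every line above

    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET:.6g}")
    # vms ~ 0.165495, images ~ 0.0760036, cephfs.cephfs.meta ~ 0.000610471,
    # matching the logged targets before power-of-two quantization.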
Oct  3 11:13:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3198: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:13:58 compute-0 podman[512521]: 2025-10-03 11:13:58.885473763 +0000 UTC m=+0.080491071 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 11:13:58 compute-0 podman[512499]: 2025-10-03 11:13:58.885531324 +0000 UTC m=+0.128171543 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:13:58 compute-0 podman[512503]: 2025-10-03 11:13:58.885659799 +0000 UTC m=+0.102237846 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:13:58 compute-0 podman[512501]: 2025-10-03 11:13:58.897086414 +0000 UTC m=+0.128268127 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  3 11:13:58 compute-0 podman[512509]: 2025-10-03 11:13:58.901842465 +0000 UTC m=+0.118292358 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct  3 11:13:58 compute-0 podman[512500]: 2025-10-03 11:13:58.902459305 +0000 UTC m=+0.135742825 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350)
Oct  3 11:13:58 compute-0 podman[512513]: 2025-10-03 11:13:58.92111898 +0000 UTC m=+0.119913129 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
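
The four health_status events above are podman's periodic healthcheck timers firing; each event repeats the container's full config_data and reports the current health_status and health_failing_streak. A minimal Python sketch for pulling those fields out of such a journal line (the regex and the naive key=value lookup are assumptions based on the lines shown here, not a parser podman provides):

    import re

    # Matches "container health_status <64-hex id> (key=value, key=value, ...)".
    EVENT = re.compile(r"container health_status (?P<cid>[0-9a-f]{64}) \((?P<fields>.*)\)$")

    def parse_health_event(line):
        m = EVENT.search(line)
        if m is None:
            return None
        fields = m.group("fields")

        def field(key):
            # Values may themselves contain commas (e.g. config_data), so this
            # naive lookup is only safe for simple scalar keys like the ones below.
            fm = re.search(rf"\b{key}=([^,)]+)", fields)
            return fm.group(1) if fm else None

        return {
            "container_id": m.group("cid"),
            "name": field("name"),
            "health_status": field("health_status"),
            "failing_streak": field("health_failing_streak"),
        }

For the first event above this returns name=ovn_metadata_agent, health_status=healthy, failing_streak=0.
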
Oct  3 11:13:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3199: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:13:59 compute-0 nova_compute[351685]: 2025-10-03 11:13:59.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:59 compute-0 nova_compute[351685]: 2025-10-03 11:13:59.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:13:59 compute-0 podman[157165]: time="2025-10-03T11:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:13:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:13:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9099 "" "Go-http-client/1.1"
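
The two GET requests above are the podman system service answering its own libpod REST API over a unix socket (the podman_exporter config later in this log points CONTAINER_HOST at /run/podman/podman.sock). A small sketch of issuing the same containers/json query from Python's standard library; the UnixHTTPConnection helper is illustrative, not part of podman:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix domain socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c["Names"], c["State"])

Run it as a user with access to the socket (root here); the 200 responses above carried about 46 KiB of container listings and 9 KiB of stats.
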
Oct  3 11:14:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:01 compute-0 openstack_network_exporter[367524]: ERROR   11:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:14:01 compute-0 openstack_network_exporter[367524]: ERROR   11:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:14:01 compute-0 openstack_network_exporter[367524]: ERROR   11:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:14:01 compute-0 openstack_network_exporter[367524]: ERROR   11:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:14:01 compute-0 openstack_network_exporter[367524]: ERROR   11:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
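
The openstack_network_exporter errors above are the exporter failing to reach daemons through ovs-appctl-style control sockets: ovn-northd is a control-plane daemon that is not containerized on this node, and the dpif-netdev/pmd-* commands only apply when a userspace (netdev) datapath exists, so these errors are likely benign here. Daemons expose those sockets as <daemon>.<pid>.ctl files in their rundir; a sketch of the lookup the error message implies, using the host paths from the exporter's volume mounts above (the exact filename convention is an assumption):

    import glob
    import os

    OVS_RUNDIR = "/var/run/openvswitch"        # mounted into the exporter as /run/openvswitch
    OVN_RUNDIR = "/var/lib/openvswitch/ovn"    # mounted into the exporter as /run/ovn

    def control_sockets(rundir, daemon):
        # appctl targets a daemon via its "<daemon>.<pid>.ctl" unix socket.
        return glob.glob(os.path.join(rundir, daemon + ".*.ctl"))

    for rundir, daemon in [(OVS_RUNDIR, "ovsdb-server"), (OVN_RUNDIR, "ovn-northd")]:
        socks = control_sockets(rundir, daemon)
        print(daemon, "->", socks or "no control socket files found")
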
Oct  3 11:14:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3201: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:04 compute-0 nova_compute[351685]: 2025-10-03 11:14:04.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:04 compute-0 nova_compute[351685]: 2025-10-03 11:14:04.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3204: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:09 compute-0 nova_compute[351685]: 2025-10-03 11:14:09.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:09 compute-0 nova_compute[351685]: 2025-10-03 11:14:09.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:10 compute-0 podman[512634]: 2025-10-03 11:14:10.851191696 +0000 UTC m=+0.101314186 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:14:10 compute-0 podman[512635]: 2025-10-03 11:14:10.854369627 +0000 UTC m=+0.103085152 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct  3 11:14:10 compute-0 podman[512636]: 2025-10-03 11:14:10.881781922 +0000 UTC m=+0.116104998 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:14:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:14 compute-0 nova_compute[351685]: 2025-10-03 11:14:14.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:14 compute-0 nova_compute[351685]: 2025-10-03 11:14:14.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:14:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:14:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:17 compute-0 nova_compute[351685]: 2025-10-03 11:14:17.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:19 compute-0 nova_compute[351685]: 2025-10-03 11:14:19.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:19 compute-0 nova_compute[351685]: 2025-10-03 11:14:19.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:14:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:14:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:14:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:14:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:14:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:14:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7136cafd-0be7-4402-9a56-c0ca318cccc5 does not exist
Oct  3 11:14:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 51ea8c23-d9f7-46b1-8699-11dbf5d5804d does not exist
Oct  3 11:14:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a44b77f9-d5fd-4e64-ba13-857fc51937f4 does not exist
Oct  3 11:14:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:14:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:14:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:14:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:14:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:14:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:14:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:14:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:14:21 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
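
The burst of handle_command/audit lines above is the cephadm mgr module driving the mon over the mon_command interface with JSON payloads such as {"prefix": "config generate-minimal-conf"}. The same call can be made from the librados Python binding (python3-rados); a sketch, assuming an admin keyring at the usual location:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        if ret == 0:
            print(outbuf.decode())   # minimal ceph.conf text
        else:
            print("mon_command failed:", ret, outs)
    finally:
        cluster.shutdown()

Each such call shows up in the mon's audit channel exactly like the dispatch lines above.
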
Oct  3 11:14:22 compute-0 podman[512960]: 2025-10-03 11:14:22.477474259 +0000 UTC m=+0.090492720 container create 4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 11:14:22 compute-0 podman[512960]: 2025-10-03 11:14:22.44241404 +0000 UTC m=+0.055432491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:14:22 compute-0 systemd[1]: Started libpod-conmon-4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4.scope.
Oct  3 11:14:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:14:22 compute-0 podman[512960]: 2025-10-03 11:14:22.633367246 +0000 UTC m=+0.246385767 container init 4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:14:22 compute-0 podman[512960]: 2025-10-03 11:14:22.650944628 +0000 UTC m=+0.263963069 container start 4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:14:22 compute-0 podman[512960]: 2025-10-03 11:14:22.658102247 +0000 UTC m=+0.271120768 container attach 4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 11:14:22 compute-0 elastic_volhard[512976]: 167 167
Oct  3 11:14:22 compute-0 systemd[1]: libpod-4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4.scope: Deactivated successfully.
Oct  3 11:14:22 compute-0 podman[512960]: 2025-10-03 11:14:22.665076539 +0000 UTC m=+0.278094970 container died 4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-361c871538e7d16ef1556c8d8994c82532c19ec8098d4a0f178a950633e00a9d-merged.mount: Deactivated successfully.
Oct  3 11:14:22 compute-0 podman[512960]: 2025-10-03 11:14:22.735989903 +0000 UTC m=+0.349008334 container remove 4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:14:22 compute-0 systemd[1]: libpod-conmon-4f3103275d3f844f058399f6c104e862f7d2a9d4c3c342c76379cfa03ef813a4.scope: Deactivated successfully.
Oct  3 11:14:22 compute-0 podman[512999]: 2025-10-03 11:14:22.962744643 +0000 UTC m=+0.070464341 container create c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:14:23 compute-0 podman[512999]: 2025-10-03 11:14:22.935044419 +0000 UTC m=+0.042764127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:14:23 compute-0 systemd[1]: Started libpod-conmon-c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08.scope.
Oct  3 11:14:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2333884d7b3e28b464da8d1ad011715afc58629e08a7092469a8c1af2f7ce8bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2333884d7b3e28b464da8d1ad011715afc58629e08a7092469a8c1af2f7ce8bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2333884d7b3e28b464da8d1ad011715afc58629e08a7092469a8c1af2f7ce8bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2333884d7b3e28b464da8d1ad011715afc58629e08a7092469a8c1af2f7ce8bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2333884d7b3e28b464da8d1ad011715afc58629e08a7092469a8c1af2f7ce8bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:23 compute-0 podman[512999]: 2025-10-03 11:14:23.132005627 +0000 UTC m=+0.239725345 container init c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:14:23 compute-0 podman[512999]: 2025-10-03 11:14:23.147187032 +0000 UTC m=+0.254906720 container start c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 11:14:23 compute-0 podman[512999]: 2025-10-03 11:14:23.153502894 +0000 UTC m=+0.261222612 container attach c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 11:14:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:24 compute-0 angry_euler[513015]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:14:24 compute-0 angry_euler[513015]: --> relative data size: 1.0
Oct  3 11:14:24 compute-0 angry_euler[513015]: --> All data devices are unavailable
Oct  3 11:14:24 compute-0 systemd[1]: libpod-c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08.scope: Deactivated successfully.
Oct  3 11:14:24 compute-0 podman[512999]: 2025-10-03 11:14:24.426427895 +0000 UTC m=+1.534147633 container died c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 11:14:24 compute-0 systemd[1]: libpod-c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08.scope: Consumed 1.226s CPU time.
Oct  3 11:14:24 compute-0 nova_compute[351685]: 2025-10-03 11:14:24.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:24 compute-0 nova_compute[351685]: 2025-10-03 11:14:24.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2333884d7b3e28b464da8d1ad011715afc58629e08a7092469a8c1af2f7ce8bc-merged.mount: Deactivated successfully.
Oct  3 11:14:24 compute-0 podman[512999]: 2025-10-03 11:14:24.524750324 +0000 UTC m=+1.632470032 container remove c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 11:14:24 compute-0 systemd[1]: libpod-conmon-c260c1a83ace97c7d8fea841a04d79bb4aca1ac9b977b8969d950732b8ac0e08.scope: Deactivated successfully.
Oct  3 11:14:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:25 compute-0 podman[513198]: 2025-10-03 11:14:25.628199525 +0000 UTC m=+0.058453348 container create ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  3 11:14:25 compute-0 systemd[1]: Started libpod-conmon-ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f.scope.
Oct  3 11:14:25 compute-0 podman[513198]: 2025-10-03 11:14:25.595548432 +0000 UTC m=+0.025802345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:14:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:14:25 compute-0 podman[513198]: 2025-10-03 11:14:25.74581085 +0000 UTC m=+0.176064673 container init ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:14:25 compute-0 podman[513198]: 2025-10-03 11:14:25.75709531 +0000 UTC m=+0.187349173 container start ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 11:14:25 compute-0 zealous_goodall[513214]: 167 167
Oct  3 11:14:25 compute-0 podman[513198]: 2025-10-03 11:14:25.7636854 +0000 UTC m=+0.193939263 container attach ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:14:25 compute-0 systemd[1]: libpod-ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f.scope: Deactivated successfully.
Oct  3 11:14:25 compute-0 podman[513198]: 2025-10-03 11:14:25.764691493 +0000 UTC m=+0.194945346 container died ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 11:14:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd754e35c45c6643405f221c38824cf937a82949cac79c08b228e9eb6a3f159f-merged.mount: Deactivated successfully.
Oct  3 11:14:25 compute-0 podman[513198]: 2025-10-03 11:14:25.825870296 +0000 UTC m=+0.256124129 container remove ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:14:25 compute-0 systemd[1]: libpod-conmon-ddd5df4fb6910b9a651df43c5682a76d1992f4b9dd6b37e9b1f7bfe56ff0647f.scope: Deactivated successfully.
Oct  3 11:14:26 compute-0 podman[513237]: 2025-10-03 11:14:26.080718983 +0000 UTC m=+0.086858915 container create cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 11:14:26 compute-0 podman[513237]: 2025-10-03 11:14:26.043841875 +0000 UTC m=+0.049981867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:14:26 compute-0 systemd[1]: Started libpod-conmon-cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3.scope.
Oct  3 11:14:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c5eaae9a616dce24d25fdb00bc9c3f513ba40a245e547c63f9ddbdf507b23a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c5eaae9a616dce24d25fdb00bc9c3f513ba40a245e547c63f9ddbdf507b23a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c5eaae9a616dce24d25fdb00bc9c3f513ba40a245e547c63f9ddbdf507b23a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6c5eaae9a616dce24d25fdb00bc9c3f513ba40a245e547c63f9ddbdf507b23a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:26 compute-0 podman[513237]: 2025-10-03 11:14:26.253642073 +0000 UTC m=+0.259782055 container init cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wozniak, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:14:26 compute-0 podman[513237]: 2025-10-03 11:14:26.267413253 +0000 UTC m=+0.273553165 container start cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wozniak, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:14:26 compute-0 podman[513237]: 2025-10-03 11:14:26.276425051 +0000 UTC m=+0.282565033 container attach cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 11:14:27 compute-0 silly_wozniak[513253]: {
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:    "0": [
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:        {
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "devices": [
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "/dev/loop3"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            ],
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_name": "ceph_lv0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_size": "21470642176",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "name": "ceph_lv0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "tags": {
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cluster_name": "ceph",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.crush_device_class": "",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.encrypted": "0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osd_id": "0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.type": "block",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.vdo": "0"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            },
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "type": "block",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "vg_name": "ceph_vg0"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:        }
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:    ],
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:    "1": [
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:        {
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "devices": [
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "/dev/loop4"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            ],
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_name": "ceph_lv1",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_size": "21470642176",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "name": "ceph_lv1",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "tags": {
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cluster_name": "ceph",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.crush_device_class": "",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.encrypted": "0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osd_id": "1",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.type": "block",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.vdo": "0"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            },
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "type": "block",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "vg_name": "ceph_vg1"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:        }
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:    ],
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:    "2": [
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:        {
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "devices": [
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "/dev/loop5"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            ],
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_name": "ceph_lv2",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_size": "21470642176",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "name": "ceph_lv2",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "tags": {
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.cluster_name": "ceph",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.crush_device_class": "",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.encrypted": "0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osd_id": "2",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.type": "block",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:                "ceph.vdo": "0"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            },
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "type": "block",
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:            "vg_name": "ceph_vg2"
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:        }
Oct  3 11:14:27 compute-0 silly_wozniak[513253]:    ]
Oct  3 11:14:27 compute-0 silly_wozniak[513253]: }
Oct  3 11:14:27 compute-0 systemd[1]: libpod-cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3.scope: Deactivated successfully.
Oct  3 11:14:27 compute-0 podman[513237]: 2025-10-03 11:14:27.108926931 +0000 UTC m=+1.115066883 container died cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6c5eaae9a616dce24d25fdb00bc9c3f513ba40a245e547c63f9ddbdf507b23a-merged.mount: Deactivated successfully.
Oct  3 11:14:27 compute-0 podman[513237]: 2025-10-03 11:14:27.190193776 +0000 UTC m=+1.196333688 container remove cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wozniak, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:14:27 compute-0 systemd[1]: libpod-conmon-cf1cbe99e4e358f4d2f9a0c346d1bbac66059e9ae351775a0090a806e706f0e3.scope: Deactivated successfully.
Oct  3 11:14:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:28 compute-0 podman[513410]: 2025-10-03 11:14:28.276784338 +0000 UTC m=+0.083728415 container create b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:14:28 compute-0 podman[513410]: 2025-10-03 11:14:28.234171047 +0000 UTC m=+0.041115164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:14:28 compute-0 systemd[1]: Started libpod-conmon-b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0.scope.
Oct  3 11:14:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:14:28 compute-0 podman[513410]: 2025-10-03 11:14:28.408524893 +0000 UTC m=+0.215468980 container init b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:14:28 compute-0 podman[513410]: 2025-10-03 11:14:28.419901727 +0000 UTC m=+0.226845774 container start b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:14:28 compute-0 podman[513410]: 2025-10-03 11:14:28.425380532 +0000 UTC m=+0.232324619 container attach b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:14:28 compute-0 pensive_nobel[513426]: 167 167
Oct  3 11:14:28 compute-0 systemd[1]: libpod-b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0.scope: Deactivated successfully.
Oct  3 11:14:28 compute-0 podman[513410]: 2025-10-03 11:14:28.430190356 +0000 UTC m=+0.237134423 container died b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-560f8ca3088bc0447d972fad63a8874b7bd525006d42280b495509fda14cf2b1-merged.mount: Deactivated successfully.
Oct  3 11:14:28 compute-0 podman[513410]: 2025-10-03 11:14:28.485475341 +0000 UTC m=+0.292419398 container remove b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_nobel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:14:28 compute-0 systemd[1]: libpod-conmon-b09ba8a4bb8f3486126618ae986fed2226d1a398bef8db292cf75796952d9af0.scope: Deactivated successfully.
Oct  3 11:14:28 compute-0 podman[513450]: 2025-10-03 11:14:28.706408674 +0000 UTC m=+0.066884706 container create da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:14:28 compute-0 podman[513450]: 2025-10-03 11:14:28.670696984 +0000 UTC m=+0.031173056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:14:28 compute-0 systemd[1]: Started libpod-conmon-da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263.scope.
Oct  3 11:14:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e0c1956c601d691dc1d48932f078947b789208bcd52d0e8119c5d58fcfa88e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e0c1956c601d691dc1d48932f078947b789208bcd52d0e8119c5d58fcfa88e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e0c1956c601d691dc1d48932f078947b789208bcd52d0e8119c5d58fcfa88e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8e0c1956c601d691dc1d48932f078947b789208bcd52d0e8119c5d58fcfa88e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:14:28 compute-0 podman[513450]: 2025-10-03 11:14:28.875623328 +0000 UTC m=+0.236099380 container init da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 11:14:28 compute-0 podman[513450]: 2025-10-03 11:14:28.884035936 +0000 UTC m=+0.244511958 container start da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:14:28 compute-0 podman[513450]: 2025-10-03 11:14:28.888323583 +0000 UTC m=+0.248799665 container attach da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:14:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:29 compute-0 nova_compute[351685]: 2025-10-03 11:14:29.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:29 compute-0 nova_compute[351685]: 2025-10-03 11:14:29.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:29 compute-0 podman[157165]: time="2025-10-03T11:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:14:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47832 "" "Go-http-client/1.1"
Oct  3 11:14:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9519 "" "Go-http-client/1.1"
Oct  3 11:14:29 compute-0 podman[513482]: 2025-10-03 11:14:29.890733157 +0000 UTC m=+0.104665993 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 11:14:29 compute-0 podman[513477]: 2025-10-03 11:14:29.901402658 +0000 UTC m=+0.140432195 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 11:14:29 compute-0 podman[513476]: 2025-10-03 11:14:29.904213707 +0000 UTC m=+0.143344107 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:14:29 compute-0 podman[513497]: 2025-10-03 11:14:29.908589328 +0000 UTC m=+0.103729143 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:14:29 compute-0 podman[513478]: 2025-10-03 11:14:29.919384132 +0000 UTC m=+0.131104267 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:14:29 compute-0 podman[513481]: 2025-10-03 11:14:29.923960348 +0000 UTC m=+0.134542537 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  3 11:14:29 compute-0 podman[513494]: 2025-10-03 11:14:29.940355531 +0000 UTC m=+0.148555514 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 11:14:30 compute-0 cool_napier[513466]: {
Oct  3 11:14:30 compute-0 cool_napier[513466]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "osd_id": 1,
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "type": "bluestore"
Oct  3 11:14:30 compute-0 cool_napier[513466]:    },
Oct  3 11:14:30 compute-0 cool_napier[513466]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "osd_id": 2,
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "type": "bluestore"
Oct  3 11:14:30 compute-0 cool_napier[513466]:    },
Oct  3 11:14:30 compute-0 cool_napier[513466]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "osd_id": 0,
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:14:30 compute-0 cool_napier[513466]:        "type": "bluestore"
Oct  3 11:14:30 compute-0 cool_napier[513466]:    }
Oct  3 11:14:30 compute-0 cool_napier[513466]: }
Oct  3 11:14:30 compute-0 systemd[1]: libpod-da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263.scope: Deactivated successfully.
Oct  3 11:14:30 compute-0 conmon[513466]: conmon da0fd7da22bc6426c3b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263.scope/container/memory.events
Oct  3 11:14:30 compute-0 podman[513450]: 2025-10-03 11:14:30.054055112 +0000 UTC m=+1.414531134 container died da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:14:30 compute-0 systemd[1]: libpod-da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263.scope: Consumed 1.141s CPU time.
Oct  3 11:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8e0c1956c601d691dc1d48932f078947b789208bcd52d0e8119c5d58fcfa88e-merged.mount: Deactivated successfully.
Oct  3 11:14:30 compute-0 podman[513450]: 2025-10-03 11:14:30.132686793 +0000 UTC m=+1.493162815 container remove da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:14:30 compute-0 systemd[1]: libpod-conmon-da0fd7da22bc6426c3b984405e1240be981e6c43cb89d82924a4de3a546b8263.scope: Deactivated successfully.
Oct  3 11:14:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:14:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:14:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:14:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:14:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b8492a04-da94-4ce0-8b70-705b645caea2 does not exist
Oct  3 11:14:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3a31f1b0-e9cf-412a-897f-1164e5cccf38 does not exist
Oct  3 11:14:30 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:14:30 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:14:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:31 compute-0 openstack_network_exporter[367524]: ERROR   11:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:14:31 compute-0 openstack_network_exporter[367524]: ERROR   11:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:14:31 compute-0 openstack_network_exporter[367524]: ERROR   11:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:14:31 compute-0 openstack_network_exporter[367524]: ERROR   11:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:14:31 compute-0 openstack_network_exporter[367524]: ERROR   11:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:14:31 compute-0 nova_compute[351685]: 2025-10-03 11:14:31.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:31 compute-0 nova_compute[351685]: 2025-10-03 11:14:31.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:14:31 compute-0 nova_compute[351685]: 2025-10-03 11:14:31.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:14:32 compute-0 nova_compute[351685]: 2025-10-03 11:14:32.139 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:14:32 compute-0 nova_compute[351685]: 2025-10-03 11:14:32.140 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:14:32 compute-0 nova_compute[351685]: 2025-10-03 11:14:32.140 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:14:32 compute-0 nova_compute[351685]: 2025-10-03 11:14:32.141 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:14:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:33 compute-0 nova_compute[351685]: 2025-10-03 11:14:33.442 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:14:33 compute-0 nova_compute[351685]: 2025-10-03 11:14:33.465 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:14:33 compute-0 nova_compute[351685]: 2025-10-03 11:14:33.466 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:14:34 compute-0 nova_compute[351685]: 2025-10-03 11:14:34.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:34 compute-0 nova_compute[351685]: 2025-10-03 11:14:34.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Oct  3 11:14:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct  3 11:14:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Oct  3 11:14:39 compute-0 nova_compute[351685]: 2025-10-03 11:14:39.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:39 compute-0 nova_compute[351685]: 2025-10-03 11:14:39.461 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:39 compute-0 nova_compute[351685]: 2025-10-03 11:14:39.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 11:14:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:14:41.687 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:14:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:14:41.688 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:14:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:14:41.689 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:14:41 compute-0 podman[513692]: 2025-10-03 11:14:41.880771717 +0000 UTC m=+0.114834087 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:14:41 compute-0 podman[513694]: 2025-10-03 11:14:41.891053666 +0000 UTC m=+0.112933927 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 11:14:41 compute-0 podman[513693]: 2025-10-03 11:14:41.934026738 +0000 UTC m=+0.161976603 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm, release-0.7.12=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Oct  3 11:14:42 compute-0 nova_compute[351685]: 2025-10-03 11:14:42.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:42 compute-0 nova_compute[351685]: 2025-10-03 11:14:42.768 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:14:42 compute-0 nova_compute[351685]: 2025-10-03 11:14:42.769 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:14:42 compute-0 nova_compute[351685]: 2025-10-03 11:14:42.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:14:42 compute-0 nova_compute[351685]: 2025-10-03 11:14:42.770 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:14:42 compute-0 nova_compute[351685]: 2025-10-03 11:14:42.771 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:14:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:14:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2722233859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:14:43 compute-0 nova_compute[351685]: 2025-10-03 11:14:43.341 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:14:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 11:14:43 compute-0 nova_compute[351685]: 2025-10-03 11:14:43.445 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:14:43 compute-0 nova_compute[351685]: 2025-10-03 11:14:43.447 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:14:43 compute-0 nova_compute[351685]: 2025-10-03 11:14:43.447 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:14:43 compute-0 nova_compute[351685]: 2025-10-03 11:14:43.921 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:14:43 compute-0 nova_compute[351685]: 2025-10-03 11:14:43.923 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3802MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:14:43 compute-0 nova_compute[351685]: 2025-10-03 11:14:43.923 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:14:43 compute-0 nova_compute[351685]: 2025-10-03 11:14:43.924 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
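The acquire/release pair above comes from oslo.concurrency's lockutils guarding the resource tracker's critical section. A minimal sketch of the same pattern, assuming the oslo.concurrency package is installed (the function name is illustrative, not nova's actual code):

    # Serialize a critical section with an in-process named lock; the log
    # records the wait time at acquire and the hold time at release.
    from oslo_concurrency import lockutils

    def update_available_resource():
        with lockutils.lock("compute_resources"):
            pass  # recompute the hypervisor resource view here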
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.072 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.073 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.073 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.201 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:14:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2343626277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.815 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
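The "ceph df" round-trip above (dispatched at 11:14:44.201, handled by the mon, returned in 0.614s) is how nova samples RBD pool capacity. A rough reconstruction of that probe, assuming oslo.concurrency and a reachable ceph CLI; the id/conf arguments mirror the logged command, and the JSON field names follow recent Ceph releases:

    import json
    from oslo_concurrency import processutils

    # Shell out exactly as logged, then parse the JSON report.
    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    # Total/available bytes feed the free-disk figure in the resource view.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])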
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.828 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.849 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
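The inventory above translates into schedulable capacity in placement as (total - reserved) * allocation_ratio. A worked check of that arithmetic using the figures from the log line:

    # capacity = (total - reserved) * allocation_ratio
    def capacity(total, reserved, ratio):
        return (total - reserved) * ratio

    print(capacity(8, 0, 4.0))       # VCPU      -> 32.0 schedulable vCPUs
    print(capacity(7679, 512, 1.0))  # MEMORY_MB -> 7167.0 MB
    print(capacity(59, 1, 0.9))      # DISK_GB   -> 52.2 GB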
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.851 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:14:44 compute-0 nova_compute[351685]: 2025-10-03 11:14:44.851 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:14:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 11:14:45 compute-0 nova_compute[351685]: 2025-10-03 11:14:45.852 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:45 compute-0 nova_compute[351685]: 2025-10-03 11:14:45.854 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:45 compute-0 nova_compute[351685]: 2025-10-03 11:14:45.855 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
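The "Running periodic task ..." lines above come from oslo.service's PeriodicTasks runner, which invokes each decorated method on its own interval. A minimal sketch of that machinery, assuming oslo.service and oslo.config are installed (class and method names are illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # Each invocation is logged as "Running periodic task ..." above.
        @periodic_task.periodic_task(spacing=60)
        def _poll_rebooting_instances(self, context):
            pass

    Manager(cfg.CONF).run_periodic_tasks(context=None)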
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:14:46
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'images', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Oct  3 11:14:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:14:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Oct  3 11:14:47 compute-0 nova_compute[351685]: 2025-10-03 11:14:47.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:48 compute-0 nova_compute[351685]: 2025-10-03 11:14:48.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:48 compute-0 nova_compute[351685]: 2025-10-03 11:14:48.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  3 11:14:49 compute-0 nova_compute[351685]: 2025-10-03 11:14:49.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:49 compute-0 nova_compute[351685]: 2025-10-03 11:14:49.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Oct  3 11:14:52 compute-0 nova_compute[351685]: 2025-10-03 11:14:52.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:14:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:14:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3216709966' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:14:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:14:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3216709966' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:14:54 compute-0 nova_compute[351685]: 2025-10-03 11:14:54.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:54 compute-0 nova_compute[351685]: 2025-10-03 11:14:54.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:14:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
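The pg_autoscaler targets above are consistent with pg_target = usage_fraction * bias * (n_osds * mon_target_pg_per_osd); with the three OSDs implied by this cluster and the default of 100 PGs per OSD, the PG budget is 300. A sketch of that arithmetic, where the OSD count and the default are assumptions inferred from context:

    # pg_target = usage_fraction * bias * (n_osds * mon_target_pg_per_osd)
    N_OSDS, TARGET_PG_PER_OSD = 3, 100  # assumed for this cluster

    def pg_target(usage_fraction, bias):
        return usage_fraction * bias * N_OSDS * TARGET_PG_PER_OSD

    print(pg_target(0.000551649390343166, 1.0))   # ~0.1655, matches 'vms'
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061, matches 'cephfs.cephfs.meta'
    # The target is then quantized to a power of two and only applied when it
    # diverges far enough from the current pg_num, hence "quantized to 32".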
Oct  3 11:14:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:14:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:14:59 compute-0 nova_compute[351685]: 2025-10-03 11:14:59.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:59 compute-0 nova_compute[351685]: 2025-10-03 11:14:59.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:14:59 compute-0 podman[157165]: time="2025-10-03T11:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:14:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:14:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9112 "" "Go-http-client/1.1"
Oct  3 11:15:00 compute-0 podman[513799]: 2025-10-03 11:15:00.855284463 +0000 UTC m=+0.107509194 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct  3 11:15:00 compute-0 podman[513801]: 2025-10-03 11:15:00.861091238 +0000 UTC m=+0.096178732 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:15:00 compute-0 podman[513802]: 2025-10-03 11:15:00.873343879 +0000 UTC m=+0.108341110 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:15:00 compute-0 podman[513800]: 2025-10-03 11:15:00.882980417 +0000 UTC m=+0.114144936 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:15:00 compute-0 podman[513798]: 2025-10-03 11:15:00.886154019 +0000 UTC m=+0.138513374 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:15:00 compute-0 podman[513816]: 2025-10-03 11:15:00.893827193 +0000 UTC m=+0.120393875 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  3 11:15:00 compute-0 podman[513803]: 2025-10-03 11:15:00.900589899 +0000 UTC m=+0.136550071 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  3 11:15:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:01 compute-0 openstack_network_exporter[367524]: ERROR   11:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:15:01 compute-0 openstack_network_exporter[367524]: ERROR   11:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:15:01 compute-0 openstack_network_exporter[367524]: ERROR   11:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:15:01 compute-0 openstack_network_exporter[367524]: ERROR   11:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:15:01 compute-0 openstack_network_exporter[367524]: ERROR   11:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
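The exporter errors above mean no *.ctl control sockets were visible where it looked for ovsdb-server and ovn-northd. A quick diagnostic sketch; the paths are the conventional defaults mounted into the exporter container and may differ on your host:

    import glob, subprocess

    # The exporter needs ovs/ovn control sockets; list what is present.
    socks = glob.glob("/run/openvswitch/*.ctl") + glob.glob("/run/ovn/*.ctl")
    print(socks or "no control sockets found")
    if socks:
        # Probe one explicitly; `-t` points ovs-appctl at a given socket.
        print(subprocess.run(["ovs-appctl", "-t", socks[0], "version"],
                             capture_output=True, text=True).stdout)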
Oct  3 11:15:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:04 compute-0 nova_compute[351685]: 2025-10-03 11:15:04.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:04 compute-0 nova_compute[351685]: 2025-10-03 11:15:04.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:09 compute-0 nova_compute[351685]: 2025-10-03 11:15:09.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:09 compute-0 nova_compute[351685]: 2025-10-03 11:15:09.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:12 compute-0 podman[513938]: 2025-10-03 11:15:12.879941999 +0000 UTC m=+0.128264596 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:15:12 compute-0 podman[513940]: 2025-10-03 11:15:12.898770171 +0000 UTC m=+0.133082271 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:15:12 compute-0 podman[513939]: 2025-10-03 11:15:12.903895624 +0000 UTC m=+0.148161971 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-type=git, version=9.4, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=)
Oct  3 11:15:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3236: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:14 compute-0 nova_compute[351685]: 2025-10-03 11:15:14.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:14 compute-0 nova_compute[351685]: 2025-10-03 11:15:14.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:15:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:15:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3239: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:19 compute-0 nova_compute[351685]: 2025-10-03 11:15:19.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:19 compute-0 nova_compute[351685]: 2025-10-03 11:15:19.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3241: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:24 compute-0 nova_compute[351685]: 2025-10-03 11:15:24.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:24 compute-0 nova_compute[351685]: 2025-10-03 11:15:24.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3243: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3244: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:29 compute-0 nova_compute[351685]: 2025-10-03 11:15:29.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:29 compute-0 nova_compute[351685]: 2025-10-03 11:15:29.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:29 compute-0 podman[157165]: time="2025-10-03T11:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:15:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:15:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9100 "" "Go-http-client/1.1"
Oct  3 11:15:31 compute-0 podman[514098]: 2025-10-03 11:15:31.132849635 +0000 UTC m=+0.118231056 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:15:31 compute-0 podman[514099]: 2025-10-03 11:15:31.143086921 +0000 UTC m=+0.115389675 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 11:15:31 compute-0 podman[514120]: 2025-10-03 11:15:31.165607161 +0000 UTC m=+0.101271195 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct  3 11:15:31 compute-0 podman[514132]: 2025-10-03 11:15:31.166641433 +0000 UTC m=+0.090300193 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:15:31 compute-0 podman[514100]: 2025-10-03 11:15:31.173399949 +0000 UTC m=+0.137710308 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 11:15:31 compute-0 podman[514107]: 2025-10-03 11:15:31.181050223 +0000 UTC m=+0.128113381 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:15:31 compute-0 podman[514125]: 2025-10-03 11:15:31.222493006 +0000 UTC m=+0.141460157 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  3 11:15:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:31 compute-0 openstack_network_exporter[367524]: ERROR   11:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:15:31 compute-0 openstack_network_exporter[367524]: ERROR   11:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:15:31 compute-0 openstack_network_exporter[367524]: ERROR   11:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:15:31 compute-0 openstack_network_exporter[367524]: ERROR   11:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:15:31 compute-0 openstack_network_exporter[367524]: ERROR   11:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:15:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:15:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:15:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:15:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:15:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:15:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:15:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4a74ac95-53fe-40ac-8f4c-fa1da221baca does not exist
Oct  3 11:15:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 088f87c4-f960-4500-ab4a-c9e9340289af does not exist
Oct  3 11:15:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6175f31c-7feb-458e-b144-55412d64299d does not exist
Oct  3 11:15:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:15:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:15:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:15:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:15:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:15:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:15:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:15:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:15:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:15:32 compute-0 podman[514402]: 2025-10-03 11:15:32.823760342 +0000 UTC m=+0.081443422 container create 3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:15:32 compute-0 podman[514402]: 2025-10-03 11:15:32.797946057 +0000 UTC m=+0.055629117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:15:32 compute-0 systemd[1]: Started libpod-conmon-3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20.scope.
Oct  3 11:15:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:15:33 compute-0 podman[514402]: 2025-10-03 11:15:33.007079174 +0000 UTC m=+0.264762274 container init 3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 11:15:33 compute-0 podman[514402]: 2025-10-03 11:15:33.029092207 +0000 UTC m=+0.286775277 container start 3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:15:33 compute-0 podman[514402]: 2025-10-03 11:15:33.036458923 +0000 UTC m=+0.294142043 container attach 3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:15:33 compute-0 hopeful_ritchie[514418]: 167 167
Oct  3 11:15:33 compute-0 systemd[1]: libpod-3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20.scope: Deactivated successfully.
Oct  3 11:15:33 compute-0 podman[514402]: 2025-10-03 11:15:33.044122927 +0000 UTC m=+0.301806007 container died 3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 11:15:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a39502a9d51744574341408de6c5497bf402449a120d29c73460933a0a642021-merged.mount: Deactivated successfully.
Oct  3 11:15:33 compute-0 podman[514402]: 2025-10-03 11:15:33.141996742 +0000 UTC m=+0.399679822 container remove 3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ritchie, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 11:15:33 compute-0 systemd[1]: libpod-conmon-3cb7cda7e93cff68146017e35bc116be96a0c8fb22d8b254098d5a21797c2f20.scope: Deactivated successfully.
Oct  3 11:15:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:33 compute-0 podman[514443]: 2025-10-03 11:15:33.396602221 +0000 UTC m=+0.075848883 container create f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cori, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:15:33 compute-0 podman[514443]: 2025-10-03 11:15:33.372347037 +0000 UTC m=+0.051593799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:15:33 compute-0 systemd[1]: Started libpod-conmon-f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5.scope.
Oct  3 11:15:33 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:15:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81afc46bc0f0d3e870bf7a0ae06bd30e8f852fbfc990321ff11645e2bea6d10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81afc46bc0f0d3e870bf7a0ae06bd30e8f852fbfc990321ff11645e2bea6d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81afc46bc0f0d3e870bf7a0ae06bd30e8f852fbfc990321ff11645e2bea6d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81afc46bc0f0d3e870bf7a0ae06bd30e8f852fbfc990321ff11645e2bea6d10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e81afc46bc0f0d3e870bf7a0ae06bd30e8f852fbfc990321ff11645e2bea6d10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:33 compute-0 podman[514443]: 2025-10-03 11:15:33.567232169 +0000 UTC m=+0.246478911 container init f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cori, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:15:33 compute-0 podman[514443]: 2025-10-03 11:15:33.580475771 +0000 UTC m=+0.259722473 container start f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 11:15:33 compute-0 podman[514443]: 2025-10-03 11:15:33.587591409 +0000 UTC m=+0.266838101 container attach f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cori, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:15:33 compute-0 nova_compute[351685]: 2025-10-03 11:15:33.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:33 compute-0 nova_compute[351685]: 2025-10-03 11:15:33.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:15:33 compute-0 nova_compute[351685]: 2025-10-03 11:15:33.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:15:34 compute-0 nova_compute[351685]: 2025-10-03 11:15:34.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:34 compute-0 nova_compute[351685]: 2025-10-03 11:15:34.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:34 compute-0 nova_compute[351685]: 2025-10-03 11:15:34.732 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:15:34 compute-0 nova_compute[351685]: 2025-10-03 11:15:34.732 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:15:34 compute-0 nova_compute[351685]: 2025-10-03 11:15:34.733 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:15:34 compute-0 nova_compute[351685]: 2025-10-03 11:15:34.733 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:15:34 compute-0 awesome_cori[514460]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:15:34 compute-0 awesome_cori[514460]: --> relative data size: 1.0
Oct  3 11:15:34 compute-0 awesome_cori[514460]: --> All data devices are unavailable
Oct  3 11:15:34 compute-0 systemd[1]: libpod-f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5.scope: Deactivated successfully.
Oct  3 11:15:34 compute-0 systemd[1]: libpod-f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5.scope: Consumed 1.166s CPU time.
Oct  3 11:15:34 compute-0 podman[514489]: 2025-10-03 11:15:34.88397753 +0000 UTC m=+0.042620952 container died f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:15:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e81afc46bc0f0d3e870bf7a0ae06bd30e8f852fbfc990321ff11645e2bea6d10-merged.mount: Deactivated successfully.
Oct  3 11:15:34 compute-0 podman[514489]: 2025-10-03 11:15:34.971342939 +0000 UTC m=+0.129986351 container remove f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 11:15:34 compute-0 systemd[1]: libpod-conmon-f3d55498b910d40a35812b804d44556abbb012f4d5cfaf0ef61704398b5d29c5.scope: Deactivated successfully.
Oct  3 11:15:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:36 compute-0 podman[514639]: 2025-10-03 11:15:36.142761419 +0000 UTC m=+0.062368883 container create f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:15:36 compute-0 systemd[1]: Started libpod-conmon-f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a.scope.
Oct  3 11:15:36 compute-0 podman[514639]: 2025-10-03 11:15:36.11620628 +0000 UTC m=+0.035813724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:15:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:15:36 compute-0 podman[514639]: 2025-10-03 11:15:36.25178329 +0000 UTC m=+0.171390774 container init f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elion, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:15:36 compute-0 podman[514639]: 2025-10-03 11:15:36.261730847 +0000 UTC m=+0.181338301 container start f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elion, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:15:36 compute-0 podman[514639]: 2025-10-03 11:15:36.268381359 +0000 UTC m=+0.187988883 container attach f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 11:15:36 compute-0 eloquent_elion[514654]: 167 167
Oct  3 11:15:36 compute-0 systemd[1]: libpod-f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a.scope: Deactivated successfully.
Oct  3 11:15:36 compute-0 podman[514639]: 2025-10-03 11:15:36.27465612 +0000 UTC m=+0.194263544 container died f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elion, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-05184553a66175802932fda59bd20766994a11c4557eb65495574680dfca7274-merged.mount: Deactivated successfully.
Oct  3 11:15:36 compute-0 podman[514639]: 2025-10-03 11:15:36.326830085 +0000 UTC m=+0.246437509 container remove f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elion, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:15:36 compute-0 systemd[1]: libpod-conmon-f8ba4a3cc54759e58356e7792fe78c1a1f12c0ec05085bd12409cc060ac1cf4a.scope: Deactivated successfully.
Oct  3 11:15:36 compute-0 podman[514677]: 2025-10-03 11:15:36.620372077 +0000 UTC m=+0.083248349 container create 37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_varahamihira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 11:15:36 compute-0 podman[514677]: 2025-10-03 11:15:36.594505562 +0000 UTC m=+0.057381844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:15:36 compute-0 systemd[1]: Started libpod-conmon-37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710.scope.
Oct  3 11:15:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de34be1b671c7b76929b93c1879d1be3c99caf2ee91e9eab02106ab1f174cd1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de34be1b671c7b76929b93c1879d1be3c99caf2ee91e9eab02106ab1f174cd1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de34be1b671c7b76929b93c1879d1be3c99caf2ee91e9eab02106ab1f174cd1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de34be1b671c7b76929b93c1879d1be3c99caf2ee91e9eab02106ab1f174cd1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:36 compute-0 podman[514677]: 2025-10-03 11:15:36.7541777 +0000 UTC m=+0.217054012 container init 37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_varahamihira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 11:15:36 compute-0 podman[514677]: 2025-10-03 11:15:36.779189998 +0000 UTC m=+0.242066270 container start 37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Oct  3 11:15:36 compute-0 podman[514677]: 2025-10-03 11:15:36.785761127 +0000 UTC m=+0.248637419 container attach 37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 11:15:37 compute-0 nova_compute[351685]: 2025-10-03 11:15:36.996 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:15:37 compute-0 nova_compute[351685]: 2025-10-03 11:15:37.012 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:15:37 compute-0 nova_compute[351685]: 2025-10-03 11:15:37.012 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:15:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]: {
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:    "0": [
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:        {
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "devices": [
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "/dev/loop3"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            ],
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_name": "ceph_lv0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_size": "21470642176",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "name": "ceph_lv0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "tags": {
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cluster_name": "ceph",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.crush_device_class": "",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.encrypted": "0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osd_id": "0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.type": "block",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.vdo": "0"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            },
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "type": "block",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "vg_name": "ceph_vg0"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:        }
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:    ],
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:    "1": [
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:        {
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "devices": [
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "/dev/loop4"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            ],
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_name": "ceph_lv1",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_size": "21470642176",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "name": "ceph_lv1",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "tags": {
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cluster_name": "ceph",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.crush_device_class": "",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.encrypted": "0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osd_id": "1",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.type": "block",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.vdo": "0"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            },
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "type": "block",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "vg_name": "ceph_vg1"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:        }
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:    ],
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:    "2": [
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:        {
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "devices": [
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "/dev/loop5"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            ],
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_name": "ceph_lv2",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_size": "21470642176",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "name": "ceph_lv2",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "tags": {
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.cluster_name": "ceph",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.crush_device_class": "",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.encrypted": "0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osd_id": "2",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.type": "block",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:                "ceph.vdo": "0"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            },
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "type": "block",
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:            "vg_name": "ceph_vg2"
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:        }
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]:    ]
Oct  3 11:15:37 compute-0 cool_varahamihira[514693]: }
Oct  3 11:15:37 compute-0 systemd[1]: libpod-37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710.scope: Deactivated successfully.
Oct  3 11:15:37 compute-0 podman[514677]: 2025-10-03 11:15:37.683046366 +0000 UTC m=+1.145922638 container died 37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_varahamihira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:15:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-de34be1b671c7b76929b93c1879d1be3c99caf2ee91e9eab02106ab1f174cd1f-merged.mount: Deactivated successfully.
Oct  3 11:15:37 compute-0 podman[514677]: 2025-10-03 11:15:37.770084075 +0000 UTC m=+1.232960317 container remove 37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:15:37 compute-0 systemd[1]: libpod-conmon-37cfc7a50ca4312eb4d64d5fa6fac1580a3fd3fe11978824a8ad6bcd47c9d710.scope: Deactivated successfully.
Oct  3 11:15:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:38 compute-0 podman[514852]: 2025-10-03 11:15:38.91531574 +0000 UTC m=+0.085211952 container create bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_torvalds, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 11:15:38 compute-0 podman[514852]: 2025-10-03 11:15:38.87274362 +0000 UTC m=+0.042639892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:15:38 compute-0 systemd[1]: Started libpod-conmon-bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a.scope.
Oct  3 11:15:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:15:39 compute-0 podman[514852]: 2025-10-03 11:15:39.074802582 +0000 UTC m=+0.244698834 container init bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 11:15:39 compute-0 podman[514852]: 2025-10-03 11:15:39.091498615 +0000 UTC m=+0.261394817 container start bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_torvalds, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:15:39 compute-0 podman[514852]: 2025-10-03 11:15:39.09886525 +0000 UTC m=+0.268761522 container attach bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_torvalds, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 11:15:39 compute-0 objective_torvalds[514868]: 167 167
Oct  3 11:15:39 compute-0 systemd[1]: libpod-bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a.scope: Deactivated successfully.
Oct  3 11:15:39 compute-0 podman[514852]: 2025-10-03 11:15:39.104678655 +0000 UTC m=+0.274574857 container died bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct  3 11:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-801ea1698e5697ee4336e83e27f13e24c1740f2800e065391f308c1fbc0237b3-merged.mount: Deactivated successfully.
Oct  3 11:15:39 compute-0 podman[514852]: 2025-10-03 11:15:39.169396082 +0000 UTC m=+0.339292264 container remove bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:15:39 compute-0 systemd[1]: libpod-conmon-bcf7e990379108c41981a4be82051af8466ffbbd1f2e3a488c6f5b556f01258a.scope: Deactivated successfully.
Oct  3 11:15:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:39 compute-0 podman[514890]: 2025-10-03 11:15:39.425699275 +0000 UTC m=+0.090986246 container create 603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:15:39 compute-0 podman[514890]: 2025-10-03 11:15:39.389484628 +0000 UTC m=+0.054771659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:15:39 compute-0 systemd[1]: Started libpod-conmon-603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49.scope.
Oct  3 11:15:39 compute-0 nova_compute[351685]: 2025-10-03 11:15:39.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:39 compute-0 nova_compute[351685]: 2025-10-03 11:15:39.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9002100fd48225c8db1ee5d6ae52e18e14b1dfe22e5c1ae610cdc4b45e0df4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9002100fd48225c8db1ee5d6ae52e18e14b1dfe22e5c1ae610cdc4b45e0df4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9002100fd48225c8db1ee5d6ae52e18e14b1dfe22e5c1ae610cdc4b45e0df4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9002100fd48225c8db1ee5d6ae52e18e14b1dfe22e5c1ae610cdc4b45e0df4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
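
The four xfs remount warnings are benign: 0x7fffffff is 2^31 - 1 seconds after the Unix epoch, i.e. the year-2038 rollover of 32-bit inode timestamps, which the kernel flags on xfs filesystems created without the bigtime feature. The cutoff is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 seconds since 1970-01-01T00:00:00 UTC
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00
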
Oct  3 11:15:39 compute-0 podman[514890]: 2025-10-03 11:15:39.596757866 +0000 UTC m=+0.262044837 container init 603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 11:15:39 compute-0 podman[514890]: 2025-10-03 11:15:39.622619752 +0000 UTC m=+0.287906713 container start 603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 11:15:39 compute-0 podman[514890]: 2025-10-03 11:15:39.628643134 +0000 UTC m=+0.293930105 container attach 603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bardeen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]: {
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "osd_id": 1,
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "type": "bluestore"
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:    },
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "osd_id": 2,
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "type": "bluestore"
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:    },
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "osd_id": 0,
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:        "type": "bluestore"
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]:    }
Oct  3 11:15:40 compute-0 wonderful_bardeen[514904]: }
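
The JSON emitted by wonderful_bardeen enumerates three BlueStore OSDs (osd_id 0, 1, 2) on LVM devices, keyed by osd_uuid and all sharing ceph_fsid 9b4e8c9a-5555-5510-a631-4742a1182561. Once the journald prefixes are stripped, the payload parses as ordinary JSON; a minimal sketch (the prefix regex is an illustrative assumption):

    import json
    import re

    PREFIX = re.compile(r"^.*wonderful_bardeen\[\d+\]: ")

    def osd_devices(lines):
        """Strip journald prefixes, parse the JSON payload, map osd_id to device."""
        payload = "".join(PREFIX.sub("", ln) for ln in lines if "wonderful_bardeen[" in ln)
        data = json.loads(payload)
        return {e["osd_id"]: e["device"] for e in data.values()}

    # -> {1: "/dev/mapper/ceph_vg1-ceph_lv1",
    #     2: "/dev/mapper/ceph_vg2-ceph_lv2",
    #     0: "/dev/mapper/ceph_vg0-ceph_lv0"}
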
Oct  3 11:15:40 compute-0 systemd[1]: libpod-603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49.scope: Deactivated successfully.
Oct  3 11:15:40 compute-0 systemd[1]: libpod-603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49.scope: Consumed 1.222s CPU time.
Oct  3 11:15:40 compute-0 podman[514890]: 2025-10-03 11:15:40.850092573 +0000 UTC m=+1.515379544 container died 603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bardeen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c9002100fd48225c8db1ee5d6ae52e18e14b1dfe22e5c1ae610cdc4b45e0df4-merged.mount: Deactivated successfully.
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.903 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.904 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.915 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.915 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.916 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:15:40.916749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.924 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.926 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.926 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.926 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.926 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.927 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.927 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.927 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:15:40.927183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.928 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.928 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:15:40.930127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:40 compute-0 podman[514890]: 2025-10-03 11:15:40.931389527 +0000 UTC m=+1.596676458 container remove 603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:15:40 compute-0 systemd[1]: libpod-conmon-603eefd62027d82c7dc65bae7a3b52a240b0feac9e3686867e84876c4f5e4d49.scope: Deactivated successfully.
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.958 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.959 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.959 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
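
The three disk.device.capacity samples match the m1.small flavor reported in the discovery line at 11:15:40.915: 1073741824 B is exactly 1 GiB, once for the root disk (disk: 1) and once for the ephemeral disk (ephemeral: 1); the third, 485376 B (474 KiB), is plausibly a config-drive-style device, an inference from the size rather than anything the log states. The conversions:

    GiB = 1024 ** 3

    for vol in (1073741824, 1073741824, 485376):
        print(f"{vol} B = {vol / GiB:.6f} GiB ({vol / 1024:.0f} KiB)")
    # 1073741824 B = 1.000000 GiB (twice); 485376 B = 474 KiB
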
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.961 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.962 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.962 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:40.963 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:15:40.962283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:15:40 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:15:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:15:40 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:15:41 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 08cc94ff-7b71-4a8d-a6fe-f34e927d599e does not exist
Oct  3 11:15:41 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a0b38e01-5fef-4be9-80e6-210f522a172b does not exist
Oct  3 11:15:41 compute-0 nova_compute[351685]: 2025-10-03 11:15:41.006 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.015 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.016 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.017 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
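
disk.device.read.latency is reported in nanoseconds of cumulative I/O time per device, so the three samples decode to roughly 1.35 s, 0.24 s, and 0.11 s of total read time since the instance started (the per-device order mirrors the capacity samples above). A quick check:

    for ns in (1351272306, 240576853, 113683071):
        print(f"{ns} ns = {ns / 1e9:.3f} s")
    # 1.351 s, 0.241 s, 0.114 s
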
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.018 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.018 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.019 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.020 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.020 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.021 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.021 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:15:41.017380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:15:41.018793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:15:41.020107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:15:41.021432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.022 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.023 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.023 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.023 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.023 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:15:41.022761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.024 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:15:41.024197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.058 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:15:41.058188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.059 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:15:41.059675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.061 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:15:41.061125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.063 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:15:41.062352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:15:41.063135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.065 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.065 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.066 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.066 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 97090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:15:41.064204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:15:41.065000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:15:41.066101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:15:41.067214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.068 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.068 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.069 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.069 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:15:41.068113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:15:41.069018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:15:41.069999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.077 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:15:41.075385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.082 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.084 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:15:41.084364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:15:41.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:15:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3250: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:15:41.688 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:15:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:15:41.689 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:15:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:15:41.690 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:15:41 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:15:41 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:15:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:43 compute-0 nova_compute[351685]: 2025-10-03 11:15:43.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:43 compute-0 podman[515001]: 2025-10-03 11:15:43.874659148 +0000 UTC m=+0.123315909 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, version=9.4, architecture=x86_64, name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container)
Oct  3 11:15:43 compute-0 podman[515002]: 2025-10-03 11:15:43.876124795 +0000 UTC m=+0.118754503 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Oct  3 11:15:43 compute-0 podman[515000]: 2025-10-03 11:15:43.891601938 +0000 UTC m=+0.140732784 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:15:44 compute-0 nova_compute[351685]: 2025-10-03 11:15:44.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:44 compute-0 nova_compute[351685]: 2025-10-03 11:15:44.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:44 compute-0 nova_compute[351685]: 2025-10-03 11:15:44.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:44 compute-0 nova_compute[351685]: 2025-10-03 11:15:44.769 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:15:44 compute-0 nova_compute[351685]: 2025-10-03 11:15:44.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:15:44 compute-0 nova_compute[351685]: 2025-10-03 11:15:44.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:15:44 compute-0 nova_compute[351685]: 2025-10-03 11:15:44.770 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:15:44 compute-0 nova_compute[351685]: 2025-10-03 11:15:44.771 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:15:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:15:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1059434562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.321 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:15:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.436 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.438 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.438 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.898 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.901 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3764MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.901 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.902 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.996 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:15:45 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.998 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:15:46 compute-0 nova_compute[351685]: 2025-10-03 11:15:45.999 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:15:46 compute-0 nova_compute[351685]: 2025-10-03 11:15:46.049 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:15:46
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.control', 'default.rgw.log']
Oct  3 11:15:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:15:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:15:46 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2575525925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:15:46 compute-0 nova_compute[351685]: 2025-10-03 11:15:46.570 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:15:46 compute-0 nova_compute[351685]: 2025-10-03 11:15:46.583 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:15:46 compute-0 nova_compute[351685]: 2025-10-03 11:15:46.609 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:15:46 compute-0 nova_compute[351685]: 2025-10-03 11:15:46.613 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:15:46 compute-0 nova_compute[351685]: 2025-10-03 11:15:46.614 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:15:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:47 compute-0 nova_compute[351685]: 2025-10-03 11:15:47.616 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:47 compute-0 nova_compute[351685]: 2025-10-03 11:15:47.617 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:15:47 compute-0 nova_compute[351685]: 2025-10-03 11:15:47.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3254: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:49 compute-0 nova_compute[351685]: 2025-10-03 11:15:49.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:49 compute-0 nova_compute[351685]: 2025-10-03 11:15:49.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:49 compute-0 nova_compute[351685]: 2025-10-03 11:15:49.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:50 compute-0 nova_compute[351685]: 2025-10-03 11:15:50.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:15:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910510575' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:15:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:15:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3910510575' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:15:54 compute-0 nova_compute[351685]: 2025-10-03 11:15:54.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:54 compute-0 nova_compute[351685]: 2025-10-03 11:15:54.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:54 compute-0 nova_compute[351685]: 2025-10-03 11:15:54.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:15:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3257: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:15:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:15:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:15:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:15:59 compute-0 nova_compute[351685]: 2025-10-03 11:15:59.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:59 compute-0 nova_compute[351685]: 2025-10-03 11:15:59.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:15:59 compute-0 podman[157165]: time="2025-10-03T11:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:15:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:15:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9111 "" "Go-http-client/1.1"
Oct  3 11:16:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:01 compute-0 openstack_network_exporter[367524]: ERROR   11:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:16:01 compute-0 openstack_network_exporter[367524]: ERROR   11:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:16:01 compute-0 openstack_network_exporter[367524]: ERROR   11:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:16:01 compute-0 openstack_network_exporter[367524]: ERROR   11:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:16:01 compute-0 openstack_network_exporter[367524]: ERROR   11:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:16:01 compute-0 podman[515104]: 2025-10-03 11:16:01.887381734 +0000 UTC m=+0.130202638 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41)
Oct  3 11:16:01 compute-0 podman[515105]: 2025-10-03 11:16:01.890975379 +0000 UTC m=+0.120802758 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  3 11:16:01 compute-0 podman[515122]: 2025-10-03 11:16:01.90670407 +0000 UTC m=+0.113960929 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 11:16:01 compute-0 podman[515103]: 2025-10-03 11:16:01.909760048 +0000 UTC m=+0.157644214 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:16:01 compute-0 podman[515112]: 2025-10-03 11:16:01.924951064 +0000 UTC m=+0.149167664 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true)
Oct  3 11:16:01 compute-0 podman[515106]: 2025-10-03 11:16:01.925368107 +0000 UTC m=+0.144755153 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  3 11:16:01 compute-0 podman[515117]: 2025-10-03 11:16:01.931349797 +0000 UTC m=+0.145479436 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 11:16:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3261: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:04 compute-0 nova_compute[351685]: 2025-10-03 11:16:04.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:04 compute-0 nova_compute[351685]: 2025-10-03 11:16:04.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:09 compute-0 nova_compute[351685]: 2025-10-03 11:16:09.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:09 compute-0 nova_compute[351685]: 2025-10-03 11:16:09.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3266: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:14 compute-0 nova_compute[351685]: 2025-10-03 11:16:14.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:14 compute-0 nova_compute[351685]: 2025-10-03 11:16:14.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:14 compute-0 podman[515237]: 2025-10-03 11:16:14.825606818 +0000 UTC m=+0.102154574 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:16:14 compute-0 podman[515239]: 2025-10-03 11:16:14.830868745 +0000 UTC m=+0.092421011 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible)
Oct  3 11:16:14 compute-0 podman[515238]: 2025-10-03 11:16:14.84762503 +0000 UTC m=+0.126995176 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, io.openshift.expose-services=, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git)
Oct  3 11:16:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:16:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:16:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3269: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:19 compute-0 nova_compute[351685]: 2025-10-03 11:16:19.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:19 compute-0 nova_compute[351685]: 2025-10-03 11:16:19.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:19 compute-0 nova_compute[351685]: 2025-10-03 11:16:19.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:24 compute-0 nova_compute[351685]: 2025-10-03 11:16:24.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:16:24 compute-0 nova_compute[351685]: 2025-10-03 11:16:24.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:29 compute-0 nova_compute[351685]: 2025-10-03 11:16:29.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:16:29 compute-0 nova_compute[351685]: 2025-10-03 11:16:29.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:29 compute-0 nova_compute[351685]: 2025-10-03 11:16:29.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:16:29 compute-0 nova_compute[351685]: 2025-10-03 11:16:29.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:16:29 compute-0 nova_compute[351685]: 2025-10-03 11:16:29.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:16:29 compute-0 nova_compute[351685]: 2025-10-03 11:16:29.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:29 compute-0 podman[157165]: time="2025-10-03T11:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:16:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:16:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9096 "" "Go-http-client/1.1"
Oct  3 11:16:31 compute-0 openstack_network_exporter[367524]: ERROR   11:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:16:31 compute-0 openstack_network_exporter[367524]: ERROR   11:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:16:31 compute-0 openstack_network_exporter[367524]: ERROR   11:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:16:31 compute-0 openstack_network_exporter[367524]: ERROR   11:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:16:31 compute-0 openstack_network_exporter[367524]: ERROR   11:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:16:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:32 compute-0 podman[515296]: 2025-10-03 11:16:32.867109458 +0000 UTC m=+0.110859290 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:16:32 compute-0 podman[515297]: 2025-10-03 11:16:32.875664281 +0000 UTC m=+0.116037985 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 11:16:32 compute-0 podman[515299]: 2025-10-03 11:16:32.876507918 +0000 UTC m=+0.096570754 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 11:16:32 compute-0 podman[515304]: 2025-10-03 11:16:32.888775389 +0000 UTC m=+0.108009289 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:16:32 compute-0 podman[515298]: 2025-10-03 11:16:32.893688657 +0000 UTC m=+0.130535469 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  3 11:16:32 compute-0 podman[515321]: 2025-10-03 11:16:32.910554756 +0000 UTC m=+0.122016978 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  3 11:16:32 compute-0 podman[515312]: 2025-10-03 11:16:32.929719767 +0000 UTC m=+0.155109353 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001)
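The four health_status=healthy events above are podman's periodic healthchecks firing for ceilometer_agent_compute, ovn_metadata_agent, iscsid and ovn_controller: each container's healthcheck.test command (e.g. /openstack/healthcheck compute) runs inside the container and its exit code sets the status. A minimal sketch for reading the same state back from the host, assuming podman is on PATH and the container names match the name= labels above:

    import json
    import subprocess

    def container_health(name):
        """Return podman's recorded health state for one container.

        `podman inspect` exposes .State.Health, which carries the Status,
        FailingStreak and recent Log entries that surface in the journal
        as health_status= and health_failing_streak= above.
        """
        out = subprocess.check_output(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name]
        )
        return json.loads(out)

    print(container_health("ceilometer_agent_compute")["Status"])  # "healthy"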
Oct  3 11:16:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3276: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:33 compute-0 nova_compute[351685]: 2025-10-03 11:16:33.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:33 compute-0 nova_compute[351685]: 2025-10-03 11:16:33.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:16:33 compute-0 nova_compute[351685]: 2025-10-03 11:16:33.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:16:34 compute-0 nova_compute[351685]: 2025-10-03 11:16:34.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:16:34 compute-0 nova_compute[351685]: 2025-10-03 11:16:34.735 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:16:34 compute-0 nova_compute[351685]: 2025-10-03 11:16:34.736 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:16:34 compute-0 nova_compute[351685]: 2025-10-03 11:16:34.736 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:16:34 compute-0 nova_compute[351685]: 2025-10-03 11:16:34.737 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:16:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:37 compute-0 nova_compute[351685]: 2025-10-03 11:16:37.743 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:16:37 compute-0 nova_compute[351685]: 2025-10-03 11:16:37.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:16:37 compute-0 nova_compute[351685]: 2025-10-03 11:16:37.757 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
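The heal cycle above (11:16:33–11:16:37) is the standard oslo pattern: a periodic task fires, takes a per-instance refresh_cache-<uuid> lock, pulls fresh network info from Neutron, and releases the lock. A minimal sketch of that shape, assuming oslo.service, oslo.config and oslo.concurrency are installed; the class and method names are illustrative, not nova's actual code:

    from oslo_concurrency import lockutils
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_info_cache(self, context):
            uuid = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"  # instance from the log
            # Serializes cache refreshes per instance; acquiring and releasing
            # this named lock is what produces the lockutils DEBUG lines above.
            with lockutils.lock("refresh_cache-%s" % uuid):
                pass  # refresh the instance_info_cache from Neutron here

    # The service loop calls mgr.run_periodic_tasks(context) on its timer.
    mgr = Manager(cfg.CONF)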
Oct  3 11:16:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:39 compute-0 nova_compute[351685]: 2025-10-03 11:16:39.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:16:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:16:41.690 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:16:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:16:41.691 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:16:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:16:41.691 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
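Those three _check_child_processes lines are the canonical oslo.concurrency trace for a synchronized method: one line on acquiring, one with the waited time, one with the held time. A sketch of the decorator form that yields exactly this pattern (class stub here is illustrative):

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def check_child_processes(self):
            # Runs under the named in-process lock; with DEBUG logging
            # enabled, oslo emits the Acquiring / acquired (waited Ns) /
            # "released" (held Ns) triplet seen in the journal above.
            pass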
Oct  3 11:16:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:16:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:16:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:16:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:16:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:16:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:16:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e485f388-635c-4979-9b83-a519d7035ffa does not exist
Oct  3 11:16:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 277901be-f5e3-432c-924d-98ef025fedce does not exist
Oct  3 11:16:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ae9dd7d0-4a59-494e-9245-0384cb8b8798 does not exist
Oct  3 11:16:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:16:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:16:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:16:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:16:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:16:42 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:16:42 compute-0 nova_compute[351685]: 2025-10-03 11:16:42.752 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:16:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:16:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
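The handle_command/audit pairs above show the mgr (mgr.compute-0.vtkhde) driving the mon with JSON-framed commands such as {"prefix": "config generate-minimal-conf"}. librados exposes the same interface to any authorized client; a small sketch using the python-rados bindings, assuming a readable /etc/ceph/ceph.conf and a client keyring (defaults to client.admin):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        # Returns (errno-style status, output buffer, status string); each
        # call is what the mon logs above as handle_command / audit dispatch.
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode())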
Oct  3 11:16:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:43 compute-0 podman[515698]: 2025-10-03 11:16:43.560880791 +0000 UTC m=+0.074129847 container create b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 11:16:43 compute-0 podman[515698]: 2025-10-03 11:16:43.532535147 +0000 UTC m=+0.045784283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:16:43 compute-0 systemd[1]: Started libpod-conmon-b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea.scope.
Oct  3 11:16:43 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:16:43 compute-0 podman[515698]: 2025-10-03 11:16:43.727822352 +0000 UTC m=+0.241071428 container init b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 11:16:43 compute-0 podman[515698]: 2025-10-03 11:16:43.747045996 +0000 UTC m=+0.260295032 container start b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:16:43 compute-0 podman[515698]: 2025-10-03 11:16:43.752470679 +0000 UTC m=+0.265719755 container attach b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:16:43 compute-0 agitated_chaum[515714]: 167 167
Oct  3 11:16:43 compute-0 systemd[1]: libpod-b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea.scope: Deactivated successfully.
Oct  3 11:16:43 compute-0 conmon[515714]: conmon b9ba52d7262c19d8ca29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea.scope/container/memory.events
Oct  3 11:16:43 compute-0 podman[515698]: 2025-10-03 11:16:43.763379737 +0000 UTC m=+0.276628803 container died b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 11:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b5207ad1739475bc45c03c3c1d01f847199c97b69a7c3723dc085c01689dcb9-merged.mount: Deactivated successfully.
Oct  3 11:16:43 compute-0 podman[515698]: 2025-10-03 11:16:43.848475474 +0000 UTC m=+0.361724520 container remove b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 11:16:43 compute-0 systemd[1]: libpod-conmon-b9ba52d7262c19d8ca290bcffab0f5b198c30b8f92750ca6b5a8b776f7c74cea.scope: Deactivated successfully.
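agitated_chaum lives for well under a second, prints only "167 167", and is removed: the signature of a cephadm helper container, 167:167 being the ceph uid:gid baked into the image. The log does not show the exact command cephadm ran, so the stat probe below is an assumption that merely reproduces the observed create/start/died/remove pattern and output:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Throwaway container that prints the uid/gid owning a ceph path, then exits.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"]
    )
    print(out.decode().strip())  # expected: "167 167"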
Oct  3 11:16:44 compute-0 podman[515736]: 2025-10-03 11:16:44.124557278 +0000 UTC m=+0.073596600 container create c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:16:44 compute-0 podman[515736]: 2025-10-03 11:16:44.093050882 +0000 UTC m=+0.042090254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:16:44 compute-0 systemd[1]: Started libpod-conmon-c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5.scope.
Oct  3 11:16:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8824c4561b6f46149078830347e597f0392a1ed138a4b580f7e976cbc8904b75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8824c4561b6f46149078830347e597f0392a1ed138a4b580f7e976cbc8904b75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8824c4561b6f46149078830347e597f0392a1ed138a4b580f7e976cbc8904b75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8824c4561b6f46149078830347e597f0392a1ed138a4b580f7e976cbc8904b75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8824c4561b6f46149078830347e597f0392a1ed138a4b580f7e976cbc8904b75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:44 compute-0 podman[515736]: 2025-10-03 11:16:44.3156563 +0000 UTC m=+0.264695602 container init c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:16:44 compute-0 podman[515736]: 2025-10-03 11:16:44.335620358 +0000 UTC m=+0.284659660 container start c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:16:44 compute-0 podman[515736]: 2025-10-03 11:16:44.34447189 +0000 UTC m=+0.293511252 container attach c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:16:44 compute-0 nova_compute[351685]: 2025-10-03 11:16:44.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:16:44 compute-0 nova_compute[351685]: 2025-10-03 11:16:44.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:45 compute-0 silly_lamport[515751]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:16:45 compute-0 silly_lamport[515751]: --> relative data size: 1.0
Oct  3 11:16:45 compute-0 silly_lamport[515751]: --> All data devices are unavailable
Oct  3 11:16:45 compute-0 systemd[1]: libpod-c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5.scope: Deactivated successfully.
Oct  3 11:16:45 compute-0 systemd[1]: libpod-c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5.scope: Consumed 1.257s CPU time.
Oct  3 11:16:45 compute-0 podman[515736]: 2025-10-03 11:16:45.690352801 +0000 UTC m=+1.639392133 container died c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 11:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8824c4561b6f46149078830347e597f0392a1ed138a4b580f7e976cbc8904b75-merged.mount: Deactivated successfully.
Oct  3 11:16:45 compute-0 podman[515736]: 2025-10-03 11:16:45.78898104 +0000 UTC m=+1.738020332 container remove c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamport, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:16:45 compute-0 systemd[1]: libpod-conmon-c1e1bd880dabf1dcbb4dc80ee994d4988cd4b525cf916239a772bd2aa7544df5.scope: Deactivated successfully.
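silly_lamport is ceph-volume evaluating an OSD spec (default_drive_group, per the lv_tags further down): it found 0 physical and 3 LVM data devices, all already consumed, hence "All data devices are unavailable" and no new OSD. To see per-device availability the way ceph-volume judges it, its inventory can be run in the same pinned image; the podman flags here are a minimal assumption (device and udev access may need adjusting):

    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Each entry reports an "available" flag plus the rejected_reasons
    # that made ceph-volume skip the device.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         "--entrypoint", "ceph-volume", IMAGE, "inventory", "--format", "json"]
    )
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))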
Oct  3 11:16:45 compute-0 podman[515780]: 2025-10-03 11:16:45.855976339 +0000 UTC m=+0.117615106 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:16:45 compute-0 podman[515784]: 2025-10-03 11:16:45.862424324 +0000 UTC m=+0.126585191 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:16:45 compute-0 podman[515783]: 2025-10-03 11:16:45.870018097 +0000 UTC m=+0.133216034 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.expose-services=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc.)
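podman_exporter and kepler are both Prometheus exporters published on host ports (9882:9882 and 8888:8888 in their config_data). A quick scrape from the host, assuming the conventional /metrics path and plain HTTP; podman_exporter mounts a TLS web config, so it may in fact require https:

    from urllib.request import urlopen

    for port in (9882, 8888):  # podman_exporter, kepler
        with urlopen(f"http://localhost:{port}/metrics", timeout=5) as resp:
            body = resp.read(200).decode(errors="replace")
            print(port, resp.status, body.splitlines()[0])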
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:16:46
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'images']
Oct  3 11:16:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
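This balancer pass (mode upmap, max misplaced 0.05) walked all eleven pools and prepared 0/10 changes, i.e. PG placement is already optimal. The same information is available on demand; a sketch shelling out to the ceph CLI, much as nova does for df below:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]
    ))
    # Typical fields: active, mode, last_optimize_result, plans.
    print(status["active"], status["mode"])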
Oct  3 11:16:46 compute-0 nova_compute[351685]: 2025-10-03 11:16:46.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:46 compute-0 nova_compute[351685]: 2025-10-03 11:16:46.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:16:46 compute-0 nova_compute[351685]: 2025-10-03 11:16:46.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:46 compute-0 nova_compute[351685]: 2025-10-03 11:16:46.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:16:46 compute-0 nova_compute[351685]: 2025-10-03 11:16:46.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:16:46 compute-0 nova_compute[351685]: 2025-10-03 11:16:46.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:16:46 compute-0 nova_compute[351685]: 2025-10-03 11:16:46.765 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:16:46 compute-0 nova_compute[351685]: 2025-10-03 11:16:46.765 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:16:46 compute-0 podman[515990]: 2025-10-03 11:16:46.825432781 +0000 UTC m=+0.063065424 container create 50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:16:46 compute-0 podman[515990]: 2025-10-03 11:16:46.801299901 +0000 UTC m=+0.038932534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:16:46 compute-0 systemd[1]: Started libpod-conmon-50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1.scope.
Oct  3 11:16:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:16:46 compute-0 podman[515990]: 2025-10-03 11:16:46.969142799 +0000 UTC m=+0.206775452 container init 50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 11:16:46 compute-0 podman[515990]: 2025-10-03 11:16:46.990289124 +0000 UTC m=+0.227921777 container start 50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:16:47 compute-0 podman[515990]: 2025-10-03 11:16:46.999957883 +0000 UTC m=+0.237590546 container attach 50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:16:47 compute-0 elegant_vaughan[516017]: 167 167
Oct  3 11:16:47 compute-0 systemd[1]: libpod-50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1.scope: Deactivated successfully.
Oct  3 11:16:47 compute-0 conmon[516017]: conmon 50b7c7a2e3ad75c250f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1.scope/container/memory.events
Oct  3 11:16:47 compute-0 podman[515990]: 2025-10-03 11:16:47.014824328 +0000 UTC m=+0.252457001 container died 50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-23420aa2a35f7324cd271c8e05fbfe8824b9f399cc57a51f0e6b6a0ba1179975-merged.mount: Deactivated successfully.
Oct  3 11:16:47 compute-0 podman[515990]: 2025-10-03 11:16:47.091446674 +0000 UTC m=+0.329079297 container remove 50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 11:16:47 compute-0 systemd[1]: libpod-conmon-50b7c7a2e3ad75c250f43bcebb932cb124392f2b224e7f65ec0dfdb338bd22e1.scope: Deactivated successfully.
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:16:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:16:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/918257009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:16:47 compute-0 podman[516049]: 2025-10-03 11:16:47.350127514 +0000 UTC m=+0.088850998 container create ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:16:47 compute-0 nova_compute[351685]: 2025-10-03 11:16:47.360 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
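The 0.594s subprocess above is nova's storage audit: it shells out to ceph df with the openstack client identity and reads the cluster totals that back the pool-capacity report. A sketch consuming the same output, assuming the ceph CLI and the openstack keyring are reachable from the caller:

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(raw)["stats"]
    gib = 1024 ** 3
    # Should line up with the pgmap lines above: 60 GiB / 60 GiB avail.
    print(f'{stats["total_avail_bytes"] / gib:.0f} GiB avail '
          f'of {stats["total_bytes"] / gib:.0f} GiB')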
Oct  3 11:16:47 compute-0 podman[516049]: 2025-10-03 11:16:47.313141452 +0000 UTC m=+0.051864956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:16:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:47 compute-0 systemd[1]: Started libpod-conmon-ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0.scope.
Oct  3 11:16:47 compute-0 nova_compute[351685]: 2025-10-03 11:16:47.454 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:16:47 compute-0 nova_compute[351685]: 2025-10-03 11:16:47.455 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:16:47 compute-0 nova_compute[351685]: 2025-10-03 11:16:47.455 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:16:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54c308602039c6987b5635afa861666886a81e9f4aac95d5893f2a12e05233/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54c308602039c6987b5635afa861666886a81e9f4aac95d5893f2a12e05233/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54c308602039c6987b5635afa861666886a81e9f4aac95d5893f2a12e05233/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54c308602039c6987b5635afa861666886a81e9f4aac95d5893f2a12e05233/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:47 compute-0 podman[516049]: 2025-10-03 11:16:47.554188389 +0000 UTC m=+0.292911873 container init ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:16:47 compute-0 podman[516049]: 2025-10-03 11:16:47.566016316 +0000 UTC m=+0.304739800 container start ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:16:47 compute-0 podman[516049]: 2025-10-03 11:16:47.570584282 +0000 UTC m=+0.309307766 container attach ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:16:47 compute-0 nova_compute[351685]: 2025-10-03 11:16:47.938 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:16:47 compute-0 nova_compute[351685]: 2025-10-03 11:16:47.941 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3726MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:16:47 compute-0 nova_compute[351685]: 2025-10-03 11:16:47.941 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:16:47 compute-0 nova_compute[351685]: 2025-10-03 11:16:47.941 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.019 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.020 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.020 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.056 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:16:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:48 compute-0 busy_dhawan[516067]: {
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:    "0": [
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:        {
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "devices": [
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "/dev/loop3"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            ],
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_name": "ceph_lv0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_size": "21470642176",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "name": "ceph_lv0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "tags": {
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cluster_name": "ceph",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.crush_device_class": "",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.encrypted": "0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osd_id": "0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.type": "block",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.vdo": "0"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            },
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "type": "block",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "vg_name": "ceph_vg0"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:        }
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:    ],
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:    "1": [
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:        {
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "devices": [
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "/dev/loop4"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            ],
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_name": "ceph_lv1",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_size": "21470642176",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "name": "ceph_lv1",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "tags": {
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cluster_name": "ceph",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.crush_device_class": "",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.encrypted": "0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osd_id": "1",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.type": "block",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.vdo": "0"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            },
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "type": "block",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "vg_name": "ceph_vg1"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:        }
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:    ],
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:    "2": [
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:        {
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "devices": [
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "/dev/loop5"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            ],
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_name": "ceph_lv2",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_size": "21470642176",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "name": "ceph_lv2",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "tags": {
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.cluster_name": "ceph",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.crush_device_class": "",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.encrypted": "0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osd_id": "2",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.type": "block",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:                "ceph.vdo": "0"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            },
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "type": "block",
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:            "vg_name": "ceph_vg2"
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:        }
Oct  3 11:16:48 compute-0 busy_dhawan[516067]:    ]
Oct  3 11:16:48 compute-0 busy_dhawan[516067]: }
Oct  3 11:16:48 compute-0 systemd[1]: libpod-ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0.scope: Deactivated successfully.
Oct  3 11:16:48 compute-0 conmon[516067]: conmon ad403d6b1371ee05fb65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0.scope/container/memory.events
Oct  3 11:16:48 compute-0 podman[516049]: 2025-10-03 11:16:48.490212404 +0000 UTC m=+1.228935928 container died ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a54c308602039c6987b5635afa861666886a81e9f4aac95d5893f2a12e05233-merged.mount: Deactivated successfully.
Oct  3 11:16:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:16:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246093960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:16:48 compute-0 podman[516049]: 2025-10-03 11:16:48.591031773 +0000 UTC m=+1.329755267 container remove ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.593 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.604 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:16:48 compute-0 systemd[1]: libpod-conmon-ad403d6b1371ee05fb6558852caf04a0706b29f751d625c6e8cd60d1ddb27db0.scope: Deactivated successfully.
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.630 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.632 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:16:48 compute-0 nova_compute[351685]: 2025-10-03 11:16:48.632 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:16:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:49 compute-0 nova_compute[351685]: 2025-10-03 11:16:49.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:49 compute-0 nova_compute[351685]: 2025-10-03 11:16:49.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:49 compute-0 podman[516250]: 2025-10-03 11:16:49.825757015 +0000 UTC m=+0.091783802 container create 4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:16:49 compute-0 podman[516250]: 2025-10-03 11:16:49.790119586 +0000 UTC m=+0.056146453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:16:49 compute-0 systemd[1]: Started libpod-conmon-4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e.scope.
Oct  3 11:16:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:16:49 compute-0 podman[516250]: 2025-10-03 11:16:49.959789714 +0000 UTC m=+0.225816511 container init 4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:16:49 compute-0 podman[516250]: 2025-10-03 11:16:49.97596711 +0000 UTC m=+0.241993927 container start 4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:16:49 compute-0 podman[516250]: 2025-10-03 11:16:49.981805737 +0000 UTC m=+0.247832614 container attach 4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 11:16:49 compute-0 hopeful_pike[516265]: 167 167
Oct  3 11:16:49 compute-0 systemd[1]: libpod-4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e.scope: Deactivated successfully.
Oct  3 11:16:49 compute-0 podman[516250]: 2025-10-03 11:16:49.985659069 +0000 UTC m=+0.251685886 container died 4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-12facf4929bc401ab6bcf16d3bebbe08b3c104b882a0ef8bf5918742b7d60beb-merged.mount: Deactivated successfully.
Oct  3 11:16:50 compute-0 podman[516250]: 2025-10-03 11:16:50.057475723 +0000 UTC m=+0.323502550 container remove 4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:16:50 compute-0 systemd[1]: libpod-conmon-4b7f065e846e7922052238d69e87658173a0be8517821c6f9bdbed63719de80e.scope: Deactivated successfully.
Oct  3 11:16:50 compute-0 podman[516290]: 2025-10-03 11:16:50.28636143 +0000 UTC m=+0.067972581 container create 57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:16:50 compute-0 podman[516290]: 2025-10-03 11:16:50.263539302 +0000 UTC m=+0.045150463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:16:50 compute-0 systemd[1]: Started libpod-conmon-57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef.scope.
Oct  3 11:16:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cbf2e2c7071704e0b4c49af5437e4a8c185187802faa8a280639b07926c431/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cbf2e2c7071704e0b4c49af5437e4a8c185187802faa8a280639b07926c431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cbf2e2c7071704e0b4c49af5437e4a8c185187802faa8a280639b07926c431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68cbf2e2c7071704e0b4c49af5437e4a8c185187802faa8a280639b07926c431/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:16:50 compute-0 podman[516290]: 2025-10-03 11:16:50.451939057 +0000 UTC m=+0.233550278 container init 57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:16:50 compute-0 podman[516290]: 2025-10-03 11:16:50.485102566 +0000 UTC m=+0.266713737 container start 57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 11:16:50 compute-0 podman[516290]: 2025-10-03 11:16:50.491333965 +0000 UTC m=+0.272945166 container attach 57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 11:16:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:51 compute-0 nova_compute[351685]: 2025-10-03 11:16:51.633 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:51 compute-0 nova_compute[351685]: 2025-10-03 11:16:51.637 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]: {
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "osd_id": 1,
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "type": "bluestore"
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:    },
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "osd_id": 2,
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "type": "bluestore"
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:    },
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "osd_id": 0,
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:        "type": "bluestore"
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]:    }
Oct  3 11:16:51 compute-0 blissful_torvalds[516306]: }
Oct  3 11:16:51 compute-0 systemd[1]: libpod-57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef.scope: Deactivated successfully.
Oct  3 11:16:51 compute-0 systemd[1]: libpod-57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef.scope: Consumed 1.257s CPU time.
Oct  3 11:16:51 compute-0 podman[516290]: 2025-10-03 11:16:51.732861784 +0000 UTC m=+1.514472935 container died 57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-68cbf2e2c7071704e0b4c49af5437e4a8c185187802faa8a280639b07926c431-merged.mount: Deactivated successfully.
Oct  3 11:16:51 compute-0 podman[516290]: 2025-10-03 11:16:51.81916117 +0000 UTC m=+1.600772321 container remove 57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 11:16:51 compute-0 systemd[1]: libpod-conmon-57e805c56dafbf83984226ec28035b4319443a84e86ccb5bebabdde4b00144ef.scope: Deactivated successfully.
Oct  3 11:16:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:16:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:16:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:16:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:16:51 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev de4936e8-eb69-4e2a-933e-ef9ba67d742c does not exist
Oct  3 11:16:51 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bb467274-1549-4520-8f32-e41efccbccc2 does not exist
Oct  3 11:16:52 compute-0 nova_compute[351685]: 2025-10-03 11:16:52.734 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:16:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:16:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:16:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3086822209' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:16:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:16:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3086822209' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:16:54 compute-0 nova_compute[351685]: 2025-10-03 11:16:54.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:16:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:55 compute-0 nova_compute[351685]: 2025-10-03 11:16:55.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:16:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:16:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.086360) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490218086413, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1825, "num_deletes": 256, "total_data_size": 3000592, "memory_usage": 3042464, "flush_reason": "Manual Compaction"}
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490218101349, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2949950, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64741, "largest_seqno": 66565, "table_properties": {"data_size": 2941488, "index_size": 5211, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16773, "raw_average_key_size": 19, "raw_value_size": 2924709, "raw_average_value_size": 3444, "num_data_blocks": 232, "num_entries": 849, "num_filter_entries": 849, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490015, "oldest_key_time": 1759490015, "file_creation_time": 1759490218, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 15022 microseconds, and 7724 cpu microseconds.
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.101387) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2949950 bytes OK
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.101406) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.103802) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.103824) EVENT_LOG_v1 {"time_micros": 1759490218103818, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.103847) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2992817, prev total WAL file size 2992817, number of live WAL files 2.
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.105320) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2880KB)], [158(7497KB)]
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490218105415, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 10626963, "oldest_snapshot_seqno": -1}
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 7689 keys, 10527120 bytes, temperature: kUnknown
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490218154215, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 10527120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10478997, "index_size": 27786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19269, "raw_key_size": 201903, "raw_average_key_size": 26, "raw_value_size": 10342854, "raw_average_value_size": 1345, "num_data_blocks": 1104, "num_entries": 7689, "num_filter_entries": 7689, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490218, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.154794) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10527120 bytes
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.157287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.3 rd, 215.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.2) write-amplify(3.6) OK, records in: 8213, records dropped: 524 output_compression: NoCompression
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.157302) EVENT_LOG_v1 {"time_micros": 1759490218157295, "job": 98, "event": "compaction_finished", "compaction_time_micros": 48901, "compaction_time_cpu_micros": 30428, "output_level": 6, "num_output_files": 1, "total_output_size": 10527120, "num_input_records": 8213, "num_output_records": 7689, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490218158300, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490218160169, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.105145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.160538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.160548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.160551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.160553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:16:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:16:58.160555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:16:58 compute-0 nova_compute[351685]: 2025-10-03 11:16:58.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:16:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:16:59 compute-0 nova_compute[351685]: 2025-10-03 11:16:59.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:16:59 compute-0 podman[157165]: time="2025-10-03T11:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:16:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:16:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9103 "" "Go-http-client/1.1"
Oct  3 11:17:01 compute-0 openstack_network_exporter[367524]: ERROR   11:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:17:01 compute-0 openstack_network_exporter[367524]: ERROR   11:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:17:01 compute-0 openstack_network_exporter[367524]: ERROR   11:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:17:01 compute-0 openstack_network_exporter[367524]: ERROR   11:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:17:01 compute-0 openstack_network_exporter[367524]: ERROR   11:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:17:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:03 compute-0 podman[516401]: 2025-10-03 11:17:03.889703448 +0000 UTC m=+0.113767293 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:17:03 compute-0 podman[516400]: 2025-10-03 11:17:03.902153796 +0000 UTC m=+0.132677818 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct  3 11:17:03 compute-0 podman[516399]: 2025-10-03 11:17:03.915903974 +0000 UTC m=+0.147218871 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:17:03 compute-0 podman[516402]: 2025-10-03 11:17:03.924804238 +0000 UTC m=+0.138817493 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 11:17:03 compute-0 podman[516403]: 2025-10-03 11:17:03.930305024 +0000 UTC m=+0.141183889 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  3 11:17:03 compute-0 podman[516421]: 2025-10-03 11:17:03.934221279 +0000 UTC m=+0.134904989 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 11:17:03 compute-0 podman[516415]: 2025-10-03 11:17:03.961098188 +0000 UTC m=+0.157312164 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:17:04 compute-0 nova_compute[351685]: 2025-10-03 11:17:04.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:17:04 compute-0 nova_compute[351685]: 2025-10-03 11:17:04.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:04 compute-0 nova_compute[351685]: 2025-10-03 11:17:04.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:17:04 compute-0 nova_compute[351685]: 2025-10-03 11:17:04.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:17:04 compute-0 nova_compute[351685]: 2025-10-03 11:17:04.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:17:04 compute-0 nova_compute[351685]: 2025-10-03 11:17:04.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
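[Editor's note] The repeating pattern above (about 5000 ms idle, "sending inactivity probe", entering IDLE, then entering ACTIVE once a reply arrives) is the keepalive of the OVS reconnect state machine that ovsdbapp runs against ovsdb-server on tcp:127.0.0.1:6640. A minimal sketch of that timing decision, in plain Python and not the library's actual API:

    PROBE_INTERVAL_MS = 5000  # default ovsdb inactivity probe interval

    def run_keepalive(idle_ms, send_probe):
        """Mirror of the logged cycle: once the link has been idle for the
        probe interval, send a probe and drop to IDLE; any traffic (the
        [POLLIN] wakeups above) returns the FSM to ACTIVE."""
        if idle_ms >= PROBE_INTERVAL_MS:
            send_probe()      # "sending inactivity probe"
            return "IDLE"     # "entering IDLE"
        return "ACTIVE"       # "entering ACTIVE" once the peer answers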
Oct  3 11:17:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3292: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
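[Editor's note] The ceph-mon _set_new_cache_sizes values are raw byte counts; converted (a quick check, not part of the log):

    for name, b in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                    ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name}: {b / 2**20:.0f} MiB")
    # -> cache_size: 973 MiB, inc_alloc: 332 MiB,
    #    full_alloc: 332 MiB, kv_alloc: 304 MiB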
Oct  3 11:17:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3294: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:09 compute-0 nova_compute[351685]: 2025-10-03 11:17:09.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:11 compute-0 nova_compute[351685]: 2025-10-03 11:17:11.758 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:11 compute-0 nova_compute[351685]: 2025-10-03 11:17:11.758 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 11:17:11 compute-0 nova_compute[351685]: 2025-10-03 11:17:11.784 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
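[Editor's note] These entries come from oslo_service's periodic task runner, which nova's ComputeManager builds on. A simplified sketch of how such a task is declared (the spacing value is an assumption; nova's real configuration differs):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=600)  # run every 600 s
        def _run_pending_deletes(self, context):
            # nova logs "Cleaning up deleted instances" here, then the count
            pass

    # run_periodic_tasks() is what emits the "Running periodic task ..." line
    Manager(cfg.CONF).run_periodic_tasks(context=None)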
Oct  3 11:17:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:14 compute-0 nova_compute[351685]: 2025-10-03 11:17:14.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:14 compute-0 nova_compute[351685]: 2025-10-03 11:17:14.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:17:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:17:16 compute-0 podman[516536]: 2025-10-03 11:17:16.882420158 +0000 UTC m=+0.123362760 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, name=ubi9, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=)
Oct  3 11:17:16 compute-0 podman[516535]: 2025-10-03 11:17:16.898930385 +0000 UTC m=+0.146097715 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:17:16 compute-0 podman[516537]: 2025-10-03 11:17:16.91441315 +0000 UTC m=+0.146193389 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 11:17:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3299: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:19 compute-0 nova_compute[351685]: 2025-10-03 11:17:19.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:24 compute-0 nova_compute[351685]: 2025-10-03 11:17:24.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:17:24 compute-0 nova_compute[351685]: 2025-10-03 11:17:24.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:17:24 compute-0 nova_compute[351685]: 2025-10-03 11:17:24.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:17:24 compute-0 nova_compute[351685]: 2025-10-03 11:17:24.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:17:24 compute-0 nova_compute[351685]: 2025-10-03 11:17:24.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:17:24 compute-0 nova_compute[351685]: 2025-10-03 11:17:24.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:17:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:26 compute-0 nova_compute[351685]: 2025-10-03 11:17:26.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:26 compute-0 nova_compute[351685]: 2025-10-03 11:17:26.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 11:17:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:29 compute-0 nova_compute[351685]: 2025-10-03 11:17:29.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.731200) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490249731315, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 475, "num_deletes": 251, "total_data_size": 458147, "memory_usage": 467784, "flush_reason": "Manual Compaction"}
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490249739565, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 454248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66566, "largest_seqno": 67040, "table_properties": {"data_size": 451490, "index_size": 793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6333, "raw_average_key_size": 18, "raw_value_size": 446176, "raw_average_value_size": 1320, "num_data_blocks": 36, "num_entries": 338, "num_filter_entries": 338, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490219, "oldest_key_time": 1759490219, "file_creation_time": 1759490249, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 8466 microseconds, and 3647 cpu microseconds.
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.739663) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 454248 bytes OK
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.739691) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.742672) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.742693) EVENT_LOG_v1 {"time_micros": 1759490249742686, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.742715) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 455339, prev total WAL file size 455339, number of live WAL files 2.
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.743843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(443KB)], [161(10MB)]
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490249743929, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 10981368, "oldest_snapshot_seqno": -1}
Oct  3 11:17:29 compute-0 podman[157165]: time="2025-10-03T11:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:17:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:17:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9113 "" "Go-http-client/1.1"
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 7517 keys, 9249764 bytes, temperature: kUnknown
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490249815514, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9249764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9203993, "index_size": 25882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 198993, "raw_average_key_size": 26, "raw_value_size": 9071983, "raw_average_value_size": 1206, "num_data_blocks": 1015, "num_entries": 7517, "num_filter_entries": 7517, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490249, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.815769) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9249764 bytes
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.818289) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.3 rd, 129.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.0 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(44.5) write-amplify(20.4) OK, records in: 8027, records dropped: 510 output_compression: NoCompression
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.818311) EVENT_LOG_v1 {"time_micros": 1759490249818301, "job": 100, "event": "compaction_finished", "compaction_time_micros": 71653, "compaction_time_cpu_micros": 54520, "output_level": 6, "num_output_files": 1, "total_output_size": 9249764, "num_input_records": 8027, "num_output_records": 7517, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490249818531, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490249821043, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.743563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.821281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.821286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.821288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.821290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:17:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:17:29.821291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
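[Editor's note] The amplification figures in the JOB 100 compaction summary can be reproduced from the byte counts logged above, assuming rocksdb's usual definitions (bytes written over new L0 bytes, and total bytes read plus written over new L0 bytes):

    l0_in = 454_248        # table #163, the L0 input
    l6_in = 10_527_120     # input_data_size 10_981_368 minus l0_in (table #161)
    out   = 9_249_764      # table #164, the compacted L6 output

    print(out / l0_in)                    # 20.36 -> "write-amplify(20.4)"
    print((l0_in + l6_in + out) / l0_in)  # 44.54 -> "read-write-amplify(44.5)"
    # Throughput likewise: 10_981_368 B / 71_653 us = 153.3 MB/s read,
    # and 9_249_764 B / 71_653 us = 129.1 MB/s written.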
Oct  3 11:17:31 compute-0 openstack_network_exporter[367524]: ERROR   11:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:17:31 compute-0 openstack_network_exporter[367524]: ERROR   11:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:17:31 compute-0 openstack_network_exporter[367524]: ERROR   11:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:17:31 compute-0 openstack_network_exporter[367524]: ERROR   11:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:17:31 compute-0 openstack_network_exporter[367524]: ERROR   11:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
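[Editor's note] The exporter errors above mean openstack_network_exporter found no appctl control sockets: ovn-northd does not run on a compute node, and the ovsdb-server socket is not where the container expects it. A quick check for the sockets, using the paths mounted into the container per its config_data (the glob patterns assume the usual <daemon>.<pid>.ctl naming):

    import glob
    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "missing")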
Oct  3 11:17:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.750 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:17:34 compute-0 nova_compute[351685]: 2025-10-03 11:17:34.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:17:34 compute-0 podman[516595]: 2025-10-03 11:17:34.856689347 +0000 UTC m=+0.111200382 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:17:34 compute-0 podman[516619]: 2025-10-03 11:17:34.899499493 +0000 UTC m=+0.102168072 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:17:34 compute-0 podman[516602]: 2025-10-03 11:17:34.900892008 +0000 UTC m=+0.136016774 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  3 11:17:34 compute-0 podman[516609]: 2025-10-03 11:17:34.905582648 +0000 UTC m=+0.112940237 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true)
Oct  3 11:17:34 compute-0 podman[516596]: 2025-10-03 11:17:34.917956353 +0000 UTC m=+0.155356032 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9)
Oct  3 11:17:34 compute-0 podman[516618]: 2025-10-03 11:17:34.923313324 +0000 UTC m=+0.131351725 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  3 11:17:34 compute-0 podman[516603]: 2025-10-03 11:17:34.9257053 +0000 UTC m=+0.152116068 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  3 11:17:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:35 compute-0 nova_compute[351685]: 2025-10-03 11:17:35.735 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:17:35 compute-0 nova_compute[351685]: 2025-10-03 11:17:35.735 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:17:35 compute-0 nova_compute[351685]: 2025-10-03 11:17:35.735 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:17:35 compute-0 nova_compute[351685]: 2025-10-03 11:17:35.736 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:17:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:38 compute-0 nova_compute[351685]: 2025-10-03 11:17:38.338 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:17:38 compute-0 nova_compute[351685]: 2025-10-03 11:17:38.362 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:17:38 compute-0 nova_compute[351685]: 2025-10-03 11:17:38.363 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
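[Editor's note] The Acquiring/Acquired/Releasing lines around the cache refresh are oslo_concurrency's named-lock pattern. A minimal sketch of that usage (the lock name mirrors the logged "refresh_cache-<instance uuid>"):

    from oslo_concurrency import lockutils

    with lockutils.lock("refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a"):
        # refresh the instance's network info cache while holding the lock;
        # "Releasing lock" is logged when the context manager exits
        pass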
Oct  3 11:17:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:39 compute-0 nova_compute[351685]: 2025-10-03 11:17:39.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:17:39 compute-0 nova_compute[351685]: 2025-10-03 11:17:39.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:39 compute-0 nova_compute[351685]: 2025-10-03 11:17:39.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:17:39 compute-0 nova_compute[351685]: 2025-10-03 11:17:39.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:17:39 compute-0 nova_compute[351685]: 2025-10-03 11:17:39.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:17:39 compute-0 nova_compute[351685]: 2025-10-03 11:17:39.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
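The ovsdbapp lines show the OVS reconnect machinery's keepalive: after roughly 5 s of idle on tcp:127.0.0.1:6640 it sends an inactivity probe and enters IDLE, then returns to ACTIVE when the reply arrives. A toy model of just that cycle follows; the interval and state names are illustrative, not the internals of ovs/reconnect.py (which has more states, e.g. BACKOFF and CONNECTING):

    # Toy model of the probe cycle above: ACTIVE -> (idle timeout) ->
    # send probe, enter IDLE -> (any received data) -> back to ACTIVE.
    import time

    PROBE_INTERVAL = 5.0  # seconds, matching the ~5000 ms idle in the log

    class ProbeFSM:
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def on_received(self):
            # Any traffic, including a probe reply, counts as activity.
            self.last_activity = time.monotonic()
            self.state = "ACTIVE"

        def run(self):
            idle = time.monotonic() - self.last_activity
            if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
                self.state = "IDLE"   # probe sent, awaiting a reply
                return "send_probe"
            return None

    fsm = ProbeFSM()
    fsm.last_activity -= 6        # pretend 6 s of silence
    assert fsm.run() == "send_probe" and fsm.state == "IDLE"
    fsm.on_received()             # reply arrives
    assert fsm.state == "ACTIVE"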
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.903 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.904 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.904 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92e80950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
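The burst of "Registering pollster" lines above is the polling manager queuing every pollster from the [pollsters] source onto a single-worker ThreadPoolExecutor; the earlier warning ("bigger than the number of worker threads") follows directly from that fan-in. A minimal sketch of the pattern, with hypothetical pollster callables standing in for ceilometer's classes:

    # Sketch of queuing many pollsters onto one worker thread, as the
    # "Processing pollsters for [pollsters] with [1] threads" line
    # implies. The pollsters are stand-ins, not ceilometer's code.
    from concurrent.futures import ThreadPoolExecutor

    def make_pollster(name):
        def poll():
            return name, 0    # (meter name, sample volume)
        return poll

    pollsters = [make_pollster(n) for n in
                 ("network.outgoing.packets.drop",
                  "network.outgoing.packets.error",
                  "disk.device.capacity")]

    # With max_workers=1 the pollsters run serially, so a cycle grows
    # linearly with the number of registered pollsters.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(p) for p in pollsters]
        for f in futures:
            print(*f.result())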
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.965 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.966 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.967 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.968 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:17:40.967599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.975 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.976 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.976 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.976 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.977 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:17:40.977203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.979 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.980 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:40.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:17:40.980019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
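The three disk.device.capacity samples line up with the flavor reported by discovery (disk: 1, ephemeral: 1): two devices of exactly 1 GiB, plus a small third device of 485376 bytes, which is plausibly a config drive (an assumption; the log does not name the devices). A quick check of the arithmetic:

    # 1073741824 bytes is exactly 1 GiB, matching the 1 GB root and
    # 1 GB ephemeral disks in the flavor. The third device is ~474 KiB
    # (config drive is an assumption, not stated in the log).
    GiB = 1024 ** 3
    assert 1073741824 == 1 * GiB
    print(485376 / 1024, "KiB")   # 474.0 KiB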
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.016 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.016 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.016 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.017 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:17:41.017634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.090 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.092 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.093 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.095 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.097 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.098 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:17:41.097754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.098 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.100 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.101 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
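The disk.device.read.latency volumes are cumulative per-device counters; treating them as nanoseconds (consistent with libvirt's rd_total_times-style block stats, stated here as an assumption rather than read out of ceilometer) gives sub-second totals:

    # Convert the cumulative read-latency counters above from
    # nanoseconds to seconds. The ns unit is an assumption based on
    # libvirt's block-stats counters that back these meters.
    for ns in (1351272306, 240576853, 113683071):
        print(f"{ns} ns = {ns / 1e9:.3f} s")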
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.103 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.105 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:17:41.105079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.106 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.107 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.109 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.114 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:17:41.114022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.115 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.115 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.116 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.119 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.120 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:17:41.118930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.122 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.124 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.124 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:17:41.124056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.124 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.125 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.126 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.126 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:17:41.126481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.166 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
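The power.state sample of 1 corresponds to a running domain in libvirt's state enum, which matches the 'OS-EXT-STS:vm_state': 'running' seen in discovery. The mapping below is libvirt's virDomainState; reading ceilometer's sample through it is an assumption:

    # libvirt virDomainState values; the power.state volume of 1 above
    # maps to VIR_DOMAIN_RUNNING for this instance. Mapping taken from
    # libvirt's enum, not from ceilometer itself.
    VIR_DOMAIN_STATE = {
        0: "NOSTATE", 1: "RUNNING", 2: "BLOCKED", 3: "PAUSED",
        4: "SHUTDOWN", 5: "SHUTOFF", 6: "CRASHED", 7: "PMSUSPENDED",
    }
    print(VIR_DOMAIN_STATE[1])  # RUNNING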
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.169 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.169 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.170 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:17:41.169519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.171 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.171 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.173 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.174 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:17:41.174336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.175 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.175 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.176 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.178 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:17:41.179147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.180 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.181 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.182 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:17:41.182655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.183 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.184 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.184 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:17:41.184097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.185 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
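Every meter in this window follows the same fixed sequence: discovery, a coordination check, a heartbeat update, then samples built from the instance stats. A condensed, hypothetical rendering of one iteration (not ceilometer's actual code):

```python
# Condensed sketch of the per-pollster sequence visible above:
# discovery -> coordination check -> heartbeat -> samples -> finished.
# All names here are illustrative, not ceilometer's internals.
from datetime import datetime, timezone

heartbeats = {}

def poll_one(name, resources, coordination_group=None):
    print(f"Polling pollster {name}")
    if coordination_group is not None:
        raise NotImplementedError("coordinated sources not shown here")
    heartbeats[name] = datetime.now(timezone.utc)
    samples = [(name, r, 0) for r in resources]  # (meter, resource_id, volume)
    print(f"Finished polling pollster {name}")
    return samples

poll_one("network.incoming.packets", ["b43db93c-a4fe-46e9-8418-eedf4f5c135a"])
```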
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.185 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.185 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:17:41.185727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
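The heartbeat lines arrive in pairs: the polling worker (14) announces the update and a second worker (12) records the timestamp. A small sketch of that handoff through a shared queue; whether ceilometer uses threads, green threads, or processes is not visible from the log:

```python
# Sketch of the paired heartbeat lines: one worker announces the update,
# another records the timestamp. Illustrative only.
from datetime import datetime, timezone
import queue
import threading

updates = queue.Queue()
status = {}

def status_worker():
    name = updates.get()                       # "Pollster heartbeat update: ..."
    status[name] = datetime.now(timezone.utc)
    print(f"Updated heartbeat for {name} ({status[name].isoformat()})")

t = threading.Thread(target=status_worker)
t.start()
updates.put("disk.root.size")
t.join()
```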
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.186 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.187 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.187 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:17:41.187088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
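The repeated "hashrings ... [None]" lines mean none of these pollsters belong to a coordinated source, so this agent polls every discovered resource itself. When coordination is enabled, ceilometer partitions resources across agents with tooz hash rings; below, a simple modulo partition standing in for the consistent hash ring:

```python
# Simple modulo partition standing in for the consistent hash ring that
# coordinated ceilometer agents use to split resources among themselves.
import hashlib

agents = ["agent-0", "agent-1", "agent-2"]      # hypothetical group members

def owner(resource_id: str) -> str:
    digest = hashlib.md5(resource_id.encode()).hexdigest()
    return agents[int(digest, 16) % len(agents)]

# An agent polls a resource only when the partition assigns it to that agent.
print(owner("b43db93c-a4fe-46e9-8418-eedf4f5c135a"))
```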
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.188 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.188 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:17:41.188652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.189 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 99040000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.189 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
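The cpu meter is cumulative guest CPU time in nanoseconds (99040000000 ns is about 99 s here), so utilization only falls out of the delta between two polls. A worked example, assuming a 300 s polling interval, one vCPU, and a hypothetical next sample:

```python
# CPU utilization from two cumulative cpu-time samples (nanoseconds).
# The interval and the second sample are assumed for illustration.
prev_ns = 99_040_000_000          # value logged above
curr_ns = 99_940_000_000          # hypothetical next poll
interval_s, vcpus = 300, 1

util_pct = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
print(f"{util_pct:.2f}% CPU")     # -> 0.30% CPU
```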
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.189 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.190 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:17:41.190113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.190 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.191 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.192 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:17:41.191662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.192 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.193 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.193 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:17:41.193099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.194 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.194 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.194 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:17:41.194791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.195 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
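The fractional memory.usage value is an artifact of unit conversion: ceilometer reports MB derived from libvirt's KiB-based memory statistics. Assuming 49988 KiB was reported:

```python
# The odd-looking 48.81640625 is a straight KiB -> MB division.
used_kib = 49_988                 # assumed libvirt-reported value
print(used_kib / 1024)            # 48.81640625, matching the sample above
```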
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.196 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.196 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.196 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.197 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:17:41.196637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.198 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:17:41.198033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.198 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.199 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.199 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.199 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.199 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.200 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.201 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.202 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.202 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.202 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.202 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.202 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.202 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.202 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:17:41.203 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:17:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:17:41.691 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:17:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:17:41.692 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:17:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:17:41.692 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:17:42 compute-0 nova_compute[351685]: 2025-10-03 11:17:42.338 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
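Both the ProcessMonitor above and the nova resource tracker below serialize their periodic work with oslo.concurrency locks; the waited/held durations in the log are measured around the protected region. The same primitive in application code (a sketch; only the lock name is taken from the log):

```python
# The lock pattern behind the acquire/waited/released lines, using the
# same oslo.concurrency primitive (requires oslo.concurrency installed).
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # Work done here is what the "waited"/"held" durations bracket.
    pass

clean_compute_node_cache()
```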
Oct  3 11:17:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:44 compute-0 nova_compute[351685]: 2025-10-03 11:17:44.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:44 compute-0 nova_compute[351685]: 2025-10-03 11:17:44.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:17:46
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'vms', 'backups']
Oct  3 11:17:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
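"prepared 0/10 changes" means upmap found the listed pools already balanced. The module's state can be inspected from a client; a sketch assuming the ceph CLI, credentials, and JSON output support are available on the host:

```python
# Query the mgr balancer state behind the log lines above (assumed setup:
# ceph CLI present, admin keyring readable, JSON output supported).
import json
import subprocess

out = subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
print(json.loads(out))            # e.g. mode "upmap", active flag, last run
```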
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:17:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:47 compute-0 nova_compute[351685]: 2025-10-03 11:17:47.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:47 compute-0 nova_compute[351685]: 2025-10-03 11:17:47.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:17:47 compute-0 nova_compute[351685]: 2025-10-03 11:17:47.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:47 compute-0 nova_compute[351685]: 2025-10-03 11:17:47.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:17:47 compute-0 nova_compute[351685]: 2025-10-03 11:17:47.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:17:47 compute-0 nova_compute[351685]: 2025-10-03 11:17:47.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:17:47 compute-0 nova_compute[351685]: 2025-10-03 11:17:47.762 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:17:47 compute-0 nova_compute[351685]: 2025-10-03 11:17:47.762 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
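The resource tracker shells out for Ceph pool capacity exactly as logged. The equivalent call through oslo.concurrency, with the command taken verbatim from the log line above (requires the client.openstack keyring on this host):

```python
# Re-issue the `ceph df` call logged above via oslo.concurrency and parse
# the JSON it returns.
import json
from oslo_concurrency import processutils

stdout, _stderr = processutils.execute(
    "ceph", "df", "--format=json", "--id", "openstack",
    "--conf", "/etc/ceph/ceph.conf",
)
print(json.loads(stdout)["stats"]["total_avail_bytes"])
```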
Oct  3 11:17:47 compute-0 podman[516729]: 2025-10-03 11:17:47.841013619 +0000 UTC m=+0.087688350 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:17:47 compute-0 podman[516730]: 2025-10-03 11:17:47.872301919 +0000 UTC m=+0.108466135 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Oct  3 11:17:47 compute-0 podman[516731]: 2025-10-03 11:17:47.87955009 +0000 UTC m=+0.110391576 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
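The health_status=healthy events come from podman's healthcheck timer running each container's configured test command; the same check can be triggered on demand (container names taken from the log, podman assumed present):

```python
# Run a container's configured healthcheck on demand; exit code 0 means
# healthy, mirroring the health_status=healthy events above.
import subprocess

for name in ("podman_exporter", "kepler", "ceilometer_agent_ipmi"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```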
Oct  3 11:17:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:17:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252426732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
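The mon's audit log shows the server side of nova's `ceph df` call: a client submitting {"prefix": "df", "format": "json"} as a mon_command. The librados Python binding can issue the same JSON directly; a sketch assuming python3-rados and the client.openstack credentials are present:

```python
# Issue the same mon_command JSON seen in the audit log via librados.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "df", "format": "json"}), b"")
cluster.shutdown()
print(ret, json.loads(outbuf)["stats"]["total_avail_bytes"])
```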
Oct  3 11:17:48 compute-0 nova_compute[351685]: 2025-10-03 11:17:48.308 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:17:48 compute-0 nova_compute[351685]: 2025-10-03 11:17:48.410 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:17:48 compute-0 nova_compute[351685]: 2025-10-03 11:17:48.410 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:17:48 compute-0 nova_compute[351685]: 2025-10-03 11:17:48.411 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:17:48 compute-0 nova_compute[351685]: 2025-10-03 11:17:48.952 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:17:48 compute-0 nova_compute[351685]: 2025-10-03 11:17:48.954 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3776MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:17:48 compute-0 nova_compute[351685]: 2025-10-03 11:17:48.954 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:17:48 compute-0 nova_compute[351685]: 2025-10-03 11:17:48.955 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.043 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.043 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.044 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.083 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:17:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:17:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786462998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.535 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.544 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.562 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.564 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.564 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:17:49 compute-0 nova_compute[351685]: 2025-10-03 11:17:49.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
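The inventory reported at 11:17:49.562 is what placement admits allocations against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

```python
# Usable capacity per resource class from the inventory logged above:
# capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")       # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```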
Oct  3 11:17:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:52 compute-0 nova_compute[351685]: 2025-10-03 11:17:52.565 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:52 compute-0 nova_compute[351685]: 2025-10-03 11:17:52.566 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:52 compute-0 nova_compute[351685]: 2025-10-03 11:17:52.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:17:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3ebeb74d-5edb-4497-b098-bece3bf96011 does not exist
Oct  3 11:17:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9f919f9d-02bc-46be-b45d-0f95a5e1cebc does not exist
Oct  3 11:17:53 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev db241368-0852-4632-95dc-1b16b56c24a3 does not exist
Oct  3 11:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:17:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:17:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:17:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:17:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:17:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:17:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:17:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2633510377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:17:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:17:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2633510377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:17:54 compute-0 podman[517102]: 2025-10-03 11:17:54.427586302 +0000 UTC m=+0.073439606 container create c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  3 11:17:54 compute-0 podman[517102]: 2025-10-03 11:17:54.393829044 +0000 UTC m=+0.039682358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:17:54 compute-0 systemd[1]: Started libpod-conmon-c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0.scope.
Oct  3 11:17:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:17:54 compute-0 podman[517102]: 2025-10-03 11:17:54.575826795 +0000 UTC m=+0.221680149 container init c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:17:54 compute-0 podman[517102]: 2025-10-03 11:17:54.592038762 +0000 UTC m=+0.237892076 container start c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 11:17:54 compute-0 podman[517102]: 2025-10-03 11:17:54.598756827 +0000 UTC m=+0.244610161 container attach c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 11:17:54 compute-0 awesome_elbakyan[517117]: 167 167
Oct  3 11:17:54 compute-0 nova_compute[351685]: 2025-10-03 11:17:54.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:54 compute-0 systemd[1]: libpod-c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0.scope: Deactivated successfully.
Oct  3 11:17:54 compute-0 podman[517102]: 2025-10-03 11:17:54.603714185 +0000 UTC m=+0.249567459 container died c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-31be855a098b1ecb96bfa81d3ba1099ce9ee3aa463ea9c412ff015b715c2eabd-merged.mount: Deactivated successfully.
Oct  3 11:17:54 compute-0 podman[517102]: 2025-10-03 11:17:54.677353507 +0000 UTC m=+0.323206791 container remove c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 11:17:54 compute-0 systemd[1]: libpod-conmon-c5cc777a6b228b2a4019e3b813bf5a1da9f26fc5f65c9752b7d54896fa7e31b0.scope: Deactivated successfully.
Oct  3 11:17:54 compute-0 podman[517141]: 2025-10-03 11:17:54.947195632 +0000 UTC m=+0.072069042 container create 2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_maxwell, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:17:55 compute-0 podman[517141]: 2025-10-03 11:17:54.920039305 +0000 UTC m=+0.044912705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:17:55 compute-0 systemd[1]: Started libpod-conmon-2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063.scope.
Oct  3 11:17:55 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8583b7df32419dc2aa08ac607f56b830c7964472f7155f29e8dfd2f4edffd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8583b7df32419dc2aa08ac607f56b830c7964472f7155f29e8dfd2f4edffd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8583b7df32419dc2aa08ac607f56b830c7964472f7155f29e8dfd2f4edffd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8583b7df32419dc2aa08ac607f56b830c7964472f7155f29e8dfd2f4edffd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8583b7df32419dc2aa08ac607f56b830c7964472f7155f29e8dfd2f4edffd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:55 compute-0 podman[517141]: 2025-10-03 11:17:55.105339011 +0000 UTC m=+0.230212411 container init 2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:17:55 compute-0 podman[517141]: 2025-10-03 11:17:55.117557392 +0000 UTC m=+0.242430772 container start 2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 11:17:55 compute-0 podman[517141]: 2025-10-03 11:17:55.122164488 +0000 UTC m=+0.247037868 container attach 2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_maxwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:17:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:55 compute-0 nova_compute[351685]: 2025-10-03 11:17:55.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:17:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:17:56 compute-0 determined_maxwell[517157]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:17:56 compute-0 determined_maxwell[517157]: --> relative data size: 1.0
Oct  3 11:17:56 compute-0 determined_maxwell[517157]: --> All data devices are unavailable
Oct  3 11:17:56 compute-0 systemd[1]: libpod-2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063.scope: Deactivated successfully.
Oct  3 11:17:56 compute-0 systemd[1]: libpod-2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063.scope: Consumed 1.247s CPU time.
Oct  3 11:17:56 compute-0 podman[517141]: 2025-10-03 11:17:56.421590806 +0000 UTC m=+1.546464226 container died 2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_maxwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-19f8583b7df32419dc2aa08ac607f56b830c7964472f7155f29e8dfd2f4edffd-merged.mount: Deactivated successfully.
Oct  3 11:17:56 compute-0 podman[517141]: 2025-10-03 11:17:56.526385362 +0000 UTC m=+1.651258752 container remove 2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:17:56 compute-0 systemd[1]: libpod-conmon-2e06693fde841b4d1141e2b40f2788c96fa581a0904c853199529b5bb018d063.scope: Deactivated successfully.
Oct  3 11:17:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:57 compute-0 podman[517334]: 2025-10-03 11:17:57.566275643 +0000 UTC m=+0.092798304 container create 301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:17:57 compute-0 podman[517334]: 2025-10-03 11:17:57.534190248 +0000 UTC m=+0.060712999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:17:57 compute-0 systemd[1]: Started libpod-conmon-301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3.scope.
Oct  3 11:17:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:17:57 compute-0 podman[517334]: 2025-10-03 11:17:57.708706641 +0000 UTC m=+0.235229332 container init 301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:17:57 compute-0 podman[517334]: 2025-10-03 11:17:57.71995595 +0000 UTC m=+0.246478621 container start 301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 11:17:57 compute-0 podman[517334]: 2025-10-03 11:17:57.727369216 +0000 UTC m=+0.253891897 container attach 301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 11:17:57 compute-0 clever_moser[517350]: 167 167
Oct  3 11:17:57 compute-0 systemd[1]: libpod-301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3.scope: Deactivated successfully.
Oct  3 11:17:57 compute-0 podman[517334]: 2025-10-03 11:17:57.737609734 +0000 UTC m=+0.264132425 container died 301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 11:17:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3851cde9af7a2b104d1ed8b7bd5a3ce7d117fc3761c8990800863ed9489decab-merged.mount: Deactivated successfully.
Oct  3 11:17:57 compute-0 podman[517334]: 2025-10-03 11:17:57.793802198 +0000 UTC m=+0.320324859 container remove 301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 11:17:57 compute-0 systemd[1]: libpod-conmon-301152cccb4c1008807656b1072c85607525ec16a6842c7f2920fe8f1e7d3cd3.scope: Deactivated successfully.
Oct  3 11:17:58 compute-0 podman[517373]: 2025-10-03 11:17:58.057697513 +0000 UTC m=+0.081285917 container create 7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 11:17:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:17:58 compute-0 podman[517373]: 2025-10-03 11:17:58.02566118 +0000 UTC m=+0.049249604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:17:58 compute-0 systemd[1]: Started libpod-conmon-7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d.scope.
Oct  3 11:17:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fbe239783ad88e896c179db140eb415fb40b2b09144a0b6d6909f28a322f63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fbe239783ad88e896c179db140eb415fb40b2b09144a0b6d6909f28a322f63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fbe239783ad88e896c179db140eb415fb40b2b09144a0b6d6909f28a322f63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fbe239783ad88e896c179db140eb415fb40b2b09144a0b6d6909f28a322f63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:17:58 compute-0 podman[517373]: 2025-10-03 11:17:58.228344361 +0000 UTC m=+0.251932785 container init 7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:17:58 compute-0 podman[517373]: 2025-10-03 11:17:58.2414435 +0000 UTC m=+0.265031874 container start 7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:17:58 compute-0 podman[517373]: 2025-10-03 11:17:58.246666176 +0000 UTC m=+0.270254550 container attach 7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:17:59 compute-0 competent_wu[517389]: {
Oct  3 11:17:59 compute-0 competent_wu[517389]:    "0": [
Oct  3 11:17:59 compute-0 competent_wu[517389]:        {
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "devices": [
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "/dev/loop3"
Oct  3 11:17:59 compute-0 competent_wu[517389]:            ],
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_name": "ceph_lv0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_size": "21470642176",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "name": "ceph_lv0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "tags": {
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cluster_name": "ceph",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.crush_device_class": "",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.encrypted": "0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osd_id": "0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.type": "block",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.vdo": "0"
Oct  3 11:17:59 compute-0 competent_wu[517389]:            },
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "type": "block",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "vg_name": "ceph_vg0"
Oct  3 11:17:59 compute-0 competent_wu[517389]:        }
Oct  3 11:17:59 compute-0 competent_wu[517389]:    ],
Oct  3 11:17:59 compute-0 competent_wu[517389]:    "1": [
Oct  3 11:17:59 compute-0 competent_wu[517389]:        {
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "devices": [
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "/dev/loop4"
Oct  3 11:17:59 compute-0 competent_wu[517389]:            ],
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_name": "ceph_lv1",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_size": "21470642176",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "name": "ceph_lv1",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "tags": {
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cluster_name": "ceph",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.crush_device_class": "",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.encrypted": "0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osd_id": "1",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.type": "block",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.vdo": "0"
Oct  3 11:17:59 compute-0 competent_wu[517389]:            },
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "type": "block",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "vg_name": "ceph_vg1"
Oct  3 11:17:59 compute-0 competent_wu[517389]:        }
Oct  3 11:17:59 compute-0 competent_wu[517389]:    ],
Oct  3 11:17:59 compute-0 competent_wu[517389]:    "2": [
Oct  3 11:17:59 compute-0 competent_wu[517389]:        {
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "devices": [
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "/dev/loop5"
Oct  3 11:17:59 compute-0 competent_wu[517389]:            ],
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_name": "ceph_lv2",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_size": "21470642176",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "name": "ceph_lv2",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "tags": {
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.cluster_name": "ceph",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.crush_device_class": "",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.encrypted": "0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osd_id": "2",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.type": "block",
Oct  3 11:17:59 compute-0 competent_wu[517389]:                "ceph.vdo": "0"
Oct  3 11:17:59 compute-0 competent_wu[517389]:            },
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "type": "block",
Oct  3 11:17:59 compute-0 competent_wu[517389]:            "vg_name": "ceph_vg2"
Oct  3 11:17:59 compute-0 competent_wu[517389]:        }
Oct  3 11:17:59 compute-0 competent_wu[517389]:    ]
Oct  3 11:17:59 compute-0 competent_wu[517389]: }
Oct  3 11:17:59 compute-0 systemd[1]: libpod-7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d.scope: Deactivated successfully.
Oct  3 11:17:59 compute-0 podman[517373]: 2025-10-03 11:17:59.068148054 +0000 UTC m=+1.091736498 container died 7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 11:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7fbe239783ad88e896c179db140eb415fb40b2b09144a0b6d6909f28a322f63-merged.mount: Deactivated successfully.
Oct  3 11:17:59 compute-0 podman[517373]: 2025-10-03 11:17:59.193427024 +0000 UTC m=+1.217015408 container remove 7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  3 11:17:59 compute-0 systemd[1]: libpod-conmon-7c8d3d66308938c6ac84c28db3d04a43dccc326e9925ce021bf3aa8ce2e4220d.scope: Deactivated successfully.
Oct  3 11:17:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:17:59 compute-0 nova_compute[351685]: 2025-10-03 11:17:59.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:17:59 compute-0 podman[157165]: time="2025-10-03T11:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:17:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:17:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9111 "" "Go-http-client/1.1"
Oct  3 11:18:00 compute-0 podman[517549]: 2025-10-03 11:18:00.309341563 +0000 UTC m=+0.048769579 container create 41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:18:00 compute-0 systemd[1]: Started libpod-conmon-41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f.scope.
Oct  3 11:18:00 compute-0 podman[517549]: 2025-10-03 11:18:00.285656297 +0000 UTC m=+0.025084343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:18:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:18:00 compute-0 podman[517549]: 2025-10-03 11:18:00.422687052 +0000 UTC m=+0.162115078 container init 41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 11:18:00 compute-0 podman[517549]: 2025-10-03 11:18:00.44082059 +0000 UTC m=+0.180248626 container start 41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:18:00 compute-0 podman[517549]: 2025-10-03 11:18:00.447486554 +0000 UTC m=+0.186914560 container attach 41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:18:00 compute-0 pensive_mclaren[517564]: 167 167
Oct  3 11:18:00 compute-0 systemd[1]: libpod-41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f.scope: Deactivated successfully.
Oct  3 11:18:00 compute-0 podman[517549]: 2025-10-03 11:18:00.451541833 +0000 UTC m=+0.190969869 container died 41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 11:18:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3314c34587e32eb049ab66e456fe14538676300e77719fe57d4bbed1a1638d8d-merged.mount: Deactivated successfully.
Oct  3 11:18:00 compute-0 podman[517549]: 2025-10-03 11:18:00.514429311 +0000 UTC m=+0.253857357 container remove 41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 11:18:00 compute-0 systemd[1]: libpod-conmon-41b38bd2aa7eabd4118e186837f66c005e399400022007467ed67ae381efa86f.scope: Deactivated successfully.
Oct  3 11:18:00 compute-0 podman[517587]: 2025-10-03 11:18:00.785886518 +0000 UTC m=+0.092427632 container create 23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:18:00 compute-0 systemd[1]: Started libpod-conmon-23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b.scope.
Oct  3 11:18:00 compute-0 podman[517587]: 2025-10-03 11:18:00.756095556 +0000 UTC m=+0.062636730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:18:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451398f743e2ab468950920c4d0e2985a5258a4d01634750e84a0eb9d24b0d37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451398f743e2ab468950920c4d0e2985a5258a4d01634750e84a0eb9d24b0d37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451398f743e2ab468950920c4d0e2985a5258a4d01634750e84a0eb9d24b0d37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/451398f743e2ab468950920c4d0e2985a5258a4d01634750e84a0eb9d24b0d37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:18:00 compute-0 podman[517587]: 2025-10-03 11:18:00.921231718 +0000 UTC m=+0.227772872 container init 23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 11:18:00 compute-0 podman[517587]: 2025-10-03 11:18:00.950914667 +0000 UTC m=+0.257455781 container start 23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:18:00 compute-0 podman[517587]: 2025-10-03 11:18:00.958431546 +0000 UTC m=+0.264972670 container attach 23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:18:01 compute-0 openstack_network_exporter[367524]: ERROR   11:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:18:01 compute-0 openstack_network_exporter[367524]: ERROR   11:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:18:01 compute-0 openstack_network_exporter[367524]: ERROR   11:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:18:01 compute-0 openstack_network_exporter[367524]: ERROR   11:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:18:01 compute-0 openstack_network_exporter[367524]: ERROR   11:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:18:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:01 compute-0 quirky_neumann[517602]: {
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "osd_id": 1,
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "type": "bluestore"
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:    },
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "osd_id": 2,
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "type": "bluestore"
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:    },
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "osd_id": 0,
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:        "type": "bluestore"
Oct  3 11:18:01 compute-0 quirky_neumann[517602]:    }
Oct  3 11:18:01 compute-0 quirky_neumann[517602]: }
Oct  3 11:18:02 compute-0 systemd[1]: libpod-23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b.scope: Deactivated successfully.
Oct  3 11:18:02 compute-0 systemd[1]: libpod-23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b.scope: Consumed 1.067s CPU time.
Oct  3 11:18:02 compute-0 podman[517587]: 2025-10-03 11:18:02.020725893 +0000 UTC m=+1.327267007 container died 23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 11:18:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-451398f743e2ab468950920c4d0e2985a5258a4d01634750e84a0eb9d24b0d37-merged.mount: Deactivated successfully.
Oct  3 11:18:02 compute-0 podman[517587]: 2025-10-03 11:18:02.115113037 +0000 UTC m=+1.421654141 container remove 23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_neumann, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:18:02 compute-0 systemd[1]: libpod-conmon-23ab99c23f0c9d3bccb3d56c2c4e833e898bfaa9b9ea1517f51f68d50a1f7e2b.scope: Deactivated successfully.
Oct  3 11:18:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:18:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:18:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:18:02 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:18:02 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 2e74361b-6f80-41ce-b618-42519284d74f does not exist
Oct  3 11:18:02 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev dd103482-0895-4619-ac71-9d9c962f7a23 does not exist
Oct  3 11:18:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:18:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:18:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:04 compute-0 nova_compute[351685]: 2025-10-03 11:18:04.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:18:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:05 compute-0 podman[517699]: 2025-10-03 11:18:05.859158365 +0000 UTC m=+0.112251945 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:18:05 compute-0 podman[517697]: 2025-10-03 11:18:05.861169579 +0000 UTC m=+0.119101293 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:18:05 compute-0 podman[517709]: 2025-10-03 11:18:05.88253066 +0000 UTC m=+0.107636957 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 11:18:05 compute-0 podman[517698]: 2025-10-03 11:18:05.890588017 +0000 UTC m=+0.146708044 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter)
Oct  3 11:18:05 compute-0 podman[517719]: 2025-10-03 11:18:05.908318224 +0000 UTC m=+0.145270229 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:18:05 compute-0 podman[517720]: 2025-10-03 11:18:05.913633203 +0000 UTC m=+0.140925439 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:18:05 compute-0 podman[517700]: 2025-10-03 11:18:05.919453929 +0000 UTC m=+0.152252381 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:18:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:09 compute-0 nova_compute[351685]: 2025-10-03 11:18:09.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:18:09 compute-0 nova_compute[351685]: 2025-10-03 11:18:09.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:18:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:14 compute-0 nova_compute[351685]: 2025-10-03 11:18:14.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:18:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:18:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:18:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:18 compute-0 podman[517831]: 2025-10-03 11:18:18.83318872 +0000 UTC m=+0.083825028 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:18:18 compute-0 podman[517832]: 2025-10-03 11:18:18.845811293 +0000 UTC m=+0.091831382 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, release-0.7.12=, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Oct  3 11:18:18 compute-0 podman[517833]: 2025-10-03 11:18:18.847106254 +0000 UTC m=+0.085510191 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 11:18:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:19 compute-0 nova_compute[351685]: 2025-10-03 11:18:19.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:18:20 compute-0 nova_compute[351685]: 2025-10-03 11:18:20.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:18:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3330: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:24 compute-0 nova_compute[351685]: 2025-10-03 11:18:24.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:18:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3332: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3333: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:29 compute-0 nova_compute[351685]: 2025-10-03 11:18:29.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:18:29 compute-0 nova_compute[351685]: 2025-10-03 11:18:29.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:18:29 compute-0 nova_compute[351685]: 2025-10-03 11:18:29.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:18:29 compute-0 nova_compute[351685]: 2025-10-03 11:18:29.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:18:29 compute-0 nova_compute[351685]: 2025-10-03 11:18:29.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:18:29 compute-0 nova_compute[351685]: 2025-10-03 11:18:29.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:18:29 compute-0 podman[157165]: time="2025-10-03T11:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:18:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:18:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9113 "" "Go-http-client/1.1"
Oct  3 11:18:31 compute-0 openstack_network_exporter[367524]: ERROR   11:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:18:31 compute-0 openstack_network_exporter[367524]: ERROR   11:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:18:31 compute-0 openstack_network_exporter[367524]: ERROR   11:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:18:31 compute-0 openstack_network_exporter[367524]: ERROR   11:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:18:31 compute-0 openstack_network_exporter[367524]: ERROR   11:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:18:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3336: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:34 compute-0 nova_compute[351685]: 2025-10-03 11:18:34.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:18:34 compute-0 nova_compute[351685]: 2025-10-03 11:18:34.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:18:34 compute-0 nova_compute[351685]: 2025-10-03 11:18:34.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:18:34 compute-0 nova_compute[351685]: 2025-10-03 11:18:34.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:18:34 compute-0 nova_compute[351685]: 2025-10-03 11:18:34.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:18:34 compute-0 nova_compute[351685]: 2025-10-03 11:18:34.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:18:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:35 compute-0 nova_compute[351685]: 2025-10-03 11:18:35.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:18:35 compute-0 nova_compute[351685]: 2025-10-03 11:18:35.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:18:35 compute-0 nova_compute[351685]: 2025-10-03 11:18:35.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:18:36 compute-0 nova_compute[351685]: 2025-10-03 11:18:36.816 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:18:36 compute-0 nova_compute[351685]: 2025-10-03 11:18:36.816 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:18:36 compute-0 nova_compute[351685]: 2025-10-03 11:18:36.816 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:18:36 compute-0 nova_compute[351685]: 2025-10-03 11:18:36.816 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:18:36 compute-0 podman[517892]: 2025-10-03 11:18:36.862760994 +0000 UTC m=+0.114343512 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:18:36 compute-0 podman[517895]: 2025-10-03 11:18:36.875066427 +0000 UTC m=+0.108539376 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:18:36 compute-0 podman[517893]: 2025-10-03 11:18:36.885705567 +0000 UTC m=+0.134317530 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Oct  3 11:18:36 compute-0 podman[517906]: 2025-10-03 11:18:36.886623246 +0000 UTC m=+0.115893901 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930)
Oct  3 11:18:36 compute-0 podman[517894]: 2025-10-03 11:18:36.889155827 +0000 UTC m=+0.122570634 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:18:36 compute-0 podman[517909]: 2025-10-03 11:18:36.895719147 +0000 UTC m=+0.118036910 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
Oct  3 11:18:36 compute-0 podman[517908]: 2025-10-03 11:18:36.907876275 +0000 UTC m=+0.134362121 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:18:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:38 compute-0 nova_compute[351685]: 2025-10-03 11:18:38.775 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:18:38 compute-0 nova_compute[351685]: 2025-10-03 11:18:38.795 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:18:38 compute-0 nova_compute[351685]: 2025-10-03 11:18:38.796 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
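
[editor's note] The network_info payload cached above is plain JSON; the fixed and floating addresses it carries can be pulled out with a few nested loops. A sketch trimmed to just the fields present in this record:

    # Reduced to the fields used here; the real payload carries many more.
    network_info = [{
        "network": {"subnets": [{
            "ips": [{
                "address": "192.168.0.158",
                "floating_ips": [{"address": "192.168.122.250"}],
            }],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", floating)
    # 192.168.0.158 -> ['192.168.122.250']
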
Oct  3 11:18:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3339: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:39 compute-0 nova_compute[351685]: 2025-10-03 11:18:39.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:18:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3340: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:18:41.692 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:18:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:18:41.693 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:18:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:18:41.694 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
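
[editor's note] The three-line acquire/acquired/released pattern above is how oslo_concurrency.lockutils traces every lock, including the waited and held durations. A rough stand-in using only the standard library (a mimic of the log output, not the real implementation):

    import contextlib, threading, time

    _locks: dict = {}

    @contextlib.contextmanager
    def timed_lock(name, owner):
        lock = _locks.setdefault(name, threading.Lock())
        print(f'Acquiring lock "{name}" by "{owner}"')
        t0 = time.monotonic()
        lock.acquire()
        print(f'Lock "{name}" acquired by "{owner}" :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" "released" by "{owner}" :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("_check_child_processes", "ProcessMonitor._check_child_processes"):
        pass  # check child processes here
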
Oct  3 11:18:41 compute-0 nova_compute[351685]: 2025-10-03 11:18:41.792 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:18:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:44 compute-0 nova_compute[351685]: 2025-10-03 11:18:44.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:18:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:45 compute-0 nova_compute[351685]: 2025-10-03 11:18:45.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:18:46
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'backups', 'default.rgw.meta', 'volumes']
Oct  3 11:18:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:18:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:47 compute-0 nova_compute[351685]: 2025-10-03 11:18:47.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:18:47 compute-0 nova_compute[351685]: 2025-10-03 11:18:47.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:18:47 compute-0 nova_compute[351685]: 2025-10-03 11:18:47.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:18:47 compute-0 nova_compute[351685]: 2025-10-03 11:18:47.764 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:18:47 compute-0 nova_compute[351685]: 2025-10-03 11:18:47.764 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:18:47 compute-0 nova_compute[351685]: 2025-10-03 11:18:47.765 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:18:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:18:48 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2467479508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.243 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
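
[editor's note] nova polls Ceph capacity by shelling out exactly as logged: `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`. A minimal reproduction that reads the cluster totals from the JSON (field names per the ceph df JSON schema; run wherever that conf and keyring exist):

    import json, subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(f'{stats["total_avail_bytes"] / 1024**3:.1f} GiB free '
          f'of {stats["total_bytes"] / 1024**3:.1f} GiB')
    # consistent with the pgmap lines: 60 GiB / 60 GiB avail
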
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.337 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.339 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.339 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.812 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.814 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3756MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.814 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.815 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.902 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.903 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.903 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.927 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.944 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.944 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.960 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 11:18:48 compute-0 nova_compute[351685]: 2025-10-03 11:18:48.986 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
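
[editor's note] From the inventory being refreshed above, placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for this provider:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
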
Oct  3 11:18:49 compute-0 nova_compute[351685]: 2025-10-03 11:18:49.048 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:18:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:18:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2201447673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:18:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:49 compute-0 nova_compute[351685]: 2025-10-03 11:18:49.535 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:18:49 compute-0 nova_compute[351685]: 2025-10-03 11:18:49.546 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:18:49 compute-0 nova_compute[351685]: 2025-10-03 11:18:49.571 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:18:49 compute-0 nova_compute[351685]: 2025-10-03 11:18:49.573 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:18:49 compute-0 nova_compute[351685]: 2025-10-03 11:18:49.573 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:18:49 compute-0 nova_compute[351685]: 2025-10-03 11:18:49.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:18:49 compute-0 podman[518072]: 2025-10-03 11:18:49.883626439 +0000 UTC m=+0.122856783 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Oct  3 11:18:49 compute-0 podman[518070]: 2025-10-03 11:18:49.887626837 +0000 UTC m=+0.132019816 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:18:49 compute-0 podman[518071]: 2025-10-03 11:18:49.916827759 +0000 UTC m=+0.153607115 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_id=edpm, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:18:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:51 compute-0 nova_compute[351685]: 2025-10-03 11:18:51.575 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:18:51 compute-0 nova_compute[351685]: 2025-10-03 11:18:51.576 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:18:51 compute-0 nova_compute[351685]: 2025-10-03 11:18:51.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:18:52 compute-0 nova_compute[351685]: 2025-10-03 11:18:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:18:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:53 compute-0 nova_compute[351685]: 2025-10-03 11:18:53.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:18:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:18:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/71738225' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:18:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:18:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/71738225' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
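
[editor's note] These audit records show a remote librados client (the volume service at 192.168.122.10) issuing mon commands. The same calls can be made directly with the python-rados bindings; a sketch, assuming the local ceph.conf and the client.openstack keyring are readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    for cmd in (
        {"prefix": "df", "format": "json"},
        {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"},
    ):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, out[:60])
    cluster.shutdown()
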
Oct  3 11:18:54 compute-0 nova_compute[351685]: 2025-10-03 11:18:54.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:18:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:18:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
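
[editor's note] Each pg target above is reproducible as usage_ratio * bias * total target PGs. The logged values imply 300 target PGs, e.g. the default mon_target_pg_per_osd of 100 across 3 OSDs (an assumption; the log does not state it):

    TOTAL_TARGET_PGS = 100 * 3  # assumed: mon_target_pg_per_osd * OSD count

    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.000551649390343166,  1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, usage_ratio, bias in pools:
        print(name, usage_ratio * bias * TOTAL_TARGET_PGS)
    # 0.0021557..., 0.1654948..., 0.0006104... -- matching the logged
    # targets, which the autoscaler then quantizes toward a power of two.
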
Oct  3 11:18:56 compute-0 nova_compute[351685]: 2025-10-03 11:18:56.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:18:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:18:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:18:59 compute-0 nova_compute[351685]: 2025-10-03 11:18:59.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:18:59 compute-0 nova_compute[351685]: 2025-10-03 11:18:59.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:18:59 compute-0 nova_compute[351685]: 2025-10-03 11:18:59.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  3 11:18:59 compute-0 nova_compute[351685]: 2025-10-03 11:18:59.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:18:59 compute-0 nova_compute[351685]: 2025-10-03 11:18:59.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:18:59 compute-0 nova_compute[351685]: 2025-10-03 11:18:59.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
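
[editor's note] The idle/probe/ACTIVE sequence above is the OVSDB client keepalive: after roughly 5000 ms of silence it sends an inactivity probe and drops to IDLE; the reply returns it to ACTIVE, while a second silent interval would force a reconnect. A simplified sketch of that state machine (the real logic lives in ovs.reconnect):

    PROBE_INTERVAL_MS = 5000

    class Conn:
        state = "ACTIVE"

        def on_timer(self, idle_ms):
            if self.state == "ACTIVE" and idle_ms >= PROBE_INTERVAL_MS:
                print(f"idle {idle_ms} ms, sending inactivity probe; entering IDLE")
                self.state = "IDLE"
            elif self.state == "IDLE":
                print("no reply to inactivity probe; reconnecting")
                self.state = "BACKOFF"

        def on_reply(self):
            if self.state == "IDLE":
                print("entering ACTIVE")
                self.state = "ACTIVE"

    c = Conn()
    c.on_timer(5003)  # probe sent, as in the record above
    c.on_reply()      # peer answered in time
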
Oct  3 11:18:59 compute-0 podman[157165]: time="2025-10-03T11:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:18:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:18:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9113 "" "Go-http-client/1.1"
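
[editor's note] Those GETs are the podman_exporter scraping the libpod REST API over the unix socket configured earlier (unix:///run/podman/podman.sock). The stdlib can speak HTTP over that socket with a small connection shim:

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            # Swap the TCP socket for the podman unix socket.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")
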
Oct  3 11:19:01 compute-0 openstack_network_exporter[367524]: ERROR   11:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:19:01 compute-0 openstack_network_exporter[367524]: ERROR   11:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:19:01 compute-0 openstack_network_exporter[367524]: ERROR   11:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:19:01 compute-0 openstack_network_exporter[367524]: ERROR   11:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:19:01 compute-0 openstack_network_exporter[367524]: ERROR   11:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
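
[editor's note] The exporter errors above mean it found no ovs/ovn control sockets; ovs-appctl targets are discovered as *.ctl files under the daemon rundir. A quick existence check, assuming the default rundir /var/run/openvswitch:

    import glob

    sockets = glob.glob("/var/run/openvswitch/*.ctl")
    print(sockets or "no control socket files found")
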
Oct  3 11:19:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:19:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:19:03 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:04 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:19:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:19:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:19:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:19:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:19:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:04 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d6ad18e9-4f2c-43a5-983c-1347ab26dbcf does not exist
Oct  3 11:19:04 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d70dad59-4e31-4864-9d73-254e7bdc06d8 does not exist
Oct  3 11:19:04 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4fecb7a2-ff60-4e38-a227-1d6cb53a138f does not exist
Oct  3 11:19:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:19:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:19:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:19:04 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:19:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:19:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:19:04 compute-0 nova_compute[351685]: 2025-10-03 11:19:04.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:19:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:19:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:05 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:19:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:05 compute-0 podman[518517]: 2025-10-03 11:19:05.564680602 +0000 UTC m=+0.084111826 container create fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_darwin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:19:05 compute-0 podman[518517]: 2025-10-03 11:19:05.526862915 +0000 UTC m=+0.046294190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:19:05 compute-0 systemd[1]: Started libpod-conmon-fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a.scope.
Oct  3 11:19:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:19:05 compute-0 podman[518517]: 2025-10-03 11:19:05.736067765 +0000 UTC m=+0.255499009 container init fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:19:05 compute-0 podman[518517]: 2025-10-03 11:19:05.753830661 +0000 UTC m=+0.273261885 container start fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_darwin, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 11:19:05 compute-0 podman[518517]: 2025-10-03 11:19:05.760386921 +0000 UTC m=+0.279818155 container attach fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_darwin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:19:05 compute-0 sweet_darwin[518533]: 167 167
Oct  3 11:19:05 compute-0 systemd[1]: libpod-fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a.scope: Deactivated successfully.
Oct  3 11:19:05 compute-0 podman[518517]: 2025-10-03 11:19:05.76788382 +0000 UTC m=+0.287315114 container died fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Oct  3 11:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-afbe4cd89d0acd752e0127cdd5499bc197d6ae1a2ccf45ed1018d9f3622584d9-merged.mount: Deactivated successfully.
Oct  3 11:19:05 compute-0 podman[518517]: 2025-10-03 11:19:05.836082197 +0000 UTC m=+0.355513381 container remove fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:19:05 compute-0 systemd[1]: libpod-conmon-fe70f15ef6d82f5b7f0ad07426644ba1a6feef19c68daf2c69dd36a6e2fecf6a.scope: Deactivated successfully.
Oct  3 11:19:06 compute-0 podman[518556]: 2025-10-03 11:19:06.109073504 +0000 UTC m=+0.092626668 container create 773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:19:06 compute-0 podman[518556]: 2025-10-03 11:19:06.073400354 +0000 UTC m=+0.056953608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:19:06 compute-0 systemd[1]: Started libpod-conmon-773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54.scope.
Oct  3 11:19:06 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936faef3d7a38dcc59b94ab4ff64a0a3c62f4248e37533f8ebbc6d9c7e7df715/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936faef3d7a38dcc59b94ab4ff64a0a3c62f4248e37533f8ebbc6d9c7e7df715/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936faef3d7a38dcc59b94ab4ff64a0a3c62f4248e37533f8ebbc6d9c7e7df715/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936faef3d7a38dcc59b94ab4ff64a0a3c62f4248e37533f8ebbc6d9c7e7df715/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936faef3d7a38dcc59b94ab4ff64a0a3c62f4248e37533f8ebbc6d9c7e7df715/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:06 compute-0 podman[518556]: 2025-10-03 11:19:06.295029011 +0000 UTC m=+0.278582205 container init 773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 11:19:06 compute-0 podman[518556]: 2025-10-03 11:19:06.320956439 +0000 UTC m=+0.304509613 container start 773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:19:06 compute-0 podman[518556]: 2025-10-03 11:19:06.326767824 +0000 UTC m=+0.310321018 container attach 773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 11:19:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:07 compute-0 wonderful_hawking[518572]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:19:07 compute-0 wonderful_hawking[518572]: --> relative data size: 1.0
Oct  3 11:19:07 compute-0 wonderful_hawking[518572]: --> All data devices are unavailable
Oct  3 11:19:07 compute-0 systemd[1]: libpod-773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54.scope: Deactivated successfully.
Oct  3 11:19:07 compute-0 podman[518556]: 2025-10-03 11:19:07.626322106 +0000 UTC m=+1.609875310 container died 773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:19:07 compute-0 systemd[1]: libpod-773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54.scope: Consumed 1.258s CPU time.
Oct  3 11:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-936faef3d7a38dcc59b94ab4ff64a0a3c62f4248e37533f8ebbc6d9c7e7df715-merged.mount: Deactivated successfully.
Oct  3 11:19:07 compute-0 podman[518556]: 2025-10-03 11:19:07.730362747 +0000 UTC m=+1.713915901 container remove 773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:19:07 compute-0 systemd[1]: libpod-conmon-773ae07974df24bfae5a214325831a2de74c2d59e669409a5434dcc69483fa54.scope: Deactivated successfully.
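The wonderful_hawking lines above are ceph-volume's device-selection report: zero physical and three LVM data devices were passed in, and all were judged unavailable (presumably because they already carry the OSDs listed further below), so the container exits without creating anything. A minimal sketch of the same availability check, assuming the upstream `ceph-volume inventory --format json` command and its path/available/rejected_reasons fields (not named in this log):

    import json
    import subprocess

    # Run the same device scan ceph-volume performs; each entry carries an
    # "available" flag plus the rejection reasons that make a device count
    # as unavailable in the report above.
    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))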
Oct  3 11:19:07 compute-0 podman[518630]: 2025-10-03 11:19:07.806064514 +0000 UTC m=+0.093431173 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid)
Oct  3 11:19:07 compute-0 podman[518610]: 2025-10-03 11:19:07.817808729 +0000 UTC m=+0.145886039 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 11:19:07 compute-0 podman[518602]: 2025-10-03 11:19:07.823221242 +0000 UTC m=+0.154462213 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:19:07 compute-0 podman[518604]: 2025-10-03 11:19:07.823368647 +0000 UTC m=+0.141324413 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Oct  3 11:19:07 compute-0 podman[518611]: 2025-10-03 11:19:07.823221312 +0000 UTC m=+0.133637827 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:19:07 compute-0 podman[518612]: 2025-10-03 11:19:07.849324846 +0000 UTC m=+0.148035358 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct  3 11:19:07 compute-0 podman[518626]: 2025-10-03 11:19:07.90552325 +0000 UTC m=+0.209539271 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
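The burst of health_status events above comes from podman's per-container healthcheck timers: each container's configured test (the '/openstack/healthcheck' entry visible in config_data) is executed and the result logged together with its failing streak. A hedged sketch reading the same state back, assuming `podman inspect` and one container_name taken from the events (current podman exposes State.Health; older builds used State.Healthcheck):

    import json
    import subprocess

    name = "ovn_controller"  # any container_name from the events above
    state = json.loads(subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout)[0]["State"]
    # Fall back across podman versions; both keys hold Status/FailingStreak.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(name, health.get("Status", "unknown"),
          "failing streak:", health.get("FailingStreak"))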
Oct  3 11:19:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:08 compute-0 podman[518883]: 2025-10-03 11:19:08.604849748 +0000 UTC m=+0.080436759 container create 9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 11:19:08 compute-0 podman[518883]: 2025-10-03 11:19:08.568864879 +0000 UTC m=+0.044451960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:19:08 compute-0 systemd[1]: Started libpod-conmon-9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4.scope.
Oct  3 11:19:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:19:08 compute-0 podman[518883]: 2025-10-03 11:19:08.754200507 +0000 UTC m=+0.229787588 container init 9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_pascal, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:19:08 compute-0 podman[518883]: 2025-10-03 11:19:08.773398749 +0000 UTC m=+0.248985790 container start 9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 11:19:08 compute-0 podman[518883]: 2025-10-03 11:19:08.781441716 +0000 UTC m=+0.257028807 container attach 9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 11:19:08 compute-0 thirsty_pascal[518898]: 167 167
Oct  3 11:19:08 compute-0 systemd[1]: libpod-9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4.scope: Deactivated successfully.
Oct  3 11:19:08 compute-0 podman[518883]: 2025-10-03 11:19:08.785931489 +0000 UTC m=+0.261518520 container died 9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_pascal, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-84731df55d7566e0cba05627ebd1ece4e426c2962535d145fa9e3aa52307982a-merged.mount: Deactivated successfully.
Oct  3 11:19:08 compute-0 podman[518883]: 2025-10-03 11:19:08.853119764 +0000 UTC m=+0.328706815 container remove 9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:19:08 compute-0 systemd[1]: libpod-conmon-9a7fd941b92f7bd93a6501aa14c7df26e8f170b74106e64e832fbefaa3cf08c4.scope: Deactivated successfully.
Oct  3 11:19:09 compute-0 podman[518922]: 2025-10-03 11:19:09.119091106 +0000 UTC m=+0.076875585 container create f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:19:09 compute-0 podman[518922]: 2025-10-03 11:19:09.084771251 +0000 UTC m=+0.042555780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:19:09 compute-0 systemd[1]: Started libpod-conmon-f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084.scope.
Oct  3 11:19:09 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa06d4c348828ba5a1d52d937e8ae2f136544dfa3daace52effd45bfadae3310/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa06d4c348828ba5a1d52d937e8ae2f136544dfa3daace52effd45bfadae3310/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa06d4c348828ba5a1d52d937e8ae2f136544dfa3daace52effd45bfadae3310/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa06d4c348828ba5a1d52d937e8ae2f136544dfa3daace52effd45bfadae3310/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
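The kernel notices above flag the 32-bit time_t limit on these xfs mounts: 0x7fffffff seconds after the Unix epoch is the last representable timestamp. A one-line check of that arithmetic:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after the epoch, the largest value a
    # signed 32-bit time_t can hold -- the limit the kernel warns about.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00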
Oct  3 11:19:09 compute-0 podman[518922]: 2025-10-03 11:19:09.298807054 +0000 UTC m=+0.256591573 container init f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:19:09 compute-0 podman[518922]: 2025-10-03 11:19:09.318153472 +0000 UTC m=+0.275937941 container start f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:19:09 compute-0 podman[518922]: 2025-10-03 11:19:09.324854206 +0000 UTC m=+0.282638685 container attach f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 11:19:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:09 compute-0 nova_compute[351685]: 2025-10-03 11:19:09.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:19:10 compute-0 awesome_noether[518938]: {
Oct  3 11:19:10 compute-0 awesome_noether[518938]:    "0": [
Oct  3 11:19:10 compute-0 awesome_noether[518938]:        {
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "devices": [
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "/dev/loop3"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            ],
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_name": "ceph_lv0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_size": "21470642176",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "name": "ceph_lv0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "tags": {
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cluster_name": "ceph",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.crush_device_class": "",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.encrypted": "0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osd_id": "0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.type": "block",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.vdo": "0"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            },
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "type": "block",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "vg_name": "ceph_vg0"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:        }
Oct  3 11:19:10 compute-0 awesome_noether[518938]:    ],
Oct  3 11:19:10 compute-0 awesome_noether[518938]:    "1": [
Oct  3 11:19:10 compute-0 awesome_noether[518938]:        {
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "devices": [
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "/dev/loop4"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            ],
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_name": "ceph_lv1",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_size": "21470642176",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "name": "ceph_lv1",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "tags": {
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cluster_name": "ceph",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.crush_device_class": "",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.encrypted": "0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osd_id": "1",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.type": "block",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.vdo": "0"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            },
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "type": "block",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "vg_name": "ceph_vg1"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:        }
Oct  3 11:19:10 compute-0 awesome_noether[518938]:    ],
Oct  3 11:19:10 compute-0 awesome_noether[518938]:    "2": [
Oct  3 11:19:10 compute-0 awesome_noether[518938]:        {
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "devices": [
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "/dev/loop5"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            ],
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_name": "ceph_lv2",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_size": "21470642176",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "name": "ceph_lv2",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "tags": {
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.cluster_name": "ceph",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.crush_device_class": "",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.encrypted": "0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osd_id": "2",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.type": "block",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:                "ceph.vdo": "0"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            },
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "type": "block",
Oct  3 11:19:10 compute-0 awesome_noether[518938]:            "vg_name": "ceph_vg2"
Oct  3 11:19:10 compute-0 awesome_noether[518938]:        }
Oct  3 11:19:10 compute-0 awesome_noether[518938]:    ]
Oct  3 11:19:10 compute-0 awesome_noether[518938]: }
Oct  3 11:19:10 compute-0 systemd[1]: libpod-f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084.scope: Deactivated successfully.
Oct  3 11:19:10 compute-0 podman[518947]: 2025-10-03 11:19:10.302828291 +0000 UTC m=+0.049274635 container died f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:19:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa06d4c348828ba5a1d52d937e8ae2f136544dfa3daace52effd45bfadae3310-merged.mount: Deactivated successfully.
Oct  3 11:19:10 compute-0 podman[518947]: 2025-10-03 11:19:10.440760994 +0000 UTC m=+0.187207268 container remove f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noether, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:19:10 compute-0 systemd[1]: libpod-conmon-f297aa6acc40782a52e00a297197722b323e9da4a086fb6549422132dfe13084.scope: Deactivated successfully.
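The awesome_noether output above is one JSON document split across journal lines; its shape matches `ceph-volume lvm list --format json`: a dict keyed by OSD id, each value a list of logical volumes with their backing devices and ceph.* tags. A sketch, assuming that command and the shape shown, that reduces it to an OSD -> LV -> PV summary:

    import json
    import subprocess

    # Shape assumed from the listing above:
    # {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"], ...}], ...}
    listing = json.loads(subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {', '.join(lv['devices'])}")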
Oct  3 11:19:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:11 compute-0 podman[519101]: 2025-10-03 11:19:11.599930273 +0000 UTC m=+0.079235000 container create fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kepler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:19:11 compute-0 podman[519101]: 2025-10-03 11:19:11.567396685 +0000 UTC m=+0.046701462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:19:11 compute-0 systemd[1]: Started libpod-conmon-fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f.scope.
Oct  3 11:19:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:19:11 compute-0 podman[519101]: 2025-10-03 11:19:11.738951293 +0000 UTC m=+0.218256000 container init fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:19:11 compute-0 podman[519101]: 2025-10-03 11:19:11.749812299 +0000 UTC m=+0.229117006 container start fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kepler, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:19:11 compute-0 podman[519101]: 2025-10-03 11:19:11.754883781 +0000 UTC m=+0.234188498 container attach fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kepler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:19:11 compute-0 sad_kepler[519116]: 167 167
Oct  3 11:19:11 compute-0 systemd[1]: libpod-fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f.scope: Deactivated successfully.
Oct  3 11:19:11 compute-0 podman[519101]: 2025-10-03 11:19:11.761088639 +0000 UTC m=+0.240393386 container died fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct  3 11:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-882edf6b140915a98ce1060d790726632845d94e761354cc3d7b3104e91e98af-merged.mount: Deactivated successfully.
Oct  3 11:19:11 compute-0 podman[519101]: 2025-10-03 11:19:11.848081537 +0000 UTC m=+0.327386274 container remove fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kepler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:19:11 compute-0 systemd[1]: libpod-conmon-fe0ad362d9ed4b7bb969733c0884da15bf2f70e1caff14fefe352ffc91c9c07f.scope: Deactivated successfully.
Oct  3 11:19:12 compute-0 podman[519139]: 2025-10-03 11:19:12.116144375 +0000 UTC m=+0.095537111 container create de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:19:12 compute-0 podman[519139]: 2025-10-03 11:19:12.082723298 +0000 UTC m=+0.062116084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:19:12 compute-0 systemd[1]: Started libpod-conmon-de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4.scope.
Oct  3 11:19:12 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc88870477451fc414c9883688000f636d56c55043aefd5c6dd8e6f9549b5b29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc88870477451fc414c9883688000f636d56c55043aefd5c6dd8e6f9549b5b29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc88870477451fc414c9883688000f636d56c55043aefd5c6dd8e6f9549b5b29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc88870477451fc414c9883688000f636d56c55043aefd5c6dd8e6f9549b5b29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:19:12 compute-0 podman[519139]: 2025-10-03 11:19:12.265825803 +0000 UTC m=+0.245218549 container init de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 11:19:12 compute-0 podman[519139]: 2025-10-03 11:19:12.287546198 +0000 UTC m=+0.266938924 container start de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:19:12 compute-0 podman[519139]: 2025-10-03 11:19:12.29295976 +0000 UTC m=+0.272352506 container attach de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 11:19:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]: {
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "osd_id": 1,
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "type": "bluestore"
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:    },
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "osd_id": 2,
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "type": "bluestore"
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:    },
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "osd_id": 0,
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:        "type": "bluestore"
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]:    }
Oct  3 11:19:13 compute-0 heuristic_sutherland[519155]: }
Oct  3 11:19:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:13 compute-0 systemd[1]: libpod-de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4.scope: Deactivated successfully.
Oct  3 11:19:13 compute-0 systemd[1]: libpod-de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4.scope: Consumed 1.265s CPU time.
Oct  3 11:19:13 compute-0 podman[519188]: 2025-10-03 11:19:13.634688517 +0000 UTC m=+0.052880889 container died de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:19:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc88870477451fc414c9883688000f636d56c55043aefd5c6dd8e6f9549b5b29-merged.mount: Deactivated successfully.
Oct  3 11:19:13 compute-0 podman[519188]: 2025-10-03 11:19:13.705967024 +0000 UTC m=+0.124159396 container remove de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:19:13 compute-0 systemd[1]: libpod-conmon-de27f7c6b56ecc86a2b70a88d85fcd8df836c8e873a031b9c91bb3752c8891d4.scope: Deactivated successfully.
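The heuristic_sutherland output is a second inventory, keyed by OSD uuid with ceph_fsid/device/osd_id/type fields. That shape matches `ceph-volume raw list` output, but the log never names the subcommand, so treat both the command and its default-JSON output as assumptions. A sketch cross-checking each OSD against the cluster fsid carried in the lv_tags above:

    import json
    import subprocess

    EXPECTED_FSID = "9b4e8c9a-5555-5510-a631-4742a1182561"  # ceph.cluster_fsid from the lv_tags above

    # Assumed to emit the JSON shape shown in the log: a dict keyed by
    # osd_uuid with ceph_fsid, device, osd_id, and type per entry.
    raw = json.loads(subprocess.run(
        ["ceph-volume", "raw", "list"],
        check=True, capture_output=True, text=True,
    ).stdout)
    for osd_uuid, meta in raw.items():
        ok = meta["ceph_fsid"] == EXPECTED_FSID
        print(f"osd.{meta['osd_id']} ({meta['type']}) on {meta['device']}:",
              "fsid ok" if ok else "fsid MISMATCH")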
Oct  3 11:19:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:19:13 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:19:13 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:13 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a5c390c3-6d18-4c03-985d-de2ab1f5322b does not exist
Oct  3 11:19:13 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev db3eeac5-4f3c-48ab-aaf7-46924175f8c8 does not exist
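The audited mon_command records above show cephadm persisting per-host device state under mgr/cephadm/... config-keys. The same store is reachable from the CLI; a sketch using the key from the log, with a placeholder value:

    import subprocess

    # Store a value under the config-key seen in the handle_command line above.
    subprocess.run(
        ["ceph", "config-key", "set",
         "mgr/cephadm/host.compute-0.devices.0", "{}"],
        check=True,
    )

    # Read it back.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)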
Oct  3 11:19:14 compute-0 nova_compute[351685]: 2025-10-03 11:19:14.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:19:14 compute-0 nova_compute[351685]: 2025-10-03 11:19:14.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:19:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:19:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:19:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:19:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
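The pgmap DBG lines recur every couple of seconds with a fixed shape. A sketch extracting the version, PG count, and usage figures from one of them; the pattern mirrors only the fields visible in these lines:

    import re

    line = ("pgmap v3358: 321 pgs: 321 active+clean; "
            "78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail")

    m = re.search(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail",
        line,
    )
    if m:
        print(m.group("ver"), m.group("pgs"), m.group("used"), m.group("avail"))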
Oct  3 11:19:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:19 compute-0 nova_compute[351685]: 2025-10-03 11:19:19.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:19:20 compute-0 podman[519255]: 2025-10-03 11:19:20.873378908 +0000 UTC m=+0.105013144 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:19:20 compute-0 podman[519253]: 2025-10-03 11:19:20.904086169 +0000 UTC m=+0.147493910 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:19:20 compute-0 podman[519254]: 2025-10-03 11:19:20.919383147 +0000 UTC m=+0.161194638 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler)
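The three health_status records above report health_status=healthy for the ceilometer_agent_ipmi, podman_exporter, and kepler containers. The configured check (the /openstack/healthcheck scripts mounted in config_data) can be exercised on demand with "podman healthcheck run", which exits non-zero on failure; a sketch:

    import subprocess

    # Container names come from the health_status records above.
    for name in ("ceilometer_agent_ipmi", "podman_exporter", "kepler"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")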
Oct  3 11:19:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:24 compute-0 nova_compute[351685]: 2025-10-03 11:19:24.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:19:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:29 compute-0 nova_compute[351685]: 2025-10-03 11:19:29.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:19:29 compute-0 podman[157165]: time="2025-10-03T11:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:19:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:19:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9115 "" "Go-http-client/1.1"
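The two access-log lines above are the podman_exporter hitting the libpod REST API over the unix socket mounted into it (/run/podman/podman.sock in its config_data). A sketch issuing the same containers/json request with only the standard library:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket (path from the exporter's volume config)."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the GET line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for ctr in json.loads(body):
        print(ctr.get("Names"), ctr.get("State"))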
Oct  3 11:19:31 compute-0 openstack_network_exporter[367524]: ERROR   11:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:19:31 compute-0 openstack_network_exporter[367524]: ERROR   11:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:19:31 compute-0 openstack_network_exporter[367524]: ERROR   11:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:19:31 compute-0 openstack_network_exporter[367524]: ERROR   11:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:19:31 compute-0 openstack_network_exporter[367524]: ERROR   11:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
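The exporter errors above all stem from ovs-appctl-style calls finding no control socket files: neither ovsdb-server nor ovn-northd run on this node, and no userspace datapath exists for the dpif-netdev commands. A pre-flight check along the same lines; the rundir path is the conventional OVS location and may differ per deployment:

    import glob
    import subprocess

    # ovs-vswitchd creates a control socket named ovs-vswitchd.<pid>.ctl.
    ctl_files = glob.glob("/var/run/openvswitch/ovs-vswitchd.*.ctl")
    if not ctl_files:
        print("no control socket files found for ovs-vswitchd")
    else:
        out = subprocess.run(
            ["ovs-appctl", "--target", ctl_files[0], "dpif-netdev/pmd-rxq-show"],
            capture_output=True, text=True,
        )
        print(out.stdout or out.stderr)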
Oct  3 11:19:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:34 compute-0 nova_compute[351685]: 2025-10-03 11:19:34.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:19:34 compute-0 nova_compute[351685]: 2025-10-03 11:19:34.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:19:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3368: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:37 compute-0 nova_compute[351685]: 2025-10-03 11:19:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:19:37 compute-0 nova_compute[351685]: 2025-10-03 11:19:37.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:19:37 compute-0 nova_compute[351685]: 2025-10-03 11:19:37.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:19:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:38 compute-0 nova_compute[351685]: 2025-10-03 11:19:38.329 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:19:38 compute-0 nova_compute[351685]: 2025-10-03 11:19:38.330 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:19:38 compute-0 nova_compute[351685]: 2025-10-03 11:19:38.330 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:19:38 compute-0 nova_compute[351685]: 2025-10-03 11:19:38.330 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
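The nova lines above show the periodic _heal_instance_info_cache task taking the "refresh_cache-<instance uuid>" lock before forcefully refreshing the instance's network info. The locking primitive is oslo.concurrency's in-process lock; a stripped-down analogue using the uuid from the log:

    from oslo_concurrency import lockutils

    instance_uuid = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"  # from the log above

    # Internal (threading) lock by default, matching the Acquiring/Acquired
    # lockutils debug lines; the body is a placeholder for the cache refresh.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        pass  # ... rebuild the cached network info for the instance ...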
Oct  3 11:19:38 compute-0 podman[519314]: 2025-10-03 11:19:38.87507959 +0000 UTC m=+0.115620533 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:19:38 compute-0 podman[519323]: 2025-10-03 11:19:38.881811824 +0000 UTC m=+0.100450318 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct  3 11:19:38 compute-0 podman[519317]: 2025-10-03 11:19:38.892939719 +0000 UTC m=+0.122326136 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd)
Oct  3 11:19:38 compute-0 podman[519316]: 2025-10-03 11:19:38.90642492 +0000 UTC m=+0.134879077 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 11:19:38 compute-0 podman[519315]: 2025-10-03 11:19:38.914587381 +0000 UTC m=+0.148606856 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, version=9.6, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 11:19:38 compute-0 podman[519344]: 2025-10-03 11:19:38.93432426 +0000 UTC m=+0.125275490 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:19:38 compute-0 podman[519335]: 2025-10-03 11:19:38.945465546 +0000 UTC m=+0.156089284 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:19:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:39 compute-0 nova_compute[351685]: 2025-10-03 11:19:39.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.904 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them; polling may therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.905 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.905 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
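Each "Registering pollster [<stevedore.extension.Extension ...>]" line above corresponds to one plugin loaded through stevedore. A sketch listing the same compute pollsters; the namespace name follows ceilometer's entry-point convention and is an assumption here:

    from stevedore import extension

    # invoke_on_load=False: just enumerate the entry points without
    # instantiating the pollsters (instantiation needs a ceilometer config).
    mgr = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",
        invoke_on_load=False,
    )
    for ext in mgr:
        print(ext.name)  # e.g. network.outgoing.packets.drop, disk.device.*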
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.916 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:19:40.917387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.927 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.929 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.930 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:19:40.930096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.931 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.932 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:19:40.932516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.971 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.973 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.973 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.974 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.975 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.975 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.975 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.976 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:40.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:19:40.976224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.051 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:19:41.052037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
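The recurring "not configured in a source for polling that requires coordination ... hashrings are the following [None]" lines mean no polling source on this agent declares a coordination group, so every pollster runs against all locally discovered instances. With coordination enabled, resources would instead be partitioned across agents so each instance is polled exactly once. A toy modulo partition conveys the idea (ceilometer actually uses tooz hash rings, which rebalance far more gently than this):

    import hashlib

    def owner(resource_id, agents):
        # Toy partition by hash; NOT a true consistent-hash ring.
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return agents[digest % len(agents)]

    agents = ["compute-0", "compute-1"]
    print(owner("b43db93c-a4fe-46e9-8418-eedf4f5c135a", agents))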
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:19:41.055560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.056 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
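The disk.device.* meters polled here are cumulative counters (840/173/109 total read requests per device since the instance started), so any per-second figure has to be derived from two successive polls. Illustrative numbers only:

    def rate(prev_value, prev_ts, cur_value, cur_ts):
        # Per-second rate between two samples of a cumulative counter.
        return (cur_value - prev_value) / (cur_ts - prev_ts)

    # e.g. one device polled 300 s apart:
    print(rate(700, 0.0, 840, 300.0), "read requests/s")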
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:19:41.058965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.064 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:19:41.063653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
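disk.device.usage and disk.device.allocation report bytes, and 1073741824 is exactly 1 GiB (2**30); the 485376-byte third device is plausibly a config drive. A hedged sketch of where such numbers can come from with libvirt-python, whose dom.blockInfo() returns [capacity, allocation, physical]; the connection URI and device name are assumptions:

    import libvirt  # pip install libvirt-python

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    capacity, allocation, physical = dom.blockInfo("vda", 0)
    print(capacity, allocation, physical)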
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.067 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.070 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:19:41.067830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:19:41.071611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.105 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.106 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
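power.state volume 1 matches libvirt's VIR_DOMAIN_RUNNING. A hedged spot-check with libvirt-python (the URI is an assumption):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    state, reason = dom.state()
    print(state == libvirt.VIR_DOMAIN_RUNNING)  # VIR_DOMAIN_RUNNING == 1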
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.106 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.107 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.107 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.107 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.108 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.108 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:19:41.107516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
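Note the two worker columns: the polling loop logs as "14" while the "Updated heartbeat" confirmations come from worker "12", so those confirmations interleave slightly out of order (the disk.device.write.bytes heartbeat at 11:19:41.071 lands among the power.state lines above). A toy version of that producer/consumer split, assuming the agent hands status updates to a separate worker (an assumption about the mechanism, not ceilometer's actual process layout):

    import queue, threading
    from datetime import datetime, timezone

    updates = queue.Queue()
    status = {}

    def status_writer():
        # Separate worker: records "Updated heartbeat for <name>".
        while (name := updates.get()) is not None:
            status[name] = datetime.now(timezone.utc)

    writer = threading.Thread(target=status_writer)
    writer.start()
    updates.put("disk.device.write.latency")  # sent from the polling loop
    updates.put(None)                         # shut the worker down
    writer.join()
    print(status)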
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.110 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.110 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.110 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.111 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:19:41.110697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.112 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.113 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.114 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:19:41.113793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
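The skip line is the one break in the pattern: network.incoming.bytes.rate never reaches the polling stage because its discovery pass contributed no resources that still need handling this cycle. One loose illustration of per-cycle deduplication (ceilometer's exact skip rule may differ):

    seen = set()

    def resources_for_cycle(discovered):
        # Return only resources not already scheduled this cycle.
        new = [r for r in discovered if r not in seen]
        seen.update(new)
        return new

    print(resources_for_cycle(["b43db93c"]))  # first pollster: polls it
    print(resources_for_cycle(["b43db93c"]))  # later duplicate: [] -> skip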
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.115 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.116 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:19:41.116214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.117 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.118 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:19:41.118507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.119 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.119 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.120 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.120 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.121 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.122 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:19:41.121009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.123 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.123 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.125 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:19:41.123124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.125 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 100950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
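The cpu meter is cumulative guest CPU time in nanoseconds, so 100950000000 means this instance has consumed about 100.95 s of CPU since it started. A utilisation percentage falls out of two successive polls (illustrative numbers, assuming one vCPU):

    def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus=1):
        # Fraction of available CPU time used between two polls.
        return 100.0 * (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    print(cpu_util_percent(100650000000, 100950000000, 300))  # -> 0.1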
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:19:41.125634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:19:41.127803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.128 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.129 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.130 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:19:41.129867) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.131 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.132 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:19:41.131993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.133 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.133 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.134 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:19:41.134136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
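memory.usage is reported in MB, and the fractional 48.81640625 is just a KiB figure divided by 1024 (49988 KiB / 1024 = 48.81640625 MB). A hedged sketch with libvirt-python; which memoryStats() keys are present depends on the guest balloon driver, so the 'available'/'unused' arithmetic below is an assumption:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    stats = dom.memoryStats()  # dict of KiB values, e.g. 'rss', 'available'
    usage_kib = stats.get("available", 0) - stats.get("unused", 0)
    print(usage_kib / 1024.0, "MB")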
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.135 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.136 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.136 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.136 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:19:41.136906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.137 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.138 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.138 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.139 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:19:41.139072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.139 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
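The network.* samples (incoming bytes 2856, outgoing packets 26, and so on) correspond to per-vNIC interface counters. With libvirt-python, dom.interfaceStats() returns the familiar 8-tuple; the tap device name below is an assumption:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("b43db93c-a4fe-46e9-8418-eedf4f5c135a")
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap0")
    print(rx_bytes, tx_packets)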
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.142 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.143 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.144 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.144 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.144 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.144 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.144 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:19:41.144 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:19:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:19:41.694 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:19:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:19:41.695 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:19:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:19:41.696 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:19:41 compute-0 nova_compute[351685]: 2025-10-03 11:19:41.830 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:19:41 compute-0 nova_compute[351685]: 2025-10-03 11:19:41.857 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:19:41 compute-0 nova_compute[351685]: 2025-10-03 11:19:41.858 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:19:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:44 compute-0 nova_compute[351685]: 2025-10-03 11:19:44.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:19:44 compute-0 nova_compute[351685]: 2025-10-03 11:19:44.854 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:19:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3372: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:19:46
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.meta']
Oct  3 11:19:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:19:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:47 compute-0 nova_compute[351685]: 2025-10-03 11:19:47.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:19:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.776 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.776 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:19:49 compute-0 nova_compute[351685]: 2025-10-03 11:19:49.776 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:19:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:19:50 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918452437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:19:50 compute-0 nova_compute[351685]: 2025-10-03 11:19:50.330 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:19:50 compute-0 nova_compute[351685]: 2025-10-03 11:19:50.439 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:19:50 compute-0 nova_compute[351685]: 2025-10-03 11:19:50.439 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:19:50 compute-0 nova_compute[351685]: 2025-10-03 11:19:50.439 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:19:51 compute-0 nova_compute[351685]: 2025-10-03 11:19:51.024 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:19:51 compute-0 nova_compute[351685]: 2025-10-03 11:19:51.025 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3777MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:19:51 compute-0 nova_compute[351685]: 2025-10-03 11:19:51.026 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:19:51 compute-0 nova_compute[351685]: 2025-10-03 11:19:51.026 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:19:51 compute-0 nova_compute[351685]: 2025-10-03 11:19:51.344 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:19:51 compute-0 nova_compute[351685]: 2025-10-03 11:19:51.344 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:19:51 compute-0 nova_compute[351685]: 2025-10-03 11:19:51.345 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:19:51 compute-0 nova_compute[351685]: 2025-10-03 11:19:51.524 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:19:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:51 compute-0 podman[519495]: 2025-10-03 11:19:51.831421951 +0000 UTC m=+0.088527548 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:19:51 compute-0 podman[519496]: 2025-10-03 11:19:51.859080504 +0000 UTC m=+0.114175666 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, vcs-type=git, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Oct  3 11:19:51 compute-0 podman[519497]: 2025-10-03 11:19:51.870161988 +0000 UTC m=+0.107050239 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:19:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:19:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906273601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:19:52 compute-0 nova_compute[351685]: 2025-10-03 11:19:52.036 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:19:52 compute-0 nova_compute[351685]: 2025-10-03 11:19:52.048 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:19:52 compute-0 nova_compute[351685]: 2025-10-03 11:19:52.077 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:19:52 compute-0 nova_compute[351685]: 2025-10-03 11:19:52.079 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:19:52 compute-0 nova_compute[351685]: 2025-10-03 11:19:52.079 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:19:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:19:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2410166433' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:19:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:19:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2410166433' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:19:54 compute-0 nova_compute[351685]: 2025-10-03 11:19:54.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:19:54 compute-0 nova_compute[351685]: 2025-10-03 11:19:54.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:19:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:56 compute-0 nova_compute[351685]: 2025-10-03 11:19:56.080 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:19:56 compute-0 nova_compute[351685]: 2025-10-03 11:19:56.080 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:19:56 compute-0 nova_compute[351685]: 2025-10-03 11:19:56.081 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:19:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:19:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:57 compute-0 nova_compute[351685]: 2025-10-03 11:19:57.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:19:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:19:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:19:59 compute-0 nova_compute[351685]: 2025-10-03 11:19:59.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:19:59 compute-0 podman[157165]: time="2025-10-03T11:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:19:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:19:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9115 "" "Go-http-client/1.1"
Oct  3 11:20:01 compute-0 openstack_network_exporter[367524]: ERROR   11:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:20:01 compute-0 openstack_network_exporter[367524]: ERROR   11:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:20:01 compute-0 openstack_network_exporter[367524]: ERROR   11:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:20:01 compute-0 openstack_network_exporter[367524]: ERROR   11:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:20:01 compute-0 openstack_network_exporter[367524]: ERROR   11:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:20:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:04 compute-0 nova_compute[351685]: 2025-10-03 11:20:04.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:20:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:09 compute-0 nova_compute[351685]: 2025-10-03 11:20:09.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:20:09 compute-0 podman[519557]: 2025-10-03 11:20:09.891705996 +0000 UTC m=+0.115756318 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible)
Oct  3 11:20:09 compute-0 podman[519574]: 2025-10-03 11:20:09.902032825 +0000 UTC m=+0.105992135 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  3 11:20:09 compute-0 podman[519556]: 2025-10-03 11:20:09.906552579 +0000 UTC m=+0.140227648 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 11:20:09 compute-0 podman[519554]: 2025-10-03 11:20:09.918378607 +0000 UTC m=+0.162400127 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:20:09 compute-0 podman[519555]: 2025-10-03 11:20:09.921157115 +0000 UTC m=+0.154369119 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 11:20:09 compute-0 podman[519562]: 2025-10-03 11:20:09.923621604 +0000 UTC m=+0.142659586 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Oct  3 11:20:09 compute-0 podman[519568]: 2025-10-03 11:20:09.939518262 +0000 UTC m=+0.143860504 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Oct  3 11:20:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:14 compute-0 nova_compute[351685]: 2025-10-03 11:20:14.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:20:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 11:20:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 11:20:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:20:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:20:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:20:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:20:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:20:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:20:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3facff96-2619-474e-b2a3-8dc689f9251c does not exist
Oct  3 11:20:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4d61e73c-eb2d-4b6c-8e74-5c209ce0c716 does not exist
Oct  3 11:20:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6f372bb7-3d35-46ba-9f56-8c340740d233 does not exist
Oct  3 11:20:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:20:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:20:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:20:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:20:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:20:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:20:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 11:20:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:20:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:20:15 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:20:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:20:16 compute-0 podman[519958]: 2025-10-03 11:20:16.303186916 +0000 UTC m=+0.079246990 container create 756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  3 11:20:16 compute-0 podman[519958]: 2025-10-03 11:20:16.265940418 +0000 UTC m=+0.042000502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:20:16 compute-0 systemd[1]: Started libpod-conmon-756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c.scope.
Oct  3 11:20:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:20:16 compute-0 podman[519958]: 2025-10-03 11:20:16.441645708 +0000 UTC m=+0.217705772 container init 756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:20:16 compute-0 podman[519958]: 2025-10-03 11:20:16.459117426 +0000 UTC m=+0.235177470 container start 756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:20:16 compute-0 podman[519958]: 2025-10-03 11:20:16.464782127 +0000 UTC m=+0.240842181 container attach 756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:20:16 compute-0 reverent_lichterman[519973]: 167 167
Oct  3 11:20:16 compute-0 systemd[1]: libpod-756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c.scope: Deactivated successfully.
Oct  3 11:20:16 compute-0 conmon[519973]: conmon 756442f981e98c56bdee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c.scope/container/memory.events
Oct  3 11:20:16 compute-0 podman[519978]: 2025-10-03 11:20:16.520631579 +0000 UTC m=+0.035540526 container died 756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 11:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4322cacdcc1f764d5e2c09afe48d78d4b4902f5676fcf7e70ae13ee2236e6854-merged.mount: Deactivated successfully.
Oct  3 11:20:16 compute-0 podman[519978]: 2025-10-03 11:20:16.595879942 +0000 UTC m=+0.110788849 container remove 756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 11:20:16 compute-0 systemd[1]: libpod-conmon-756442f981e98c56bdee2301ef3c82d77313900604b59205463e22b413087e4c.scope: Deactivated successfully.
Oct  3 11:20:16 compute-0 podman[519999]: 2025-10-03 11:20:16.869327802 +0000 UTC m=+0.073459126 container create e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mayer, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 11:20:16 compute-0 podman[519999]: 2025-10-03 11:20:16.839900983 +0000 UTC m=+0.044032317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:20:16 compute-0 systemd[1]: Started libpod-conmon-e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f.scope.
Oct  3 11:20:17 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d07044c692756158b9a3aef70fdde346977cd37f7ea84f47e0a6c2f9fdaed4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d07044c692756158b9a3aef70fdde346977cd37f7ea84f47e0a6c2f9fdaed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d07044c692756158b9a3aef70fdde346977cd37f7ea84f47e0a6c2f9fdaed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d07044c692756158b9a3aef70fdde346977cd37f7ea84f47e0a6c2f9fdaed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d07044c692756158b9a3aef70fdde346977cd37f7ea84f47e0a6c2f9fdaed4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:17 compute-0 podman[519999]: 2025-10-03 11:20:17.041572402 +0000 UTC m=+0.245703786 container init e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mayer, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 11:20:17 compute-0 podman[519999]: 2025-10-03 11:20:17.07501213 +0000 UTC m=+0.279143424 container start e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 11:20:17 compute-0 podman[519999]: 2025-10-03 11:20:17.080144813 +0000 UTC m=+0.284276197 container attach e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mayer, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:20:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:18 compute-0 upbeat_mayer[520015]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:20:18 compute-0 upbeat_mayer[520015]: --> relative data size: 1.0
Oct  3 11:20:18 compute-0 upbeat_mayer[520015]: --> All data devices are unavailable
Oct  3 11:20:18 compute-0 systemd[1]: libpod-e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f.scope: Deactivated successfully.
Oct  3 11:20:18 compute-0 systemd[1]: libpod-e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f.scope: Consumed 1.333s CPU time.
Oct  3 11:20:18 compute-0 podman[519999]: 2025-10-03 11:20:18.472804447 +0000 UTC m=+1.676935781 container died e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-31d07044c692756158b9a3aef70fdde346977cd37f7ea84f47e0a6c2f9fdaed4-merged.mount: Deactivated successfully.
Oct  3 11:20:18 compute-0 podman[519999]: 2025-10-03 11:20:18.595474384 +0000 UTC m=+1.799605708 container remove e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 11:20:18 compute-0 systemd[1]: libpod-conmon-e1f86587c186983e6c95c5e4de14665753244643c2b9a0faf1593cb862afd37f.scope: Deactivated successfully.
Oct  3 11:20:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:19 compute-0 nova_compute[351685]: 2025-10-03 11:20:19.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:19 compute-0 podman[520198]: 2025-10-03 11:20:19.860929327 +0000 UTC m=+0.078746575 container create a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:20:19 compute-0 podman[520198]: 2025-10-03 11:20:19.826789747 +0000 UTC m=+0.044606955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:20:19 compute-0 systemd[1]: Started libpod-conmon-a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce.scope.
Oct  3 11:20:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:20:20 compute-0 podman[520198]: 2025-10-03 11:20:19.999749409 +0000 UTC m=+0.217566637 container init a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:20:20 compute-0 podman[520198]: 2025-10-03 11:20:20.016014398 +0000 UTC m=+0.233831606 container start a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 11:20:20 compute-0 podman[520198]: 2025-10-03 11:20:20.021829854 +0000 UTC m=+0.239647162 container attach a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:20:20 compute-0 cranky_thompson[520213]: 167 167
Oct  3 11:20:20 compute-0 systemd[1]: libpod-a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce.scope: Deactivated successfully.
Oct  3 11:20:20 compute-0 podman[520198]: 2025-10-03 11:20:20.028643492 +0000 UTC m=+0.246460730 container died a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-988134a49f2da03c4ea7130f76da7ab563be61c0ad78b84e6024068916d030ef-merged.mount: Deactivated successfully.
Oct  3 11:20:20 compute-0 podman[520198]: 2025-10-03 11:20:20.096704054 +0000 UTC m=+0.314521262 container remove a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 11:20:20 compute-0 systemd[1]: libpod-conmon-a6100dda13422742d6cfef33306879c673a62fbbd34e8da7ff3d11cd31b814ce.scope: Deactivated successfully.
Oct  3 11:20:20 compute-0 podman[520235]: 2025-10-03 11:20:20.385571688 +0000 UTC m=+0.094849450 container create 6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:20:20 compute-0 podman[520235]: 2025-10-03 11:20:20.347940057 +0000 UTC m=+0.057217849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:20:20 compute-0 systemd[1]: Started libpod-conmon-6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a.scope.
Oct  3 11:20:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0afb0c88d93c707e36f1b73885535f43f570c385148b69b7964f2c1d608358/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0afb0c88d93c707e36f1b73885535f43f570c385148b69b7964f2c1d608358/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0afb0c88d93c707e36f1b73885535f43f570c385148b69b7964f2c1d608358/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0afb0c88d93c707e36f1b73885535f43f570c385148b69b7964f2c1d608358/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:20 compute-0 podman[520235]: 2025-10-03 11:20:20.581857655 +0000 UTC m=+0.291135477 container init 6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:20:20 compute-0 podman[520235]: 2025-10-03 11:20:20.608712031 +0000 UTC m=+0.317989793 container start 6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:20:20 compute-0 podman[520235]: 2025-10-03 11:20:20.615207929 +0000 UTC m=+0.324485701 container attach 6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:20:21 compute-0 boring_jennings[520251]: {
Oct  3 11:20:21 compute-0 boring_jennings[520251]:    "0": [
Oct  3 11:20:21 compute-0 boring_jennings[520251]:        {
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "devices": [
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "/dev/loop3"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            ],
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_name": "ceph_lv0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_size": "21470642176",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "name": "ceph_lv0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "tags": {
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cluster_name": "ceph",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.crush_device_class": "",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.encrypted": "0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osd_id": "0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.type": "block",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.vdo": "0"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            },
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "type": "block",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "vg_name": "ceph_vg0"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:        }
Oct  3 11:20:21 compute-0 boring_jennings[520251]:    ],
Oct  3 11:20:21 compute-0 boring_jennings[520251]:    "1": [
Oct  3 11:20:21 compute-0 boring_jennings[520251]:        {
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "devices": [
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "/dev/loop4"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            ],
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_name": "ceph_lv1",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_size": "21470642176",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "name": "ceph_lv1",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "tags": {
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cluster_name": "ceph",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.crush_device_class": "",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.encrypted": "0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osd_id": "1",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.type": "block",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.vdo": "0"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            },
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "type": "block",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "vg_name": "ceph_vg1"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:        }
Oct  3 11:20:21 compute-0 boring_jennings[520251]:    ],
Oct  3 11:20:21 compute-0 boring_jennings[520251]:    "2": [
Oct  3 11:20:21 compute-0 boring_jennings[520251]:        {
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "devices": [
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "/dev/loop5"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            ],
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_name": "ceph_lv2",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_size": "21470642176",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "name": "ceph_lv2",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "tags": {
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.cluster_name": "ceph",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.crush_device_class": "",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.encrypted": "0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osd_id": "2",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.type": "block",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:                "ceph.vdo": "0"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            },
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "type": "block",
Oct  3 11:20:21 compute-0 boring_jennings[520251]:            "vg_name": "ceph_vg2"
Oct  3 11:20:21 compute-0 boring_jennings[520251]:        }
Oct  3 11:20:21 compute-0 boring_jennings[520251]:    ]
Oct  3 11:20:21 compute-0 boring_jennings[520251]: }
Oct  3 11:20:21 compute-0 systemd[1]: libpod-6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a.scope: Deactivated successfully.
Oct  3 11:20:21 compute-0 podman[520235]: 2025-10-03 11:20:21.489125152 +0000 UTC m=+1.198402894 container died 6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 11:20:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad0afb0c88d93c707e36f1b73885535f43f570c385148b69b7964f2c1d608358-merged.mount: Deactivated successfully.
Oct  3 11:20:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:21 compute-0 podman[520235]: 2025-10-03 11:20:21.575055635 +0000 UTC m=+1.284333377 container remove 6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:20:21 compute-0 systemd[1]: libpod-conmon-6c55c212be254797dba96a5d0625dda7a117d3860f9840b898a3904b3e0d865a.scope: Deactivated successfully.
Oct  3 11:20:22 compute-0 podman[520348]: 2025-10-03 11:20:22.054292066 +0000 UTC m=+0.103866928 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 11:20:22 compute-0 podman[520346]: 2025-10-03 11:20:22.068904803 +0000 UTC m=+0.108989491 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:20:22 compute-0 podman[520347]: 2025-10-03 11:20:22.081789313 +0000 UTC m=+0.120683004 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, distribution-scope=public, release-0.7.12=, version=9.4, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, architecture=x86_64)
Oct  3 11:20:22 compute-0 podman[520469]: 2025-10-03 11:20:22.521612186 +0000 UTC m=+0.079336563 container create f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:20:22 compute-0 podman[520469]: 2025-10-03 11:20:22.490639867 +0000 UTC m=+0.048364304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:20:22 compute-0 systemd[1]: Started libpod-conmon-f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe.scope.
Oct  3 11:20:22 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:20:22 compute-0 podman[520469]: 2025-10-03 11:20:22.666722599 +0000 UTC m=+0.224447026 container init f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:20:22 compute-0 podman[520469]: 2025-10-03 11:20:22.686408198 +0000 UTC m=+0.244132575 container start f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:20:22 compute-0 podman[520469]: 2025-10-03 11:20:22.692783611 +0000 UTC m=+0.250508038 container attach f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 11:20:22 compute-0 clever_tharp[520485]: 167 167
Oct  3 11:20:22 compute-0 podman[520469]: 2025-10-03 11:20:22.700519578 +0000 UTC m=+0.258243965 container died f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:20:22 compute-0 systemd[1]: libpod-f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe.scope: Deactivated successfully.
Oct  3 11:20:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-02384a2f9381266851da9e10ac180690453730416bc8ea2ba5edcf8029d251cd-merged.mount: Deactivated successfully.
Oct  3 11:20:22 compute-0 podman[520469]: 2025-10-03 11:20:22.783702975 +0000 UTC m=+0.341427422 container remove f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 11:20:22 compute-0 systemd[1]: libpod-conmon-f7be55bddefbf705c6b36431738f898bdb3d391d765a037db4722e9f1ba07abe.scope: Deactivated successfully.
Oct  3 11:20:23 compute-0 podman[520507]: 2025-10-03 11:20:23.054070847 +0000 UTC m=+0.079183060 container create 643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 11:20:23 compute-0 podman[520507]: 2025-10-03 11:20:23.035085501 +0000 UTC m=+0.060197724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:20:23 compute-0 systemd[1]: Started libpod-conmon-643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359.scope.
Oct  3 11:20:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed33dabde248f0541f1400bdc9e044a2801a89c444d675e8acbfb36627a2eb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed33dabde248f0541f1400bdc9e044a2801a89c444d675e8acbfb36627a2eb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed33dabde248f0541f1400bdc9e044a2801a89c444d675e8acbfb36627a2eb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed33dabde248f0541f1400bdc9e044a2801a89c444d675e8acbfb36627a2eb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
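Note: the kernel prints these warnings because the container overlay layers sit on an XFS filesystem whose inode timestamps are 32-bit, capping at 0x7fffffff seconds after the Unix epoch. A quick stdlib check (illustrative only) shows where that limit lands:

    # Convert the kernel's 0x7fffffff timestamp cap to a calendar date.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00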
Oct  3 11:20:23 compute-0 podman[520507]: 2025-10-03 11:20:23.238839166 +0000 UTC m=+0.263951399 container init 643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  3 11:20:23 compute-0 podman[520507]: 2025-10-03 11:20:23.258023898 +0000 UTC m=+0.283136131 container start 643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:20:23 compute-0 podman[520507]: 2025-10-03 11:20:23.264771254 +0000 UTC m=+0.289883497 container attach 643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 11:20:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:24 compute-0 vibrant_gates[520522]: {
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "osd_id": 1,
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "type": "bluestore"
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:    },
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "osd_id": 2,
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "type": "bluestore"
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:    },
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "osd_id": 0,
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:        "type": "bluestore"
Oct  3 11:20:24 compute-0 vibrant_gates[520522]:    }
Oct  3 11:20:24 compute-0 vibrant_gates[520522]: }
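Note: the JSON printed by the short-lived vibrant_gates container maps each OSD uuid to its backing LVM device on this host; the shape matches ceph-volume raw/lvm list output, which cephadm appears to run in a throwaway ceph container to refresh its device inventory (the create/start/attach/died/remove events around it are that container's lifecycle). A minimal sketch that turns the payload into an osd_id -> device table:

    import json

    # Payload condensed from the vibrant_gates output above.
    raw = """
    {
      "16cef594-0067-4499-9298-5d83edf70190": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1", "osd_id": 1,
        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190", "type": "bluestore"},
      "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg2-ceph_lv2", "osd_id": 2,
        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0", "type": "bluestore"},
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {"ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0,
        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0", "type": "bluestore"}
    }
    """
    for uuid, meta in sorted(json.loads(raw).items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}  {meta['device']}  ({meta['type']})")
    # osd.0  /dev/mapper/ceph_vg0-ceph_lv0  (bluestore), then osd.1, osd.2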
Oct  3 11:20:24 compute-0 systemd[1]: libpod-643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359.scope: Deactivated successfully.
Oct  3 11:20:24 compute-0 podman[520507]: 2025-10-03 11:20:24.517582323 +0000 UTC m=+1.542694536 container died 643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gates, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 11:20:24 compute-0 systemd[1]: libpod-643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359.scope: Consumed 1.252s CPU time.
Oct  3 11:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ed33dabde248f0541f1400bdc9e044a2801a89c444d675e8acbfb36627a2eb9-merged.mount: Deactivated successfully.
Oct  3 11:20:24 compute-0 podman[520507]: 2025-10-03 11:20:24.604176647 +0000 UTC m=+1.629288860 container remove 643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gates, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:20:24 compute-0 systemd[1]: libpod-conmon-643bb2498edfb0a8dfb202861109bbda10d94ca74335baf0ce237210d7a63359.scope: Deactivated successfully.
Oct  3 11:20:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:20:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:20:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:20:24 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
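Note: the two config-key set commands are the cephadm mgr module persisting the device inventory it just gathered under mgr/cephadm/host.compute-0* keys in the monitors' config-key store. With admin credentials the stored value can be read back with `ceph config-key get` (a real command); the wrapper below is only a sketch:

    import subprocess

    def config_key_get(key: str) -> str:
        # Key name taken from the handle_command line above.
        return subprocess.run(
            ["ceph", "config-key", "get", key],
            check=True, capture_output=True, text=True,
        ).stdout

    print(config_key_get("mgr/cephadm/host.compute-0.devices.0")[:200])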
Oct  3 11:20:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 09a17f8d-42e4-44fb-890b-89108d6c2804 does not exist
Oct  3 11:20:24 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b0d01716-602a-4329-a2dc-0c800cefbfc1 does not exist
Oct  3 11:20:24 compute-0 nova_compute[351685]: 2025-10-03 11:20:24.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:24 compute-0 nova_compute[351685]: 2025-10-03 11:20:24.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:20:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:20:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:29 compute-0 nova_compute[351685]: 2025-10-03 11:20:29.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:29 compute-0 nova_compute[351685]: 2025-10-03 11:20:29.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:29 compute-0 podman[157165]: time="2025-10-03T11:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:20:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:20:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9124 "" "Go-http-client/1.1"
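Note: these two access-log lines are the podman system service answering libpod REST calls on its unix socket; something (likely the EDPM health/metrics tooling, judging by the Go-http-client agent) polls the container list and stats every few seconds. The same GET can be reproduced over the socket with nothing but the stdlib (the socket path /run/podman/podman.sock is an assumption; it is the usual rootful default):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks HTTP over a unix socket."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # expect 200, as logged above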
Oct  3 11:20:31 compute-0 openstack_network_exporter[367524]: ERROR   11:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:20:31 compute-0 openstack_network_exporter[367524]: ERROR   11:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:20:31 compute-0 openstack_network_exporter[367524]: ERROR   11:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:20:31 compute-0 openstack_network_exporter[367524]: ERROR   11:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:20:31 compute-0 openstack_network_exporter[367524]: ERROR   11:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:20:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:34 compute-0 nova_compute[351685]: 2025-10-03 11:20:34.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:37 compute-0 nova_compute[351685]: 2025-10-03 11:20:37.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:37 compute-0 nova_compute[351685]: 2025-10-03 11:20:37.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:20:37 compute-0 nova_compute[351685]: 2025-10-03 11:20:37.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:20:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:38 compute-0 nova_compute[351685]: 2025-10-03 11:20:38.373 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:20:38 compute-0 nova_compute[351685]: 2025-10-03 11:20:38.374 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:20:38 compute-0 nova_compute[351685]: 2025-10-03 11:20:38.374 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:20:38 compute-0 nova_compute[351685]: 2025-10-03 11:20:38.375 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:20:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:39 compute-0 nova_compute[351685]: 2025-10-03 11:20:39.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:20:40 compute-0 podman[520630]: 2025-10-03 11:20:40.875468995 +0000 UTC m=+0.093759204 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible)
Oct  3 11:20:40 compute-0 podman[520619]: 2025-10-03 11:20:40.882284773 +0000 UTC m=+0.113989060 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct  3 11:20:40 compute-0 podman[520621]: 2025-10-03 11:20:40.883412208 +0000 UTC m=+0.098296828 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:20:40 compute-0 podman[520618]: 2025-10-03 11:20:40.889149922 +0000 UTC m=+0.124780855 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350)
Oct  3 11:20:40 compute-0 podman[520617]: 2025-10-03 11:20:40.913596393 +0000 UTC m=+0.155741324 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:20:40 compute-0 podman[520620]: 2025-10-03 11:20:40.914324685 +0000 UTC m=+0.144000278 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  3 11:20:40 compute-0 podman[520627]: 2025-10-03 11:20:40.930464371 +0000 UTC m=+0.137680117 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
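Note: each health_status event above is podman executing the healthcheck configured in config_data ('test': '/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<service>); health_status=healthy with health_failing_streak=0 means the probe exited 0. The current state can be read back with podman inspect (real CLI; the Go template field is .State.Health.Status on recent podman, .State.Healthcheck.Status on some older releases):

    import subprocess

    def health(name: str) -> str:
        # Field name may be .State.Healthcheck.Status on older podman versions.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            check=True, capture_output=True, text=True,
        ).stdout
        return out.strip()  # "healthy" / "unhealthy" / "starting"

    for name in ("iscsid", "ovn_metadata_agent", "multipathd", "ovn_controller"):
        print(name, health(name))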
Oct  3 11:20:41 compute-0 nova_compute[351685]: 2025-10-03 11:20:41.119 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:20:41 compute-0 nova_compute[351685]: 2025-10-03 11:20:41.150 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:20:41 compute-0 nova_compute[351685]: 2025-10-03 11:20:41.151 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
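Note: the Acquiring/Acquired/Releasing triplet around "refresh_cache-<uuid>" is oslo.concurrency's named-lock machinery; nova serializes info-cache refreshes per instance so the periodic heal task and RPC-driven updates cannot race. The pattern that emits exactly these DEBUG lines is the lockutils context manager (a sketch of the pattern, not nova's actual code):

    from oslo_concurrency import lockutils

    # lockutils logs the Acquiring/Acquired/Releasing lines seen above.
    with lockutils.lock("refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a"):
        pass  # refresh the instance's network info cache here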
Oct  3 11:20:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:20:41.696 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:20:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:20:41.697 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:20:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:20:41.698 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:20:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:44 compute-0 nova_compute[351685]: 2025-10-03 11:20:44.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:20:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:46 compute-0 nova_compute[351685]: 2025-10-03 11:20:46.146 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:20:46
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'volumes', 'default.rgw.meta', 'images']
Oct  3 11:20:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:20:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:47 compute-0 nova_compute[351685]: 2025-10-03 11:20:47.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.243916) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490448243968, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1798, "num_deletes": 250, "total_data_size": 2990155, "memory_usage": 3033464, "flush_reason": "Manual Compaction"}
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490448320035, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1706248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67041, "largest_seqno": 68838, "table_properties": {"data_size": 1700362, "index_size": 2960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15337, "raw_average_key_size": 20, "raw_value_size": 1687274, "raw_average_value_size": 2277, "num_data_blocks": 136, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490250, "oldest_key_time": 1759490250, "file_creation_time": 1759490448, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 76217 microseconds, and 9910 cpu microseconds.
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.320130) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1706248 bytes OK
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.320157) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.336039) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.336074) EVENT_LOG_v1 {"time_micros": 1759490448336064, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.336099) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2982503, prev total WAL file size 2982503, number of live WAL files 2.
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.337944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303034' seq:72057594037927935, type:22 .. '6D6772737461740033323535' seq:0, type:0; will stop at (end)
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1666KB)], [164(9032KB)]
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490448338001, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 10956012, "oldest_snapshot_seqno": -1}
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 7839 keys, 8878402 bytes, temperature: kUnknown
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490448487474, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 8878402, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8832536, "index_size": 25145, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 205797, "raw_average_key_size": 26, "raw_value_size": 8696855, "raw_average_value_size": 1109, "num_data_blocks": 992, "num_entries": 7839, "num_filter_entries": 7839, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490448, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.487826) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 8878402 bytes
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.512993) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.2 rd, 59.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.8 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(11.6) write-amplify(5.2) OK, records in: 8258, records dropped: 419 output_compression: NoCompression
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.513026) EVENT_LOG_v1 {"time_micros": 1759490448513011, "job": 102, "event": "compaction_finished", "compaction_time_micros": 149573, "compaction_time_cpu_micros": 26349, "output_level": 6, "num_output_files": 1, "total_output_size": 8878402, "num_input_records": 8258, "num_output_records": 7839, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490448514314, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490448518678, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.337826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.518891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.518899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.518901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.518903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:20:48 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:20:48.518905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
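Note: the monitor's embedded RocksDB logs structured EVENT_LOG_v1 records (flush_started, table_file_creation, compaction_finished, ...) whose payload after the marker is plain JSON, so flush and compaction statistics can be mined straight from the journal. Minimal extractor, assuming lines shaped like the ones above:

    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1\s+(\{.*\})\s*$")

    # Pipe journal text on stdin; print one summary per rocksdb event.
    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            ev = json.loads(m.group(1))
            print(ev.get("time_micros"), ev.get("event"), "job", ev.get("job"))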
Oct  3 11:20:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
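Note: the IDLE/ACTIVE transitions are the OVS reconnect state machine inside ovsdbapp: after roughly five seconds of silence on tcp:127.0.0.1:6640 it sends an inactivity probe (an OVSDB echo) and only tears the session down if the probe goes unanswered; the POLLIN right after is the reply arriving. A toy model of just the probe decision (names and interval are illustrative, not the ovs library's API):

    import time

    PROBE_INTERVAL = 5.0  # seconds of silence before probing, per the log

    class ProbeTimer:
        def __init__(self) -> None:
            self.last_activity = time.monotonic()

        def on_traffic(self) -> None:
            self.last_activity = time.monotonic()

        def should_probe(self) -> bool:
            return time.monotonic() - self.last_activity >= PROBE_INTERVAL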
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.777 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.778 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.779 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.780 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:20:49 compute-0 nova_compute[351685]: 2025-10-03 11:20:49.781 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:20:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:20:50 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855646307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.299 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
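Note: update_available_resource shells out to the real `ceph df --format=json` (via oslo processutils) because this node's instance disks are RBD-backed, so free space comes from the cluster rather than the local filesystem. The call and the totals nova cares about look roughly like this (the "stats" layout is the one current ceph releases emit; treat it as an assumption):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(f"free: {stats['total_avail_bytes'] / 1024**3:.1f} GiB "
          f"of {stats['total_bytes'] / 1024**3:.1f} GiB")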
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.378 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.379 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.379 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.823 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.824 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3773MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.825 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.825 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.938 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.939 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.940 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:20:50 compute-0 nova_compute[351685]: 2025-10-03 11:20:50.994 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:20:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:20:51 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2825247839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:20:51 compute-0 nova_compute[351685]: 2025-10-03 11:20:51.497 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:20:51 compute-0 nova_compute[351685]: 2025-10-03 11:20:51.508 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:20:51 compute-0 nova_compute[351685]: 2025-10-03 11:20:51.531 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:20:51 compute-0 nova_compute[351685]: 2025-10-03 11:20:51.535 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:20:51 compute-0 nova_compute[351685]: 2025-10-03 11:20:51.536 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:20:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:52 compute-0 nova_compute[351685]: 2025-10-03 11:20:52.538 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:52 compute-0 nova_compute[351685]: 2025-10-03 11:20:52.539 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:20:52 compute-0 podman[520796]: 2025-10-03 11:20:52.685369302 +0000 UTC m=+0.113301109 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:20:52 compute-0 podman[520797]: 2025-10-03 11:20:52.715788423 +0000 UTC m=+0.123063250 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, name=ubi9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=edpm, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Oct  3 11:20:52 compute-0 podman[520803]: 2025-10-03 11:20:52.728864261 +0000 UTC m=+0.126744587 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 11:20:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:20:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4146705654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:20:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:20:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4146705654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:20:54 compute-0 nova_compute[351685]: 2025-10-03 11:20:54.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:54 compute-0 nova_compute[351685]: 2025-10-03 11:20:54.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:20:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.0 total, 600.0 interval
Cumulative writes: 15K writes, 68K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.01 MB/s
Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.09 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1296 writes, 5892 keys, 1296 commit groups, 1.0 writes per commit group, ingest: 8.53 MB, 0.01 MB/s
Interval WAL: 1296 writes, 1296 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.5      3.02              0.37        51    0.059       0      0       0.0       0.0
  L6      1/0    8.47 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.7     91.1     76.3      5.25              1.53        50    0.105    309K    26K       0.0       0.0
 Sum      1/0    8.47 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.7     57.8     58.9      8.27              1.89       101    0.082    309K    26K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.9     93.9     93.2      0.54              0.23        10    0.054     40K   2402       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     91.1     76.3      5.25              1.53        50    0.105    309K    26K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     28.6      3.01              0.37        50    0.060       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6600.0 total, 600.0 interval
Flush(GB): cumulative 0.084, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.48 GB write, 0.07 MB/s write, 0.47 GB read, 0.07 MB/s read, 8.3 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 57.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000333 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3611,55.14 MB,18.1378%) FilterBlock(102,873.30 KB,0.280536%) IndexBlock(102,1.30 MB,0.426759%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  3 11:20:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:55 compute-0 nova_compute[351685]: 2025-10-03 11:20:55.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:20:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:20:56 compute-0 nova_compute[351685]: 2025-10-03 11:20:56.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:57 compute-0 nova_compute[351685]: 2025-10-03 11:20:57.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:20:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:20:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:20:59 compute-0 nova_compute[351685]: 2025-10-03 11:20:59.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:59 compute-0 nova_compute[351685]: 2025-10-03 11:20:59.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:20:59 compute-0 podman[157165]: time="2025-10-03T11:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:20:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:20:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9116 "" "Go-http-client/1.1"
Oct  3 11:21:01 compute-0 openstack_network_exporter[367524]: ERROR   11:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:21:01 compute-0 openstack_network_exporter[367524]: ERROR   11:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:21:01 compute-0 openstack_network_exporter[367524]: ERROR   11:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:21:01 compute-0 openstack_network_exporter[367524]: ERROR   11:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:21:01 compute-0 openstack_network_exporter[367524]: ERROR   11:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:21:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:04 compute-0 nova_compute[351685]: 2025-10-03 11:21:04.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3413: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:09 compute-0 nova_compute[351685]: 2025-10-03 11:21:09.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:11 compute-0 podman[520855]: 2025-10-03 11:21:11.869909847 +0000 UTC m=+0.112101700 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350)
Oct  3 11:21:11 compute-0 podman[520857]: 2025-10-03 11:21:11.87939843 +0000 UTC m=+0.108360761 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd)
Oct  3 11:21:11 compute-0 podman[520870]: 2025-10-03 11:21:11.883828482 +0000 UTC m=+0.099678894 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 11:21:11 compute-0 podman[520856]: 2025-10-03 11:21:11.900045299 +0000 UTC m=+0.128503864 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:21:11 compute-0 podman[520854]: 2025-10-03 11:21:11.905288846 +0000 UTC m=+0.144688960 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:21:11 compute-0 podman[520858]: 2025-10-03 11:21:11.914058197 +0000 UTC m=+0.134128184 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:21:11 compute-0 podman[520868]: 2025-10-03 11:21:11.948206697 +0000 UTC m=+0.173125648 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  3 11:21:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:14 compute-0 nova_compute[351685]: 2025-10-03 11:21:14.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:21:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:21:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:19 compute-0 nova_compute[351685]: 2025-10-03 11:21:19.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3420: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.759734) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490481759774, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 497, "num_deletes": 251, "total_data_size": 487810, "memory_usage": 498480, "flush_reason": "Manual Compaction"}
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490481767629, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 483462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68839, "largest_seqno": 69335, "table_properties": {"data_size": 480655, "index_size": 842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6532, "raw_average_key_size": 18, "raw_value_size": 475149, "raw_average_value_size": 1369, "num_data_blocks": 38, "num_entries": 347, "num_filter_entries": 347, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490449, "oldest_key_time": 1759490449, "file_creation_time": 1759490481, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 7974 microseconds, and 3648 cpu microseconds.
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.767708) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 483462 bytes OK
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.767729) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.770674) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.770701) EVENT_LOG_v1 {"time_micros": 1759490481770692, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.770726) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 484920, prev total WAL file size 484920, number of live WAL files 2.
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.771727) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(472KB)], [167(8670KB)]
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490481771762, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 9361864, "oldest_snapshot_seqno": -1}
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 7677 keys, 7592673 bytes, temperature: kUnknown
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490481847685, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 7592673, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7549170, "index_size": 23230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 203110, "raw_average_key_size": 26, "raw_value_size": 7417529, "raw_average_value_size": 966, "num_data_blocks": 903, "num_entries": 7677, "num_filter_entries": 7677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490481, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.848057) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 7592673 bytes
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.850643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.2 rd, 99.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 8.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(35.1) write-amplify(15.7) OK, records in: 8186, records dropped: 509 output_compression: NoCompression
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.850674) EVENT_LOG_v1 {"time_micros": 1759490481850661, "job": 104, "event": "compaction_finished", "compaction_time_micros": 76017, "compaction_time_cpu_micros": 41400, "output_level": 6, "num_output_files": 1, "total_output_size": 7592673, "num_input_records": 8186, "num_output_records": 7677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490481851039, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490481854569, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.771509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.854841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.854849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.854852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.854855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:21:21 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:21:21.854858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:21:22 compute-0 podman[520990]: 2025-10-03 11:21:22.848783163 +0000 UTC m=+0.099747346 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:21:22 compute-0 podman[521013]: 2025-10-03 11:21:22.969189496 +0000 UTC m=+0.090249151 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64)
Oct  3 11:21:22 compute-0 podman[521014]: 2025-10-03 11:21:22.978939358 +0000 UTC m=+0.091325776 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 11:21:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:24 compute-0 nova_compute[351685]: 2025-10-03 11:21:24.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:24 compute-0 nova_compute[351685]: 2025-10-03 11:21:24.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:21:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:21:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:21:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:21:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:21:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:21:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e2bfaa4d-b0e5-4819-b18f-5cad0a6fe287 does not exist
Oct  3 11:21:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev e0765be7-361c-465a-8e39-8c0271cc2c52 does not exist
Oct  3 11:21:26 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ab26356a-96d0-4ffd-b285-441d468eb8e7 does not exist
Oct  3 11:21:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:21:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:21:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:21:26 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:21:26 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:21:26 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:21:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:21:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:21:26 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:21:27 compute-0 podman[521322]: 2025-10-03 11:21:27.116446168 +0000 UTC m=+0.069140018 container create 98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:21:27 compute-0 podman[521322]: 2025-10-03 11:21:27.08456621 +0000 UTC m=+0.037260140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:21:27 compute-0 systemd[1]: Started libpod-conmon-98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960.scope.
Oct  3 11:21:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:21:27 compute-0 podman[521322]: 2025-10-03 11:21:27.257414799 +0000 UTC m=+0.210108679 container init 98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 11:21:27 compute-0 podman[521322]: 2025-10-03 11:21:27.268505703 +0000 UTC m=+0.221199553 container start 98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:21:27 compute-0 podman[521322]: 2025-10-03 11:21:27.27282145 +0000 UTC m=+0.225515300 container attach 98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:21:27 compute-0 distracted_antonelli[521337]: 167 167
Oct  3 11:21:27 compute-0 systemd[1]: libpod-98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960.scope: Deactivated successfully.
Oct  3 11:21:27 compute-0 podman[521322]: 2025-10-03 11:21:27.283068768 +0000 UTC m=+0.235762658 container died 98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 11:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5753bed7a1e0f9f52bf9c2e4f9d3c113500f27479e51857e64f53201c731e18a-merged.mount: Deactivated successfully.
Oct  3 11:21:27 compute-0 podman[521322]: 2025-10-03 11:21:27.370640044 +0000 UTC m=+0.323333914 container remove 98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_antonelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 11:21:27 compute-0 systemd[1]: libpod-conmon-98947d887a9247dffc9b730089656daa91d4f7355024d06ad8115a9a06dba960.scope: Deactivated successfully.
Oct  3 11:21:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:27 compute-0 podman[521361]: 2025-10-03 11:21:27.699906877 +0000 UTC m=+0.105207851 container create 3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 11:21:27 compute-0 podman[521361]: 2025-10-03 11:21:27.662595205 +0000 UTC m=+0.067896219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:21:27 compute-0 systemd[1]: Started libpod-conmon-3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d.scope.
Oct  3 11:21:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa17d02831cafc012151919e4da99da1a407120c39689ea9224b60913cb250/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa17d02831cafc012151919e4da99da1a407120c39689ea9224b60913cb250/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa17d02831cafc012151919e4da99da1a407120c39689ea9224b60913cb250/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa17d02831cafc012151919e4da99da1a407120c39689ea9224b60913cb250/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8fa17d02831cafc012151919e4da99da1a407120c39689ea9224b60913cb250/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:27 compute-0 podman[521361]: 2025-10-03 11:21:27.860885836 +0000 UTC m=+0.266186810 container init 3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 11:21:27 compute-0 podman[521361]: 2025-10-03 11:21:27.87915011 +0000 UTC m=+0.284451054 container start 3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:21:27 compute-0 podman[521361]: 2025-10-03 11:21:27.884384207 +0000 UTC m=+0.289685191 container attach 3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 11:21:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:29 compute-0 heuristic_chebyshev[521376]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:21:29 compute-0 heuristic_chebyshev[521376]: --> relative data size: 1.0
Oct  3 11:21:29 compute-0 heuristic_chebyshev[521376]: --> All data devices are unavailable
Oct  3 11:21:29 compute-0 systemd[1]: libpod-3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d.scope: Deactivated successfully.
Oct  3 11:21:29 compute-0 podman[521361]: 2025-10-03 11:21:29.250803183 +0000 UTC m=+1.656104157 container died 3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:21:29 compute-0 systemd[1]: libpod-3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d.scope: Consumed 1.298s CPU time.
Oct  3 11:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8fa17d02831cafc012151919e4da99da1a407120c39689ea9224b60913cb250-merged.mount: Deactivated successfully.
Oct  3 11:21:29 compute-0 podman[521361]: 2025-10-03 11:21:29.346733656 +0000 UTC m=+1.752034590 container remove 3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:21:29 compute-0 systemd[1]: libpod-conmon-3ad16c8c1600e969fc8bc6d7c52d4e34894e10970d1ee33feac6a01f562eba1d.scope: Deactivated successfully.
Oct  3 11:21:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:29 compute-0 nova_compute[351685]: 2025-10-03 11:21:29.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:29 compute-0 podman[157165]: time="2025-10-03T11:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:21:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:21:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9104 "" "Go-http-client/1.1"
Oct  3 11:21:30 compute-0 podman[521553]: 2025-10-03 11:21:30.414487597 +0000 UTC m=+0.057090544 container create 6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  3 11:21:30 compute-0 systemd[1]: Started libpod-conmon-6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233.scope.
Oct  3 11:21:30 compute-0 podman[521553]: 2025-10-03 11:21:30.38951682 +0000 UTC m=+0.032119837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:21:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:21:30 compute-0 podman[521553]: 2025-10-03 11:21:30.535201641 +0000 UTC m=+0.177804588 container init 6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 11:21:30 compute-0 podman[521553]: 2025-10-03 11:21:30.545918533 +0000 UTC m=+0.188521490 container start 6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:21:30 compute-0 podman[521553]: 2025-10-03 11:21:30.55175991 +0000 UTC m=+0.194362847 container attach 6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:21:30 compute-0 loving_hellman[521569]: 167 167
Oct  3 11:21:30 compute-0 systemd[1]: libpod-6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233.scope: Deactivated successfully.
Oct  3 11:21:30 compute-0 podman[521553]: 2025-10-03 11:21:30.557120181 +0000 UTC m=+0.199723158 container died 6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 11:21:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4bb87204d53df8a97242572a8dd238474705c19f7ee5ba8c41337d20c7dbd1d-merged.mount: Deactivated successfully.
Oct  3 11:21:30 compute-0 podman[521553]: 2025-10-03 11:21:30.652221507 +0000 UTC m=+0.294824444 container remove 6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:21:30 compute-0 systemd[1]: libpod-conmon-6dfb3d0a2499b9081d8f8a4dcb0d2428f24d1dc371fafcd92db83c5e1b82a233.scope: Deactivated successfully.
Oct  3 11:21:30 compute-0 podman[521591]: 2025-10-03 11:21:30.908806209 +0000 UTC m=+0.095515370 container create 863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:21:30 compute-0 podman[521591]: 2025-10-03 11:21:30.874927838 +0000 UTC m=+0.061637089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:21:30 compute-0 systemd[1]: Started libpod-conmon-863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27.scope.
Oct  3 11:21:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad1a9c835d236d5adad12cc3be72103a75857f060902769e1515c59e4ef8cc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad1a9c835d236d5adad12cc3be72103a75857f060902769e1515c59e4ef8cc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad1a9c835d236d5adad12cc3be72103a75857f060902769e1515c59e4ef8cc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad1a9c835d236d5adad12cc3be72103a75857f060902769e1515c59e4ef8cc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:31 compute-0 podman[521591]: 2025-10-03 11:21:31.040433251 +0000 UTC m=+0.227142462 container init 863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 11:21:31 compute-0 podman[521591]: 2025-10-03 11:21:31.052788926 +0000 UTC m=+0.239498097 container start 863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 11:21:31 compute-0 podman[521591]: 2025-10-03 11:21:31.05884455 +0000 UTC m=+0.245553721 container attach 863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 11:21:31 compute-0 openstack_network_exporter[367524]: ERROR   11:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:21:31 compute-0 openstack_network_exporter[367524]: ERROR   11:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:21:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:21:31 compute-0 openstack_network_exporter[367524]: ERROR   11:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:21:31 compute-0 openstack_network_exporter[367524]: ERROR   11:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:21:31 compute-0 openstack_network_exporter[367524]: ERROR   11:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:21:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:21:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:31 compute-0 silly_galileo[521607]: {
Oct  3 11:21:31 compute-0 silly_galileo[521607]:    "0": [
Oct  3 11:21:31 compute-0 silly_galileo[521607]:        {
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "devices": [
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "/dev/loop3"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            ],
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_name": "ceph_lv0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_size": "21470642176",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "name": "ceph_lv0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "tags": {
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cluster_name": "ceph",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.crush_device_class": "",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.encrypted": "0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osd_id": "0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.type": "block",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.vdo": "0"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            },
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "type": "block",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "vg_name": "ceph_vg0"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:        }
Oct  3 11:21:31 compute-0 silly_galileo[521607]:    ],
Oct  3 11:21:31 compute-0 silly_galileo[521607]:    "1": [
Oct  3 11:21:31 compute-0 silly_galileo[521607]:        {
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "devices": [
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "/dev/loop4"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            ],
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_name": "ceph_lv1",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_size": "21470642176",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "name": "ceph_lv1",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "tags": {
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cluster_name": "ceph",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.crush_device_class": "",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.encrypted": "0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osd_id": "1",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.type": "block",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.vdo": "0"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            },
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "type": "block",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "vg_name": "ceph_vg1"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:        }
Oct  3 11:21:31 compute-0 silly_galileo[521607]:    ],
Oct  3 11:21:31 compute-0 silly_galileo[521607]:    "2": [
Oct  3 11:21:31 compute-0 silly_galileo[521607]:        {
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "devices": [
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "/dev/loop5"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            ],
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_name": "ceph_lv2",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_size": "21470642176",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "name": "ceph_lv2",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "tags": {
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.cluster_name": "ceph",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.crush_device_class": "",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.encrypted": "0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osd_id": "2",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.type": "block",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:                "ceph.vdo": "0"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            },
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "type": "block",
Oct  3 11:21:31 compute-0 silly_galileo[521607]:            "vg_name": "ceph_vg2"
Oct  3 11:21:31 compute-0 silly_galileo[521607]:        }
Oct  3 11:21:31 compute-0 silly_galileo[521607]:    ]
Oct  3 11:21:31 compute-0 silly_galileo[521607]: }
Oct  3 11:21:31 compute-0 systemd[1]: libpod-863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27.scope: Deactivated successfully.
Oct  3 11:21:31 compute-0 podman[521591]: 2025-10-03 11:21:31.881001939 +0000 UTC m=+1.067711130 container died 863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 11:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ad1a9c835d236d5adad12cc3be72103a75857f060902769e1515c59e4ef8cc9-merged.mount: Deactivated successfully.
Oct  3 11:21:31 compute-0 podman[521591]: 2025-10-03 11:21:31.972915203 +0000 UTC m=+1.159624404 container remove 863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_galileo, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:21:31 compute-0 systemd[1]: libpod-conmon-863e47dc15e9f0dfe7dd3037fce27d7e91f8f68ea2f990d6783a76d2734dab27.scope: Deactivated successfully.
Oct  3 11:21:33 compute-0 podman[521766]: 2025-10-03 11:21:33.051531301 +0000 UTC m=+0.102334807 container create d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shamir, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:21:33 compute-0 podman[521766]: 2025-10-03 11:21:33.005650836 +0000 UTC m=+0.056454402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:21:33 compute-0 systemd[1]: Started libpod-conmon-d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2.scope.
Oct  3 11:21:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:33 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:21:33 compute-0 podman[521766]: 2025-10-03 11:21:33.208461112 +0000 UTC m=+0.259264668 container init d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:21:33 compute-0 podman[521766]: 2025-10-03 11:21:33.2268857 +0000 UTC m=+0.277689216 container start d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shamir, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:21:33 compute-0 podman[521766]: 2025-10-03 11:21:33.233413009 +0000 UTC m=+0.284216565 container attach d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:21:33 compute-0 stoic_shamir[521781]: 167 167
Oct  3 11:21:33 compute-0 systemd[1]: libpod-d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2.scope: Deactivated successfully.
Oct  3 11:21:33 compute-0 podman[521766]: 2025-10-03 11:21:33.240211816 +0000 UTC m=+0.291015352 container died d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 11:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-361d5a5c1225cd73925f793897dd9696a555b624fe3029a5222908efc7c44956-merged.mount: Deactivated successfully.
Oct  3 11:21:33 compute-0 podman[521766]: 2025-10-03 11:21:33.328294148 +0000 UTC m=+0.379097654 container remove d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 11:21:33 compute-0 systemd[1]: libpod-conmon-d46dd1e64191d2dead4618de40481edda74f5ec0d25e1a16b54ee83ba0998ec2.scope: Deactivated successfully.
Oct  3 11:21:33 compute-0 podman[521804]: 2025-10-03 11:21:33.602949647 +0000 UTC m=+0.083381573 container create 5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilbur, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 11:21:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:33 compute-0 podman[521804]: 2025-10-03 11:21:33.569948124 +0000 UTC m=+0.050380130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:21:33 compute-0 systemd[1]: Started libpod-conmon-5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af.scope.
Oct  3 11:21:33 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0958b7a4f374ac1cf41c84a35c3247e25ab91ea059c0e0ca09c04c87008aef34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0958b7a4f374ac1cf41c84a35c3247e25ab91ea059c0e0ca09c04c87008aef34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0958b7a4f374ac1cf41c84a35c3247e25ab91ea059c0e0ca09c04c87008aef34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0958b7a4f374ac1cf41c84a35c3247e25ab91ea059c0e0ca09c04c87008aef34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:21:33 compute-0 podman[521804]: 2025-10-03 11:21:33.757510942 +0000 UTC m=+0.237942958 container init 5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilbur, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:21:33 compute-0 podman[521804]: 2025-10-03 11:21:33.778004747 +0000 UTC m=+0.258436673 container start 5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 11:21:33 compute-0 podman[521804]: 2025-10-03 11:21:33.791712864 +0000 UTC m=+0.272144820 container attach 5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilbur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 11:21:34 compute-0 nova_compute[351685]: 2025-10-03 11:21:34.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:34 compute-0 cool_wilbur[521819]: {
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "osd_id": 1,
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "type": "bluestore"
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:    },
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "osd_id": 2,
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "type": "bluestore"
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:    },
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "osd_id": 0,
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:        "type": "bluestore"
Oct  3 11:21:34 compute-0 cool_wilbur[521819]:    }
Oct  3 11:21:34 compute-0 cool_wilbur[521819]: }
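The JSON printed by the cool_wilbur container above is an inventory of the host's BlueStore OSDs, keyed by OSD UUID. A minimal sketch of consuming that structure, assuming the output has been captured to a hypothetical local file osds.json (the schema, osd_uuid mapping to ceph_fsid, device, osd_id, and type, is taken from the log itself):

```python
import json

# Hypothetical file holding the container output captured above.
with open("osds.json") as f:
    osds = json.load(f)

# Sort by osd_id so the report reads osd.0, osd.1, osd.2.
for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f"osd.{info['osd_id']}: {info['device']} "
          f"(uuid={osd_uuid}, type={info['type']})")
```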
Oct  3 11:21:34 compute-0 systemd[1]: libpod-5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af.scope: Deactivated successfully.
Oct  3 11:21:34 compute-0 podman[521804]: 2025-10-03 11:21:34.920654358 +0000 UTC m=+1.401086294 container died 5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilbur, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:21:34 compute-0 systemd[1]: libpod-5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af.scope: Consumed 1.143s CPU time.
Oct  3 11:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0958b7a4f374ac1cf41c84a35c3247e25ab91ea059c0e0ca09c04c87008aef34-merged.mount: Deactivated successfully.
Oct  3 11:21:35 compute-0 podman[521804]: 2025-10-03 11:21:35.011341394 +0000 UTC m=+1.491773330 container remove 5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 11:21:35 compute-0 systemd[1]: libpod-conmon-5f297e0ffcc69904bd4d4094678bb5ce0c67652b0dfeae982f8ba312552851af.scope: Deactivated successfully.
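The init, start, attach, died, and remove events above, bracketed by the libpod scope activating and then deactivating, are the complete lifecycle of a short-lived container; cephadm appears to run the ceph image this way to gather device inventory. A rough reproduction of such a one-shot run, assuming podman is installed (the image and command here are illustrative, not what cephadm invokes):

```python
import subprocess

# A one-shot --rm container: podman journals init/start/attach on the way up
# and died/remove on the way down, matching the event sequence logged above.
subprocess.run(
    ["podman", "run", "--rm", "quay.io/centos/centos:stream9", "true"],
    check=True,
)
```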
Oct  3 11:21:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:21:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:21:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:21:35 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:21:35 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6291563f-a9ef-4689-8996-ce72a12e90df does not exist
Oct  3 11:21:35 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d16dcbbe-d564-4429-862e-e679d118e2b3 does not exist
Oct  3 11:21:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:36 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:21:36 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:21:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3428: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:38 compute-0 nova_compute[351685]: 2025-10-03 11:21:38.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:38 compute-0 nova_compute[351685]: 2025-10-03 11:21:38.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:21:38 compute-0 nova_compute[351685]: 2025-10-03 11:21:38.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:21:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:39 compute-0 nova_compute[351685]: 2025-10-03 11:21:39.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:39 compute-0 nova_compute[351685]: 2025-10-03 11:21:39.934 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:21:39 compute-0 nova_compute[351685]: 2025-10-03 11:21:39.934 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:21:39 compute-0 nova_compute[351685]: 2025-10-03 11:21:39.935 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:21:39 compute-0 nova_compute[351685]: 2025-10-03 11:21:39.935 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.905 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.906 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
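The warning and the line after it record that every pollster from the [pollsters] source shares a single worker thread, so pollsters run strictly one after another. A minimal sketch of why that stretches the polling cycle (the names and timings are illustrative, not ceilometer's code):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's libvirt round-trip
    return name

pollsters = [f"pollster-{i}" for i in range(30)]

# With max_workers=1, as in the log, 30 pollsters execute serially, so the
# cycle takes roughly 30 x 0.1 s instead of overlapping across threads.
with ThreadPoolExecutor(max_workers=1) as executor:
    start = time.monotonic()
    list(executor.map(poll, pollsters))
    print(f"cycle took {time.monotonic() - start:.1f}s")
```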
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.920 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
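The instance record above is logged as a Python dict repr (single-quoted keys and strings), so it is not valid JSON and json.loads would reject it; ast.literal_eval can parse it back safely. A minimal sketch, with the logged dict shortened to a few of its fields:

```python
import ast

# Shortened copy of the logged repr; the single quotes make it Python, not JSON.
logged = ("{'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', "
          "'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, "
          "'ephemeral': 1, 'swap': 0}, 'status': 'active'}")

instance = ast.literal_eval(logged)
flavor = instance["flavor"]
print(f"{instance['name']}: {flavor['vcpus']} vCPU, "
      f"{flavor['ram']} MiB RAM, {flavor['disk']} GiB root disk")
```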
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.920 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.921 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.921 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:21:40.921743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.931 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.932 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.933 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.933 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.934 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:21:40.933704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.935 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:21:40.936533) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.977 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.978 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
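The three capacity samples line up with the m1.small flavor in the discovery record: 1073741824 bytes is exactly 1 GiB, matching the flavor's 1 GiB root disk and 1 GiB ephemeral disk, while the 485376-byte third device is not identified in the log. Checking the arithmetic:

```python
GiB = 1024 ** 3
assert GiB == 1073741824  # the two large disk.device.capacity samples above

# The small third device; the log does not say which disk it is.
print(f"third device: {485376 / 1024:.0f} KiB")  # 474 KiB
```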
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.981 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.982 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:40.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:21:40.982089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.049 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:21:41.050496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:21:41.053783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
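Dividing each disk.device.read.latency sample by the matching disk.device.read.requests sample gives a rough mean per-request read latency. This assumes both counters are cumulative since boot, the latency is in nanoseconds (as libvirt block statistics conventionally report), and the three samples are emitted in the same per-device order for both meters:

```python
# (latency_ns, requests) per device, taken from the samples logged above.
samples = [(1351272306, 840), (240576853, 173), (113683071, 109)]

for latency_ns, requests in samples:
    # Cumulative nanoseconds / cumulative requests -> mean ms per read.
    print(f"{latency_ns / requests / 1e6:.2f} ms/request")
# Prints roughly 1.61, 1.39, and 1.04 ms/request for the three devices.
```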
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.056 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:21:41.057016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.058 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:21:41.060564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.061 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.063 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.064 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.064 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.066 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:21:41.063672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:21:41.066738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.099 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.101 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.101 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.102 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.102 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.103 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:21:41.101726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.104 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.105 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.105 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:21:41.105320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.106 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.107 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.108 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.108 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.109 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:21:41.108751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
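[Note] Cumulative meters such as network.incoming.bytes report an ever-growing counter, the .delta variants report the change since the previous poll, and the .rate variants divide that change by the elapsed time; the rate pollster here is skipped because discovery returned no resources for it this cycle. For reference, the arithmetic that distinguishes the variants, as a hypothetical helper (not ceilometer code):

    # Hypothetical helper: deriving a .delta and a .rate meter from two
    # consecutive cumulative byte counters.
    def delta_and_rate(prev_bytes, cur_bytes, elapsed_s):
        delta = max(cur_bytes - prev_bytes, 0)               # guard against counter reset
        rate = delta / elapsed_s if elapsed_s > 0 else 0.0   # bytes per second
        return delta, rate

    # Two polls that both read the cumulative 2856 bytes seen later in this
    # cycle would yield the delta of 0 logged above:
    print(delta_and_rate(2856, 2856, 600))   # -> (0, 0.0)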
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.112 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:21:41.111957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.113 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.114 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.114 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.114 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:21:41.114560) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.115 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.116 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.116 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.116 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:21:41.116893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.119 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:21:41.118966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.120 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.121 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 102860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:21:41.121293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
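[Note] The "cpu" volume above (102860000000) is the instance's cumulative CPU time in nanoseconds, roughly 102.86 s of guest CPU consumed so far, so a utilization figure requires two consecutive polls. A hypothetical conversion; the 600 s interval and follow-up reading are invented for illustration:

    # Hypothetical conversion (not ceilometer code): two cumulative "cpu"
    # samples (nanoseconds) -> utilization percentage.
    def cpu_util_percent(prev_ns, cur_ns, elapsed_s, vcpus=1):
        return 100.0 * (cur_ns - prev_ns) / 1e9 / (elapsed_s * vcpus)

    # If a poll 600 s later read 102_920_000_000 ns on a 1-vCPU guest:
    print(cpu_util_percent(102_860_000_000, 102_920_000_000, 600))  # 10.0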
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.122 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.123 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.123 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:21:41.123521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.125 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.126 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:21:41.125617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.127 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.127 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.129 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.130 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.130 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
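[Note] memory.usage is reported in MB, and the fractional volume above is exactly consistent with a KiB-granularity reading divided by 1024 (assumption: the hypervisor reported 49,988 KiB resident):

    # Arithmetic check for the memory.usage volume logged above.
    kib = 49_988          # assumed KiB reading from the hypervisor
    print(kib / 1024)     # 48.81640625, the logged volume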
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:21:41.127672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:21:41.130027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.131 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:21:41.132734) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.133 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.134 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:21:41.134596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:21:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:21:41.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
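[Note] The run of "Finished processing pollster [...]" lines marks the end of the polling task: every meter handled earlier in the cycle is acknowledged once. A small assumed-format parser for these lines, e.g. to count which pollsters completed in a cycle (the journal unit name in the usage comment is a guess):

    # Assumed-format parser for the "Finished processing pollster [...]" lines.
    import re, sys

    PAT = re.compile(r"Finished processing pollster \[([^\]]+)\]")
    done = [m.group(1) for line in sys.stdin if (m := PAT.search(line))]
    print(f"{len(done)} pollsters finished:", ", ".join(sorted(done)))

    # usage (unit name assumed): journalctl -u edpm_ceilometer_agent_compute | python3 count.py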
Oct  3 11:21:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:21:41.698 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:21:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:21:41.699 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:21:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:21:41.700 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
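[Note] The acquiring/acquired/released trio above is oslo.concurrency's standard lock tracing, including how long the caller waited (0.001 s) and held (0.001 s) the lock. Minimal usage of the same documented API (requires the oslo.concurrency package):

    # The decorator serializes callers on a named lock; oslo emits the
    # Acquiring/acquired/released DEBUG lines seen above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass   # body runs with the named lock held

    _check_child_processes()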
Oct  3 11:21:42 compute-0 nova_compute[351685]: 2025-10-03 11:21:42.782 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:21:42 compute-0 nova_compute[351685]: 2025-10-03 11:21:42.800 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:21:42 compute-0 nova_compute[351685]: 2025-10-03 11:21:42.801 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
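[Note] The network_info payload nova logs above is plain JSON: one OVS/OVN VIF on br-int with fixed IP 192.168.0.158 and floating IP 192.168.122.250. A sketch that walks that structure; raw_payload below is trimmed from the logged list down to the relevant keys:

    # Extracting fixed and floating IPs from a nova network_info list.
    import json

    raw_payload = ('[{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", '
                   '"network": {"subnets": [{"ips": [{"address": "192.168.0.158", '
                   '"floating_ips": [{"address": "192.168.122.250"}]}]}]}}]')
    for vif in json.loads(raw_payload):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # a8897fbc-9fd1-4981-b049-6e702bcb7e2d 192.168.0.158 -> ['192.168.122.250']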
Oct  3 11:21:42 compute-0 podman[521914]: 2025-10-03 11:21:42.897121017 +0000 UTC m=+0.136423857 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:21:42 compute-0 podman[521916]: 2025-10-03 11:21:42.905271547 +0000 UTC m=+0.147377946 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  3 11:21:42 compute-0 podman[521931]: 2025-10-03 11:21:42.907953312 +0000 UTC m=+0.123201784 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20250930, managed_by=edpm_ansible)
Oct  3 11:21:42 compute-0 podman[521915]: 2025-10-03 11:21:42.908716037 +0000 UTC m=+0.151872340 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, config_id=edpm)
Oct  3 11:21:42 compute-0 podman[521917]: 2025-10-03 11:21:42.930558014 +0000 UTC m=+0.138203473 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:21:42 compute-0 podman[521935]: 2025-10-03 11:21:42.936084281 +0000 UTC m=+0.151530199 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 11:21:42 compute-0 podman[521936]: 2025-10-03 11:21:42.940455681 +0000 UTC m=+0.144944549 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
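[Note] Each podman line above is a periodic healthcheck event; health_status=healthy with health_failing_streak=0 means the configured test command keeps succeeding. The same state can be read back directly (sketch; assumes podman is on PATH and the containers exist):

    # Query podman's recorded health state for a few of the containers above.
    import subprocess

    def health(container: str) -> str:
        return subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True, check=True).stdout.strip()

    for name in ("node_exporter", "ovn_metadata_agent", "ceilometer_agent_compute"):
        print(name, health(name))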
Oct  3 11:21:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:44 compute-0 nova_compute[351685]: 2025-10-03 11:21:44.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:21:46
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'volumes', '.mgr', 'images', 'vms', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data']
Oct  3 11:21:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:21:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:47 compute-0 nova_compute[351685]: 2025-10-03 11:21:47.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:47 compute-0 nova_compute[351685]: 2025-10-03 11:21:47.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:49 compute-0 nova_compute[351685]: 2025-10-03 11:21:49.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:50 compute-0 nova_compute[351685]: 2025-10-03 11:21:50.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:50 compute-0 nova_compute[351685]: 2025-10-03 11:21:50.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:21:50 compute-0 nova_compute[351685]: 2025-10-03 11:21:50.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:21:50 compute-0 nova_compute[351685]: 2025-10-03 11:21:50.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:21:50 compute-0 nova_compute[351685]: 2025-10-03 11:21:50.761 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:21:50 compute-0 nova_compute[351685]: 2025-10-03 11:21:50.761 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:21:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:21:51 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224135677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:21:51 compute-0 nova_compute[351685]: 2025-10-03 11:21:51.279 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:21:51 compute-0 nova_compute[351685]: 2025-10-03 11:21:51.388 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:21:51 compute-0 nova_compute[351685]: 2025-10-03 11:21:51.390 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:21:51 compute-0 nova_compute[351685]: 2025-10-03 11:21:51.393 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:21:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:51 compute-0 nova_compute[351685]: 2025-10-03 11:21:51.988 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:21:51 compute-0 nova_compute[351685]: 2025-10-03 11:21:51.989 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3731MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:21:51 compute-0 nova_compute[351685]: 2025-10-03 11:21:51.990 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:21:51 compute-0 nova_compute[351685]: 2025-10-03 11:21:51.990 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.084 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.085 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.086 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.127 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:21:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:21:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169767257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.628 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.640 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.659 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.661 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:21:52 compute-0 nova_compute[351685]: 2025-10-03 11:21:52.661 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:21:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3436: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:53 compute-0 nova_compute[351685]: 2025-10-03 11:21:53.662 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:53 compute-0 nova_compute[351685]: 2025-10-03 11:21:53.663 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:21:53 compute-0 podman[522093]: 2025-10-03 11:21:53.879682372 +0000 UTC m=+0.122939186 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:21:53 compute-0 podman[522095]: 2025-10-03 11:21:53.898150112 +0000 UTC m=+0.130584091 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:21:53 compute-0 podman[522094]: 2025-10-03 11:21:53.902594974 +0000 UTC m=+0.139878377 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Oct  3 11:21:54 compute-0 nova_compute[351685]: 2025-10-03 11:21:54.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:55 compute-0 nova_compute[351685]: 2025-10-03 11:21:55.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:21:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:21:56 compute-0 nova_compute[351685]: 2025-10-03 11:21:56.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:56 compute-0 nova_compute[351685]: 2025-10-03 11:21:56.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:21:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:21:59 compute-0 nova_compute[351685]: 2025-10-03 11:21:59.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:21:59 compute-0 podman[157165]: time="2025-10-03T11:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:21:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:21:59 compute-0 nova_compute[351685]: 2025-10-03 11:21:59.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:59 compute-0 nova_compute[351685]: 2025-10-03 11:21:59.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:21:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9116 "" "Go-http-client/1.1"
Oct  3 11:22:01 compute-0 openstack_network_exporter[367524]: ERROR   11:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:22:01 compute-0 openstack_network_exporter[367524]: ERROR   11:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:22:01 compute-0 openstack_network_exporter[367524]: ERROR   11:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:22:01 compute-0 openstack_network_exporter[367524]: ERROR   11:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:22:01 compute-0 openstack_network_exporter[367524]: ERROR   11:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:22:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:04 compute-0 nova_compute[351685]: 2025-10-03 11:22:04.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:09 compute-0 nova_compute[351685]: 2025-10-03 11:22:09.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:09 compute-0 nova_compute[351685]: 2025-10-03 11:22:09.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:22:09 compute-0 nova_compute[351685]: 2025-10-03 11:22:09.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3445: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:13 compute-0 podman[522155]: 2025-10-03 11:22:13.87279287 +0000 UTC m=+0.100935756 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:22:13 compute-0 podman[522157]: 2025-10-03 11:22:13.891797279 +0000 UTC m=+0.115089890 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct  3 11:22:13 compute-0 podman[522156]: 2025-10-03 11:22:13.901960564 +0000 UTC m=+0.138828060 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6)
Oct  3 11:22:13 compute-0 podman[522183]: 2025-10-03 11:22:13.910330082 +0000 UTC m=+0.095577113 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:22:13 compute-0 podman[522158]: 2025-10-03 11:22:13.918109222 +0000 UTC m=+0.132343922 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  3 11:22:13 compute-0 podman[522171]: 2025-10-03 11:22:13.934624562 +0000 UTC m=+0.126921369 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 11:22:13 compute-0 podman[522162]: 2025-10-03 11:22:13.943932849 +0000 UTC m=+0.145547805 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:22:14 compute-0 nova_compute[351685]: 2025-10-03 11:22:14.098 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:14 compute-0 nova_compute[351685]: 2025-10-03 11:22:14.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:22:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:22:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:19 compute-0 nova_compute[351685]: 2025-10-03 11:22:19.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:19 compute-0 nova_compute[351685]: 2025-10-03 11:22:19.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:24 compute-0 nova_compute[351685]: 2025-10-03 11:22:24.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:24 compute-0 podman[522290]: 2025-10-03 11:22:24.848864422 +0000 UTC m=+0.095408019 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:22:24 compute-0 podman[522292]: 2025-10-03 11:22:24.869172063 +0000 UTC m=+0.104028455 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:22:24 compute-0 podman[522291]: 2025-10-03 11:22:24.869483583 +0000 UTC m=+0.107924681 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 11:22:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3452: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:25 compute-0 nova_compute[351685]: 2025-10-03 11:22:25.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:25 compute-0 nova_compute[351685]: 2025-10-03 11:22:25.750 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:25 compute-0 nova_compute[351685]: 2025-10-03 11:22:25.750 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 11:22:25 compute-0 nova_compute[351685]: 2025-10-03 11:22:25.766 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 11:22:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:29 compute-0 nova_compute[351685]: 2025-10-03 11:22:29.074 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:29 compute-0 nova_compute[351685]: 2025-10-03 11:22:29.102 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 11:22:29 compute-0 nova_compute[351685]: 2025-10-03 11:22:29.103 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:22:29 compute-0 nova_compute[351685]: 2025-10-03 11:22:29.103 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:22:29 compute-0 nova_compute[351685]: 2025-10-03 11:22:29.131 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:22:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:29 compute-0 podman[157165]: time="2025-10-03T11:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:22:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:22:29 compute-0 nova_compute[351685]: 2025-10-03 11:22:29.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:22:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9115 "" "Go-http-client/1.1"
Oct  3 11:22:31 compute-0 openstack_network_exporter[367524]: ERROR   11:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:22:31 compute-0 openstack_network_exporter[367524]: ERROR   11:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:22:31 compute-0 openstack_network_exporter[367524]: ERROR   11:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:22:31 compute-0 openstack_network_exporter[367524]: ERROR   11:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:22:31 compute-0 openstack_network_exporter[367524]: ERROR   11:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:22:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:34 compute-0 nova_compute[351685]: 2025-10-03 11:22:34.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  3 11:22:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:36 compute-0 podman[522516]: 2025-10-03 11:22:36.56371287 +0000 UTC m=+0.099942745 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:22:36 compute-0 podman[522516]: 2025-10-03 11:22:36.679001775 +0000 UTC m=+0.215231560 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:22:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:22:37 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:22:37 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:38 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:38 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:22:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:22:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:22:38 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:22:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:22:38 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:38 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 32dff5c9-dbd8-4ed1-9f14-4943d0d6bcb9 does not exist
Oct  3 11:22:38 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7d600153-2260-44cf-95ca-ad6e1102117b does not exist
Oct  3 11:22:38 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0c163089-0e31-4ae1-888c-94f942e848df does not exist
Oct  3 11:22:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:22:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:22:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:22:38 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:22:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:22:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:22:38 compute-0 nova_compute[351685]: 2025-10-03 11:22:38.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:38 compute-0 nova_compute[351685]: 2025-10-03 11:22:38.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:22:38 compute-0 nova_compute[351685]: 2025-10-03 11:22:38.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:22:38 compute-0 nova_compute[351685]: 2025-10-03 11:22:38.954 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:22:38 compute-0 nova_compute[351685]: 2025-10-03 11:22:38.955 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:22:38 compute-0 nova_compute[351685]: 2025-10-03 11:22:38.955 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:22:38 compute-0 nova_compute[351685]: 2025-10-03 11:22:38.955 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:22:39 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:22:39 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:39 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:22:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3459: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:39 compute-0 podman[522938]: 2025-10-03 11:22:39.628000818 +0000 UTC m=+0.059733885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:22:39 compute-0 podman[522938]: 2025-10-03 11:22:39.772702255 +0000 UTC m=+0.204435272 container create cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 11:22:39 compute-0 nova_compute[351685]: 2025-10-03 11:22:39.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:39 compute-0 systemd[1]: Started libpod-conmon-cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade.scope.
Oct  3 11:22:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:22:40 compute-0 podman[522938]: 2025-10-03 11:22:40.02307902 +0000 UTC m=+0.454812057 container init cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:22:40 compute-0 podman[522938]: 2025-10-03 11:22:40.040744626 +0000 UTC m=+0.472477623 container start cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:22:40 compute-0 nifty_bartik[522953]: 167 167
Oct  3 11:22:40 compute-0 systemd[1]: libpod-cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade.scope: Deactivated successfully.
Oct  3 11:22:40 compute-0 podman[522938]: 2025-10-03 11:22:40.072773632 +0000 UTC m=+0.504506699 container attach cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:22:40 compute-0 podman[522938]: 2025-10-03 11:22:40.074180098 +0000 UTC m=+0.505913135 container died cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-48f59f4dcdc1302e299a1ca866caa57c6883ca326ff00088a3d64c61f79c21aa-merged.mount: Deactivated successfully.
Oct  3 11:22:40 compute-0 podman[522938]: 2025-10-03 11:22:40.361639151 +0000 UTC m=+0.793372128 container remove cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 11:22:40 compute-0 systemd[1]: libpod-conmon-cd49a518d8d61e7e1bc3ee1d61d83e7a5a66deda38e42df767921fd5bc830ade.scope: Deactivated successfully.
Oct  3 11:22:40 compute-0 podman[522976]: 2025-10-03 11:22:40.637500122 +0000 UTC m=+0.071082809 container create e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_spence, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:22:40 compute-0 podman[522976]: 2025-10-03 11:22:40.60437135 +0000 UTC m=+0.037954057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:22:40 compute-0 systemd[1]: Started libpod-conmon-e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2.scope.
Oct  3 11:22:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88480703fc5ee2d8e19c6c4df55845d2914d3d6b992ff07571d4a93510748521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88480703fc5ee2d8e19c6c4df55845d2914d3d6b992ff07571d4a93510748521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88480703fc5ee2d8e19c6c4df55845d2914d3d6b992ff07571d4a93510748521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88480703fc5ee2d8e19c6c4df55845d2914d3d6b992ff07571d4a93510748521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88480703fc5ee2d8e19c6c4df55845d2914d3d6b992ff07571d4a93510748521/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:40 compute-0 podman[522976]: 2025-10-03 11:22:40.812996547 +0000 UTC m=+0.246579314 container init e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:22:40 compute-0 podman[522976]: 2025-10-03 11:22:40.829377431 +0000 UTC m=+0.262960128 container start e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 11:22:40 compute-0 podman[522976]: 2025-10-03 11:22:40.84494857 +0000 UTC m=+0.278531337 container attach e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_spence, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 11:22:40 compute-0 nova_compute[351685]: 2025-10-03 11:22:40.952 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:22:40 compute-0 nova_compute[351685]: 2025-10-03 11:22:40.978 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:22:40 compute-0 nova_compute[351685]: 2025-10-03 11:22:40.979 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:22:40 compute-0 nova_compute[351685]: 2025-10-03 11:22:40.980 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:40 compute-0 nova_compute[351685]: 2025-10-03 11:22:40.980 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 11:22:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:22:41.700 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:22:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:22:41.702 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:22:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:22:41.704 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:22:41 compute-0 lucid_spence[522991]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:22:41 compute-0 lucid_spence[522991]: --> relative data size: 1.0
Oct  3 11:22:41 compute-0 lucid_spence[522991]: --> All data devices are unavailable
Oct  3 11:22:41 compute-0 systemd[1]: libpod-e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2.scope: Deactivated successfully.
Oct  3 11:22:41 compute-0 systemd[1]: libpod-e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2.scope: Consumed 1.110s CPU time.
Oct  3 11:22:42 compute-0 podman[523020]: 2025-10-03 11:22:42.058358629 +0000 UTC m=+0.042423470 container died e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-88480703fc5ee2d8e19c6c4df55845d2914d3d6b992ff07571d4a93510748521-merged.mount: Deactivated successfully.
Oct  3 11:22:42 compute-0 podman[523020]: 2025-10-03 11:22:42.191080853 +0000 UTC m=+0.175145694 container remove e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:22:42 compute-0 systemd[1]: libpod-conmon-e9c1612010e73f20616bb018d2565e10a978bc83fb70c62dcb6ee43299d78ba2.scope: Deactivated successfully.
Oct  3 11:22:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:43 compute-0 podman[523173]: 2025-10-03 11:22:43.313410653 +0000 UTC m=+0.087456274 container create a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hypatia, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 11:22:43 compute-0 podman[523173]: 2025-10-03 11:22:43.271921464 +0000 UTC m=+0.045967145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:22:43 compute-0 systemd[1]: Started libpod-conmon-a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb.scope.
Oct  3 11:22:43 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:22:43 compute-0 podman[523173]: 2025-10-03 11:22:43.463194583 +0000 UTC m=+0.237240214 container init a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct  3 11:22:43 compute-0 podman[523173]: 2025-10-03 11:22:43.475028753 +0000 UTC m=+0.249074374 container start a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 11:22:43 compute-0 podman[523173]: 2025-10-03 11:22:43.480830199 +0000 UTC m=+0.254875880 container attach a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 11:22:43 compute-0 busy_hypatia[523189]: 167 167
Oct  3 11:22:43 compute-0 systemd[1]: libpod-a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb.scope: Deactivated successfully.
Oct  3 11:22:43 compute-0 conmon[523189]: conmon a0ec0dd9de98b07bab5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb.scope/container/memory.events
Oct  3 11:22:43 compute-0 podman[523173]: 2025-10-03 11:22:43.489507977 +0000 UTC m=+0.263553578 container died a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hypatia, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 11:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-06af4434bfb585c9f8b52cb4ae8fa4e0eea82fb94cb4fe573e302aa0fa3b1158-merged.mount: Deactivated successfully.
Oct  3 11:22:43 compute-0 podman[523173]: 2025-10-03 11:22:43.560568414 +0000 UTC m=+0.334614015 container remove a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hypatia, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:22:43 compute-0 systemd[1]: libpod-conmon-a0ec0dd9de98b07bab5a30aef79c756f461b033bc4f2b701afb2b3e2ccce67fb.scope: Deactivated successfully.
Oct  3 11:22:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:43 compute-0 podman[523211]: 2025-10-03 11:22:43.860106154 +0000 UTC m=+0.086343158 container create 498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 11:22:43 compute-0 podman[523211]: 2025-10-03 11:22:43.826326152 +0000 UTC m=+0.052563196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:22:43 compute-0 systemd[1]: Started libpod-conmon-498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287.scope.
Oct  3 11:22:43 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2ede2c9f8a01766c664b1d2961458b7b458f4c8b8044581d362a3d177fd29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2ede2c9f8a01766c664b1d2961458b7b458f4c8b8044581d362a3d177fd29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2ede2c9f8a01766c664b1d2961458b7b458f4c8b8044581d362a3d177fd29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d2ede2c9f8a01766c664b1d2961458b7b458f4c8b8044581d362a3d177fd29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:43 compute-0 podman[523211]: 2025-10-03 11:22:43.984918135 +0000 UTC m=+0.211155139 container init 498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:22:44 compute-0 podman[523211]: 2025-10-03 11:22:44.007380455 +0000 UTC m=+0.233617429 container start 498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:22:44 compute-0 podman[523211]: 2025-10-03 11:22:44.014614366 +0000 UTC m=+0.240851330 container attach 498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 11:22:44 compute-0 podman[523227]: 2025-10-03 11:22:44.10459193 +0000 UTC m=+0.156921481 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:22:44 compute-0 podman[523237]: 2025-10-03 11:22:44.117475623 +0000 UTC m=+0.149367168 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  3 11:22:44 compute-0 podman[523229]: 2025-10-03 11:22:44.125848232 +0000 UTC m=+0.169829005 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  3 11:22:44 compute-0 podman[523231]: 2025-10-03 11:22:44.126306345 +0000 UTC m=+0.162901621 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Oct  3 11:22:44 compute-0 podman[523232]: 2025-10-03 11:22:44.136386969 +0000 UTC m=+0.175599969 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:22:44 compute-0 podman[523244]: 2025-10-03 11:22:44.137121913 +0000 UTC m=+0.153784771 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Oct  3 11:22:44 compute-0 podman[523258]: 2025-10-03 11:22:44.15203277 +0000 UTC m=+0.158172070 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001)
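The three podman events above are periodic container health checks: each records the health_status, the current health_failing_streak, and the container's full config_data (the create options managed by edpm_ansible). A minimal parsing sketch, assuming the field order seen in these exact lines (name= before health_status=), not any podman output guarantee:

import re

# Hypothetical helper for "container health_status" journal lines like the
# ones above; the field order is an assumption taken from this log.
HEALTH_RE = re.compile(
    r"container health_status \S+ \(.*?name=(?P<name>[^,]+), "
    r".*?health_status=(?P<status>[^,]+), .*?health_failing_streak=(?P<streak>\d+)")

def parse_health(line):
    m = HEALTH_RE.search(line)
    return (m.group("name"), m.group("status"), int(m.group("streak"))) if m else None

# parse_health(raw_line) -> ("multipathd", "healthy", 0)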
Oct  3 11:22:44 compute-0 nova_compute[351685]: 2025-10-03 11:22:44.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:44 compute-0 nova_compute[351685]: 2025-10-03 11:22:44.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:44 compute-0 trusting_brown[523226]: {
Oct  3 11:22:44 compute-0 trusting_brown[523226]:    "0": [
Oct  3 11:22:44 compute-0 trusting_brown[523226]:        {
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "devices": [
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "/dev/loop3"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            ],
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_name": "ceph_lv0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_size": "21470642176",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "name": "ceph_lv0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "tags": {
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cluster_name": "ceph",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.crush_device_class": "",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.encrypted": "0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osd_id": "0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.type": "block",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.vdo": "0"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            },
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "type": "block",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "vg_name": "ceph_vg0"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:        }
Oct  3 11:22:44 compute-0 trusting_brown[523226]:    ],
Oct  3 11:22:44 compute-0 trusting_brown[523226]:    "1": [
Oct  3 11:22:44 compute-0 trusting_brown[523226]:        {
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "devices": [
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "/dev/loop4"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            ],
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_name": "ceph_lv1",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_size": "21470642176",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "name": "ceph_lv1",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "tags": {
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cluster_name": "ceph",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.crush_device_class": "",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.encrypted": "0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osd_id": "1",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.type": "block",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.vdo": "0"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            },
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "type": "block",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "vg_name": "ceph_vg1"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:        }
Oct  3 11:22:44 compute-0 trusting_brown[523226]:    ],
Oct  3 11:22:44 compute-0 trusting_brown[523226]:    "2": [
Oct  3 11:22:44 compute-0 trusting_brown[523226]:        {
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "devices": [
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "/dev/loop5"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            ],
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_name": "ceph_lv2",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_size": "21470642176",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "name": "ceph_lv2",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "tags": {
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.cluster_name": "ceph",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.crush_device_class": "",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.encrypted": "0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osd_id": "2",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.type": "block",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:                "ceph.vdo": "0"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            },
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "type": "block",
Oct  3 11:22:44 compute-0 trusting_brown[523226]:            "vg_name": "ceph_vg2"
Oct  3 11:22:44 compute-0 trusting_brown[523226]:        }
Oct  3 11:22:44 compute-0 trusting_brown[523226]:    ]
Oct  3 11:22:44 compute-0 trusting_brown[523226]: }
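The trusting_brown JSON above has the shape of `ceph-volume lvm list --format json` output: a dict keyed by OSD id, each value a list of logical volumes carrying the backing device, LV path/size, and the ceph.* LV tags (cluster fsid, osd_fsid, crush class, and so on). A small sketch reducing it to an OSD map, assuming exactly that structure and one LV per OSD as shown:

import json

def osd_map(raw):
    """OSD id -> (backing device, LV path, osd_fsid), from the listing above."""
    out = {}
    for osd_id, lvs in json.loads(raw).items():
        lv = lvs[0]  # one block LV per OSD in this deployment
        out[int(osd_id)] = (lv["devices"][0], lv["lv_path"],
                            lv["tags"]["ceph.osd_fsid"])
    return out

# -> {0: ("/dev/loop3", "/dev/ceph_vg0/ceph_lv0", "25b10821-..."), ...}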
Oct  3 11:22:44 compute-0 systemd[1]: libpod-498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287.scope: Deactivated successfully.
Oct  3 11:22:44 compute-0 podman[523211]: 2025-10-03 11:22:44.846862809 +0000 UTC m=+1.073099813 container died 498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:22:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-89d2ede2c9f8a01766c664b1d2961458b7b458f4c8b8044581d362a3d177fd29-merged.mount: Deactivated successfully.
Oct  3 11:22:44 compute-0 podman[523211]: 2025-10-03 11:22:44.94265418 +0000 UTC m=+1.168891134 container remove 498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:22:44 compute-0 systemd[1]: libpod-conmon-498629d46cce8d596c745676fd1442fe8d4711d5ab576f69490d63c377920287.scope: Deactivated successfully.
Oct  3 11:22:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:45 compute-0 podman[523521]: 2025-10-03 11:22:45.943592378 +0000 UTC m=+0.075648635 container create e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:22:46 compute-0 podman[523521]: 2025-10-03 11:22:45.911091697 +0000 UTC m=+0.043148004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:22:46 compute-0 systemd[1]: Started libpod-conmon-e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798.scope.
Oct  3 11:22:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:22:46 compute-0 podman[523521]: 2025-10-03 11:22:46.088569074 +0000 UTC m=+0.220625391 container init e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:22:46 compute-0 podman[523521]: 2025-10-03 11:22:46.099893047 +0000 UTC m=+0.231949314 container start e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:22:46 compute-0 hopeful_elion[523537]: 167 167
Oct  3 11:22:46 compute-0 systemd[1]: libpod-e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798.scope: Deactivated successfully.
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:22:46 compute-0 podman[523521]: 2025-10-03 11:22:46.171992679 +0000 UTC m=+0.304048936 container attach e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 11:22:46 compute-0 podman[523521]: 2025-10-03 11:22:46.172777643 +0000 UTC m=+0.304833900 container died e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:22:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-dae6c38441e3b0d87b8d43a46ad759b8599316c56f1058e449f31081a89c3545-merged.mount: Deactivated successfully.
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:22:46
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'vms', 'backups', '.mgr', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta']
Oct  3 11:22:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
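The five balancer lines record one optimization pass: mode upmap, a misplaced ceiling of 5% of objects, and "prepared 0/10 changes", meaning none of the per-pass maximum (the upmap_max_optimizations option, 10 by default) were needed since the 321 PGs are already active+clean and balanced. A hedged check from the CLI; `ceph balancer status` is a real mgr command, but the exact output keys are assumed from upstream docs rather than from this log:

import json, subprocess

status = json.loads(subprocess.check_output(
    ["ceph", "balancer", "status", "--format", "json"]))
# "active"/"mode"/"optimize_result" are the documented keys (assumption).
print(status["mode"], status["active"], status.get("optimize_result"))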
Oct  3 11:22:46 compute-0 podman[523521]: 2025-10-03 11:22:46.489423372 +0000 UTC m=+0.621479629 container remove e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:22:46 compute-0 systemd[1]: libpod-conmon-e042458fd36d336dd7c0c584d76d8209f5246de882395655abca235471115798.scope: Deactivated successfully.
Oct  3 11:22:46 compute-0 podman[523562]: 2025-10-03 11:22:46.732358848 +0000 UTC m=+0.090481401 container create 4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackwell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 11:22:46 compute-0 systemd[1]: Started libpod-conmon-4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493.scope.
Oct  3 11:22:46 compute-0 podman[523562]: 2025-10-03 11:22:46.70341059 +0000 UTC m=+0.061533223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:22:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bf2a8964b69f45246d76a51a7efd040c152715fb5c6532e280d11dcb0dd7fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bf2a8964b69f45246d76a51a7efd040c152715fb5c6532e280d11dcb0dd7fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bf2a8964b69f45246d76a51a7efd040c152715fb5c6532e280d11dcb0dd7fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bf2a8964b69f45246d76a51a7efd040c152715fb5c6532e280d11dcb0dd7fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:22:46 compute-0 podman[523562]: 2025-10-03 11:22:46.906978324 +0000 UTC m=+0.265100957 container init 4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:22:46 compute-0 podman[523562]: 2025-10-03 11:22:46.92991735 +0000 UTC m=+0.288039943 container start 4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:22:46 compute-0 podman[523562]: 2025-10-03 11:22:46.936315355 +0000 UTC m=+0.294437968 container attach 4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:22:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:47 compute-0 nova_compute[351685]: 2025-10-03 11:22:47.991 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]: {
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "osd_id": 1,
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "type": "bluestore"
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:    },
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "osd_id": 2,
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "type": "bluestore"
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:    },
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "osd_id": 0,
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:        "type": "bluestore"
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]:    }
Oct  3 11:22:48 compute-0 lucid_blackwell[523577]: }
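The lucid_blackwell JSON is the activation-side view of the same three OSDs (a `ceph-volume raw list`-style listing, by the look of it): keyed by osd_uuid, which equals the ceph.osd_fsid tag in the LVM listing earlier, and resolving each OSD to its device-mapper node and bluestore type. A sketch joining the two payloads on that UUID, assuming the structures shown:

import json

def join_listings(lvm_raw, raw_raw):
    """Join the LVM listing (keyed by OSD id) with the raw listing
    (keyed by osd_uuid) via ceph.osd_fsid == osd_uuid."""
    by_uuid = json.loads(raw_raw)
    joined = {}
    for osd_id, lvs in json.loads(lvm_raw).items():
        uuid = lvs[0]["tags"]["ceph.osd_fsid"]
        joined[int(osd_id)] = {
            "lv_path": lvs[0]["lv_path"],
            "dm_device": by_uuid[uuid]["device"],  # e.g. /dev/mapper/ceph_vg0-ceph_lv0
            "type": by_uuid[uuid]["type"],         # bluestore
        }
    return joined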
Oct  3 11:22:48 compute-0 systemd[1]: libpod-4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493.scope: Deactivated successfully.
Oct  3 11:22:48 compute-0 systemd[1]: libpod-4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493.scope: Consumed 1.116s CPU time.
Oct  3 11:22:48 compute-0 conmon[523577]: conmon 4c44aa5176b159e2f205 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493.scope/container/memory.events
Oct  3 11:22:48 compute-0 podman[523562]: 2025-10-03 11:22:48.062121506 +0000 UTC m=+1.420244069 container died 4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:22:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-88bf2a8964b69f45246d76a51a7efd040c152715fb5c6532e280d11dcb0dd7fb-merged.mount: Deactivated successfully.
Oct  3 11:22:48 compute-0 podman[523562]: 2025-10-03 11:22:48.136951894 +0000 UTC m=+1.495074447 container remove 4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:22:48 compute-0 systemd[1]: libpod-conmon-4c44aa5176b159e2f2051763229cc8a5d05b8808c82140cd95ff11d3674f1493.scope: Deactivated successfully.
Oct  3 11:22:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:22:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:22:48 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:48 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f3d0d731-202c-4544-b989-d9bd99299b54 does not exist
Oct  3 11:22:48 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3c3e4d06-5c11-4d0b-8fa4-e009c0b9883f does not exist
Oct  3 11:22:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:22:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:49 compute-0 nova_compute[351685]: 2025-10-03 11:22:49.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:49 compute-0 nova_compute[351685]: 2025-10-03 11:22:49.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:49 compute-0 nova_compute[351685]: 2025-10-03 11:22:49.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:50 compute-0 nova_compute[351685]: 2025-10-03 11:22:50.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:50 compute-0 nova_compute[351685]: 2025-10-03 11:22:50.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:22:50 compute-0 nova_compute[351685]: 2025-10-03 11:22:50.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:22:50 compute-0 nova_compute[351685]: 2025-10-03 11:22:50.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:22:50 compute-0 nova_compute[351685]: 2025-10-03 11:22:50.760 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:22:50 compute-0 nova_compute[351685]: 2025-10-03 11:22:50.761 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:22:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:22:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 6600.2 total, 600.0 interval
    Cumulative writes: 9342 writes, 33K keys, 9342 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
    Cumulative WAL: 9342 writes, 2544 syncs, 3.67 writes per sync, written: 0.03 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
    Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:22:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:22:51 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820497174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.201 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
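nova-compute sizes its RBD-backed storage by shelling out to the ceph CLI exactly as logged above, and the ceph-mon audit lines in between show the df command being dispatched for client.openstack. A sketch reproducing the call; the pool-stats field names follow the usual `ceph df --format=json` layout and are an assumption, not taken from this log:

import json, subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"])
for pool in json.loads(out)["pools"]:
    stats = pool["stats"]  # "max_avail"/"bytes_used" assumed field names
    print(pool["name"], stats["max_avail"], stats["bytes_used"])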
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.299 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.300 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.301 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:22:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.814 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.816 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3734MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.817 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.818 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.908 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.909 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.909 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:22:51 compute-0 nova_compute[351685]: 2025-10-03 11:22:51.944 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:22:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:22:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/233617645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:22:52 compute-0 nova_compute[351685]: 2025-10-03 11:22:52.460 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:22:52 compute-0 nova_compute[351685]: 2025-10-03 11:22:52.472 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:22:52 compute-0 nova_compute[351685]: 2025-10-03 11:22:52.541 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:22:52 compute-0 nova_compute[351685]: 2025-10-03 11:22:52.544 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:22:52 compute-0 nova_compute[351685]: 2025-10-03 11:22:52.545 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
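The audit's numbers are internally consistent: the hypervisor view reports 8 vCPUs with 1 allocated (hence free_vcpus=7), and placement applies the allocation ratios from the inventory, so the scheduler effectively sees (total - reserved) * allocation_ratio per resource class. A worked check with the inventory dict logged above, trimmed to the fields used here:

inv = {  # values copied from the set_inventory_for_provider line above
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inv.items():
    # placement's effective capacity formula
    print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2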
Oct  3 11:22:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:22:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1378219654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:22:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:22:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1378219654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:22:54 compute-0 nova_compute[351685]: 2025-10-03 11:22:54.545 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:22:54 compute-0 nova_compute[351685]: 2025-10-03 11:22:54.546 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:22:54 compute-0 nova_compute[351685]: 2025-10-03 11:22:54.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:54 compute-0 nova_compute[351685]: 2025-10-03 11:22:54.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:22:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:55 compute-0 podman[523718]: 2025-10-03 11:22:55.824487121 +0000 UTC m=+0.079344984 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:22:55 compute-0 podman[523717]: 2025-10-03 11:22:55.825016278 +0000 UTC m=+0.085003925 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, vcs-type=git)
Oct  3 11:22:55 compute-0 podman[523716]: 2025-10-03 11:22:55.842347333 +0000 UTC m=+0.102010440 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:22:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
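
The pg_autoscaler lines above all follow the same arithmetic: the printed "pg target" is the pool's share of raw capacity times its bias times a cluster-wide PG budget, which for every pool in this log works out to a budget of 300 (consistent with, for example, 3 OSDs at the default mon_target_pg_per_osd of 100, though that breakdown is an assumption here). The result is then quantized to a power of two no lower than the pool's pg_num_min, and the module only acts when the target deviates enough from the current pg_num, which is why targets far below 1 still read "quantized to 32 (current 32)". A minimal sketch of that arithmetic, with the budget and the pg_num_min floors as assumed values:

    # Sketch of the arithmetic in the pg_autoscaler lines above.
    # Assumptions: pg_budget=300 (e.g. 3 OSDs x mon_target_pg_per_osd=100) and
    # the pg_num_min floors; the real module (mgr/pg_autoscaler) additionally
    # applies a change threshold before acting on a new target.
    def raw_pg_target(capacity_ratio, bias, pg_budget=300):
        """The pre-quantization 'pg target' value printed in the log."""
        return capacity_ratio * bias * pg_budget

    def quantize(target, pg_num_min=1):
        """Round up to the next power of two, never below pg_num_min."""
        n = max(int(round(target)), pg_num_min, 1)
        p = 1
        while p < n:
            p *= 2
        return p

    # Reproduces the '.mgr' and 'cephfs.cephfs.meta' lines:
    assert abs(raw_pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12
    assert abs(raw_pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12
    assert quantize(0.0021557249951162337, pg_num_min=1) == 1     # '.mgr'
    assert quantize(0.0006104707950771635, pg_num_min=16) == 16   # metadata pool
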
Oct  3 11:22:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:22:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 6600.1 total, 600.0 interval
    Cumulative writes: 10K writes, 35K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
    Cumulative WAL: 10K writes, 2728 syncs, 3.69 writes per sync, written: 0.02 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:22:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:57 compute-0 nova_compute[351685]: 2025-10-03 11:22:57.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:22:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:22:58 compute-0 nova_compute[351685]: 2025-10-03 11:22:58.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:22:58 compute-0 nova_compute[351685]: 2025-10-03 11:22:58.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:22:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:22:59 compute-0 nova_compute[351685]: 2025-10-03 11:22:59.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:22:59 compute-0 podman[157165]: time="2025-10-03T11:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:22:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:22:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9112 "" "Go-http-client/1.1"
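
The two GET lines above are prometheus-podman-exporter polling the libpod REST API over the podman socket (/run/podman/podman.sock, the same path mounted into the exporter's config_data earlier in this log). The same query can be issued from the Python standard library; a sketch, with the socket path and version prefix taken from the log itself:

    # Talk HTTP to the libpod API over the unix socket, stdlib only.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # the access log above shows 200 / 46267 bytes
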
Oct  3 11:22:59 compute-0 nova_compute[351685]: 2025-10-03 11:22:59.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:22:59 compute-0 nova_compute[351685]: 2025-10-03 11:22:59.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:01 compute-0 openstack_network_exporter[367524]: ERROR   11:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:23:01 compute-0 openstack_network_exporter[367524]: ERROR   11:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:23:01 compute-0 openstack_network_exporter[367524]: ERROR   11:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:23:01 compute-0 openstack_network_exporter[367524]: ERROR   11:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:23:01 compute-0 openstack_network_exporter[367524]: ERROR   11:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:23:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:23:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 6600.1 total, 600.0 interval
    Cumulative writes: 8245 writes, 29K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
    Cumulative WAL: 8245 writes, 2081 syncs, 3.96 writes per sync, written: 0.02 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
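
The WAL figures in the two DB Stats dumps are internally consistent: "writes per sync" is simply cumulative WAL writes divided by WAL syncs, and the 600-second interval numbers show both OSDs syncing at the same low rate. A quick check of the exact figures:

    # 'writes per sync' = WAL writes / WAL syncs, from the dumps above.
    assert round(8245 / 2081, 2) == 3.96   # ceph-osd[207741], cumulative
    assert round(180 / 90, 2) == 2.0       # both OSDs, last 600 s interval
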
Oct  3 11:23:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:04 compute-0 nova_compute[351685]: 2025-10-03 11:23:04.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:04 compute-0 nova_compute[351685]: 2025-10-03 11:23:04.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 11:23:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:09 compute-0 nova_compute[351685]: 2025-10-03 11:23:09.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:14 compute-0 nova_compute[351685]: 2025-10-03 11:23:14.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:23:14 compute-0 nova_compute[351685]: 2025-10-03 11:23:14.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:14 compute-0 nova_compute[351685]: 2025-10-03 11:23:14.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  3 11:23:14 compute-0 nova_compute[351685]: 2025-10-03 11:23:14.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:23:14 compute-0 nova_compute[351685]: 2025-10-03 11:23:14.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:23:14 compute-0 nova_compute[351685]: 2025-10-03 11:23:14.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
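
The six nova_compute lines above are one full keepalive cycle of the OVSDB client's reconnect state machine: roughly five seconds pass with no traffic on the monitor connection (the 4999-ms poll timeout), the client sends an inactivity probe and drops to IDLE, and the probe reply promotes it straight back to ACTIVE. A toy model of that cycle (not the ovs library itself), with the interval inferred from the "idle 5002 ms" message:

    # Toy sketch of the inactivity-probe cycle logged by ovs reconnect.py.
    PROBE_INTERVAL_MS = 5000  # assumed default, matching the ~5 s cadence above

    class Conn:
        def __init__(self):
            self.state = "ACTIVE"
            self.idle_ms = 0

        def tick(self, elapsed_ms, got_traffic):
            if got_traffic:
                self.state, self.idle_ms = "ACTIVE", 0   # any reply resets the clock
                return "ok"
            self.idle_ms += elapsed_ms
            if self.idle_ms < PROBE_INTERVAL_MS:
                return "ok"
            if self.state == "ACTIVE":
                self.state, self.idle_ms = "IDLE", 0
                return "send inactivity probe"           # echo request, as at 11:23:14.798
            return "reconnect"                           # a silent IDLE interval is fatal

    c = Conn()
    assert c.tick(5002, False) == "send inactivity probe"
    assert c.tick(1, True) == "ok" and c.state == "ACTIVE"
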
Oct  3 11:23:14 compute-0 podman[523775]: 2025-10-03 11:23:14.84133089 +0000 UTC m=+0.119819751 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:23:14 compute-0 podman[523777]: 2025-10-03 11:23:14.848509709 +0000 UTC m=+0.119837611 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:23:14 compute-0 podman[523778]: 2025-10-03 11:23:14.87316135 +0000 UTC m=+0.133048275 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:23:14 compute-0 podman[523784]: 2025-10-03 11:23:14.880293488 +0000 UTC m=+0.114431938 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:23:14 compute-0 podman[523776]: 2025-10-03 11:23:14.882783368 +0000 UTC m=+0.150708501 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=)
Oct  3 11:23:14 compute-0 podman[523804]: 2025-10-03 11:23:14.890846136 +0000 UTC m=+0.117959780 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  3 11:23:14 compute-0 podman[523796]: 2025-10-03 11:23:14.916013663 +0000 UTC m=+0.158026845 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
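
The burst of container health_status events above is podman's healthcheck machinery firing for each EDPM-managed container: on a schedule, podman runs the 'test' command from each container's healthcheck config (the '/openstack/healthcheck ...' entries in config_data) and records the outcome, which is what health_status=healthy and health_failing_streak=0 report. The same state can be read back with podman inspect; a sketch using one of the container names from this burst:

    # Read the health state podman is logging above ('kepler' is one of the
    # container_name values in this burst; any of them works).
    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "kepler"],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])  # e.g.: healthy 0
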
Oct  3 11:23:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:23:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:23:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:19 compute-0 nova_compute[351685]: 2025-10-03 11:23:19.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:19 compute-0 nova_compute[351685]: 2025-10-03 11:23:19.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:24 compute-0 nova_compute[351685]: 2025-10-03 11:23:24.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:23:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:26 compute-0 podman[523910]: 2025-10-03 11:23:26.799566401 +0000 UTC m=+0.060404837 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:23:26 compute-0 podman[523917]: 2025-10-03 11:23:26.841830436 +0000 UTC m=+0.078188528 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 11:23:26 compute-0 podman[523911]: 2025-10-03 11:23:26.84542148 +0000 UTC m=+0.099887622 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Oct  3 11:23:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:29 compute-0 podman[157165]: time="2025-10-03T11:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:23:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:23:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9113 "" "Go-http-client/1.1"
Oct  3 11:23:29 compute-0 nova_compute[351685]: 2025-10-03 11:23:29.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:23:31 compute-0 openstack_network_exporter[367524]: ERROR   11:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:23:31 compute-0 openstack_network_exporter[367524]: ERROR   11:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:23:31 compute-0 openstack_network_exporter[367524]: ERROR   11:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:23:31 compute-0 openstack_network_exporter[367524]: ERROR   11:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:23:31 compute-0 openstack_network_exporter[367524]: ERROR   11:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:23:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:34 compute-0 nova_compute[351685]: 2025-10-03 11:23:34.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:23:34 compute-0 nova_compute[351685]: 2025-10-03 11:23:34.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:34 compute-0 nova_compute[351685]: 2025-10-03 11:23:34.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  3 11:23:34 compute-0 nova_compute[351685]: 2025-10-03 11:23:34.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:23:34 compute-0 nova_compute[351685]: 2025-10-03 11:23:34.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:23:34 compute-0 nova_compute[351685]: 2025-10-03 11:23:34.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3488: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:38 compute-0 nova_compute[351685]: 2025-10-03 11:23:38.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:23:38 compute-0 nova_compute[351685]: 2025-10-03 11:23:38.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:23:38 compute-0 nova_compute[351685]: 2025-10-03 11:23:38.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:23:38 compute-0 nova_compute[351685]: 2025-10-03 11:23:38.980 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:23:38 compute-0 nova_compute[351685]: 2025-10-03 11:23:38.982 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:23:38 compute-0 nova_compute[351685]: 2025-10-03 11:23:38.982 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:23:38 compute-0 nova_compute[351685]: 2025-10-03 11:23:38.983 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:23:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:39 compute-0 nova_compute[351685]: 2025-10-03 11:23:39.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.905 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.906 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.906 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a956ae2a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.915 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.916 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:23:40.916843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.923 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:40 compute-0 nova_compute[351685]: 2025-10-03 11:23:40.924 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.925 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
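Every poll in this section runs the same preamble seen in the cycle just completed: a coordination check that comes back empty ("The current hashrings are the following [None]") because no polling source here defines a coordination group, so this agent polls every locally discovered instance itself. When partitioning is enabled, ceilometer delegates that membership test to a tooz hash ring shared by the agent group; the sketch below only illustrates the idea with a plain modulo hash and illustrative names, and is not ceilometer's implementation:

import hashlib

def should_poll(resource_id: str, group: list[str], me: str) -> bool:
    # No coordination group configured (the [None] case above): poll all.
    if not group:
        return True
    # Otherwise map each resource onto exactly one agent in the sorted
    # group so every instance is polled once cluster-wide.
    ring = sorted(group)
    bucket = int(hashlib.sha1(resource_id.encode()).hexdigest(), 16) % len(ring)
    return ring[bucket] == me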
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.925 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.925 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.925 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.925 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.926 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.926 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.926 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:23:40.925952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.927 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.927 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.927 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.927 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:23:40.927743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:40 compute-0 nova_compute[351685]: 2025-10-03 11:23:40.950 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:23:40 compute-0 nova_compute[351685]: 2025-10-03 11:23:40.951 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
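Unlike the ceilometer discovery dict, the network_info blob nova printed at 11:23:40.924 is valid JSON (double quotes, true/null), so once the list is sliced out of the message it can be walked directly. A sketch under that assumption, using the marker strings from that line:

import json

def fixed_and_floating(line: str) -> list[tuple[str, list[str]]]:
    # Slice the JSON list out of "Updating instance_info_cache with
    # network_info: [...]" and pair each fixed IP with its floating IPs.
    start = line.index("network_info: ") + len("network_info: ")
    end = line.rindex("] update_instance_cache_with_nw_info") + 1
    pairs = []
    for vif in json.loads(line[start:end]):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                pairs.append((ip["address"], floats))
    return pairs

# For the line above: [("192.168.0.158", ["192.168.122.250"])]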
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.955 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.955 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.956 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
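The three disk.device.capacity samples are per-device readings for the same instance: 1073741824 bytes is exactly 1 GiB, matching the m1.small flavor's disk=1 and ephemeral=1 from the discovery record, while the 485376-byte device is presumably the config drive (an assumption; the per-device resource IDs are not shown in these messages). Capacity is reported in raw bytes, so conversion is a one-liner:

def as_gib(volume_bytes: int) -> float:
    # The disk.device.capacity samples above are raw byte counts.
    return volume_bytes / 2**30

# as_gib(1073741824) -> 1.0, as_gib(485376) -> ~0.00045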
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.957 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.957 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:40.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:23:40.957574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.004 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.005 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
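The pid-12 "Updated heartbeat for <meter> (<timestamp>)" lines are the agent's liveness bookkeeping: a separate worker records when each pollster last ran. They scrape easily into a staleness check; a minimal sketch assuming the exact message format used throughout this log:

import re
from datetime import datetime, timedelta

HEARTBEAT = re.compile(r"Updated heartbeat for (\S+) \((\S+)\)")

def stale_pollsters(lines, now, max_age=timedelta(minutes=5)):
    # Keep the newest heartbeat per meter, then report the laggards.
    last = {}
    for line in lines:
        m = HEARTBEAT.search(line)
        if m:
            last[m.group(1)] = datetime.fromisoformat(m.group(2))
    return sorted(meter for meter, ts in last.items() if now - ts > max_age)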
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:23:41.006857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
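The disk.device.read.latency counter that libvirt exposes (and these samples relay) is cumulative time in nanoseconds since the domain started, so a single reading mostly reflects uptime; the useful figure is the delta across one polling interval divided by the matching delta in disk.device.read.requests. A sketch under those assumptions:

def avg_read_latency_ms(lat_ns_prev, lat_ns_now, req_prev, req_now):
    # Mean per-request read latency over one interval, in milliseconds.
    reqs = req_now - req_prev
    if reqs <= 0:
        return 0.0
    return (lat_ns_now - lat_ns_prev) / reqs / 1e6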
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:23:41.009009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.011 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:23:41.011203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:23:41.013468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.014 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.015 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:23:41.015487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.016 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.017 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:23:41.017703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
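power.state volume 1 is libvirt's VIR_DOMAIN_RUNNING, consistent with 'OS-EXT-STS:vm_state': 'running' in the discovery record. The full virDomainState enum, for reading other values of this meter:

# virDomainState codes as libvirt defines them
LIBVIRT_POWER_STATE = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",    # in the process of shutting down
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}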
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.051 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:23:41.052001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.054 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:23:41.054497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:23:41.056819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
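The skip above is the flip side of the discovery cache: network.incoming.bytes.rate is derived from consecutive cumulative readings, and discovery handed this pollster no resources it had not already covered this cycle, so there was nothing new to compute. The cumulative-to-rate step itself is just a difference quotient (a sketch, not ceilometer's internals):

def to_rate(prev_value, prev_ts, value, ts):
    # Turn two cumulative byte counters into bytes/second.
    dt = (ts - prev_ts).total_seconds()
    if dt <= 0:
        return None
    return (value - prev_value) / dt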
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:23:41.058881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:23:41.060295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.061 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:23:41.062073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:23:41.063521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.064 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.064 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:23:41.064988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.065 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 104660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
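The cpu sample, 104660000000, is cumulative guest CPU time in nanoseconds (about 104.7 s consumed since boot), so utilization likewise has to be derived from two cycles, normalized by the interval and the vCPU count (1 for m1.small). A sketch:

def cpu_util_percent(cpu_ns_prev, cpu_ns_now, interval_s, vcpus=1):
    # Share of available CPU time burned during the polling interval.
    used_s = (cpu_ns_now - cpu_ns_prev) / 1e9
    return 100.0 * used_s / (interval_s * vcpus)

# 3e9 ns consumed over a 30 s interval on 1 vCPU -> 10.0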
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.066 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:23:41.066753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.067 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.068 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:23:41.068653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.070 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:23:41.070686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.071 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
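Note the pairing above: network.outgoing.bytes is a cumulative counter (volume 2552), while network.outgoing.bytes.delta reports only the change since the previous poll (volume 0, i.e. no new traffic in the interval). A toy illustration of how a delta meter relates to its cumulative source (hypothetical helper, not ceilometer code):

    # Derive a .delta reading from successive cumulative readings.
    _previous: dict[str, int] = {}

    def delta_sample(resource_id: str, cumulative: int) -> int:
        last = _previous.get(resource_id)
        _previous[resource_id] = cumulative
        if last is None or cumulative < last:  # first poll, or counter reset
            return 0
        return cumulative - last

    assert delta_sample("b43db93c", 2552) == 0   # first observation
    assert delta_sample("b43db93c", 2552) == 0   # no new traffic -> 0, as logged
    assert delta_sample("b43db93c", 2600) == 48  # 48 bytes since last poll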
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.072 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:23:41.072634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.073 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.074 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:23:41.074943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.076 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.076 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:23:41.077169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.077 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.080 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:23:41.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:23:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:23:41.701 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:23:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:23:41.702 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:23:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:23:41.702 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
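The ProcessMonitor lines above show oslo.concurrency's lock instrumentation: every named lock logs how long the caller waited to acquire it and how long it was held. A stdlib-only sketch that reproduces the same acquire/wait/hold accounting (not the oslo_concurrency implementation):

    import logging
    import threading
    import time

    LOG = logging.getLogger("oslo_concurrency.lockutils")
    _locks: dict[str, threading.Lock] = {}

    def synchronized(name: str):
        """Decorator logging 'acquired :: waited' and 'released :: held'."""
        lock = _locks.setdefault(name, threading.Lock())

        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with lock:
                    LOG.debug('Lock "%s" acquired by "%s" :: waited %.3fs',
                              name, fn.__qualname__, time.monotonic() - t0)
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        LOG.debug('Lock "%s" "released" by "%s" :: held %.3fs',
                                  name, fn.__qualname__, time.monotonic() - t1)
            return inner
        return wrap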
Oct  3 11:23:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:44 compute-0 nova_compute[351685]: 2025-10-03 11:23:44.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:23:44 compute-0 nova_compute[351685]: 2025-10-03 11:23:44.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:23:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3492: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:45 compute-0 podman[523974]: 2025-10-03 11:23:45.863606536 +0000 UTC m=+0.085859362 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  3 11:23:45 compute-0 podman[523975]: 2025-10-03 11:23:45.893167983 +0000 UTC m=+0.128844550 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:23:45 compute-0 podman[523976]: 2025-10-03 11:23:45.899904379 +0000 UTC m=+0.136839446 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 11:23:45 compute-0 podman[523972]: 2025-10-03 11:23:45.912834493 +0000 UTC m=+0.155353589 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:23:45 compute-0 podman[523973]: 2025-10-03 11:23:45.912998479 +0000 UTC m=+0.135826674 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal)
Oct  3 11:23:45 compute-0 podman[523978]: 2025-10-03 11:23:45.918168745 +0000 UTC m=+0.145136813 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 11:23:45 compute-0 podman[523977]: 2025-10-03 11:23:45.918944509 +0000 UTC m=+0.147836148 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
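Each health_status=healthy event above is podman's healthcheck timer executing the 'test' command from the container's config_data (here the bind-mounted /openstack/healthcheck script). The same check can be driven by hand; a sketch using the podman CLI:

    import subprocess

    def is_healthy(container: str) -> bool:
        # `podman healthcheck run` exits 0 when the configured test passes.
        result = subprocess.run(["podman", "healthcheck", "run", container],
                                capture_output=True)
        return result.returncode == 0

    for name in ("ovn_metadata_agent", "multipathd", "ceilometer_agent_compute",
                 "node_exporter", "openstack_network_exporter", "iscsid",
                 "ovn_controller"):
        print(name, "healthy" if is_healthy(name) else "unhealthy")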
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:23:46
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control']
Oct  3 11:23:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:23:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:47 compute-0 nova_compute[351685]: 2025-10-03 11:23:47.946 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:23:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:23:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:23:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:23:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:23:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:23:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:23:49 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7a72db6f-604d-43eb-a874-2c7730f46742 does not exist
Oct  3 11:23:49 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f5f870d7-0fe9-40c5-b836-504ff6ac78d7 does not exist
Oct  3 11:23:49 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c452923f-3ea0-4339-b751-5633061382f8 does not exist
Oct  3 11:23:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:23:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:23:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:23:49 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:23:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:23:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
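These audit entries are the cephadm mgr module dispatching mon commands (config generate-minimal-conf, auth get, and an osd tree restricted to destroyed OSDs). Equivalent commands can be issued from the CLI; a sketch (the conf path is this deployment's default, assumed here):

    import json
    import subprocess

    def mon_cmd(*args: str) -> str:
        return subprocess.run(
            ["ceph", "--conf", "/etc/ceph/ceph.conf", *args],
            capture_output=True, text=True, check=True,
        ).stdout

    minimal_conf = mon_cmd("config", "generate-minimal-conf")
    destroyed = json.loads(mon_cmd("osd", "tree", "destroyed",
                                   "--format", "json"))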
Oct  3 11:23:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:23:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:23:49 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:23:49 compute-0 nova_compute[351685]: 2025-10-03 11:23:49.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:23:50 compute-0 podman[524376]: 2025-10-03 11:23:50.523225925 +0000 UTC m=+0.041859332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:23:50 compute-0 nova_compute[351685]: 2025-10-03 11:23:50.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:23:50 compute-0 podman[524376]: 2025-10-03 11:23:50.75146261 +0000 UTC m=+0.270095997 container create d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 11:23:50 compute-0 systemd[1]: Started libpod-conmon-d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c.scope.
Oct  3 11:23:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:23:50 compute-0 podman[524376]: 2025-10-03 11:23:50.924785454 +0000 UTC m=+0.443418921 container init d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:23:50 compute-0 podman[524376]: 2025-10-03 11:23:50.938154052 +0000 UTC m=+0.456787439 container start d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 11:23:50 compute-0 podman[524376]: 2025-10-03 11:23:50.943195055 +0000 UTC m=+0.461828522 container attach d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:23:50 compute-0 adoring_ptolemy[524392]: 167 167
Oct  3 11:23:50 compute-0 systemd[1]: libpod-d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c.scope: Deactivated successfully.
Oct  3 11:23:50 compute-0 podman[524376]: 2025-10-03 11:23:50.946990006 +0000 UTC m=+0.465623403 container died d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a4422d81604f278e0d8f6322778a6fcea215d056ea282204b337079aa376d8-merged.mount: Deactivated successfully.
Oct  3 11:23:51 compute-0 podman[524376]: 2025-10-03 11:23:51.010710197 +0000 UTC m=+0.529343594 container remove d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:23:51 compute-0 systemd[1]: libpod-conmon-d966402e11e92313ff0165ef7fd5dd75f92a02e6a5ade7001a558dae7ca5207c.scope: Deactivated successfully.
Oct  3 11:23:51 compute-0 podman[524415]: 2025-10-03 11:23:51.279408979 +0000 UTC m=+0.085273763 container create 27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hoover, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:23:51 compute-0 podman[524415]: 2025-10-03 11:23:51.247873089 +0000 UTC m=+0.053737923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:23:51 compute-0 systemd[1]: Started libpod-conmon-27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81.scope.
Oct  3 11:23:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e63272b67200147e6dd19757c17287cb9e6dc58349e9d4dc4718dbe87145b44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e63272b67200147e6dd19757c17287cb9e6dc58349e9d4dc4718dbe87145b44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e63272b67200147e6dd19757c17287cb9e6dc58349e9d4dc4718dbe87145b44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e63272b67200147e6dd19757c17287cb9e6dc58349e9d4dc4718dbe87145b44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e63272b67200147e6dd19757c17287cb9e6dc58349e9d4dc4718dbe87145b44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:51 compute-0 podman[524415]: 2025-10-03 11:23:51.446505805 +0000 UTC m=+0.252370589 container init 27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hoover, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 11:23:51 compute-0 podman[524415]: 2025-10-03 11:23:51.475636268 +0000 UTC m=+0.281501002 container start 27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hoover, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  3 11:23:51 compute-0 podman[524415]: 2025-10-03 11:23:51.480079681 +0000 UTC m=+0.285944505 container attach 27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 11:23:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:51 compute-0 nova_compute[351685]: 2025-10-03 11:23:51.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:23:51 compute-0 nova_compute[351685]: 2025-10-03 11:23:51.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:23:51 compute-0 nova_compute[351685]: 2025-10-03 11:23:51.775 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:23:51 compute-0 nova_compute[351685]: 2025-10-03 11:23:51.776 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:23:51 compute-0 nova_compute[351685]: 2025-10-03 11:23:51.776 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:23:51 compute-0 nova_compute[351685]: 2025-10-03 11:23:51.777 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:23:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:23:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744985752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.301 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
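The resource tracker shells out to `ceph df` because the instance disks live on RBD; the free_disk figure it reports a few lines below (59.9551887512207GB) is derived from this JSON. A sketch of that derivation (pool name 'vms' is this deployment's nova pool; key names follow `ceph df --format=json`):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    pools = json.loads(out)["pools"]
    vms = next(p for p in pools if p["name"] == "vms")
    print("free_disk=%sGB" % (vms["stats"]["max_avail"] / 1024 ** 3))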
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.411 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.412 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.412 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:23:52 compute-0 lucid_hoover[524430]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:23:52 compute-0 lucid_hoover[524430]: --> relative data size: 1.0
Oct  3 11:23:52 compute-0 lucid_hoover[524430]: --> All data devices are unavailable
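This short-lived container (created, started, and removed within roughly two seconds above) appears to be cephadm probing the node's OSD devices with ceph-volume inside the ceph image; the three '-->' lines are its report. A comparable one-shot probe, assuming cephadm's throwaway-container pattern (the exact arguments cephadm uses may differ):

    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    inv = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         IMAGE, "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(inv):
        print(dev["path"], "available" if dev["available"] else "unavailable")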
Oct  3 11:23:52 compute-0 systemd[1]: libpod-27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81.scope: Deactivated successfully.
Oct  3 11:23:52 compute-0 systemd[1]: libpod-27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81.scope: Consumed 1.212s CPU time.
Oct  3 11:23:52 compute-0 podman[524415]: 2025-10-03 11:23:52.782968477 +0000 UTC m=+1.588833261 container died 27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.828 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.830 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3721MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.831 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.832 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e63272b67200147e6dd19757c17287cb9e6dc58349e9d4dc4718dbe87145b44-merged.mount: Deactivated successfully.
Oct  3 11:23:52 compute-0 podman[524415]: 2025-10-03 11:23:52.88354623 +0000 UTC m=+1.689410984 container remove 27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hoover, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:23:52 compute-0 systemd[1]: libpod-conmon-27f2c642bc0c8a867baddde9c10b03830763cbc82026663ba27fa21cb1e27f81.scope: Deactivated successfully.
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.927 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.928 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.928 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.947 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.979 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.979 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  3 11:23:52 compute-0 nova_compute[351685]: 2025-10-03 11:23:52.994 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 11:23:53 compute-0 nova_compute[351685]: 2025-10-03 11:23:53.026 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 11:23:53 compute-0 nova_compute[351685]: 2025-10-03 11:23:53.066 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:23:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:23:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040081708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:23:53 compute-0 nova_compute[351685]: 2025-10-03 11:23:53.553 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:23:53 compute-0 nova_compute[351685]: 2025-10-03 11:23:53.566 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:23:53 compute-0 nova_compute[351685]: 2025-10-03 11:23:53.582 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:23:53 compute-0 nova_compute[351685]: 2025-10-03 11:23:53.584 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:23:53 compute-0 nova_compute[351685]: 2025-10-03 11:23:53.584 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:23:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:53 compute-0 podman[524655]: 2025-10-03 11:23:53.958442821 +0000 UTC m=+0.064375394 container create 6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:23:54 compute-0 systemd[1]: Started libpod-conmon-6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058.scope.
Oct  3 11:23:54 compute-0 podman[524655]: 2025-10-03 11:23:53.929614427 +0000 UTC m=+0.035547070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:23:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:23:54 compute-0 podman[524655]: 2025-10-03 11:23:54.089058637 +0000 UTC m=+0.194991260 container init 6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 11:23:54 compute-0 podman[524655]: 2025-10-03 11:23:54.11131249 +0000 UTC m=+0.217245073 container start 6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:23:54 compute-0 podman[524655]: 2025-10-03 11:23:54.12101194 +0000 UTC m=+0.226944523 container attach 6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:23:54 compute-0 objective_spence[524671]: 167 167
Oct  3 11:23:54 compute-0 systemd[1]: libpod-6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058.scope: Deactivated successfully.
Oct  3 11:23:54 compute-0 conmon[524671]: conmon 6d1e731b606d35a424e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058.scope/container/memory.events
Oct  3 11:23:54 compute-0 podman[524655]: 2025-10-03 11:23:54.126103484 +0000 UTC m=+0.232036067 container died 6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:23:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:23:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1631594162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:23:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:23:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1631594162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fe0619d942af46953a6f848c2f50bcc06586a90594c56042086ab69078cc5fd-merged.mount: Deactivated successfully.
Oct  3 11:23:54 compute-0 podman[524655]: 2025-10-03 11:23:54.21461185 +0000 UTC m=+0.320544423 container remove 6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:23:54 compute-0 systemd[1]: libpod-conmon-6d1e731b606d35a424e02fa0c33cc3b3fa41e08b99281b92098f16b3df9fd058.scope: Deactivated successfully.
Oct  3 11:23:54 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:23:54.465 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 11:23:54 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:23:54.466 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  3 11:23:54 compute-0 nova_compute[351685]: 2025-10-03 11:23:54.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:23:54 compute-0 podman[524694]: 2025-10-03 11:23:54.486107891 +0000 UTC m=+0.095707218 container create 4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 11:23:54 compute-0 podman[524694]: 2025-10-03 11:23:54.45145123 +0000 UTC m=+0.061050637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:23:54 compute-0 systemd[1]: Started libpod-conmon-4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a.scope.
Oct  3 11:23:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1e54a1ba24b0dfe610716ef527582c6592c75c0e58e0eda780b4151301793a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1e54a1ba24b0dfe610716ef527582c6592c75c0e58e0eda780b4151301793a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1e54a1ba24b0dfe610716ef527582c6592c75c0e58e0eda780b4151301793a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b1e54a1ba24b0dfe610716ef527582c6592c75c0e58e0eda780b4151301793a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:54 compute-0 podman[524694]: 2025-10-03 11:23:54.671680009 +0000 UTC m=+0.281279326 container init 4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 11:23:54 compute-0 podman[524694]: 2025-10-03 11:23:54.686910227 +0000 UTC m=+0.296509544 container start 4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:23:54 compute-0 podman[524694]: 2025-10-03 11:23:54.692278669 +0000 UTC m=+0.301877976 container attach 4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kapitsa, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:23:54 compute-0 nova_compute[351685]: 2025-10-03 11:23:54.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]: {
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:    "0": [
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:        {
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "devices": [
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "/dev/loop3"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            ],
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_name": "ceph_lv0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_size": "21470642176",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "name": "ceph_lv0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "tags": {
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cluster_name": "ceph",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.crush_device_class": "",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.encrypted": "0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osd_id": "0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.type": "block",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.vdo": "0"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            },
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "type": "block",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "vg_name": "ceph_vg0"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:        }
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:    ],
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:    "1": [
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:        {
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "devices": [
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "/dev/loop4"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            ],
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_name": "ceph_lv1",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_size": "21470642176",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "name": "ceph_lv1",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "tags": {
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cluster_name": "ceph",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.crush_device_class": "",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.encrypted": "0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osd_id": "1",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.type": "block",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.vdo": "0"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            },
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "type": "block",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "vg_name": "ceph_vg1"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:        }
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:    ],
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:    "2": [
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:        {
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "devices": [
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "/dev/loop5"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            ],
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_name": "ceph_lv2",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_size": "21470642176",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "name": "ceph_lv2",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "tags": {
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.cluster_name": "ceph",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.crush_device_class": "",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.encrypted": "0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osd_id": "2",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.type": "block",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:                "ceph.vdo": "0"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            },
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "type": "block",
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:            "vg_name": "ceph_vg2"
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:        }
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]:    ]
Oct  3 11:23:55 compute-0 eloquent_kapitsa[524711]: }
Oct  3 11:23:55 compute-0 systemd[1]: libpod-4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a.scope: Deactivated successfully.
Oct  3 11:23:55 compute-0 podman[524694]: 2025-10-03 11:23:55.478084914 +0000 UTC m=+1.087684271 container died 4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b1e54a1ba24b0dfe610716ef527582c6592c75c0e58e0eda780b4151301793a-merged.mount: Deactivated successfully.
Oct  3 11:23:55 compute-0 podman[524694]: 2025-10-03 11:23:55.557811799 +0000 UTC m=+1.167411116 container remove 4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kapitsa, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:23:55 compute-0 systemd[1]: libpod-conmon-4254a53c428168161fd06fd1106f6dafa81a91bfc49a1c2e3931b917f5bca18a.scope: Deactivated successfully.
Oct  3 11:23:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:23:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:23:56 compute-0 podman[524870]: 2025-10-03 11:23:56.52950399 +0000 UTC m=+0.075715367 container create 2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 11:23:56 compute-0 nova_compute[351685]: 2025-10-03 11:23:56.584 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:23:56 compute-0 nova_compute[351685]: 2025-10-03 11:23:56.585 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:23:56 compute-0 podman[524870]: 2025-10-03 11:23:56.498446675 +0000 UTC m=+0.044658132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:23:56 compute-0 systemd[1]: Started libpod-conmon-2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238.scope.
Oct  3 11:23:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:23:56 compute-0 podman[524870]: 2025-10-03 11:23:56.653397711 +0000 UTC m=+0.199609108 container init 2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:23:56 compute-0 podman[524870]: 2025-10-03 11:23:56.674625971 +0000 UTC m=+0.220837358 container start 2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_clarke, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:23:56 compute-0 podman[524870]: 2025-10-03 11:23:56.681461971 +0000 UTC m=+0.227673368 container attach 2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_clarke, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:23:56 compute-0 charming_clarke[524887]: 167 167
Oct  3 11:23:56 compute-0 systemd[1]: libpod-2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238.scope: Deactivated successfully.
Oct  3 11:23:56 compute-0 podman[524870]: 2025-10-03 11:23:56.685041536 +0000 UTC m=+0.231252953 container died 2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_clarke, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2f9402ecb4783dfde1eae49ac009fe43d37cd122a2fa9673f194048bfe9633d-merged.mount: Deactivated successfully.
Oct  3 11:23:56 compute-0 podman[524870]: 2025-10-03 11:23:56.745148472 +0000 UTC m=+0.291359859 container remove 2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:23:56 compute-0 systemd[1]: libpod-conmon-2bc94f7e31f696c7b4f04c9b2f668a74f09255fa55804b71e78fb99bd380c238.scope: Deactivated successfully.
Oct  3 11:23:56 compute-0 podman[524911]: 2025-10-03 11:23:56.969488832 +0000 UTC m=+0.061741380 container create f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  3 11:23:57 compute-0 systemd[1]: Started libpod-conmon-f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192.scope.
Oct  3 11:23:57 compute-0 podman[524911]: 2025-10-03 11:23:56.94447568 +0000 UTC m=+0.036728288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:23:57 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d02f1cc05c7dd2cca1df8b904e363ae330bf86ae679b05b93caa074c491623/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d02f1cc05c7dd2cca1df8b904e363ae330bf86ae679b05b93caa074c491623/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d02f1cc05c7dd2cca1df8b904e363ae330bf86ae679b05b93caa074c491623/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d02f1cc05c7dd2cca1df8b904e363ae330bf86ae679b05b93caa074c491623/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:23:57 compute-0 podman[524911]: 2025-10-03 11:23:57.086931996 +0000 UTC m=+0.179184564 container init f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:23:57 compute-0 podman[524911]: 2025-10-03 11:23:57.098068373 +0000 UTC m=+0.190320921 container start f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:23:57 compute-0 podman[524911]: 2025-10-03 11:23:57.105342155 +0000 UTC m=+0.197594723 container attach f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:23:57 compute-0 podman[524927]: 2025-10-03 11:23:57.108569909 +0000 UTC m=+0.093723434 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Oct  3 11:23:57 compute-0 podman[524925]: 2025-10-03 11:23:57.1092116 +0000 UTC m=+0.100753880 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:23:57 compute-0 podman[524926]: 2025-10-03 11:23:57.118922911 +0000 UTC m=+0.098323252 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:23:57 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:23:57.468 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:23:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]: {
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "osd_id": 1,
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "type": "bluestore"
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:    },
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "osd_id": 2,
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "type": "bluestore"
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:    },
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "osd_id": 0,
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:        "type": "bluestore"
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]:    }
Oct  3 11:23:58 compute-0 frosty_gagarin[524940]: }
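The JSON block above is the inventory that the short-lived `frosty_gagarin` Ceph container prints before it exits (see the `container died` and `container remove` events that follow): a `ceph-volume raw list` style map of the three BlueStore OSDs on this host, keyed by `osd_uuid`. A minimal sketch of consuming such output, assuming it has been captured to a file; the name `osd_inventory.json` is hypothetical:

```python
#!/usr/bin/env python3
# Minimal sketch: read a captured ceph-volume style inventory (the JSON the
# frosty_gagarin container logged above) and print one line per OSD.
# "osd_inventory.json" is a hypothetical capture of that stdout.
import json

with open("osd_inventory.json") as f:
    inventory = json.load(f)  # top-level keys are osd_uuid strings

for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
    print(f"osd.{osd['osd_id']}: device={osd['device']} "
          f"type={osd['type']} cluster={osd['ceph_fsid']}")
```

Run against the block above, this reports osd.0 on /dev/mapper/ceph_vg0-ceph_lv0, osd.1 on ceph_vg1-ceph_lv1 and osd.2 on ceph_vg2-ceph_lv2, all `bluestore`, all in cluster fsid 9b4e8c9a-5555-5510-a631-4742a1182561.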
Oct  3 11:23:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:23:58 compute-0 systemd[1]: libpod-f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192.scope: Deactivated successfully.
Oct  3 11:23:58 compute-0 podman[524911]: 2025-10-03 11:23:58.217673775 +0000 UTC m=+1.309926363 container died f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:23:58 compute-0 systemd[1]: libpod-f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192.scope: Consumed 1.108s CPU time.
Oct  3 11:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4d02f1cc05c7dd2cca1df8b904e363ae330bf86ae679b05b93caa074c491623-merged.mount: Deactivated successfully.
Oct  3 11:23:58 compute-0 podman[524911]: 2025-10-03 11:23:58.329435047 +0000 UTC m=+1.421687635 container remove f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:23:58 compute-0 systemd[1]: libpod-conmon-f1da09b8474839031b1d23d3d312b1092f4a13d99e00ea0d2f86cd15fb1d3192.scope: Deactivated successfully.
Oct  3 11:23:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:23:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:23:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:23:58 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:23:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d5db50fc-6b8f-4d34-bba9-35e67488c2d6 does not exist
Oct  3 11:23:58 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bd715e9d-25d7-4758-ae1d-0d7404825205 does not exist
Oct  3 11:23:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:23:58 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:23:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:23:59 compute-0 podman[157165]: time="2025-10-03T11:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:23:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:23:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9120 "" "Go-http-client/1.1"
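The two GETs above show podman's REST service (`podman[157165]`) answering the prometheus-podman-exporter, which polls `/v4.9.3/libpod/containers/json` and `/v4.9.3/libpod/containers/stats` over the API socket. A minimal sketch of the same list call, assuming the service listens on `/run/podman/podman.sock` (the `CONTAINER_HOST` the exporter is configured with above) and that the `Names`/`State` fields of the libpod list-containers response are what is wanted:

```python
#!/usr/bin/env python3
# Minimal sketch: issue the libpod REST call seen in the access log above,
# over podman's UNIX socket. Assumes /run/podman/podman.sock exists and is
# readable (the exporter runs as root for the same reason).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects over a UNIX socket."""

    def __init__(self, path):
        super().__init__("localhost")  # used only for the Host header
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for ctr in json.loads(resp.read()):
    print(ctr["Names"][0], ctr["State"])
```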
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:23:59 compute-0 nova_compute[351685]: 2025-10-03 11:23:59.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:01 compute-0 openstack_network_exporter[367524]: ERROR   11:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:24:01 compute-0 openstack_network_exporter[367524]: ERROR   11:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:24:01 compute-0 openstack_network_exporter[367524]: ERROR   11:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:24:01 compute-0 openstack_network_exporter[367524]: ERROR   11:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:24:01 compute-0 openstack_network_exporter[367524]: ERROR   11:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:24:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 78 MiB data, 264 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:24:01 compute-0 nova_compute[351685]: 2025-10-03 11:24:01.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:24:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct  3 11:24:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct  3 11:24:01 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct  3 11:24:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 86 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 819 KiB/s wr, 6 op/s
Oct  3 11:24:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct  3 11:24:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct  3 11:24:03 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct  3 11:24:04 compute-0 nova_compute[351685]: 2025-10-03 11:24:04.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:24:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 110 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 34 op/s
Oct  3 11:24:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 110 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 4.1 MiB/s wr, 34 op/s
Oct  3 11:24:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  3 11:24:09 compute-0 nova_compute[351685]: 2025-10-03 11:24:09.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:24:09 compute-0 nova_compute[351685]: 2025-10-03 11:24:09.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:09 compute-0 nova_compute[351685]: 2025-10-03 11:24:09.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct  3 11:24:09 compute-0 nova_compute[351685]: 2025-10-03 11:24:09.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:24:09 compute-0 nova_compute[351685]: 2025-10-03 11:24:09.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  3 11:24:09 compute-0 nova_compute[351685]: 2025-10-03 11:24:09.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Oct  3 11:24:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3508: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.3 MiB/s wr, 31 op/s
Oct  3 11:24:14 compute-0 nova_compute[351685]: 2025-10-03 11:24:14.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:14 compute-0 nova_compute[351685]: 2025-10-03 11:24:14.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.8 MiB/s wr, 26 op/s
Oct  3 11:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:24:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:24:16 compute-0 podman[525085]: 2025-10-03 11:24:16.880261854 +0000 UTC m=+0.120942727 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:24:16 compute-0 podman[525083]: 2025-10-03 11:24:16.888726015 +0000 UTC m=+0.133335633 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:24:16 compute-0 podman[525086]: 2025-10-03 11:24:16.902187457 +0000 UTC m=+0.131814416 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  3 11:24:16 compute-0 podman[525084]: 2025-10-03 11:24:16.920206535 +0000 UTC m=+0.162654724 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Oct  3 11:24:16 compute-0 podman[525103]: 2025-10-03 11:24:16.920019379 +0000 UTC m=+0.135344820 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true)
Oct  3 11:24:16 compute-0 podman[525087]: 2025-10-03 11:24:16.926558678 +0000 UTC m=+0.147481398 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct  3 11:24:16 compute-0 podman[525089]: 2025-10-03 11:24:16.936202237 +0000 UTC m=+0.147365204 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:24:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3510: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 683 KiB/s wr, 8 op/s
Oct  3 11:24:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 683 KiB/s wr, 8 op/s
Oct  3 11:24:19 compute-0 nova_compute[351685]: 2025-10-03 11:24:19.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:24:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:24:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:24:24 compute-0 ovn_controller[88471]: 2025-10-03T11:24:24Z|00068|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct  3 11:24:24 compute-0 nova_compute[351685]: 2025-10-03 11:24:24.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3514: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:24:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:24:27 compute-0 podman[525217]: 2025-10-03 11:24:27.819999005 +0000 UTC m=+0.073342961 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:24:27 compute-0 podman[525218]: 2025-10-03 11:24:27.869968407 +0000 UTC m=+0.118983584 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, maintainer=Red Hat, Inc., version=9.4)
Oct  3 11:24:27 compute-0 podman[525219]: 2025-10-03 11:24:27.887704895 +0000 UTC m=+0.117234618 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 11:24:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:24:29 compute-0 podman[157165]: time="2025-10-03T11:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:24:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:24:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9113 "" "Go-http-client/1.1"
Oct  3 11:24:29 compute-0 nova_compute[351685]: 2025-10-03 11:24:29.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:24:30 compute-0 nova_compute[351685]: 2025-10-03 11:24:30.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:24:31 compute-0 openstack_network_exporter[367524]: ERROR   11:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:24:31 compute-0 openstack_network_exporter[367524]: ERROR   11:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:24:31 compute-0 openstack_network_exporter[367524]: ERROR   11:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:24:31 compute-0 openstack_network_exporter[367524]: ERROR   11:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:24:31 compute-0 openstack_network_exporter[367524]: ERROR   11:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:24:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3517: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:24:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:24:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:24:34.592 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:24:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:24:34.593 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 11:24:34 compute-0 nova_compute[351685]: 2025-10-03 11:24:34.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:34 compute-0 nova_compute[351685]: 2025-10-03 11:24:34.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Oct  3 11:24:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 118 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Oct  3 11:24:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:38 compute-0 nova_compute[351685]: 2025-10-03 11:24:38.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3521: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Oct  3 11:24:39 compute-0 nova_compute[351685]: 2025-10-03 11:24:39.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:24:39 compute-0 nova_compute[351685]: 2025-10-03 11:24:39.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:24:39 compute-0 nova_compute[351685]: 2025-10-03 11:24:39.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:24:39 compute-0 nova_compute[351685]: 2025-10-03 11:24:39.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:39 compute-0 nova_compute[351685]: 2025-10-03 11:24:39.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:39 compute-0 nova_compute[351685]: 2025-10-03 11:24:39.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:40 compute-0 nova_compute[351685]: 2025-10-03 11:24:40.334 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:24:40 compute-0 nova_compute[351685]: 2025-10-03 11:24:40.334 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:24:40 compute-0 nova_compute[351685]: 2025-10-03 11:24:40.335 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:24:40 compute-0 nova_compute[351685]: 2025-10-03 11:24:40.335 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:24:41 compute-0 nova_compute[351685]: 2025-10-03 11:24:41.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:24:41.702 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:24:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:24:41.704 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:24:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:24:41.705 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:24:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Oct  3 11:24:42 compute-0 nova_compute[351685]: 2025-10-03 11:24:42.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:24:43.596 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
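The write above closes the loop that opened at 11:24:34: northd bumped `SB_Global.nb_cfg` to 10, the metadata agent matched the `SbGlobalUpdateEvent`, waited the 9 seconds it announced, and now acknowledges the new configuration by writing `neutron:ovn-metadata-sb-cfg=10` into its own `Chassis_Private` row. A minimal sketch of issuing the same `db_set` with ovsdbapp; the southbound endpoint and timeout are hypothetical stand-ins, and the record UUID is taken from the logged command:

```python
# Minimal sketch: replay the Chassis_Private external_ids update logged
# above via ovsdbapp. Endpoint and timeout are stand-ins; inside the agent
# the connection already exists and the record is its own chassis row.
import ovs.db.idl
from ovsdbapp.backend.ovs_idl import connection, idlutils
from ovsdbapp.schema.ovn_southbound import impl_idl

SB = "tcp:127.0.0.1:6642"  # hypothetical OVN southbound endpoint
helper = idlutils.get_schema_helper(SB, "OVN_Southbound")
helper.register_all()
idl = ovs.db.idl.Idl(SB, helper)
sb_api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

sb_api.db_set(
    "Chassis_Private",
    "41fabae1-2dc7-46e2-b697-d9133d158399",  # record from the log line
    ("external_ids", {"neutron:ovn-metadata-sb-cfg": "10"}),
    if_exists=True,  # matches the logged DbSetCommand
).execute(check_error=True)
```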
Oct  3 11:24:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3523: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 11:24:44 compute-0 nova_compute[351685]: 2025-10-03 11:24:44.488 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:24:44 compute-0 nova_compute[351685]: 2025-10-03 11:24:44.512 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:24:44 compute-0 nova_compute[351685]: 2025-10-03 11:24:44.512 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
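The `Updating instance_info_cache with network_info` entry above carries the full VIF description for instance b43db93c-a4fe-46e9-8418-eedf4f5c135a as one JSON array. A minimal sketch that walks that structure and pulls out each port's MAC, fixed IPs and floating IPs, assuming the array has been saved to a file passed as the first argument (the capture itself is hypothetical):

```python
#!/usr/bin/env python3
# Minimal sketch: summarize a Nova network_info cache entry like the one in
# the "Updating instance_info_cache" line above. Usage: vif_summary.py vifs.json
import json
import sys

with open(sys.argv[1]) as f:
    network_info = json.load(f)  # a list of VIF dicts

for vif in network_info:
    fixed, floating = [], []
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            fixed.append(ip["address"])
            floating += [f["address"] for f in ip.get("floating_ips", [])]
    print(f"{vif['id']} mac={vif['address']} fixed={fixed} floating={floating}")
```

For the entry above this prints port a8897fbc-9fd1-4981-b049-6e702bcb7e2d with MAC fa:16:3e:a9:40:5c, fixed IP 192.168.0.158 and floating IP 192.168.122.250.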
Oct  3 11:24:44 compute-0 nova_compute[351685]: 2025-10-03 11:24:44.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:44 compute-0 nova_compute[351685]: 2025-10-03 11:24:44.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  3 11:24:46 compute-0 nova_compute[351685]: 2025-10-03 11:24:46.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:24:46
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log']
Oct  3 11:24:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:24:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3525: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Oct  3 11:24:47 compute-0 podman[525288]: 2025-10-03 11:24:47.872859318 +0000 UTC m=+0.092424644 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:24:47 compute-0 podman[525280]: 2025-10-03 11:24:47.877451715 +0000 UTC m=+0.097370191 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  3 11:24:47 compute-0 podman[525277]: 2025-10-03 11:24:47.877538448 +0000 UTC m=+0.115355178 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:24:47 compute-0 podman[525278]: 2025-10-03 11:24:47.896042881 +0000 UTC m=+0.126974780 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Oct  3 11:24:47 compute-0 podman[525281]: 2025-10-03 11:24:47.897196988 +0000 UTC m=+0.125170923 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 11:24:47 compute-0 podman[525279]: 2025-10-03 11:24:47.899634956 +0000 UTC m=+0.127519428 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:24:47 compute-0 podman[525282]: 2025-10-03 11:24:47.915474494 +0000 UTC m=+0.131419633 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
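The podman health_status=healthy events above are the scheduled runs of each container's configured healthcheck, i.e. the 'test' command under config_data (here /openstack/healthcheck). The same check can be fired by hand; a sketch, shelling out the way the services above do (container name taken from the log; exit code 0 means healthy):

    import subprocess

    # One-off run of the container's own healthcheck definition.
    subprocess.run(["podman", "healthcheck", "run", "iscsid"], check=True)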
Oct  3 11:24:48 compute-0 nova_compute[351685]: 2025-10-03 11:24:48.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:24:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Oct  3 11:24:49 compute-0 nova_compute[351685]: 2025-10-03 11:24:49.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:24:51 compute-0 nova_compute[351685]: 2025-10-03 11:24:51.508 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:24:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Oct  3 11:24:52 compute-0 nova_compute[351685]: 2025-10-03 11:24:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:24:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3528: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Oct  3 11:24:53 compute-0 nova_compute[351685]: 2025-10-03 11:24:53.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:24:53 compute-0 nova_compute[351685]: 2025-10-03 11:24:53.766 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:24:53 compute-0 nova_compute[351685]: 2025-10-03 11:24:53.766 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:24:53 compute-0 nova_compute[351685]: 2025-10-03 11:24:53.767 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:24:53 compute-0 nova_compute[351685]: 2025-10-03 11:24:53.767 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:24:53 compute-0 nova_compute[351685]: 2025-10-03 11:24:53.768 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:24:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:24:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3602145985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:24:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:24:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3602145985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:24:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:24:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928191211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.277 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
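That half-second `ceph df` round trip is nova's RBD capacity probe, and it is the same request the monitor records above as the audited "df" dispatch from client.openstack. Re-running the exact command nova logs, a minimal sketch (top-level key names from ceph's JSON df output):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    # Cluster-wide totals live under "stats"; per-pool figures under "pools".
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])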
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.382 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.383 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.383 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.843 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.845 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3754MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.845 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.845 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:24:54 compute-0 nova_compute[351685]: 2025-10-03 11:24:54.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:24:55 compute-0 nova_compute[351685]: 2025-10-03 11:24:55.284 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:24:55 compute-0 nova_compute[351685]: 2025-10-03 11:24:55.285 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:24:55 compute-0 nova_compute[351685]: 2025-10-03 11:24:55.286 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:24:55 compute-0 nova_compute[351685]: 2025-10-03 11:24:55.541 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:24:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3529: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  3 11:24:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:24:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984170699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:24:56 compute-0 nova_compute[351685]: 2025-10-03 11:24:56.070 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:24:56 compute-0 nova_compute[351685]: 2025-10-03 11:24:56.078 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:24:56 compute-0 nova_compute[351685]: 2025-10-03 11:24:56.097 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
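The inventory dict above is what placement turns into schedulable capacity, one resource class at a time: capacity = (total - reserved) * allocation_ratio. Worked through for this host's figures, a quick sketch:

    inventory = {  # copied from the report payload above
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2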
Oct  3 11:24:56 compute-0 nova_compute[351685]: 2025-10-03 11:24:56.099 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:24:56 compute-0 nova_compute[351685]: 2025-10-03 11:24:56.099 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:24:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
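Every pg_autoscaler line above follows one formula before quantization: pg target = usage_ratio x bias x (number of OSDs x target PGs per OSD). With this cluster's 3 OSDs (see the osdmap lines further down) and assuming the default mon_target_pg_per_osd of 100, the logged values reproduce; a quick check, the two constants being assumptions based on those defaults:

    NUM_OSDS, TARGET_PER_OSD = 3, 100  # 3-OSD cluster; default mon_target_pg_per_osd

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * NUM_OSDS * TARGET_PER_OSD

    print(pg_target(0.000551649390343166, 1.0))   # ~0.16549, the 'vms' line
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061, the 'cephfs.cephfs.meta' line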
Oct  3 11:24:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  3 11:24:58 compute-0 nova_compute[351685]: 2025-10-03 11:24:58.099 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:24:58 compute-0 nova_compute[351685]: 2025-10-03 11:24:58.100 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:24:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.198453) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490698198491, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1952, "num_deletes": 256, "total_data_size": 3270534, "memory_usage": 3323968, "flush_reason": "Manual Compaction"}
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490698234444, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 3206619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69336, "largest_seqno": 71287, "table_properties": {"data_size": 3197539, "index_size": 5701, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17771, "raw_average_key_size": 19, "raw_value_size": 3179583, "raw_average_value_size": 3544, "num_data_blocks": 253, "num_entries": 897, "num_filter_entries": 897, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490482, "oldest_key_time": 1759490482, "file_creation_time": 1759490698, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 36083 microseconds, and 7440 cpu microseconds.
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.234531) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 3206619 bytes OK
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.234557) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.242493) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.242530) EVENT_LOG_v1 {"time_micros": 1759490698242521, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.242556) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 3262302, prev total WAL file size 3262302, number of live WAL files 2.
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.245165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323636' seq:0, type:0; will stop at (end)
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(3131KB)], [170(7414KB)]
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490698245293, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 10799292, "oldest_snapshot_seqno": -1}
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8046 keys, 10698729 bytes, temperature: kUnknown
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490698327489, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 10698729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10648900, "index_size": 28596, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20165, "raw_key_size": 211629, "raw_average_key_size": 26, "raw_value_size": 10506905, "raw_average_value_size": 1305, "num_data_blocks": 1133, "num_entries": 8046, "num_filter_entries": 8046, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490698, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.327802) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 10698729 bytes
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.330568) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.2 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.7) write-amplify(3.3) OK, records in: 8574, records dropped: 528 output_compression: NoCompression
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.330599) EVENT_LOG_v1 {"time_micros": 1759490698330585, "job": 106, "event": "compaction_finished", "compaction_time_micros": 82285, "compaction_time_cpu_micros": 47656, "output_level": 6, "num_output_files": 1, "total_output_size": 10698729, "num_input_records": 8574, "num_output_records": 8046, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
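The amplification figures in that compaction summary come straight from the byte counts in the surrounding EVENT_LOG records: write-amplify is output bytes over the new L0 input, and read-write-amplify is (everything read + everything written) over the same L0 input. Checking against jobs 105/106 above:

    l0_in = 3206619       # flushed table #172 (job 105 "file_size")
    total_in = 10799292   # "input_data_size" of compaction job 106
    total_out = 10698729  # "total_output_size" of compaction job 106
    print(round(total_out / l0_in, 1))               # 3.3 -> write-amplify(3.3)
    print(round((total_in + total_out) / l0_in, 1))  # 6.7 -> read-write-amplify(6.7)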
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490698331802, "job": 106, "event": "table_file_deletion", "file_number": 172}
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490698334747, "job": 106, "event": "table_file_deletion", "file_number": 170}
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.244905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.335062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.335069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.335072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.335075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:24:58 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:24:58.335078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:24:58 compute-0 podman[525468]: 2025-10-03 11:24:58.836700861 +0000 UTC m=+0.086723640 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:24:58 compute-0 podman[525475]: 2025-10-03 11:24:58.845044368 +0000 UTC m=+0.094276142 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:24:58 compute-0 podman[525474]: 2025-10-03 11:24:58.855391111 +0000 UTC m=+0.098840869 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.020 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "b5df7002-5185-4a75-ae2e-e8a44a0be062" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.021 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.044 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.146 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "6ca9e72e-4023-411a-93fb-b137c664f8f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.147 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.152 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.152 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.162 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.163 2 INFO nova.compute.claims [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Claim successful on node compute-0.ctlplane.example.com
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.166 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
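The interleaved req-6b68... and req-10a8... entries show how nova serializes concurrent builds: each request first takes a lock named after its own instance UUID, so only work on the same instance queues up, and then briefly takes the host-wide "compute_resources" lock for the claim, which is why the two builds above overlap everywhere except at instance_claim. The same nesting with oslo.concurrency, as a rough sketch (function name and body are illustrative, not nova's code):

    from oslo_concurrency import lockutils

    def build_and_run(instance_uuid):
        with lockutils.lock(instance_uuid):            # per-instance serialization
            with lockutils.lock("compute_resources"):  # host-wide, held only for the claim
                pass  # placeholder: reserve vcpus/ram/disk here
            # the long-running spawn proceeds after the claim lock is released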
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct  3 11:24:59 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.260 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.335 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:24:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:24:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 716 B/s wr, 3 op/s
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:24:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:24:59 compute-0 podman[157165]: time="2025-10-03T11:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:24:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:24:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:24:59 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 31de14ba-c83c-42ac-990b-fa7e6c7478f5 does not exist
Oct  3 11:24:59 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5d42e4fa-f696-4f50-97bb-f545383c6684 does not exist
Oct  3 11:24:59 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1b8d9919-868f-4fe7-abc2-a0d4069e1939 does not exist
Oct  3 11:24:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:24:59 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:24:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:24:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9115 "" "Go-http-client/1.1"
Oct  3 11:24:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:24:59 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932316144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.895 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
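The ceph df call that just returned is how the RBD image backend sizes the DISK_GB inventory reported below. A sketch of the same capacity probe, runnable wherever the openstack keyring and /etc/ceph/ceph.conf from the log are readable; the stats field names are assumed from current Ceph releases:

    import json
    import subprocess

    # Same probe nova_compute runs via oslo processutils (see 11:24:59.335).
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    # Cluster-wide totals back the DISK_GB inventory (field names assumed).
    print(stats["total_bytes"], stats["total_avail_bytes"])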
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.908 2 DEBUG nova.compute.provider_tree [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.924 2 DEBUG nova.scheduler.client.report [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.948 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.949 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.953 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.962 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  3 11:24:59 compute-0 nova_compute[351685]: 2025-10-03 11:24:59.963 2 INFO nova.compute.claims [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Claim successful on node compute-0.ctlplane.example.com
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.015 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.015 2 DEBUG nova.network.neutron [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.044 2 INFO nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.061 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.125 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.151 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.154 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.154 2 INFO nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Creating image(s)
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.194 2 DEBUG nova.storage.rbd_utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] rbd image b5df7002-5185-4a75-ae2e-e8a44a0be062_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:25:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:25:00 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.249 2 DEBUG nova.storage.rbd_utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] rbd image b5df7002-5185-4a75-ae2e-e8a44a0be062_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.289 2 DEBUG nova.storage.rbd_utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] rbd image b5df7002-5185-4a75-ae2e-e8a44a0be062_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.304 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.305 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.451 2 DEBUG nova.policy [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7851dde78b9e4e9abf7463836db57a8e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '57f47db3919c4f3797a1434bfeebe880', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.547 2 DEBUG nova.virt.libvirt.imagebackend [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Image locations are: [{'url': 'rbd://9b4e8c9a-5555-5510-a631-4742a1182561/images/6a34ed8d-90df-4a16-a968-c59b7cafa2f1/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://9b4e8c9a-5555-5510-a631-4742a1182561/images/6a34ed8d-90df-4a16-a968-c59b7cafa2f1/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  3 11:25:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:25:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565341420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.627 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.636 2 DEBUG nova.compute.provider_tree [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:25:00 compute-0 podman[525882]: 2025-10-03 11:25:00.63543862 +0000 UTC m=+0.072347000 container create 9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hofstadter, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.654 2 DEBUG nova.scheduler.client.report [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.676 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.677 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  3 11:25:00 compute-0 systemd[1]: Started libpod-conmon-9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492.scope.
Oct  3 11:25:00 compute-0 podman[525882]: 2025-10-03 11:25:00.614817859 +0000 UTC m=+0.051726269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.725 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.726 2 DEBUG nova.network.neutron [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  3 11:25:00 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.747 2 INFO nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  3 11:25:00 compute-0 podman[525882]: 2025-10-03 11:25:00.751358924 +0000 UTC m=+0.188267344 container init 9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hofstadter, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:25:00 compute-0 podman[525882]: 2025-10-03 11:25:00.763047099 +0000 UTC m=+0.199955499 container start 9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hofstadter, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.764 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  3 11:25:00 compute-0 podman[525882]: 2025-10-03 11:25:00.768572106 +0000 UTC m=+0.205480536 container attach 9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 11:25:00 compute-0 laughing_hofstadter[525900]: 167 167
Oct  3 11:25:00 compute-0 systemd[1]: libpod-9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492.scope: Deactivated successfully.
Oct  3 11:25:00 compute-0 podman[525882]: 2025-10-03 11:25:00.772708599 +0000 UTC m=+0.209616989 container died 9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 11:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c9fdbb62b10a26beee69e0bf54cf79686871b574b0b24d1446c7f1105e449ac-merged.mount: Deactivated successfully.
Oct  3 11:25:00 compute-0 podman[525882]: 2025-10-03 11:25:00.834489259 +0000 UTC m=+0.271397649 container remove 9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hofstadter, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Oct  3 11:25:00 compute-0 systemd[1]: libpod-conmon-9f4b4c806d43a807920176168ac6f9a41c3e17aa1535f0070bcaab4362331492.scope: Deactivated successfully.
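The short-lived laughing_hofstadter container above, like the GET /v4.9.3/libpod/containers/json requests at 11:24:59, is driven through podman's libpod REST API over a Unix socket. A minimal sketch of that listing query, assuming the default rootful socket at /run/podman/podman.sock; the API version segment is taken from the access log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over podman's Unix socket instead of TCP."""

        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    # Endpoint taken from the access log above; socket path is an assumption.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")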
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.864 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.866 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.866 2 INFO nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Creating image(s)
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.893 2 DEBUG nova.storage.rbd_utils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] rbd image 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.933 2 DEBUG nova.storage.rbd_utils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] rbd image 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.967 2 DEBUG nova.storage.rbd_utils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] rbd image 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:00 compute-0 nova_compute[351685]: 2025-10-03 11:25:00.974 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.045 2 DEBUG nova.policy [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '13ea5fe65c674a40a8a29b240a1a5e6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6e74ba7072448fdb098db5317752362', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  3 11:25:01 compute-0 podman[525976]: 2025-10-03 11:25:01.048525148 +0000 UTC m=+0.069542289 container create f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:25:01 compute-0 podman[525976]: 2025-10-03 11:25:01.021305506 +0000 UTC m=+0.042322657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:25:01 compute-0 systemd[1]: Started libpod-conmon-f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b.scope.
Oct  3 11:25:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c75ef39b02d4f149bde56e866558f0b9f97e38ae7a79de2ac88eaf55e6f91485/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c75ef39b02d4f149bde56e866558f0b9f97e38ae7a79de2ac88eaf55e6f91485/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c75ef39b02d4f149bde56e866558f0b9f97e38ae7a79de2ac88eaf55e6f91485/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c75ef39b02d4f149bde56e866558f0b9f97e38ae7a79de2ac88eaf55e6f91485/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c75ef39b02d4f149bde56e866558f0b9f97e38ae7a79de2ac88eaf55e6f91485/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct  3 11:25:01 compute-0 podman[525976]: 2025-10-03 11:25:01.252479925 +0000 UTC m=+0.273497086 container init f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chandrasekhar, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 11:25:01 compute-0 podman[525976]: 2025-10-03 11:25:01.260200943 +0000 UTC m=+0.281218074 container start f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:25:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct  3 11:25:01 compute-0 podman[525976]: 2025-10-03 11:25:01.263836769 +0000 UTC m=+0.284853900 container attach f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chandrasekhar, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:25:01 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct  3 11:25:01 compute-0 openstack_network_exporter[367524]: ERROR   11:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:25:01 compute-0 openstack_network_exporter[367524]: ERROR   11:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:25:01 compute-0 openstack_network_exporter[367524]: ERROR   11:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:25:01 compute-0 openstack_network_exporter[367524]: ERROR   11:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:25:01 compute-0 openstack_network_exporter[367524]: ERROR   11:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.532 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.628 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8.part --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.629 2 DEBUG nova.virt.images [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] 6a34ed8d-90df-4a16-a968-c59b7cafa2f1 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.631 2 DEBUG nova.privsep.utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.632 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8.part /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:25:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3534: 321 pgs: 321 active+clean; 118 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.775 2 DEBUG nova.network.neutron [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Successfully created port: f7d0064f-83c7-44b3-839d-5811852ce687 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.946 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8.part /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8.converted" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:25:01 compute-0 nova_compute[351685]: 2025-10-03 11:25:01.950 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.014 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8.converted --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.017 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
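The run just released completes the image-cache fetch path: a prlimit-wrapped qemu-img info identifies the downloaded .part file as qcow2, then qemu-img convert rewrites it raw before the RBD import below. A condensed sketch of that probe-and-convert step, using the same cache paths as the log but omitting the prlimit wrapper for brevity:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8"

    # Probe the downloaded image (the log wraps this in oslo_concurrency.prlimit).
    info = json.loads(subprocess.run(
        ["qemu-img", "info", base + ".part", "--force-share", "--output=json"],
        check=True, capture_output=True, text=True).stdout)

    if info["format"] == "qcow2":
        # Same conversion recorded at 11:25:01.632: qcow2 -> raw, no host cache.
        subprocess.run(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             base + ".part", base + ".converted"],
            check=True)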
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.063 2 DEBUG nova.storage.rbd_utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] rbd image b5df7002-5185-4a75-ae2e-e8a44a0be062_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.074 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 b5df7002-5185-4a75-ae2e-e8a44a0be062_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.094 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.096 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.141 2 DEBUG nova.storage.rbd_utils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] rbd image 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.155 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:02 compute-0 hungry_chandrasekhar[525994]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:25:02 compute-0 hungry_chandrasekhar[525994]: --> relative data size: 1.0
Oct  3 11:25:02 compute-0 hungry_chandrasekhar[525994]: --> All data devices are unavailable
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.517 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 b5df7002-5185-4a75-ae2e-e8a44a0be062_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:25:02 compute-0 systemd[1]: libpod-f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b.scope: Deactivated successfully.
Oct  3 11:25:02 compute-0 podman[525976]: 2025-10-03 11:25:02.526325301 +0000 UTC m=+1.547342462 container died f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:25:02 compute-0 systemd[1]: libpod-f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b.scope: Consumed 1.109s CPU time.
Oct  3 11:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c75ef39b02d4f149bde56e866558f0b9f97e38ae7a79de2ac88eaf55e6f91485-merged.mount: Deactivated successfully.
Oct  3 11:25:02 compute-0 podman[525976]: 2025-10-03 11:25:02.59400711 +0000 UTC m=+1.615024241 container remove f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 11:25:02 compute-0 systemd[1]: libpod-conmon-f259effb457227da14e096c839f5f2c3c45c5b1449da8ac6d90387b47aa5703b.scope: Deactivated successfully.
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.632 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.676 2 DEBUG nova.network.neutron [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Successfully updated port: f7d0064f-83c7-44b3-839d-5811852ce687 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.684 2 DEBUG nova.storage.rbd_utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] resizing rbd image b5df7002-5185-4a75-ae2e-e8a44a0be062_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.777 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.778 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquired lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.778 2 DEBUG nova.network.neutron [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.793 2 DEBUG nova.storage.rbd_utils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] resizing rbd image 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
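After import, each disk is grown to the flavor's root size (1073741824 bytes, i.e. 1 GiB, in the resize lines above). nova's rbd_utils does the resize through the librbd Python binding; a CLI-equivalent sketch of the import-then-resize sequence, assuming the same pool and credentials as the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8"
    disk = "b5df7002-5185-4a75-ae2e-e8a44a0be062_disk"

    # Import the raw base image into the vms pool (format 2, as in the log).
    subprocess.run(
        ["rbd", "import", "--pool", "vms", base, disk,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"], check=True)

    # Grow to the flavor's root size; 1G here equals the 1073741824 bytes logged.
    subprocess.run(
        ["rbd", "resize", "vms/" + disk, "--size", "1G",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"], check=True)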
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.932 2 DEBUG nova.objects.instance [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lazy-loading 'migration_context' on Instance uuid b5df7002-5185-4a75-ae2e-e8a44a0be062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.998 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  3 11:25:02 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.999 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Ensure instance console log exists: /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:02.999 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.000 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.000 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.016 2 DEBUG nova.objects.instance [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lazy-loading 'migration_context' on Instance uuid 6ca9e72e-4023-411a-93fb-b137c664f8f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.029 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.030 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Ensure instance console log exists: /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.030 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.031 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.031 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.071 2 DEBUG nova.network.neutron [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Successfully created port: faf705ff-c202-4c38-82a6-3c53798c3d9f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.152 2 DEBUG nova.network.neutron [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  3 11:25:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:03 compute-0 podman[526402]: 2025-10-03 11:25:03.522135377 +0000 UTC m=+0.046131650 container create 5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 11:25:03 compute-0 systemd[1]: Started libpod-conmon-5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8.scope.
Oct  3 11:25:03 compute-0 podman[526402]: 2025-10-03 11:25:03.504789201 +0000 UTC m=+0.028785524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:25:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:03 compute-0 podman[526402]: 2025-10-03 11:25:03.643491926 +0000 UTC m=+0.167488299 container init 5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 11:25:03 compute-0 podman[526402]: 2025-10-03 11:25:03.665089288 +0000 UTC m=+0.189085611 container start 5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:25:03 compute-0 podman[526402]: 2025-10-03 11:25:03.670809681 +0000 UTC m=+0.194805964 container attach 5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 11:25:03 compute-0 infallible_chaplygin[526419]: 167 167
Oct  3 11:25:03 compute-0 systemd[1]: libpod-5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8.scope: Deactivated successfully.
Oct  3 11:25:03 compute-0 conmon[526419]: conmon 5e320fbef4a42981e681 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8.scope/container/memory.events
Oct  3 11:25:03 compute-0 podman[526402]: 2025-10-03 11:25:03.679757198 +0000 UTC m=+0.203753491 container died 5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b8ddbdc4cdb31f7437a3da08ede1ce5656facca46bea8dfa6ff389aac49289f-merged.mount: Deactivated successfully.
Oct  3 11:25:03 compute-0 nova_compute[351685]: 2025-10-03 11:25:03.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:25:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 160 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 53 op/s
Oct  3 11:25:03 compute-0 podman[526402]: 2025-10-03 11:25:03.748402938 +0000 UTC m=+0.272399251 container remove 5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chaplygin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:25:03 compute-0 systemd[1]: libpod-conmon-5e320fbef4a42981e68142432a3a2db111e0d87b3483a4e6bbbb884b128ee6c8.scope: Deactivated successfully.
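The podman lines above trace one short-lived container (infallible_chaplygin) through its whole lifecycle in under a second: image pull, create, init, start, attach, exit, remove, with systemd tearing down the matching libpod scopes. The conmon warning about memory.events is benign here; the container exits before conmon can open its cgroup files. A one-shot run of the same shape (the image digest is copied from the log; the command is a hypothetical stand-in for whatever cephadm actually executed):

    import subprocess

    # `podman run --rm` reproduces the create/start/attach/remove
    # sequence seen above in a single call.
    subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "true"],
        check=True)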
Oct  3 11:25:04 compute-0 podman[526441]: 2025-10-03 11:25:04.001125357 +0000 UTC m=+0.072063410 container create 622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:25:04 compute-0 systemd[1]: Started libpod-conmon-622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084.scope.
Oct  3 11:25:04 compute-0 podman[526441]: 2025-10-03 11:25:03.981642473 +0000 UTC m=+0.052580536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:25:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44432003c026e987a388869b6099bca7e72da7de5a8f44edf0ef2fe091deec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44432003c026e987a388869b6099bca7e72da7de5a8f44edf0ef2fe091deec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44432003c026e987a388869b6099bca7e72da7de5a8f44edf0ef2fe091deec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f44432003c026e987a388869b6099bca7e72da7de5a8f44edf0ef2fe091deec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:04 compute-0 podman[526441]: 2025-10-03 11:25:04.107893189 +0000 UTC m=+0.178831252 container init 622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 11:25:04 compute-0 podman[526441]: 2025-10-03 11:25:04.120897957 +0000 UTC m=+0.191836000 container start 622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:25:04 compute-0 podman[526441]: 2025-10-03 11:25:04.12538339 +0000 UTC m=+0.196321533 container attach 622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.330 2 DEBUG nova.compute.manager [req-5a441833-278a-40bb-8754-9010fa0d4fe2 req-579b96d3-81da-40c4-84ff-ab5ce9979023 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-changed-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.331 2 DEBUG nova.compute.manager [req-5a441833-278a-40bb-8754-9010fa0d4fe2 req-579b96d3-81da-40c4-84ff-ab5ce9979023 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Refreshing instance network info cache due to event network-changed-f7d0064f-83c7-44b3-839d-5811852ce687. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.335 2 DEBUG oslo_concurrency.lockutils [req-5a441833-278a-40bb-8754-9010fa0d4fe2 req-579b96d3-81da-40c4-84ff-ab5ce9979023 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.544 2 DEBUG nova.network.neutron [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updating instance_info_cache with network_info: [{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.570 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Releasing lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.570 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Instance network_info: |[{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.571 2 DEBUG oslo_concurrency.lockutils [req-5a441833-278a-40bb-8754-9010fa0d4fe2 req-579b96d3-81da-40c4-84ff-ab5ce9979023 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.572 2 DEBUG nova.network.neutron [req-5a441833-278a-40bb-8754-9010fa0d4fe2 req-579b96d3-81da-40c4-84ff-ab5ce9979023 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Refreshing network info cache for port f7d0064f-83c7-44b3-839d-5811852ce687 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.575 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Start _get_guest_xml network_info=[{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.589 2 WARNING nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.603 2 DEBUG nova.virt.libvirt.host [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.604 2 DEBUG nova.virt.libvirt.host [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.610 2 DEBUG nova.virt.libvirt.host [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.611 2 DEBUG nova.virt.libvirt.host [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
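Nova probes for a CPU controller first through cgroups v1 (missing on this host) and then through v2 (found). On a cgroups v2 host the available controllers are listed in a single file; a hedged illustration of that check, independent of Nova's actual implementation in nova.virt.libvirt.host:

    # On cgroups v2, /sys/fs/cgroup/cgroup.controllers names every
    # controller the kernel exposes; "cpu" present means the CPU
    # controller is usable.
    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroups v2 host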
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.612 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.613 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.614 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.614 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.615 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.615 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.616 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.616 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.617 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.618 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.618 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.619 2 DEBUG nova.virt.hardware [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
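With no limits or preferences from the flavor or image (all 0:0:0), the topology search for a 1-vCPU guest can only produce 1:1:1, which is what the driver then writes into the guest XML below. A simplified sketch of the enumeration behind the "possible topologies" lines (the real logic in nova.virt.hardware also applies preferences and NUMA constraints):

    # Enumerate (sockets, cores, threads) triples whose product is the
    # vCPU count, bounded by the 65536 per-dimension limits logged above.
    def possible_topologies(vcpus, maximum=65536):
        bound = min(vcpus, maximum)
        for sockets in range(1, bound + 1):
            for cores in range(1, bound + 1):
                for threads in range(1, bound + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]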
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.622 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
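Nova shells out to the ceph CLI here rather than using librados directly; the command, client id, and conf path are exactly as logged (the reply arrives about half a second later, at 11:25:05.156). A hedged equivalent of the call, assuming the same /etc/ceph/ceph.conf and client.openstack keyring present on this host:

    import json
    import subprocess

    # Same command line as the log; parse the monitor map it returns.
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon["name"], mon.get("public_addrs") or mon.get("addr"))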
Oct  3 11:25:04 compute-0 nova_compute[351685]: 2025-10-03 11:25:04.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:04 compute-0 sweet_cray[526456]: {
Oct  3 11:25:04 compute-0 sweet_cray[526456]:    "0": [
Oct  3 11:25:04 compute-0 sweet_cray[526456]:        {
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "devices": [
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "/dev/loop3"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            ],
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_name": "ceph_lv0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_size": "21470642176",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "name": "ceph_lv0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "tags": {
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cluster_name": "ceph",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.crush_device_class": "",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.encrypted": "0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osd_id": "0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.type": "block",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.vdo": "0"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            },
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "type": "block",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "vg_name": "ceph_vg0"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:        }
Oct  3 11:25:04 compute-0 sweet_cray[526456]:    ],
Oct  3 11:25:04 compute-0 sweet_cray[526456]:    "1": [
Oct  3 11:25:04 compute-0 sweet_cray[526456]:        {
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "devices": [
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "/dev/loop4"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            ],
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_name": "ceph_lv1",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_size": "21470642176",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "name": "ceph_lv1",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "tags": {
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cluster_name": "ceph",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.crush_device_class": "",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.encrypted": "0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osd_id": "1",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.type": "block",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.vdo": "0"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            },
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "type": "block",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "vg_name": "ceph_vg1"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:        }
Oct  3 11:25:04 compute-0 sweet_cray[526456]:    ],
Oct  3 11:25:04 compute-0 sweet_cray[526456]:    "2": [
Oct  3 11:25:04 compute-0 sweet_cray[526456]:        {
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "devices": [
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "/dev/loop5"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            ],
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_name": "ceph_lv2",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_size": "21470642176",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "name": "ceph_lv2",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "tags": {
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.cluster_name": "ceph",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.crush_device_class": "",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.encrypted": "0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osd_id": "2",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.type": "block",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:                "ceph.vdo": "0"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            },
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "type": "block",
Oct  3 11:25:04 compute-0 sweet_cray[526456]:            "vg_name": "ceph_vg2"
Oct  3 11:25:04 compute-0 sweet_cray[526456]:        }
Oct  3 11:25:04 compute-0 sweet_cray[526456]:    ]
Oct  3 11:25:04 compute-0 sweet_cray[526456]: }
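The JSON the sweet_cray container just printed has the shape of a `ceph-volume lvm list --format json` report: one key per OSD id, each holding the logical volumes (and their ceph.* tags) backing that OSD. The command itself is not shown in this excerpt, so that attribution is inferred from the payload. A small sketch that reduces such a report to an OSD-to-device map:

    import json

    # report_text: the JSON block above, captured as a string.
    def osd_layout(report_text):
        layout = {}
        for osd_id, lvs in json.loads(report_text).items():
            for lv in lvs:
                layout[osd_id] = {
                    "devices": lv["devices"],
                    "lv_path": lv["lv_path"],
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }
        return layout

    # -> {'0': {'devices': ['/dev/loop3'], 'lv_path': '/dev/ceph_vg0/ceph_lv0', ...}, ...}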
Oct  3 11:25:04 compute-0 systemd[1]: libpod-622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084.scope: Deactivated successfully.
Oct  3 11:25:04 compute-0 podman[526441]: 2025-10-03 11:25:04.978075769 +0000 UTC m=+1.049013842 container died 622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f44432003c026e987a388869b6099bca7e72da7de5a8f44edf0ef2fe091deec6-merged.mount: Deactivated successfully.
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.026 2 DEBUG nova.network.neutron [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Successfully updated port: faf705ff-c202-4c38-82a6-3c53798c3d9f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.044 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.044 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquired lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.044 2 DEBUG nova.network.neutron [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:25:05 compute-0 podman[526441]: 2025-10-03 11:25:05.077569377 +0000 UTC m=+1.148507430 container remove 622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cray, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:25:05 compute-0 systemd[1]: libpod-conmon-622826bbd61b9b8d4f4fa85121d4cb42fb83af39766f778e44939f02f0158084.scope: Deactivated successfully.
Oct  3 11:25:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:25:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156227188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.156 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.202 2 DEBUG nova.storage.rbd_utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] rbd image b5df7002-5185-4a75-ae2e-e8a44a0be062_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
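The rbd_utils DEBUG line records an existence probe: opening the config-drive image fails because it has not been created yet, and Nova treats that as "does not exist". A sketch of the same probe with the python-rbd bindings (pool name, image name, client id, and conf path all taken from this log):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        # Opening a missing image raises rbd.ImageNotFound.
        with rbd.Image(ioctx, "b5df7002-5185-4a75-ae2e-e8a44a0be062_disk.config"):
            print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()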
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.211 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:25:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1966794215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.663 2 DEBUG nova.network.neutron [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.675 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.678 2 DEBUG nova.virt.libvirt.vif [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-833269917',display_name='tempest-AttachInterfacesUnderV243Test-server-833269917',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-833269917',id=7,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrQApX9rul7+6NfX14vBvGlk222SXAnUP+XRz92EwxKyAJLho/DMSF7rkjn3hLIOKcY5LDAzskko121CYX5fGFGZzKdCg2yvrWvMCpeTQcfG0+JouOcHB5AzC3ZJEn3+w==',key_name='tempest-keypair-1215622455',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='57f47db3919c4f3797a1434bfeebe880',ramdisk_id='',reservation_id='r-684bvxrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1461178907',owner_user_name='tempest-AttachInterfacesUnderV243Test-1461178907-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:25:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7851dde78b9e4e9abf7463836db57a8e',uuid=b5df7002-5185-4a75-ae2e-e8a44a0be062,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.679 2 DEBUG nova.network.os_vif_util [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Converting VIF {"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.682 2 DEBUG nova.network.os_vif_util [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:16:9e,bridge_name='br-int',has_traffic_filtering=True,id=f7d0064f-83c7-44b3-839d-5811852ce687,network=Network(65d2d488-03e3-490e-9ad6-7948aea642e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7d0064f-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.685 2 DEBUG nova.objects.instance [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5df7002-5185-4a75-ae2e-e8a44a0be062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.705 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <uuid>b5df7002-5185-4a75-ae2e-e8a44a0be062</uuid>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <name>instance-00000007</name>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-833269917</nova:name>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:25:04</nova:creationTime>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <nova:user uuid="7851dde78b9e4e9abf7463836db57a8e">tempest-AttachInterfacesUnderV243Test-1461178907-project-member</nova:user>
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <nova:project uuid="57f47db3919c4f3797a1434bfeebe880">tempest-AttachInterfacesUnderV243Test-1461178907</nova:project>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <nova:port uuid="f7d0064f-83c7-44b3-839d-5811852ce687">
Oct  3 11:25:05 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <system>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <entry name="serial">b5df7002-5185-4a75-ae2e-e8a44a0be062</entry>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <entry name="uuid">b5df7002-5185-4a75-ae2e-e8a44a0be062</entry>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </system>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <os>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  </os>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <features>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  </features>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/b5df7002-5185-4a75-ae2e-e8a44a0be062_disk">
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      </source>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/b5df7002-5185-4a75-ae2e-e8a44a0be062_disk.config">
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      </source>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:25:05 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:6c:16:9e"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <target dev="tapf7d0064f-83"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062/console.log" append="off"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <video>
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </video>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:25:05 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:25:05 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:25:05 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:25:05 compute-0 nova_compute[351685]: </domain>
Oct  3 11:25:05 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
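The XML block ending above is the complete guest definition that _get_guest_xml rendered for instance b5df7002-5185-4a75-ae2e-e8a44a0be062: a q35 hvm guest with a host-model CPU, an RBD-backed virtio root disk, an RBD-backed config-drive CD-ROM on SATA, an OVS tap interface, and a bank of pcie-root-port controllers for hotplug. For post-mortem work it is often handy to pull the storage endpoints back out of such a dump; a minimal stdlib sketch (the embedded XML is abbreviated from the log, not a full domain definition):

```python
# Minimal sketch: extract the RBD disk sources from a libvirt domain XML
# like the one logged above. Stdlib only; DOMAIN_XML is abbreviated.
import xml.etree.ElementTree as ET

DOMAIN_XML = """
<domain>
  <devices>
    <disk type="network" device="disk">
      <driver type="raw" cache="none"/>
      <source protocol="rbd" name="vms/b5df7002-5185-4a75-ae2e-e8a44a0be062_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(DOMAIN_XML)
for disk in root.iterfind("./devices/disk[@type='network']"):
    source = disk.find("source")
    host = source.find("host")
    target = disk.find("target")
    print(f"{target.get('dev')}: {source.get('protocol')}://"
          f"{host.get('name')}:{host.get('port')}/{source.get('name')}")
```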
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.707 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Preparing to wait for external event network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.707 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.707 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.708 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
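The three lockutils lines above trace the usual pattern around prepare_for_instance_event: take the per-instance "-events" lock, create or fetch the event object for network-vif-plugged-f7d0064f-…, release. Registering the waiter before the VIF is plugged is what makes the later wait race-free. A toy sketch of that pattern, not Nova's implementation (names are illustrative):

```python
# Illustrative sketch of the lock-guarded "create or get event" pattern the
# lockutils lines above trace; not Nova's code. prepare() registers a
# waitable event before the external trigger (Neutron's network-vif-plugged
# notification) can possibly fire, avoiding a lost wakeup.
import threading

_events = {}                  # (instance_uuid, event_name) -> threading.Event
_events_lock = threading.Lock()

def prepare_for_instance_event(instance_uuid, event_name):
    with _events_lock:        # "Acquiring lock ...-events" in the log
        key = (instance_uuid, event_name)
        if key not in _events:
            _events[key] = threading.Event()
        return _events[key]   # lock released on exit, as logged

def emit_instance_event(instance_uuid, event_name):
    with _events_lock:
        ev = _events.get((instance_uuid, event_name))
    if ev is not None:
        ev.set()

# Usage mirroring the log: register first, plug the VIF, then wait.
ev = prepare_for_instance_event("b5df7002-...", "network-vif-plugged-f7d0064f-...")
# ... plug the interface ...
ev.wait(timeout=300)
```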
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.708 2 DEBUG nova.virt.libvirt.vif [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-833269917',display_name='tempest-AttachInterfacesUnderV243Test-server-833269917',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-833269917',id=7,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrQApX9rul7+6NfX14vBvGlk222SXAnUP+XRz92EwxKyAJLho/DMSF7rkjn3hLIOKcY5LDAzskko121CYX5fGFGZzKdCg2yvrWvMCpeTQcfG0+JouOcHB5AzC3ZJEn3+w==',key_name='tempest-keypair-1215622455',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='57f47db3919c4f3797a1434bfeebe880',ramdisk_id='',reservation_id='r-684bvxrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1461178907',owner_user_name='tempest-AttachInterfacesUnderV243Test-1461178907-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:25:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7851dde78b9e4e9abf7463836db57a8e',uuid=b5df7002-5185-4a75-ae2e-e8a44a0be062,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.709 2 DEBUG nova.network.os_vif_util [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Converting VIF {"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.709 2 DEBUG nova.network.os_vif_util [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:16:9e,bridge_name='br-int',has_traffic_filtering=True,id=f7d0064f-83c7-44b3-839d-5811852ce687,network=Network(65d2d488-03e3-490e-9ad6-7948aea642e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7d0064f-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.710 2 DEBUG os_vif [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:16:9e,bridge_name='br-int',has_traffic_filtering=True,id=f7d0064f-83c7-44b3-839d-5811852ce687,network=Network(65d2d488-03e3-490e-9ad6-7948aea642e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7d0064f-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.710 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.714 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7d0064f-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf7d0064f-83, col_values=(('external_ids', {'iface-id': 'f7d0064f-83c7-44b3-839d-5811852ce687', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:16:9e', 'vm-uuid': 'b5df7002-5185-4a75-ae2e-e8a44a0be062'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:05 compute-0 NetworkManager[45015]: <info>  [1759490705.7180] manager: (tapf7d0064f-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.726 2 INFO os_vif [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:16:9e,bridge_name='br-int',has_traffic_filtering=True,id=f7d0064f-83c7-44b3-839d-5811852ce687,network=Network(65d2d488-03e3-490e-9ad6-7948aea642e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7d0064f-83')#033[00m
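The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) are what "Successfully plugged vif" summarizes. os-vif issues them directly over the OVSDB protocol; the ovs-vsctl equivalent, useful for reproducing or inspecting the state by hand, would look roughly like this sketch:

```python
# Rough CLI equivalent of the two ovsdbapp transactions logged above
# (AddBridgeCommand, then AddPortCommand + DbSetCommand). A sketch for
# reproduction by hand, not what os-vif actually executes -- os-vif talks
# to ovsdb-server over the OVSDB protocol.
import subprocess

BRIDGE = "br-int"
PORT = "tapf7d0064f-83"
EXTERNAL_IDS = {
    "iface-id": "f7d0064f-83c7-44b3-839d-5811852ce687",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:6c:16:9e",
    "vm-uuid": "b5df7002-5185-4a75-ae2e-e8a44a0be062",
}

subprocess.run(["ovs-vsctl", "--may-exist", "add-br", BRIDGE], check=True)
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", BRIDGE, PORT, "--",
     "set", "Interface", PORT]
    + [f"external_ids:{k}={v}" for k, v in EXTERNAL_IDS.items()],
    check=True,
)
```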
Oct  3 11:25:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 211 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 151 op/s
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.787 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.787 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.788 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] No VIF found with MAC fa:16:3e:6c:16:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.788 2 INFO nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Using config drive#033[00m
Oct  3 11:25:05 compute-0 nova_compute[351685]: 2025-10-03 11:25:05.823 2 DEBUG nova.storage.rbd_utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] rbd image b5df7002-5185-4a75-ae2e-e8a44a0be062_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
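The rbd_utils line above is Nova probing whether the config-drive image already exists in the vms pool before building and importing it. With the Ceph Python bindings (python3-rados / python3-rbd) the same probe looks roughly like this; pool "vms" and client id "openstack" are taken from the rbd import command later in this log:

```python
# Sketch of the "rbd image ... does not exist" probe above, using the
# Ceph Python bindings. Opening the image is the existence test;
# rbd.ImageNotFound means it is absent.
import rados
import rbd

def rbd_image_exists(pool, name, conffile="/etc/ceph/ceph.conf", user="openstack"):
    cluster = rados.Rados(conffile=conffile, rados_id=user)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            with rbd.Image(ioctx, name, read_only=True):
                return True
        except rbd.ImageNotFound:
            return False
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

print(rbd_image_exists("vms", "b5df7002-5185-4a75-ae2e-e8a44a0be062_disk.config"))
```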
Oct  3 11:25:05 compute-0 podman[526698]: 2025-10-03 11:25:05.893718144 +0000 UTC m=+0.052032739 container create aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  3 11:25:05 compute-0 systemd[1]: Started libpod-conmon-aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785.scope.
Oct  3 11:25:05 compute-0 podman[526698]: 2025-10-03 11:25:05.871380578 +0000 UTC m=+0.029695183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:25:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:06 compute-0 podman[526698]: 2025-10-03 11:25:06.003962277 +0000 UTC m=+0.162276872 container init aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 11:25:06 compute-0 podman[526698]: 2025-10-03 11:25:06.021368785 +0000 UTC m=+0.179683400 container start aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 11:25:06 compute-0 podman[526698]: 2025-10-03 11:25:06.027837923 +0000 UTC m=+0.186152548 container attach aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:25:06 compute-0 eloquent_johnson[526714]: 167 167
Oct  3 11:25:06 compute-0 systemd[1]: libpod-aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785.scope: Deactivated successfully.
Oct  3 11:25:06 compute-0 podman[526698]: 2025-10-03 11:25:06.032414019 +0000 UTC m=+0.190728614 container died aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-da6508e1a0d5a5f32a4601d905f22ce70cc6666d52a823ce066256bd0f34f023-merged.mount: Deactivated successfully.
Oct  3 11:25:06 compute-0 podman[526698]: 2025-10-03 11:25:06.090921744 +0000 UTC m=+0.249236349 container remove aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 11:25:06 compute-0 systemd[1]: libpod-conmon-aae3f52cfc05e3aeaa896f29cd6b32a613a5ba9b1814d61d0e05e0a28d7c0785.scope: Deactivated successfully.
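The systemd/podman lines from 11:25:05.893 onward trace a deliberately short-lived helper container: create, start, attach, one line of output ("167 167", the ceph user's uid and gid on EL9), died, remove, all within roughly 200 ms. cephadm uses one-shot containers like this to read facts out of the image. A rough reproduction; the actual entrypoint is not logged, so the stat command here is an assumption consistent with the "167 167" output:

```python
# Rough reproduction of the short-lived helper container traced above
# (create -> start -> attach -> died -> remove within ~200 ms). The exact
# entrypoint isn't logged; "stat -c '%u %g' /var/lib/ceph" is an assumption
# consistent with the "167 167" output (the ceph user's uid/gid).
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)   # expected: "167 167"
```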
Oct  3 11:25:06 compute-0 podman[526738]: 2025-10-03 11:25:06.357463697 +0000 UTC m=+0.082962421 container create de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:25:06 compute-0 podman[526738]: 2025-10-03 11:25:06.319564712 +0000 UTC m=+0.045063516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:25:06 compute-0 systemd[1]: Started libpod-conmon-de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093.scope.
Oct  3 11:25:06 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b743f5037907f0f6a82a29892390202540add868548a5c7d514dbd406991130f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b743f5037907f0f6a82a29892390202540add868548a5c7d514dbd406991130f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b743f5037907f0f6a82a29892390202540add868548a5c7d514dbd406991130f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b743f5037907f0f6a82a29892390202540add868548a5c7d514dbd406991130f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:06 compute-0 podman[526738]: 2025-10-03 11:25:06.529738438 +0000 UTC m=+0.255237192 container init de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 11:25:06 compute-0 podman[526738]: 2025-10-03 11:25:06.544735049 +0000 UTC m=+0.270233803 container start de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 11:25:06 compute-0 podman[526738]: 2025-10-03 11:25:06.550883486 +0000 UTC m=+0.276382300 container attach de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 11:25:06 compute-0 nova_compute[351685]: 2025-10-03 11:25:06.625 2 INFO nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Creating config drive at /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.config#033[00m
Oct  3 11:25:06 compute-0 nova_compute[351685]: 2025-10-03 11:25:06.639 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt6v5k0lj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:06 compute-0 nova_compute[351685]: 2025-10-03 11:25:06.776 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt6v5k0lj" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:06 compute-0 nova_compute[351685]: 2025-10-03 11:25:06.827 2 DEBUG nova.storage.rbd_utils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] rbd image b5df7002-5185-4a75-ae2e-e8a44a0be062_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:06 compute-0 nova_compute[351685]: 2025-10-03 11:25:06.839 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.config b5df7002-5185-4a75-ae2e-e8a44a0be062_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.841316) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490706841366, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 371, "num_deletes": 251, "total_data_size": 169978, "memory_usage": 176984, "flush_reason": "Manual Compaction"}
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490706845167, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 168197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71288, "largest_seqno": 71658, "table_properties": {"data_size": 165889, "index_size": 407, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5970, "raw_average_key_size": 19, "raw_value_size": 161169, "raw_average_value_size": 514, "num_data_blocks": 18, "num_entries": 313, "num_filter_entries": 313, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490699, "oldest_key_time": 1759490699, "file_creation_time": 1759490706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 3881 microseconds, and 1131 cpu microseconds.
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.845198) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 168197 bytes OK
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.845213) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.847417) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.847432) EVENT_LOG_v1 {"time_micros": 1759490706847428, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.847446) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 167514, prev total WAL file size 167514, number of live WAL files 2.
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.847869) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(164KB)], [173(10MB)]
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490706847923, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 10866926, "oldest_snapshot_seqno": -1}
Oct  3 11:25:06 compute-0 nova_compute[351685]: 2025-10-03 11:25:06.875 2 DEBUG nova.compute.manager [req-6cb862cf-4510-416b-9bd7-4f0215cd696c req-6942a0b7-f85a-44a6-9cd2-e27b88747cd9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Received event network-changed-faf705ff-c202-4c38-82a6-3c53798c3d9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:25:06 compute-0 nova_compute[351685]: 2025-10-03 11:25:06.876 2 DEBUG nova.compute.manager [req-6cb862cf-4510-416b-9bd7-4f0215cd696c req-6942a0b7-f85a-44a6-9cd2-e27b88747cd9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Refreshing instance network info cache due to event network-changed-faf705ff-c202-4c38-82a6-3c53798c3d9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:25:06 compute-0 nova_compute[351685]: 2025-10-03 11:25:06.876 2 DEBUG oslo_concurrency.lockutils [req-6cb862cf-4510-416b-9bd7-4f0215cd696c req-6942a0b7-f85a-44a6-9cd2-e27b88747cd9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 7844 keys, 9127364 bytes, temperature: kUnknown
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490706916575, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 9127364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9080421, "index_size": 26222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 208196, "raw_average_key_size": 26, "raw_value_size": 8943424, "raw_average_value_size": 1140, "num_data_blocks": 1023, "num_entries": 7844, "num_filter_entries": 7844, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490706, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.916811) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 9127364 bytes
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.918890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.1 rd, 132.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.2 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(118.9) write-amplify(54.3) OK, records in: 8359, records dropped: 515 output_compression: NoCompression
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.918925) EVENT_LOG_v1 {"time_micros": 1759490706918912, "job": 108, "event": "compaction_finished", "compaction_time_micros": 68717, "compaction_time_cpu_micros": 28668, "output_level": 6, "num_output_files": 1, "total_output_size": 9127364, "num_input_records": 8359, "num_output_records": 7844, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490706919113, "job": 108, "event": "table_file_deletion", "file_number": 175}
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490706920910, "job": 108, "event": "table_file_deletion", "file_number": 173}
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.847775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.921099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.921104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.921105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.921107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:25:06 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:25:06.921109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
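The ceph-mon block above (jobs 107 and 108) is a routine RocksDB maintenance cycle on the monitor's store.db: a memtable flush to a Level-0 table, then a manual compaction of that L0 file together with the existing L6 file into a new L6 table, followed by deletion of the superseded WAL and SST files. The EVENT_LOG_v1 payloads are plain JSON, so the stats can be mined straight out of the log; a minimal sketch with field names taken from the lines above:

```python
# Sketch: pull the machine-readable EVENT_LOG_v1 payloads out of ceph-mon
# rocksdb lines like those above and summarize flush/compaction activity.
# Reads syslog-format text on stdin; field names match the logged JSON.
import json
import re
import sys

EVENT_RE = re.compile(r"EVENT_LOG_v1 ({.*})\s*$")

for line in sys.stdin:
    m = EVENT_RE.search(line)
    if not m:
        continue
    ev = json.loads(m.group(1))
    if ev.get("event") == "flush_finished":
        print(f"job {ev['job']}: flush finished, lsm_state={ev['lsm_state']}")
    elif ev.get("event") == "compaction_finished":
        print(f"job {ev['job']}: compacted {ev['num_input_records']} -> "
              f"{ev['num_output_records']} records in "
              f"{ev['compaction_time_micros']} us")
```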
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.131 2 DEBUG oslo_concurrency.processutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.config b5df7002-5185-4a75-ae2e-e8a44a0be062_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.132 2 INFO nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Deleting local config drive /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.config because it was imported into RBD.#033[00m
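Together with the mkisofs run at 11:25:06, the two lines above complete the config-drive round trip: build an ISO 9660 image labeled config-2 locally, rbd import it into the vms pool as <uuid>_disk.config, then delete the local copy. A standalone sketch of the same sequence, with both command lines lifted from the log (TMP_DIR stands in for Nova's temporary metadata tree, /tmp/tmpt6v5k0lj above; the publisher version suffix is omitted):

```python
# The config-drive round trip logged above: build the ISO with mkisofs,
# import it into the "vms" pool, then remove the local copy.
import os
import subprocess

INSTANCE = "b5df7002-5185-4a75-ae2e-e8a44a0be062"
ISO = f"/var/lib/nova/instances/{INSTANCE}/disk.config"
TMP_DIR = "/tmp/tmpt6v5k0lj"   # directory holding the openstack/ metadata tree

subprocess.run(
    ["/usr/bin/mkisofs", "-o", ISO, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
     "-quiet", "-J", "-r", "-V", "config-2", TMP_DIR],
    check=True,
)
subprocess.run(
    ["rbd", "import", "--pool", "vms", ISO, f"{INSTANCE}_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
os.unlink(ISO)   # "Deleting local config drive ... imported into RBD"
```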
Oct  3 11:25:07 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  3 11:25:07 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.204 2 DEBUG nova.network.neutron [req-5a441833-278a-40bb-8754-9010fa0d4fe2 req-579b96d3-81da-40c4-84ff-ab5ce9979023 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updated VIF entry in instance network info cache for port f7d0064f-83c7-44b3-839d-5811852ce687. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.205 2 DEBUG nova.network.neutron [req-5a441833-278a-40bb-8754-9010fa0d4fe2 req-579b96d3-81da-40c4-84ff-ab5ce9979023 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updating instance_info_cache with network_info: [{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.225 2 DEBUG oslo_concurrency.lockutils [req-5a441833-278a-40bb-8754-9010fa0d4fe2 req-579b96d3-81da-40c4-84ff-ab5ce9979023 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:07 compute-0 kernel: tapf7d0064f-83: entered promiscuous mode
Oct  3 11:25:07 compute-0 NetworkManager[45015]: <info>  [1759490707.2728] manager: (tapf7d0064f-83): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct  3 11:25:07 compute-0 ovn_controller[88471]: 2025-10-03T11:25:07Z|00069|binding|INFO|Claiming lport f7d0064f-83c7-44b3-839d-5811852ce687 for this chassis.
Oct  3 11:25:07 compute-0 ovn_controller[88471]: 2025-10-03T11:25:07Z|00070|binding|INFO|f7d0064f-83c7-44b3-839d-5811852ce687: Claiming fa:16:3e:6c:16:9e 10.100.0.12
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.285 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:16:9e 10.100.0.12'], port_security=['fa:16:3e:6c:16:9e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b5df7002-5185-4a75-ae2e-e8a44a0be062', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65d2d488-03e3-490e-9ad6-7948aea642e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f47db3919c4f3797a1434bfeebe880', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4a216c94-f665-4b11-9602-f5df2570ec89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b73237e9-0ef8-4014-9df4-22d8c589f3e2, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=f7d0064f-83c7-44b3-839d-5811852ce687) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.287 284328 INFO neutron.agent.ovn.metadata.agent [-] Port f7d0064f-83c7-44b3-839d-5811852ce687 in datapath 65d2d488-03e3-490e-9ad6-7948aea642e8 bound to our chassis#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.294 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65d2d488-03e3-490e-9ad6-7948aea642e8#033[00m
Oct  3 11:25:07 compute-0 ovn_controller[88471]: 2025-10-03T11:25:07Z|00071|binding|INFO|Setting lport f7d0064f-83c7-44b3-839d-5811852ce687 ovn-installed in OVS
Oct  3 11:25:07 compute-0 ovn_controller[88471]: 2025-10-03T11:25:07Z|00072|binding|INFO|Setting lport f7d0064f-83c7-44b3-839d-5811852ce687 up in Southbound
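The four ovn-controller messages show this chassis claiming logical port f7d0064f-… (MAC fa:16:3e:6c:16:9e, 10.100.0.12), marking it ovn-installed in OVS, and flipping it up in the Southbound database; that up transition is what ultimately produces the network-vif-plugged event Nova registered for earlier. The binding can be inspected directly with standard ovn-sbctl database commands; a small sketch:

```python
# Sketch: confirm the Southbound port binding that ovn-controller reports
# above. "ovn-sbctl find Port_Binding logical_port=..." is standard
# ovn-sbctl usage; run it wherever the SB database is reachable.
import subprocess

LPORT = "f7d0064f-83c7-44b3-839d-5811852ce687"
out = subprocess.run(
    ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
    capture_output=True, text=True, check=True,
).stdout
print(out)   # chassis, mac, up=[true] once the claim has settled
```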
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.310 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7115536f-3e1a-40bd-abbc-22e54ecc9eb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.311 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65d2d488-01 in ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.315 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65d2d488-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:07 compute-0 systemd-udevd[526829]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.316 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0999f8-8176-4178-9417-1ef54131b4c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.319 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[bb213be4-f50e-4734-b5d7-f092f3935d17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 systemd-machined[137653]: New machine qemu-7-instance-00000007.
Oct  3 11:25:07 compute-0 NetworkManager[45015]: <info>  [1759490707.3482] device (tapf7d0064f-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.347 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[934a9181-c710-4f72-b201-148d74127145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 NetworkManager[45015]: <info>  [1759490707.3503] device (tapf7d0064f-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:25:07 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.375 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2a9bc573-0e94-476e-8086-b9d0c0a537da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.406 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[e31ed9d5-00b7-4faa-85dc-fa9bc601f3eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 systemd-udevd[526836]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.415 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f72bc682-9ba1-437f-ab93-31f0a17ce381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 NetworkManager[45015]: <info>  [1759490707.4169] manager: (tap65d2d488-00): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.459 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[7533997d-9307-4f93-8a49-74aeb44c8e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.463 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[80c357b0-812c-4ea9-8b8c-56a4d612ee5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 NetworkManager[45015]: <info>  [1759490707.4863] device (tap65d2d488-00): carrier: link connected
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.491 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b61bfb-ab89-4f12-abfb-37e4119ce5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.510 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[809d461b-35e7-4e38-9159-1128e2e77890]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65d2d488-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:ca:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 989008, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 526877, 'error': None, 'target': 'ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.526 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[6422bb4b-5405-4a76-9ce4-deede9587f6f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:cae5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 989008, 'tstamp': 989008}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 526879, 'error': None, 'target': 'ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.545 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[ea05971f-7655-4c46-b129-b82d606aa8ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65d2d488-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:ca:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 989008, 'reachable_time': 21678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 526883, 'error': None, 'target': 'ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
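The RTM_NEWLINK replies above are pyroute2 netlink messages that the privsep daemon fetched from inside the ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8 namespace (see the 'target' field in each message header). A minimal sketch of reading the same attributes directly with pyroute2, assuming the namespace and tap device from the log still exist and the caller has root:

    # Dump link state for the tap device inside the OVN metadata
    # namespace, mirroring the privsep replies logged above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8') as ns:
        idx = ns.link_lookup(ifname='tap65d2d488-01')[0]
        (msg,) = ns.link('get', index=idx)
        print(msg.get_attr('IFLA_IFNAME'),     # 'tap65d2d488-01'
              msg.get_attr('IFLA_ADDRESS'),    # 'fa:16:3e:2d:ca:e5'
              msg.get_attr('IFLA_OPERSTATE'),  # 'UP'
              msg.get_attr('IFLA_MTU'))        # 1500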
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.553 2 DEBUG nova.network.neutron [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Updating instance_info_cache with network_info: [{"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.576 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[62ac11d2-f763-49f3-b0e8-7e54d0cca775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.577 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Releasing lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.577 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Instance network_info: |[{"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
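The network_info blob nova logs twice above is plain JSON; a short sketch (trimmed to the fields it uses) of pulling the fixed IP, device name, and MTU out of such an entry:

    import json

    # Trimmed copy of the network_info entry logged above; only the
    # fields used below are kept.
    raw = '''[{"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f",
               "devname": "tapfaf705ff-c2",
               "network": {"meta": {"mtu": 1442},
                           "subnets": [{"ips": [{"address": "10.100.0.7",
                                                 "type": "fixed"}]}]}}]'''
    vif = json.loads(raw)[0]
    fixed = [ip['address']
             for subnet in vif['network']['subnets']
             for ip in subnet['ips'] if ip['type'] == 'fixed']
    print(vif['devname'], fixed, vif['network']['meta']['mtu'])
    # -> tapfaf705ff-c2 ['10.100.0.7'] 1442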
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.578 2 DEBUG oslo_concurrency.lockutils [req-6cb862cf-4510-416b-9bd7-4f0215cd696c req-6942a0b7-f85a-44a6-9cd2-e27b88747cd9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.578 2 DEBUG nova.network.neutron [req-6cb862cf-4510-416b-9bd7-4f0215cd696c req-6942a0b7-f85a-44a6-9cd2-e27b88747cd9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Refreshing network info cache for port faf705ff-c202-4c38-82a6-3c53798c3d9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.580 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Start _get_guest_xml network_info=[{"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.587 2 WARNING nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.595 2 DEBUG nova.virt.libvirt.host [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.596 2 DEBUG nova.virt.libvirt.host [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.601 2 DEBUG nova.virt.libvirt.host [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.602 2 DEBUG nova.virt.libvirt.host [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.602 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.602 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.602 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.603 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.603 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.603 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.603 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.603 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.603 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.604 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.604 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.604 2 DEBUG nova.virt.hardware [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
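The topology search logged above reduces to enumerating (sockets, cores, threads) triples whose product equals the vCPU count, bounded by the 65536-per-dimension limits. A simplified reconstruction (a sketch of the logic, not nova's actual code) that reproduces the single result for 1 vCPU:

    # Enumerate CPU topologies whose product equals the vCPU count,
    # within per-dimension limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"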
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.606 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
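nova shells out to the ceph CLI here rather than calling librados directly; a sketch of running the same command and reading the monitor list out of its JSON output (assumes the client.openstack keyring and /etc/ceph/ceph.conf from the log are in place):

    import json, subprocess

    # Same mon dump nova issues above; 'mons' is the list of monitor
    # entries in the JSON output.
    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon.get('public_addr') or mon.get('addr'))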
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.651 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[928122a3-5757-4b55-a911-1e2af93b504a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.656 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65d2d488-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.657 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.658 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65d2d488-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:07 compute-0 kernel: tap65d2d488-00: entered promiscuous mode
Oct  3 11:25:07 compute-0 NetworkManager[45015]: <info>  [1759490707.6617] manager: (tap65d2d488-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.664 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65d2d488-00, col_values=(('external_ids', {'iface-id': '9360fd43-509e-48cf-868f-65a2768ca52b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
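The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto ovs-vsctl operations; a sketch of the equivalents, with the port, bridges, and iface-id copied from the log:

    import subprocess

    # if_exists=True / may_exist=True correspond to the ovs-vsctl
    # --if-exists / --may-exist flags.
    port = 'tap65d2d488-00'
    iface_id = '9360fd43-509e-48cf-868f-65a2768ca52b'
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],
                   check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],
                   check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', port,
                    'external_ids:iface-id=' + iface_id], check=True)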
Oct  3 11:25:07 compute-0 ovn_controller[88471]: 2025-10-03T11:25:07Z|00073|binding|INFO|Releasing lport 9360fd43-509e-48cf-868f-65a2768ca52b from this chassis (sb_readonly=0)
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:07 compute-0 nova_compute[351685]: 2025-10-03 11:25:07.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.680 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65d2d488-03e3-490e-9ad6-7948aea642e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65d2d488-03e3-490e-9ad6-7948aea642e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.681 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[11c2eb5f-fec9-4cb2-84c5-0be4ac83b187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.682 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-65d2d488-03e3-490e-9ad6-7948aea642e8
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/65d2d488-03e3-490e-9ad6-7948aea642e8.pid.haproxy
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 65d2d488-03e3-490e-9ad6-7948aea642e8
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 11:25:07 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:07.682 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8', 'env', 'PROCESS_TAG=haproxy-65d2d488-03e3-490e-9ad6-7948aea642e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65d2d488-03e3-490e-9ad6-7948aea642e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
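The agent renders the haproxy_cfg shown above to /var/lib/neutron/ovn-metadata-proxy/<network-uuid>.conf and then launches haproxy against it inside the namespace via rootwrap. The agent does not run a separate syntax check, but the rendered file can be validated by hand with haproxy's check-only mode; a sketch:

    import subprocess

    # haproxy -c parses the configuration and exits non-zero on errors.
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '65d2d488-03e3-490e-9ad6-7948aea642e8.conf')
    result = subprocess.run(['haproxy', '-c', '-f', cfg],
                            capture_output=True, text=True)
    print(result.returncode, result.stderr.strip())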
Oct  3 11:25:07 compute-0 strange_swartz[526754]: {
Oct  3 11:25:07 compute-0 strange_swartz[526754]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "osd_id": 1,
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "type": "bluestore"
Oct  3 11:25:07 compute-0 strange_swartz[526754]:    },
Oct  3 11:25:07 compute-0 strange_swartz[526754]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "osd_id": 2,
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "type": "bluestore"
Oct  3 11:25:07 compute-0 strange_swartz[526754]:    },
Oct  3 11:25:07 compute-0 strange_swartz[526754]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "osd_id": 0,
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:25:07 compute-0 strange_swartz[526754]:        "type": "bluestore"
Oct  3 11:25:07 compute-0 strange_swartz[526754]:    }
Oct  3 11:25:07 compute-0 strange_swartz[526754]: }
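The container output above is a JSON object keyed by OSD UUID (it matches the shape of ceph-volume's JSON listings). A sketch folding it into an osd_id -> device table, with the values trimmed from the log:

    import json

    raw_json = '''{
      "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {"osd_id": 0,
          "device": "/dev/mapper/ceph_vg0-ceph_lv0", "type": "bluestore"},
      "16cef594-0067-4499-9298-5d83edf70190": {"osd_id": 1,
          "device": "/dev/mapper/ceph_vg1-ceph_lv1", "type": "bluestore"},
      "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {"osd_id": 2,
          "device": "/dev/mapper/ceph_vg2-ceph_lv2", "type": "bluestore"}}'''
    for uuid, info in sorted(json.loads(raw_json).items(),
                             key=lambda kv: kv[1]['osd_id']):
        print(info['osd_id'], info['device'], info['type'])
    # 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore
    # 1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore
    # 2 /dev/mapper/ceph_vg2-ceph_lv2 bluestore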
Oct  3 11:25:07 compute-0 systemd[1]: libpod-de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093.scope: Deactivated successfully.
Oct  3 11:25:07 compute-0 systemd[1]: libpod-de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093.scope: Consumed 1.093s CPU time.
Oct  3 11:25:07 compute-0 podman[526738]: 2025-10-03 11:25:07.720215862 +0000 UTC m=+1.445714576 container died de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:25:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 211 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 142 op/s
Oct  3 11:25:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b743f5037907f0f6a82a29892390202540add868548a5c7d514dbd406991130f-merged.mount: Deactivated successfully.
Oct  3 11:25:07 compute-0 podman[526738]: 2025-10-03 11:25:07.78846407 +0000 UTC m=+1.513962784 container remove de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_swartz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 11:25:07 compute-0 systemd[1]: libpod-conmon-de410b802135e8c84b9c32f6d680493a30756279aa4ae7188218f5a17e841093.scope: Deactivated successfully.
Oct  3 11:25:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:25:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:25:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:25:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:25:07 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 09ef08f7-51de-49b3-bfb0-54677055bdce does not exist
Oct  3 11:25:07 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 218ed2f4-57cf-4ea7-b5cf-91642ad2c2bd does not exist
Oct  3 11:25:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:25:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1841389500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:25:08 compute-0 podman[527048]: 2025-10-03 11:25:08.143733336 +0000 UTC m=+0.074721817 container create 2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.143 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.186 2 DEBUG nova.storage.rbd_utils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] rbd image 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:08 compute-0 systemd[1]: Started libpod-conmon-2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d.scope.
Oct  3 11:25:08 compute-0 podman[527048]: 2025-10-03 11:25:08.097679869 +0000 UTC m=+0.028668350 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.195 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct  3 11:25:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct  3 11:25:08 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct  3 11:25:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed236628f85ac2baaaf6c7cf4a0a07b32e8f0472a23f78f03699e84fb302de2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:08 compute-0 podman[527048]: 2025-10-03 11:25:08.24373329 +0000 UTC m=+0.174721771 container init 2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:25:08 compute-0 podman[527048]: 2025-10-03 11:25:08.250993033 +0000 UTC m=+0.181981514 container start 2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:25:08 compute-0 neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8[527083]: [NOTICE]   (527088) : New worker (527090) forked
Oct  3 11:25:08 compute-0 neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8[527083]: [NOTICE]   (527088) : Loading success.
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.466 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490708.4657004, b5df7002-5185-4a75-ae2e-e8a44a0be062 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.467 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] VM Started (Lifecycle Event)#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.496 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.502 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490708.4658656, b5df7002-5185-4a75-ae2e-e8a44a0be062 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.502 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] VM Paused (Lifecycle Event)#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.523 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.534 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.556 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
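The numeric states in the sync message above ("current DB power_state: 0, VM power_state: 3") follow nova.compute.power_state; a lookup table for reading such lines:

    # Constants as defined in nova/compute/power_state.py.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    print(POWER_STATE[0], '->', POWER_STATE[3])  # NOSTATE -> PAUSED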
Oct  3 11:25:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:25:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004433522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.713 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.715 2 DEBUG nova.virt.libvirt.vif [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1573234839',display_name='tempest-ServersTestJSON-server-1573234839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1573234839',id=8,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE9cfANkTk3y7G8mPZNDViR+daYFxmpjiC6kgwPI/VheqoR8mguWkKNfVCD46+QxpoO1I+EG/S2uMwZbuSi3ZrKo974t/IcFL5qJJ5/8ATBRCKJUGqTxcktbi+3NAoZiA==',key_name='tempest-keypair-611544652',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6e74ba7072448fdb098db5317752362',ramdisk_id='',reservation_id='r-bcvyroax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-315955129',owner_user_name='tempest-ServersTestJSON-315955129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:25:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='13ea5fe65c674a40a8a29b240a1a5e6d',uuid=6ca9e72e-4023-411a-93fb-b137c664f8f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.715 2 DEBUG nova.network.os_vif_util [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Converting VIF {"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.717 2 DEBUG nova.network.os_vif_util [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:5b:59,bridge_name='br-int',has_traffic_filtering=True,id=faf705ff-c202-4c38-82a6-3c53798c3d9f,network=Network(96ea37f7-5146-4b15-96c2-447657e358e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf705ff-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.719 2 DEBUG nova.objects.instance [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ca9e72e-4023-411a-93fb-b137c664f8f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.749 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <uuid>6ca9e72e-4023-411a-93fb-b137c664f8f2</uuid>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <name>instance-00000008</name>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <nova:name>tempest-ServersTestJSON-server-1573234839</nova:name>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:25:07</nova:creationTime>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <nova:user uuid="13ea5fe65c674a40a8a29b240a1a5e6d">tempest-ServersTestJSON-315955129-project-member</nova:user>
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <nova:project uuid="d6e74ba7072448fdb098db5317752362">tempest-ServersTestJSON-315955129</nova:project>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <nova:port uuid="faf705ff-c202-4c38-82a6-3c53798c3d9f">
Oct  3 11:25:08 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <system>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <entry name="serial">6ca9e72e-4023-411a-93fb-b137c664f8f2</entry>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <entry name="uuid">6ca9e72e-4023-411a-93fb-b137c664f8f2</entry>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </system>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <os>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  </os>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <features>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  </features>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/6ca9e72e-4023-411a-93fb-b137c664f8f2_disk">
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      </source>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/6ca9e72e-4023-411a-93fb-b137c664f8f2_disk.config">
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      </source>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:25:08 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:f5:5b:59"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <target dev="tapfaf705ff-c2"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2/console.log" append="off"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <video>
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </video>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:25:08 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:25:08 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:25:08 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:25:08 compute-0 nova_compute[351685]: </domain>
Oct  3 11:25:08 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
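The <domain> document above, emitted by _get_guest_xml, is the guest definition nova hands to libvirt: RBD-backed virtio root disk, SATA config-drive cdrom, an OVS-backed virtio NIC with MTU 1442, and a pcie-root bus with pre-allocated root ports for hotplug. A minimal sketch of reading the same definition back through the libvirt Python bindings; the URI and domain name are assumptions (the name follows nova's instance-%08x convention, matching the qemu-8-instance-00000008 machine started further down):

    # Sketch: fetch a guest's live XML with libvirt-python. The connection
    # URI and domain name are illustrative, not taken from nova's code.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000008")
    print(dom.XMLDesc())  # same <domain> shape as the XML logged above
    conn.close()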
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.749 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Preparing to wait for external event network-vif-plugged-faf705ff-c202-4c38-82a6-3c53798c3d9f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.750 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.750 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.751 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
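The three lockutils lines above are the standard oslo.concurrency pattern nova uses to serialize its per-instance event table: take a named lock, run _create_or_get_event, release. A minimal sketch of the same primitive, with the lock name copied from the log and the guarded dict purely illustrative:

    # Sketch of the oslo.concurrency named-lock pattern seen above.
    from oslo_concurrency import lockutils

    pending_events = {}

    with lockutils.lock("6ca9e72e-4023-411a-93fb-b137c664f8f2-events"):
        # critical section: create or fetch the pending event entry
        pending_events.setdefault(
            "network-vif-plugged-faf705ff-c202-4c38-82a6-3c53798c3d9f", [])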
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.752 2 DEBUG nova.virt.libvirt.vif [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1573234839',display_name='tempest-ServersTestJSON-server-1573234839',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1573234839',id=8,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE9cfANkTk3y7G8mPZNDViR+daYFxmpjiC6kgwPI/VheqoR8mguWkKNfVCD46+QxpoO1I+EG/S2uMwZbuSi3ZrKo974t/IcFL5qJJ5/8ATBRCKJUGqTxcktbi+3NAoZiA==',key_name='tempest-keypair-611544652',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6e74ba7072448fdb098db5317752362',ramdisk_id='',reservation_id='r-bcvyroax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-315955129',owner_user_name='tempest-ServersTestJSON-315955129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:25:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='13ea5fe65c674a40a8a29b240a1a5e6d',uuid=6ca9e72e-4023-411a-93fb-b137c664f8f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.752 2 DEBUG nova.network.os_vif_util [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Converting VIF {"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.753 2 DEBUG nova.network.os_vif_util [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:5b:59,bridge_name='br-int',has_traffic_filtering=True,id=faf705ff-c202-4c38-82a6-3c53798c3d9f,network=Network(96ea37f7-5146-4b15-96c2-447657e358e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf705ff-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.754 2 DEBUG os_vif [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:5b:59,bridge_name='br-int',has_traffic_filtering=True,id=faf705ff-c202-4c38-82a6-3c53798c3d9f,network=Network(96ea37f7-5146-4b15-96c2-447657e358e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf705ff-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.755 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaf705ff-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.762 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaf705ff-c2, col_values=(('external_ids', {'iface-id': 'faf705ff-c202-4c38-82a6-3c53798c3d9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:5b:59', 'vm-uuid': '6ca9e72e-4023-411a-93fb-b137c664f8f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
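The two transactions above are ovsdbapp's idempotent sequence for wiring the port: ensure br-int exists (AddBridgeCommand, a no-op here), attach the tap interface (AddPortCommand), and stamp the Interface row with the neutron port id and MAC (DbSetCommand) so ovn-controller can bind it. A sketch of issuing the same commands directly; the OVSDB socket path and timeout are assumptions:

    # Sketch of the ovsdbapp transactions logged above (socket path assumed).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapfaf705ff-c2", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapfaf705ff-c2",
            ("external_ids", {
                "iface-id": "faf705ff-c202-4c38-82a6-3c53798c3d9f",
                "attached-mac": "fa:16:3e:f5:5b:59"})))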
Oct  3 11:25:08 compute-0 NetworkManager[45015]: <info>  [1759490708.7653] manager: (tapfaf705ff-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.778 2 INFO os_vif [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:5b:59,bridge_name='br-int',has_traffic_filtering=True,id=faf705ff-c202-4c38-82a6-3c53798c3d9f,network=Network(96ea37f7-5146-4b15-96c2-447657e358e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf705ff-c2')#033[00m
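The sequence from 11:25:08.752 to 11:25:08.778 is the full os-vif round trip: nova's VIF dict is converted to a VIFOpenVSwitch object (nova_to_osvif_vif) and handed to the ovs plugin, which ran the OVSDB transactions above. A heavily abridged sketch of those public entry points; a real plug needs the complete object graph (subnets, port_profile, and so on), so treat the field list below as illustrative:

    # Abridged sketch of the os-vif plug path (field values from the log;
    # a real call needs the full VIF/Network/port_profile object graph).
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the registered plugins, e.g. 'ovs'
    net = network.Network(
        id="96ea37f7-5146-4b15-96c2-447657e358e8", bridge="br-int")
    v = vif.VIFOpenVSwitch(
        id="faf705ff-c202-4c38-82a6-3c53798c3d9f",
        address="fa:16:3e:f5:5b:59",
        bridge_name="br-int", vif_name="tapfaf705ff-c2", network=net)
    info = instance_info.InstanceInfo(
        uuid="6ca9e72e-4023-411a-93fb-b137c664f8f2",
        name="instance-00000008")
    os_vif.plug(v, info)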
Oct  3 11:25:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:25:08 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.839 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.840 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.840 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] No VIF found with MAC fa:16:3e:f5:5b:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.841 2 INFO nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Using config drive#033[00m
Oct  3 11:25:08 compute-0 nova_compute[351685]: 2025-10-03 11:25:08.945 2 DEBUG nova.storage.rbd_utils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] rbd image 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:09 compute-0 nova_compute[351685]: 2025-10-03 11:25:09.670 2 INFO nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Creating config drive at /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2/disk.config#033[00m
Oct  3 11:25:09 compute-0 nova_compute[351685]: 2025-10-03 11:25:09.677 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpemrhwb3c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3539: 321 pgs: 321 active+clean; 211 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 151 op/s
Oct  3 11:25:09 compute-0 nova_compute[351685]: 2025-10-03 11:25:09.811 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpemrhwb3c" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:09 compute-0 nova_compute[351685]: 2025-10-03 11:25:09.866 2 DEBUG nova.storage.rbd_utils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] rbd image 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:09 compute-0 nova_compute[351685]: 2025-10-03 11:25:09.886 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2/disk.config 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:09 compute-0 nova_compute[351685]: 2025-10-03 11:25:09.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:10 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.067 2 DEBUG nova.network.neutron [req-6cb862cf-4510-416b-9bd7-4f0215cd696c req-6942a0b7-f85a-44a6-9cd2-e27b88747cd9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Updated VIF entry in instance network info cache for port faf705ff-c202-4c38-82a6-3c53798c3d9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.068 2 DEBUG nova.network.neutron [req-6cb862cf-4510-416b-9bd7-4f0215cd696c req-6942a0b7-f85a-44a6-9cd2-e27b88747cd9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Updating instance_info_cache with network_info: [{"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:10 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.089 2 DEBUG oslo_concurrency.lockutils [req-6cb862cf-4510-416b-9bd7-4f0215cd696c req-6942a0b7-f85a-44a6-9cd2-e27b88747cd9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.171 2 DEBUG oslo_concurrency.processutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2/disk.config 6ca9e72e-4023-411a-93fb-b137c664f8f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.172 2 INFO nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Deleting local config drive /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2/disk.config because it was imported into RBD.#033[00m
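Taken together, the config-drive steps above reduce to: build an ISO9660 image with the config-2 label from a temporary metadata tree, import it into the Ceph vms pool as <uuid>_disk.config (the SATA cdrom in the domain XML), then delete the local copy. A condensed sketch of that flow using the same commands the log shows; error handling and the contents of the temporary tree are omitted:

    # Condensed sketch of the config-drive flow logged above.
    import os
    import subprocess

    inst = "6ca9e72e-4023-411a-93fb-b137c664f8f2"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    subprocess.run(
        ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
         "/tmp/tmpemrhwb3c"],  # nova's temporary metadata tree
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(iso)  # the local copy is redundant once imported into RBD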
Oct  3 11:25:10 compute-0 kernel: tapfaf705ff-c2: entered promiscuous mode
Oct  3 11:25:10 compute-0 NetworkManager[45015]: <info>  [1759490710.2325] manager: (tapfaf705ff-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:10 compute-0 systemd-udevd[526857]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:25:10 compute-0 ovn_controller[88471]: 2025-10-03T11:25:10Z|00074|binding|INFO|Claiming lport faf705ff-c202-4c38-82a6-3c53798c3d9f for this chassis.
Oct  3 11:25:10 compute-0 ovn_controller[88471]: 2025-10-03T11:25:10Z|00075|binding|INFO|faf705ff-c202-4c38-82a6-3c53798c3d9f: Claiming fa:16:3e:f5:5b:59 10.100.0.7
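ovn-controller has now matched the iface-id written into OVSDB against the southbound Port_Binding table and claims the logical port for this chassis. A quick way to verify the claim from the node, assuming ovn-sbctl is installed and pointed at the southbound DB:

    # Sketch: confirm the chassis claim by querying the OVN southbound DB.
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=faf705ff-c202-4c38-82a6-3c53798c3d9f"],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # the chassis column should now reference compute-0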
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.245 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:5b:59 10.100.0.7'], port_security=['fa:16:3e:f5:5b:59 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6ca9e72e-4023-411a-93fb-b137c664f8f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96ea37f7-5146-4b15-96c2-447657e358e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6e74ba7072448fdb098db5317752362', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff65c076-028a-4a74-8b1b-4045c124c8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d97a5161-396b-4982-a5f1-55914ed3c579, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=faf705ff-c202-4c38-82a6-3c53798c3d9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.248 284328 INFO neutron.agent.ovn.metadata.agent [-] Port faf705ff-c202-4c38-82a6-3c53798c3d9f in datapath 96ea37f7-5146-4b15-96c2-447657e358e8 bound to our chassis#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.251 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96ea37f7-5146-4b15-96c2-447657e358e8#033[00m
Oct  3 11:25:10 compute-0 NetworkManager[45015]: <info>  [1759490710.2562] device (tapfaf705ff-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:25:10 compute-0 NetworkManager[45015]: <info>  [1759490710.2578] device (tapfaf705ff-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:25:10 compute-0 ovn_controller[88471]: 2025-10-03T11:25:10Z|00076|binding|INFO|Setting lport faf705ff-c202-4c38-82a6-3c53798c3d9f ovn-installed in OVS
Oct  3 11:25:10 compute-0 ovn_controller[88471]: 2025-10-03T11:25:10Z|00077|binding|INFO|Setting lport faf705ff-c202-4c38-82a6-3c53798c3d9f up in Southbound
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.266 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[ab472c3a-d7d6-4e6e-a6c8-f04e9704f052]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.267 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96ea37f7-51 in ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.269 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96ea37f7-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.269 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[81b1bd40-d0e2-4abb-9565-2eea4f4aea65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.272 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[589273de-4ccc-4005-98d9-e200a78bb1dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
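The privsep replies above correspond to building the metadata-namespace plumbing: a VETH pair whose inner end (tap96ea37f7-51) lives in ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8 and whose outer end (tap96ea37f7-50) stays in the root namespace, to be plugged into br-int a few records below. A rough pyroute2 equivalent; neutron itself performs these steps through its privileged ip_lib wrappers rather than this direct form:

    # Rough pyroute2 equivalent of the VETH-into-namespace step above.
    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8"
    if ns not in netns.listnetns():
        netns.create(ns)

    ip = IPRoute()
    ip.link("add", ifname="tap96ea37f7-50", kind="veth",
            peer={"ifname": "tap96ea37f7-51", "net_ns_fd": ns})
    ip.link("set", index=ip.link_lookup(ifname="tap96ea37f7-50")[0],
            state="up")
    ip.close()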
Oct  3 11:25:10 compute-0 systemd-machined[137653]: New machine qemu-8-instance-00000008.
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.306 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[c648c3a8-0fdd-4d5c-90cf-9dcac3711179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.332 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b47ec229-8427-4625-82cb-dc7f4f7f48b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.373 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2317ac-e482-489b-9781-f89b9d955916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 NetworkManager[45015]: <info>  [1759490710.3803] manager: (tap96ea37f7-50): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.379 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[4659f456-4659-494b-a4d1-f2a686db6918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.429 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[c494c8db-3454-40bc-b6cc-6bbd25dca4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 systemd-udevd[527226]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.433 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf578bb-ab55-4db9-b581-af3283533d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 NetworkManager[45015]: <info>  [1759490710.4681] device (tap96ea37f7-50): carrier: link connected
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.473 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[8df66929-ef1e-4f89-8384-bdfa380a3ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.489 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[1b458054-b91b-49b9-8803-67a440ef23f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96ea37f7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:46:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 989306, 'reachable_time': 15591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 527246, 'error': None, 'target': 'ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.506 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[60bc3e2c-c685-4c5c-a43f-f82a906cb6d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:46b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 989306, 'tstamp': 989306}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 527247, 'error': None, 'target': 'ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.523 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6d6d69-719d-4a8a-aef1-bc8356f77be6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96ea37f7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:46:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 989306, 'reachable_time': 15591, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 527248, 'error': None, 'target': 'ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.558 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[ef66973f-eaf9-4343-aff1-277df2f14b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.624 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f93e5e8d-7596-4072-ade2-b98cbb0cb862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.625 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96ea37f7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.626 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.626 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96ea37f7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:10 compute-0 NetworkManager[45015]: <info>  [1759490710.6296] manager: (tap96ea37f7-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct  3 11:25:10 compute-0 kernel: tap96ea37f7-50: entered promiscuous mode
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.633 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96ea37f7-50, col_values=(('external_ids', {'iface-id': '257cb087-2d3d-4c88-8d82-f8526e49a12f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:10 compute-0 ovn_controller[88471]: 2025-10-03T11:25:10Z|00078|binding|INFO|Releasing lport 257cb087-2d3d-4c88-8d82-f8526e49a12f from this chassis (sb_readonly=0)
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.637 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96ea37f7-5146-4b15-96c2-447657e358e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96ea37f7-5146-4b15-96c2-447657e358e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.639 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7f2ecbf7-467a-4ea9-af97-1a00eda844ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.640 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-96ea37f7-5146-4b15-96c2-447657e358e8
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/96ea37f7-5146-4b15-96c2-447657e358e8.pid.haproxy
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 96ea37f7-5146-4b15-96c2-447657e358e8
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 11:25:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:10.640 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8', 'env', 'PROCESS_TAG=haproxy-96ea37f7-5146-4b15-96c2-447657e358e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96ea37f7-5146-4b15-96c2-447657e358e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
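The haproxy instance launched above runs inside the ovnmeta namespace, binds the link-local 169.254.169.254:80 per the config just dumped, injects X-OVN-Network-ID so the agent can resolve the caller's network, and forwards to the unix socket at /var/lib/neutron/metadata_proxy. From inside the guest the service is reached at the standard metadata endpoint, e.g.:

    # Guest-side view of the proxy (standard OpenStack metadata endpoint;
    # run from inside the instance once networking is up).
    import urllib.request

    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode())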
Oct  3 11:25:10 compute-0 nova_compute[351685]: 2025-10-03 11:25:10.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:11 compute-0 podman[527280]: 2025-10-03 11:25:11.148168216 +0000 UTC m=+0.091973190 container create 38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:25:11 compute-0 podman[527280]: 2025-10-03 11:25:11.101570802 +0000 UTC m=+0.045375796 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:25:11 compute-0 systemd[1]: Started libpod-conmon-38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f.scope.
Oct  3 11:25:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a94d02b9e39b5af18726c4902d0697429b05713cd4b278efa8ed8220893512/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:11 compute-0 podman[527280]: 2025-10-03 11:25:11.265663801 +0000 UTC m=+0.209468805 container init 38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:25:11 compute-0 podman[527280]: 2025-10-03 11:25:11.275440594 +0000 UTC m=+0.219245578 container start 38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:25:11 compute-0 neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8[527295]: [NOTICE]   (527299) : New worker (527301) forked
Oct  3 11:25:11 compute-0 neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8[527295]: [NOTICE]   (527299) : Loading success.
Oct  3 11:25:11 compute-0 ovn_controller[88471]: 2025-10-03T11:25:11Z|00079|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:25:11 compute-0 ovn_controller[88471]: 2025-10-03T11:25:11Z|00080|binding|INFO|Releasing lport 9360fd43-509e-48cf-868f-65a2768ca52b from this chassis (sb_readonly=0)
Oct  3 11:25:11 compute-0 ovn_controller[88471]: 2025-10-03T11:25:11Z|00081|binding|INFO|Releasing lport 257cb087-2d3d-4c88-8d82-f8526e49a12f from this chassis (sb_readonly=0)
Oct  3 11:25:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 211 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct  3 11:25:11 compute-0 nova_compute[351685]: 2025-10-03 11:25:11.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.161 2 DEBUG nova.compute.manager [req-868a7627-a6f9-432f-a3e4-45a3ce59d6f8 req-c8ea5f81-9709-403a-9bdf-c8f19441ada6 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.162 2 DEBUG oslo_concurrency.lockutils [req-868a7627-a6f9-432f-a3e4-45a3ce59d6f8 req-c8ea5f81-9709-403a-9bdf-c8f19441ada6 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.162 2 DEBUG oslo_concurrency.lockutils [req-868a7627-a6f9-432f-a3e4-45a3ce59d6f8 req-c8ea5f81-9709-403a-9bdf-c8f19441ada6 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.163 2 DEBUG oslo_concurrency.lockutils [req-868a7627-a6f9-432f-a3e4-45a3ce59d6f8 req-c8ea5f81-9709-403a-9bdf-c8f19441ada6 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.163 2 DEBUG nova.compute.manager [req-868a7627-a6f9-432f-a3e4-45a3ce59d6f8 req-c8ea5f81-9709-403a-9bdf-c8f19441ada6 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Processing event network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.165 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.171 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490712.1709101, b5df7002-5185-4a75-ae2e-e8a44a0be062 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.171 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] VM Resumed (Lifecycle Event)
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.175 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.183 2 INFO nova.virt.libvirt.driver [-] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Instance spawned successfully.
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.184 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.187 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.194 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.206 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.207 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.208 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.208 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.209 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.209 2 DEBUG nova.virt.libvirt.driver [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.213 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.265 2 INFO nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Took 12.11 seconds to spawn the instance on the hypervisor.
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.266 2 DEBUG nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.369 2 INFO nova.compute.manager [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Took 13.24 seconds to build instance.
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.384 2 DEBUG oslo_concurrency.lockutils [None req-6b68ca76-0188-4490-8143-c6cb00ae4123 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.504 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490712.5041728, 6ca9e72e-4023-411a-93fb-b137c664f8f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.505 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] VM Started (Lifecycle Event)
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.522 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.530 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490712.5043046, 6ca9e72e-4023-411a-93fb-b137c664f8f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.530 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] VM Paused (Lifecycle Event)
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.553 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.558 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  3 11:25:12 compute-0 nova_compute[351685]: 2025-10-03 11:25:12.576 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  3 11:25:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 211 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 90 op/s
Oct  3 11:25:13 compute-0 nova_compute[351685]: 2025-10-03 11:25:13.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:14 compute-0 nova_compute[351685]: 2025-10-03 11:25:14.571 2 DEBUG nova.compute.manager [req-0d76b735-1a74-4ba3-aec7-d85c8b48ba53 req-e170f3b7-169c-42d7-b53c-0139365a8e32 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:14 compute-0 nova_compute[351685]: 2025-10-03 11:25:14.571 2 DEBUG oslo_concurrency.lockutils [req-0d76b735-1a74-4ba3-aec7-d85c8b48ba53 req-e170f3b7-169c-42d7-b53c-0139365a8e32 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:14 compute-0 nova_compute[351685]: 2025-10-03 11:25:14.572 2 DEBUG oslo_concurrency.lockutils [req-0d76b735-1a74-4ba3-aec7-d85c8b48ba53 req-e170f3b7-169c-42d7-b53c-0139365a8e32 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:14 compute-0 nova_compute[351685]: 2025-10-03 11:25:14.572 2 DEBUG oslo_concurrency.lockutils [req-0d76b735-1a74-4ba3-aec7-d85c8b48ba53 req-e170f3b7-169c-42d7-b53c-0139365a8e32 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:14 compute-0 nova_compute[351685]: 2025-10-03 11:25:14.573 2 DEBUG nova.compute.manager [req-0d76b735-1a74-4ba3-aec7-d85c8b48ba53 req-e170f3b7-169c-42d7-b53c-0139365a8e32 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] No waiting events found dispatching network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  3 11:25:14 compute-0 nova_compute[351685]: 2025-10-03 11:25:14.573 2 WARNING nova.compute.manager [req-0d76b735-1a74-4ba3-aec7-d85c8b48ba53 req-e170f3b7-169c-42d7-b53c-0139365a8e32 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received unexpected event network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 for instance with vm_state active and task_state None.
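The WARNING above reflects a benign race rather than a failure: the waiter for network-vif-plugged completed at 11:25:12, so when Neutron re-delivers the same event at 11:25:14 there is no registered waiter left to pop, and the instance is already active with no task in flight. Schematically (a hypothetical sketch of the pop-or-warn dispatch, not nova's actual code):

    # Waiters are registered per (instance, event) while something is
    # waiting; a late duplicate finds nothing to wake and is only logged.
    waiters = {}  # (instance_uuid, event_name) -> callback

    def dispatch(instance_uuid, event_name, vm_state, task_state):
        cb = waiters.pop((instance_uuid, event_name), None)
        if cb is None:
            print(f'WARNING: unexpected event {event_name} for instance '
                  f'with vm_state {vm_state} and task_state {task_state}')
        else:
            cb()

    dispatch('b5df7002-5185-4a75-ae2e-e8a44a0be062',
             'network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687',
             'active', None)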
Oct  3 11:25:14 compute-0 nova_compute[351685]: 2025-10-03 11:25:14.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:15 compute-0 ovn_controller[88471]: 2025-10-03T11:25:15Z|00082|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:25:15 compute-0 ovn_controller[88471]: 2025-10-03T11:25:15Z|00083|binding|INFO|Releasing lport 9360fd43-509e-48cf-868f-65a2768ca52b from this chassis (sb_readonly=0)
Oct  3 11:25:15 compute-0 ovn_controller[88471]: 2025-10-03T11:25:15Z|00084|binding|INFO|Releasing lport 257cb087-2d3d-4c88-8d82-f8526e49a12f from this chassis (sb_readonly=0)
Oct  3 11:25:15 compute-0 nova_compute[351685]: 2025-10-03 11:25:15.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 211 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 35 KiB/s wr, 35 op/s
Oct  3 11:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:25:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:25:17 compute-0 nova_compute[351685]: 2025-10-03 11:25:17.121 2 DEBUG nova.compute.manager [req-f7207137-c841-4311-920b-ab6e3d348bd8 req-41207f82-eb49-4e0b-8baa-5fe32cdf0683 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-changed-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:17 compute-0 nova_compute[351685]: 2025-10-03 11:25:17.121 2 DEBUG nova.compute.manager [req-f7207137-c841-4311-920b-ab6e3d348bd8 req-41207f82-eb49-4e0b-8baa-5fe32cdf0683 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Refreshing instance network info cache due to event network-changed-f7d0064f-83c7-44b3-839d-5811852ce687. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 11:25:17 compute-0 nova_compute[351685]: 2025-10-03 11:25:17.122 2 DEBUG oslo_concurrency.lockutils [req-f7207137-c841-4311-920b-ab6e3d348bd8 req-41207f82-eb49-4e0b-8baa-5fe32cdf0683 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:25:17 compute-0 nova_compute[351685]: 2025-10-03 11:25:17.122 2 DEBUG oslo_concurrency.lockutils [req-f7207137-c841-4311-920b-ab6e3d348bd8 req-41207f82-eb49-4e0b-8baa-5fe32cdf0683 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:25:17 compute-0 nova_compute[351685]: 2025-10-03 11:25:17.122 2 DEBUG nova.network.neutron [req-f7207137-c841-4311-920b-ab6e3d348bd8 req-41207f82-eb49-4e0b-8baa-5fe32cdf0683 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Refreshing network info cache for port f7d0064f-83c7-44b3-839d-5811852ce687 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 11:25:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 211 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 35 KiB/s wr, 35 op/s
Oct  3 11:25:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:18 compute-0 nova_compute[351685]: 2025-10-03 11:25:18.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:18 compute-0 podman[527352]: 2025-10-03 11:25:18.902504217 +0000 UTC m=+0.149507793 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:25:18 compute-0 podman[527361]: 2025-10-03 11:25:18.914113599 +0000 UTC m=+0.109634615 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Oct  3 11:25:18 compute-0 podman[527354]: 2025-10-03 11:25:18.920860765 +0000 UTC m=+0.158946835 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 11:25:18 compute-0 podman[527353]: 2025-10-03 11:25:18.929956377 +0000 UTC m=+0.167230991 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:25:18 compute-0 podman[527373]: 2025-10-03 11:25:18.928314384 +0000 UTC m=+0.117977342 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20250930, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.schema-version=1.0)
Oct  3 11:25:18 compute-0 podman[527376]: 2025-10-03 11:25:18.95408957 +0000 UTC m=+0.133173859 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  3 11:25:18 compute-0 podman[527374]: 2025-10-03 11:25:18.960793585 +0000 UTC m=+0.163841193 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
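The podman lines above are periodic healthcheck transcripts for the EDPM-managed containers; each records health_status=healthy with a failing streak of zero. The same check can be exercised by hand with the standard podman healthcheck run subcommand; a sketch driving it from Python, using a container name taken from the log:

    import subprocess

    # Exit code 0 means the container's configured healthcheck passed.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'node_exporter'],
        capture_output=True, text=True)
    print('healthy' if result.returncode == 0
          else f'unhealthy: {result.stdout or result.stderr}')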
Oct  3 11:25:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 211 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 30 KiB/s wr, 76 op/s
Oct  3 11:25:19 compute-0 nova_compute[351685]: 2025-10-03 11:25:19.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:19 compute-0 nova_compute[351685]: 2025-10-03 11:25:19.880 2 DEBUG nova.network.neutron [req-f7207137-c841-4311-920b-ab6e3d348bd8 req-41207f82-eb49-4e0b-8baa-5fe32cdf0683 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updated VIF entry in instance network info cache for port f7d0064f-83c7-44b3-839d-5811852ce687. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  3 11:25:19 compute-0 nova_compute[351685]: 2025-10-03 11:25:19.881 2 DEBUG nova.network.neutron [req-f7207137-c841-4311-920b-ab6e3d348bd8 req-41207f82-eb49-4e0b-8baa-5fe32cdf0683 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updating instance_info_cache with network_info: [{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:25:19 compute-0 nova_compute[351685]: 2025-10-03 11:25:19.908 2 DEBUG oslo_concurrency.lockutils [req-f7207137-c841-4311-920b-ab6e3d348bd8 req-41207f82-eb49-4e0b-8baa-5fe32cdf0683 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
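The instance_info_cache update above carries Neutron's full view of port f7d0064f-83c7-44b3-839d-5811852ce687 as a JSON-like structure. A minimal sketch of walking it, trimmed to the addressing fields actually present in the log entry:

    # Trimmed copy of the network_info element logged above; only the
    # fields needed for address extraction are kept.
    port = {
        "id": "f7d0064f-83c7-44b3-839d-5811852ce687",
        "address": "fa:16:3e:6c:16:9e",
        "network": {
            "subnets": [{
                "cidr": "10.100.0.0/28",
                "ips": [{
                    "address": "10.100.0.12",
                    "floating_ips": [{"address": "192.168.122.181"}],
                }],
            }],
        },
    }

    for subnet in port["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip["floating_ips"]]
            # fixed address -> any floating IPs NATed to it
            print(port["id"], ip["address"], "->", floats)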
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.769 2 DEBUG nova.compute.manager [req-da859783-2888-4735-8ac2-2c90e311b569 req-5e57c34a-cd64-48e3-920b-ef813a0e8791 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Received event network-vif-plugged-faf705ff-c202-4c38-82a6-3c53798c3d9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.770 2 DEBUG oslo_concurrency.lockutils [req-da859783-2888-4735-8ac2-2c90e311b569 req-5e57c34a-cd64-48e3-920b-ef813a0e8791 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.771 2 DEBUG oslo_concurrency.lockutils [req-da859783-2888-4735-8ac2-2c90e311b569 req-5e57c34a-cd64-48e3-920b-ef813a0e8791 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.771 2 DEBUG oslo_concurrency.lockutils [req-da859783-2888-4735-8ac2-2c90e311b569 req-5e57c34a-cd64-48e3-920b-ef813a0e8791 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.772 2 DEBUG nova.compute.manager [req-da859783-2888-4735-8ac2-2c90e311b569 req-5e57c34a-cd64-48e3-920b-ef813a0e8791 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Processing event network-vif-plugged-faf705ff-c202-4c38-82a6-3c53798c3d9f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.773 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.780 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490720.7799551, 6ca9e72e-4023-411a-93fb-b137c664f8f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.781 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] VM Resumed (Lifecycle Event)
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.783 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.788 2 INFO nova.virt.libvirt.driver [-] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Instance spawned successfully.
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.789 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.807 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.817 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.824 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.825 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.826 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.828 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.829 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.829 2 DEBUG nova.virt.libvirt.driver [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.846 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.889 2 INFO nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Took 20.02 seconds to spawn the instance on the hypervisor.
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.890 2 DEBUG nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.965 2 INFO nova.compute.manager [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Took 21.74 seconds to build instance.
Oct  3 11:25:20 compute-0 nova_compute[351685]: 2025-10-03 11:25:20.997 2 DEBUG oslo_concurrency.lockutils [None req-10a8ec8d-4e8e-4920-85d6-6c3b2c22c3db 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 211 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct  3 11:25:22 compute-0 nova_compute[351685]: 2025-10-03 11:25:22.926 2 DEBUG nova.compute.manager [req-39cf11aa-88d4-44e5-a667-08f2646c8403 req-58ed95ca-6bbf-4267-be1e-f14ba4cf5038 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Received event network-vif-plugged-faf705ff-c202-4c38-82a6-3c53798c3d9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:22 compute-0 nova_compute[351685]: 2025-10-03 11:25:22.927 2 DEBUG oslo_concurrency.lockutils [req-39cf11aa-88d4-44e5-a667-08f2646c8403 req-58ed95ca-6bbf-4267-be1e-f14ba4cf5038 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:22 compute-0 nova_compute[351685]: 2025-10-03 11:25:22.927 2 DEBUG oslo_concurrency.lockutils [req-39cf11aa-88d4-44e5-a667-08f2646c8403 req-58ed95ca-6bbf-4267-be1e-f14ba4cf5038 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:22 compute-0 nova_compute[351685]: 2025-10-03 11:25:22.927 2 DEBUG oslo_concurrency.lockutils [req-39cf11aa-88d4-44e5-a667-08f2646c8403 req-58ed95ca-6bbf-4267-be1e-f14ba4cf5038 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:22 compute-0 nova_compute[351685]: 2025-10-03 11:25:22.927 2 DEBUG nova.compute.manager [req-39cf11aa-88d4-44e5-a667-08f2646c8403 req-58ed95ca-6bbf-4267-be1e-f14ba4cf5038 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] No waiting events found dispatching network-vif-plugged-faf705ff-c202-4c38-82a6-3c53798c3d9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  3 11:25:22 compute-0 nova_compute[351685]: 2025-10-03 11:25:22.928 2 WARNING nova.compute.manager [req-39cf11aa-88d4-44e5-a667-08f2646c8403 req-58ed95ca-6bbf-4267-be1e-f14ba4cf5038 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Received unexpected event network-vif-plugged-faf705ff-c202-4c38-82a6-3c53798c3d9f for instance with vm_state active and task_state None.
Oct  3 11:25:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:23 compute-0 nova_compute[351685]: 2025-10-03 11:25:23.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 211 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 84 op/s
Oct  3 11:25:23 compute-0 nova_compute[351685]: 2025-10-03 11:25:23.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:24 compute-0 nova_compute[351685]: 2025-10-03 11:25:24.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:25 compute-0 nova_compute[351685]: 2025-10-03 11:25:25.610 2 DEBUG nova.compute.manager [req-3a83df78-a36d-436a-90ff-f8a5e09db921 req-bbef9099-65a0-4315-8f1e-724c3ad6dad1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Received event network-changed-faf705ff-c202-4c38-82a6-3c53798c3d9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:25 compute-0 nova_compute[351685]: 2025-10-03 11:25:25.611 2 DEBUG nova.compute.manager [req-3a83df78-a36d-436a-90ff-f8a5e09db921 req-bbef9099-65a0-4315-8f1e-724c3ad6dad1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Refreshing instance network info cache due to event network-changed-faf705ff-c202-4c38-82a6-3c53798c3d9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 11:25:25 compute-0 nova_compute[351685]: 2025-10-03 11:25:25.611 2 DEBUG oslo_concurrency.lockutils [req-3a83df78-a36d-436a-90ff-f8a5e09db921 req-bbef9099-65a0-4315-8f1e-724c3ad6dad1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:25:25 compute-0 nova_compute[351685]: 2025-10-03 11:25:25.611 2 DEBUG oslo_concurrency.lockutils [req-3a83df78-a36d-436a-90ff-f8a5e09db921 req-bbef9099-65a0-4315-8f1e-724c3ad6dad1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:25:25 compute-0 nova_compute[351685]: 2025-10-03 11:25:25.611 2 DEBUG nova.network.neutron [req-3a83df78-a36d-436a-90ff-f8a5e09db921 req-bbef9099-65a0-4315-8f1e-724c3ad6dad1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Refreshing network info cache for port faf705ff-c202-4c38-82a6-3c53798c3d9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 11:25:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3547: 321 pgs: 321 active+clean; 211 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 426 B/s wr, 123 op/s
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.068 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "6ca9e72e-4023-411a-93fb-b137c664f8f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.069 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.070 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.071 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.071 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.073 2 INFO nova.compute.manager [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Terminating instance
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.074 2 DEBUG nova.compute.manager [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  3 11:25:27 compute-0 kernel: tapfaf705ff-c2 (unregistering): left promiscuous mode
Oct  3 11:25:27 compute-0 NetworkManager[45015]: <info>  [1759490727.1648] device (tapfaf705ff-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:25:27 compute-0 ovn_controller[88471]: 2025-10-03T11:25:27Z|00085|binding|INFO|Releasing lport faf705ff-c202-4c38-82a6-3c53798c3d9f from this chassis (sb_readonly=0)
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:27 compute-0 ovn_controller[88471]: 2025-10-03T11:25:27Z|00086|binding|INFO|Setting lport faf705ff-c202-4c38-82a6-3c53798c3d9f down in Southbound
Oct  3 11:25:27 compute-0 ovn_controller[88471]: 2025-10-03T11:25:27Z|00087|binding|INFO|Removing iface tapfaf705ff-c2 ovn-installed in OVS
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.195 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:5b:59 10.100.0.7'], port_security=['fa:16:3e:f5:5b:59 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6ca9e72e-4023-411a-93fb-b137c664f8f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96ea37f7-5146-4b15-96c2-447657e358e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6e74ba7072448fdb098db5317752362', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff65c076-028a-4a74-8b1b-4045c124c8da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d97a5161-396b-4982-a5f1-55914ed3c579, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=faf705ff-c202-4c38-82a6-3c53798c3d9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.196 284328 INFO neutron.agent.ovn.metadata.agent [-] Port faf705ff-c202-4c38-82a6-3c53798c3d9f in datapath 96ea37f7-5146-4b15-96c2-447657e358e8 unbound from our chassis
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.198 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96ea37f7-5146-4b15-96c2-447657e358e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.205 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[466a732f-4d73-4741-919f-91b8c9f511b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.206 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8 namespace which is not needed anymore
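The Matched UPDATE line above comes from ovsdbapp's row-event machinery: the metadata agent registers an event on the southbound Port_Binding table and reacts when a port it serves goes down (up=[False]), unbinding it and tearing down the ovnmeta namespace. A rough ovsdbapp sketch of that shape (illustrative only; the real class lives in neutron.agent.ovn.metadata.agent and matches on more conditions):

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        # Fires on updates to Port_Binding rows, as matched in the log.
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event_type, row, old):
            # Illustrative reaction: the real agent also checks chassis
            # ownership before provisioning or tearing down metadata.
            if getattr(old, 'up', None) == [True] and row.up == [False]:
                print('Port %s went down' % row.logical_port)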
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:27 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct  3 11:25:27 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 8.925s CPU time.
Oct  3 11:25:27 compute-0 systemd-machined[137653]: Machine qemu-8-instance-00000008 terminated.
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.317 2 INFO nova.virt.libvirt.driver [-] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Instance destroyed successfully.#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.318 2 DEBUG nova.objects.instance [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lazy-loading 'resources' on Instance uuid 6ca9e72e-4023-411a-93fb-b137c664f8f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.334 2 DEBUG nova.virt.libvirt.vif [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1573234839',display_name='tempest-ServersTestJSON-server-1573234839',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1573234839',id=8,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOE9cfANkTk3y7G8mPZNDViR+daYFxmpjiC6kgwPI/VheqoR8mguWkKNfVCD46+QxpoO1I+EG/S2uMwZbuSi3ZrKo974t/IcFL5qJJ5/8ATBRCKJUGqTxcktbi+3NAoZiA==',key_name='tempest-keypair-611544652',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:25:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6e74ba7072448fdb098db5317752362',ramdisk_id='',reservation_id='r-bcvyroax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-315955129',owner_user_name='tempest-ServersTestJSON-315955129-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:25:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='13ea5fe65c674a40a8a29b240a1a5e6d',uuid=6ca9e72e-4023-411a-93fb-b137c664f8f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.337 2 DEBUG nova.network.os_vif_util [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Converting VIF {"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.340 2 DEBUG nova.network.os_vif_util [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:5b:59,bridge_name='br-int',has_traffic_filtering=True,id=faf705ff-c202-4c38-82a6-3c53798c3d9f,network=Network(96ea37f7-5146-4b15-96c2-447657e358e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf705ff-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.341 2 DEBUG os_vif [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:5b:59,bridge_name='br-int',has_traffic_filtering=True,id=faf705ff-c202-4c38-82a6-3c53798c3d9f,network=Network(96ea37f7-5146-4b15-96c2-447657e358e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf705ff-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf705ff-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.353 2 INFO os_vif [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:5b:59,bridge_name='br-int',has_traffic_filtering=True,id=faf705ff-c202-4c38-82a6-3c53798c3d9f,network=Network(96ea37f7-5146-4b15-96c2-447657e358e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf705ff-c2')#033[00m
Oct  3 11:25:27 compute-0 neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8[527295]: [NOTICE]   (527299) : haproxy version is 2.8.14-c23fe91
Oct  3 11:25:27 compute-0 neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8[527295]: [NOTICE]   (527299) : path to executable is /usr/sbin/haproxy
Oct  3 11:25:27 compute-0 neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8[527295]: [WARNING]  (527299) : Exiting Master process...
Oct  3 11:25:27 compute-0 neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8[527295]: [ALERT]    (527299) : Current worker (527301) exited with code 143 (Terminated)
Oct  3 11:25:27 compute-0 neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8[527295]: [WARNING]  (527299) : All workers exited. Exiting... (0)
Oct  3 11:25:27 compute-0 systemd[1]: libpod-38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f.scope: Deactivated successfully.
Oct  3 11:25:27 compute-0 podman[527517]: 2025-10-03 11:25:27.413382673 +0000 UTC m=+0.072401692 container died 38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 11:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f-userdata-shm.mount: Deactivated successfully.
Oct  3 11:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8a94d02b9e39b5af18726c4902d0697429b05713cd4b278efa8ed8220893512-merged.mount: Deactivated successfully.
Oct  3 11:25:27 compute-0 podman[527517]: 2025-10-03 11:25:27.484808331 +0000 UTC m=+0.143827300 container cleanup 38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 11:25:27 compute-0 systemd[1]: libpod-conmon-38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f.scope: Deactivated successfully.
Oct  3 11:25:27 compute-0 podman[527568]: 2025-10-03 11:25:27.609952882 +0000 UTC m=+0.085903104 container remove 38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.625 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[17ff8fc3-bdc7-4988-8bf6-bb6a38fef5ee]: (4, ('Fri Oct  3 11:25:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8 (38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f)\n38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f\nFri Oct  3 11:25:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8 (38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f)\n38dea59799325d3ac699f80fdff05010810a3b1d66d82a0e91effb351fe8e01f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.630 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d81c21-69f3-4506-94b1-97c80c9c3643]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.633 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96ea37f7-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:27 compute-0 kernel: tap96ea37f7-50: left promiscuous mode
Oct  3 11:25:27 compute-0 nova_compute[351685]: 2025-10-03 11:25:27.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.657 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[36a78476-7c6b-45c5-80b6-a511e5e682bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.686 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef8fc02-d1b8-4527-a223-913c8699da9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.687 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f213252b-2d13-4163-a964-3fee2af9d5ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.719 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[79b3deab-56fc-4da8-8e11-6b0e5ea984ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 989296, 'reachable_time': 38356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 527583, 'error': None, 'target': 'ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d96ea37f7\x2d5146\x2d4b15\x2d96c2\x2d447657e358e8.mount: Deactivated successfully.
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.740 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96ea37f7-5146-4b15-96c2-447657e358e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:25:27 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:27.741 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[52cd8187-cbde-47a7-a0ec-a67da19551a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 211 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 104 op/s
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.049 2 INFO nova.virt.libvirt.driver [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Deleting instance files /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2_del#033[00m
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.053 2 INFO nova.virt.libvirt.driver [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Deletion of /var/lib/nova/instances/6ca9e72e-4023-411a-93fb-b137c664f8f2_del complete#033[00m
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.151 2 INFO nova.compute.manager [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.152 2 DEBUG oslo.service.loopingcall [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.153 2 DEBUG nova.compute.manager [-] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.153 2 DEBUG nova.network.neutron [-] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:25:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.617 2 DEBUG nova.network.neutron [req-3a83df78-a36d-436a-90ff-f8a5e09db921 req-bbef9099-65a0-4315-8f1e-724c3ad6dad1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Updated VIF entry in instance network info cache for port faf705ff-c202-4c38-82a6-3c53798c3d9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.619 2 DEBUG nova.network.neutron [req-3a83df78-a36d-436a-90ff-f8a5e09db921 req-bbef9099-65a0-4315-8f1e-724c3ad6dad1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Updating instance_info_cache with network_info: [{"id": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "address": "fa:16:3e:f5:5b:59", "network": {"id": "96ea37f7-5146-4b15-96c2-447657e358e8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1854453832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6e74ba7072448fdb098db5317752362", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf705ff-c2", "ovs_interfaceid": "faf705ff-c202-4c38-82a6-3c53798c3d9f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:28 compute-0 nova_compute[351685]: 2025-10-03 11:25:28.666 2 DEBUG oslo_concurrency.lockutils [req-3a83df78-a36d-436a-90ff-f8a5e09db921 req-bbef9099-65a0-4315-8f1e-724c3ad6dad1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-6ca9e72e-4023-411a-93fb-b137c664f8f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:29 compute-0 nova_compute[351685]: 2025-10-03 11:25:29.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 174 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.2 KiB/s wr, 143 op/s
Oct  3 11:25:29 compute-0 podman[157165]: time="2025-10-03T11:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:25:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:25:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9577 "" "Go-http-client/1.1"
Oct  3 11:25:29 compute-0 podman[527586]: 2025-10-03 11:25:29.859282902 +0000 UTC m=+0.108492378 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:25:29 compute-0 nova_compute[351685]: 2025-10-03 11:25:29.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:29 compute-0 podman[527588]: 2025-10-03 11:25:29.891900667 +0000 UTC m=+0.129324245 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 11:25:29 compute-0 podman[527587]: 2025-10-03 11:25:29.898394215 +0000 UTC m=+0.141858787 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 11:25:30 compute-0 nova_compute[351685]: 2025-10-03 11:25:30.378 2 DEBUG nova.network.neutron [-] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:30 compute-0 nova_compute[351685]: 2025-10-03 11:25:30.395 2 INFO nova.compute.manager [-] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Took 2.24 seconds to deallocate network for instance.#033[00m
Oct  3 11:25:30 compute-0 nova_compute[351685]: 2025-10-03 11:25:30.438 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:30 compute-0 nova_compute[351685]: 2025-10-03 11:25:30.439 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:30 compute-0 nova_compute[351685]: 2025-10-03 11:25:30.474 2 DEBUG nova.compute.manager [req-43a083f4-c300-447a-a75e-0c6847a72dd3 req-08bea0e8-11b2-45fd-985e-5c7c10dc97b7 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Received event network-vif-deleted-faf705ff-c202-4c38-82a6-3c53798c3d9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:25:30 compute-0 nova_compute[351685]: 2025-10-03 11:25:30.531 2 DEBUG oslo_concurrency.processutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:30 compute-0 nova_compute[351685]: 2025-10-03 11:25:30.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:25:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3216978749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:25:31 compute-0 nova_compute[351685]: 2025-10-03 11:25:31.073 2 DEBUG oslo_concurrency.processutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:31 compute-0 nova_compute[351685]: 2025-10-03 11:25:31.085 2 DEBUG nova.compute.provider_tree [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:25:31 compute-0 nova_compute[351685]: 2025-10-03 11:25:31.107 2 DEBUG nova.scheduler.client.report [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:25:31 compute-0 nova_compute[351685]: 2025-10-03 11:25:31.129 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:31 compute-0 nova_compute[351685]: 2025-10-03 11:25:31.167 2 INFO nova.scheduler.client.report [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Deleted allocations for instance 6ca9e72e-4023-411a-93fb-b137c664f8f2#033[00m
Oct  3 11:25:31 compute-0 nova_compute[351685]: 2025-10-03 11:25:31.240 2 DEBUG oslo_concurrency.lockutils [None req-7d7b30ab-599b-47f1-8856-e93b11c04e81 13ea5fe65c674a40a8a29b240a1a5e6d d6e74ba7072448fdb098db5317752362 - - default default] Lock "6ca9e72e-4023-411a-93fb-b137c664f8f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:31 compute-0 openstack_network_exporter[367524]: ERROR   11:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:25:31 compute-0 openstack_network_exporter[367524]: ERROR   11:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:25:31 compute-0 openstack_network_exporter[367524]: ERROR   11:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:25:31 compute-0 openstack_network_exporter[367524]: ERROR   11:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:25:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:25:31 compute-0 openstack_network_exporter[367524]: ERROR   11:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:25:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:25:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 165 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 94 op/s
Oct  3 11:25:32 compute-0 nova_compute[351685]: 2025-10-03 11:25:32.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3551: 321 pgs: 321 active+clean; 165 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Oct  3 11:25:34 compute-0 nova_compute[351685]: 2025-10-03 11:25:34.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:34 compute-0 nova_compute[351685]: 2025-10-03 11:25:34.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:35.090 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:25:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:35.090 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 11:25:35 compute-0 nova_compute[351685]: 2025-10-03 11:25:35.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 165 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 78 op/s
Oct  3 11:25:37 compute-0 nova_compute[351685]: 2025-10-03 11:25:37.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 165 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 430 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct  3 11:25:37 compute-0 nova_compute[351685]: 2025-10-03 11:25:37.885 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "50697870-0565-414d-a9e6-5262e3e25e3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:37 compute-0 nova_compute[351685]: 2025-10-03 11:25:37.886 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:37 compute-0 nova_compute[351685]: 2025-10-03 11:25:37.906 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.000 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.001 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.008 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.009 2 INFO nova.compute.claims [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.186 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:25:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195284711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.663 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.676 2 DEBUG nova.compute.provider_tree [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.694 2 DEBUG nova.scheduler.client.report [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.722 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.724 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.779 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.781 2 DEBUG nova.network.neutron [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.804 2 INFO nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.827 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.924 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.927 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.928 2 INFO nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Creating image(s)#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.962 2 DEBUG nova.storage.rbd_utils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] rbd image 50697870-0565-414d-a9e6-5262e3e25e3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:38 compute-0 nova_compute[351685]: 2025-10-03 11:25:38.998 2 DEBUG nova.storage.rbd_utils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] rbd image 50697870-0565-414d-a9e6-5262e3e25e3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.033 2 DEBUG nova.storage.rbd_utils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] rbd image 50697870-0565-414d-a9e6-5262e3e25e3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.041 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.072 2 DEBUG nova.policy [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '018e43ba13984d3cbaef2cef945dfb9e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aeefabefe92a4b9a95b28bf43d68c1f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.105 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.106 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.106 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.107 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.152 2 DEBUG nova.storage.rbd_utils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] rbd image 50697870-0565-414d-a9e6-5262e3e25e3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.162 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 50697870-0565-414d-a9e6-5262e3e25e3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.550 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 50697870-0565-414d-a9e6-5262e3e25e3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.637 2 DEBUG nova.storage.rbd_utils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] resizing rbd image 50697870-0565-414d-a9e6-5262e3e25e3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
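The four rbd_utils lines above are the whole copy-up: the image is absent from the vms pool, so nova imports the cached base file and then grows it to the flavor's 1 GiB root disk. The same two operations can be replayed as plain subprocess calls; note that nova itself resizes through librbd rather than the rbd binary, so the resize call here is an equivalent sketch only:

    import subprocess

    base = '/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8'
    disk = '50697870-0565-414d-a9e6-5262e3e25e3c_disk'
    subprocess.run(['rbd', 'import', '--pool', 'vms', base, disk,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    # 1073741824 bytes == 1024 MiB; rbd resize takes megabytes by default
    subprocess.run(['rbd', 'resize', '--pool', 'vms', disk, '--size', '1024',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)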
Oct  3 11:25:39 compute-0 ovn_controller[88471]: 2025-10-03T11:25:39Z|00088|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:25:39 compute-0 ovn_controller[88471]: 2025-10-03T11:25:39Z|00089|binding|INFO|Releasing lport 9360fd43-509e-48cf-868f-65a2768ca52b from this chassis (sb_readonly=0)
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:25:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 165 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 430 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.752 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
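The heal task above is driven by oslo.service's periodic-task machinery and skips instance 50697870-... because it is still Building. A hedged sketch of that wiring; _instances_to_heal and _refresh_info_cache are hypothetical stand-ins for nova's internals, and the interval is an assumption:

    from oslo_service import periodic_task

    class ComputeManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # spacing is an assumption
        def _heal_instance_info_cache(self, context):
            for instance in self._instances_to_heal(context):  # hypothetical helper
                if instance.vm_state == 'building':
                    continue  # matches "Skipping network cache update ... Building"
                self._refresh_info_cache(context, instance)  # hypothetical helper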
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.841 2 DEBUG nova.objects.instance [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lazy-loading 'migration_context' on Instance uuid 50697870-0565-414d-a9e6-5262e3e25e3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.858 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.858 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Ensure instance console log exists: /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.859 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.859 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.859 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:39 compute-0 nova_compute[351685]: 2025-10-03 11:25:39.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:40 compute-0 nova_compute[351685]: 2025-10-03 11:25:40.092 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:25:40 compute-0 nova_compute[351685]: 2025-10-03 11:25:40.093 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:25:40 compute-0 nova_compute[351685]: 2025-10-03 11:25:40.093 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:25:40 compute-0 nova_compute[351685]: 2025-10-03 11:25:40.093 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.906 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.906 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.907 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a960d0a40>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
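The long run of registration lines above hands every pollster to one shared concurrent.futures thread pool (sized 1 here, which is why the manager warned that polling may run long). A minimal sketch of that dispatch pattern, with poll_one standing in for ceilometer's per-pollster discovery-and-sample step:

    from concurrent.futures import ThreadPoolExecutor

    def poll_one(name):
        # discovery + sampling for one pollster would run here
        return name

    with ThreadPoolExecutor(max_workers=1) as executor:  # log shows 1 worker thread
        futures = [executor.submit(poll_one, n) for n in
                   ('network.outgoing.packets.drop',
                    'network.outgoing.packets.error',
                    'disk.device.capacity')]
        done = [f.result() for f in futures]  # completes serially on one worker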
Oct  3 11:25:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:40.915 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b5df7002-5185-4a75-ae2e-e8a44a0be062 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 11:25:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:41.093 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.230 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.230 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.249 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  3 11:25:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:41.259 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b5df7002-5185-4a75-ae2e-e8a44a0be062 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}05c71207458c74aad35f0b171d3453ab31f2036fb50a6b94fe7b4d338da45aed" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.333 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.334 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.344 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.345 2 INFO nova.compute.claims [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Claim successful on node compute-0.ctlplane.example.com
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.545 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.623 2 DEBUG nova.network.neutron [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Successfully created port: fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  3 11:25:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:41.703 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:41.704 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:41.704 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 169 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 256 KiB/s wr, 12 op/s
Oct  3 11:25:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:25:41 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154407683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:25:41 compute-0 nova_compute[351685]: 2025-10-03 11:25:41.999 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
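The ceph df round-trip above (dispatched by the mon at 11:25:41, answered in 0.453 s) is how nova sizes its DISK_GB inventory on an RBD-backed node. Replaying it and reading the cluster totals from the JSON:

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)
    # cluster-wide totals; per-pool figures live under stats['pools']
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])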
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.014 2 DEBUG nova.compute.provider_tree [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.032 2 DEBUG nova.scheduler.client.report [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
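The inventory record above is easy to sanity-check: placement treats schedulable capacity per resource class as (total - reserved) * allocation_ratio, so this host offers 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2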
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.073 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.074 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.133 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.134 2 DEBUG nova.network.neutron [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.160 2 INFO nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.164 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1996 Content-Type: application/json Date: Fri, 03 Oct 2025 11:25:41 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fd60f598-5083-42aa-9053-c4e0b3f64179 x-openstack-request-id: req-fd60f598-5083-42aa-9053-c4e0b3f64179 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.165 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b5df7002-5185-4a75-ae2e-e8a44a0be062", "name": "tempest-AttachInterfacesUnderV243Test-server-833269917", "status": "ACTIVE", "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "user_id": "7851dde78b9e4e9abf7463836db57a8e", "metadata": {}, "hostId": "52563732316fb6f52f382b41dbbcf48a471640606ce6ef8b160de33c", "image": {"id": "6a34ed8d-90df-4a16-a968-c59b7cafa2f1", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/6a34ed8d-90df-4a16-a968-c59b7cafa2f1"}]}, "flavor": {"id": "b93eb926-1d95-406e-aec3-a907be067084", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b93eb926-1d95-406e-aec3-a907be067084"}]}, "created": "2025-10-03T11:24:56Z", "updated": "2025-10-03T11:25:12Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1607624435-network": [{"version": 4, "addr": "10.100.0.12", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6c:16:9e"}, {"version": 4, "addr": "192.168.122.181", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:6c:16:9e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b5df7002-5185-4a75-ae2e-e8a44a0be062"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b5df7002-5185-4a75-ae2e-e8a44a0be062"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1215622455", "OS-SRV-USG:launched_at": "2025-10-03T11:25:12.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--270639023"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.165 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b5df7002-5185-4a75-ae2e-e8a44a0be062 used request id req-fd60f598-5083-42aa-9053-c4e0b3f64179 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
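The REQ/RESP pair above is ceilometer's discovery fetching server b5df7002-... through python-novaclient at microversion 2.1. A hedged reconstruction of that call; the auth URL and credentials are placeholders, and only the endpoint behaviour and server ID come from the log:

    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder
        username='ceilometer', password='secret',                    # placeholders
        project_name='service',
        user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.1', session=session.Session(auth=auth))
    server = nova.servers.get('b5df7002-5185-4a75-ae2e-e8a44a0be062')
    print(server.name, server.status)  # tempest-...-server-833269917 ACTIVE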
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.166 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5df7002-5185-4a75-ae2e-e8a44a0be062', 'name': 'tempest-AttachInterfacesUnderV243Test-server-833269917', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '57f47db3919c4f3797a1434bfeebe880', 'user_id': '7851dde78b9e4e9abf7463836db57a8e', 'hostId': '52563732316fb6f52f382b41dbbcf48a471640606ce6ef8b160de33c', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.169 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.169 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.169 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.169 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.169 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:25:42.169832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.173 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b5df7002-5185-4a75-ae2e-e8a44a0be062 / tapf7d0064f-83 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.174 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.178 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
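The "No delta meter predecessor" line above reflects how delta meters work: a delta needs a previous reading for the same instance/interface pair, and the first poll has nothing to diff against. A minimal sketch of that bookkeeping (values illustrative):

    previous = {}

    def delta_sample(instance, nic, value):
        key = (instance, nic)
        if key not in previous:
            previous[key] = value
            return None  # no predecessor yet, as logged for tapf7d0064f-83
        delta, previous[key] = value - previous[key], value
        return delta

    print(delta_sample('b5df7002', 'tapf7d0064f-83', 0))  # None on the first poll
    print(delta_sample('b5df7002', 'tapf7d0064f-83', 5))  # 5 on the next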
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.178 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.179 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.179 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:25:42.179098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.179 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.180 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:25:42.180657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.181 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.193 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.193 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.215 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.215 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.216 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
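The per-device capacity samples above are read from libvirt, whose blockInfo() reports (capacity, allocation, physical) for each disk; the 1073741824 and 509952 values are plausibly the root disk and the config drive of instance-00000007. A hedged sketch, with device names assumed for illustration:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000007')
    for dev in ('vda', 'vdb'):  # device names are assumptions
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)
    conn.close()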
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.216 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.216 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:25:42.216852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.244 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.245 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.284 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.284 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.285 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.285 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.286 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.286 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.read.latency volume: 1706961798 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.286 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.read.latency volume: 2530251 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.287 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:25:42.286206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.287 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.287 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.288 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.288 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.289 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:25:42.288896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.289 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.289 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.290 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.290 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
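The repeated `_stats_to_sample` DEBUG lines above come from per-device disk pollsters fanning a single hypervisor stats read out into one sample per (instance, device) pair, which is why each meter logs several `volume:` entries for instance b43db93c. A minimal Python sketch of that fan-out follows; the `Sample` class and the device names are illustrative assumptions, not ceilometer's actual internals:

```python
# Minimal sketch (hypothetical names) of the per-device fan-out behind
# the "<uuid>/<meter> volume: N" DEBUG lines: one sample per device.
from dataclasses import dataclass

@dataclass
class Sample:
    name: str          # meter name, e.g. "disk.device.read.requests"
    resource_id: str   # "<instance-uuid>-<device>" for per-device meters
    volume: int        # cumulative counter read from the hypervisor

def stats_to_samples(meter, instance_id, per_device_counters):
    """per_device_counters: mapping of device name -> counter value."""
    for device, value in per_device_counters.items():
        yield Sample(meter, f"{instance_id}-{device}", value)

# The three read.requests volumes logged for b43db93c... (840, 173, 109),
# attached to hypothetical device names:
for s in stats_to_samples("disk.device.read.requests",
                          "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
                          {"vda": 840, "vdb": 173, "vdc": 109}):
    print(s)
```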
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.291 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.291 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.291 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:25:42.291536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.292 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.292 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.292 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.293 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.293 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.294 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.294 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:25:42.294109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.294 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.294 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.295 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.295 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.295 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.295 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
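Every cycle also logs the same coordination check: a pollster only joins a hash ring when its polling source names a coordination group, and here every group is `None`, so the agent keeps all instances local. A hedged sketch of that decision, with illustrative names rather than the real manager API:

```python
# Hedged sketch of the per-pollster coordination check logged as
# "Checking if we need coordination ... hashrings [None]". Only a
# pollster whose source defines a coordination group would be
# partitioned across agents; group None means poll everything locally.
def needs_coordination(pollster_name, source_groups):
    group = source_groups.get(pollster_name)  # None -> no coordination
    if group is None:
        # "is not configured in a source for polling that requires
        # coordination" in the log above.
        return False
    return True

assert needs_coordination("disk.device.allocation", {}) is False
assert needs_coordination("disk.device.allocation",
                          {"disk.device.allocation": "central"}) is True
```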
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.296 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.296 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.297 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.297 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:25:42.296850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.297 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.298 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.298 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
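Note how each "Pollster heartbeat update: <meter>" line from worker 14 is followed, sometimes slightly out of order, by an "Updated heartbeat for <meter> (<timestamp>)" line from worker 12: the polling worker only announces the meter name, and a separate status worker records the timestamp. A small sketch of that hand-off, assuming a simple queue between two threads (the real mechanism may differ):

```python
# Sketch of the two-worker heartbeat hand-off, assuming a queue between
# the polling worker ("14") and the status worker ("12").
import datetime
import queue
import threading

beats = queue.Queue()
status = {}

def heartbeat(meter):
    # Polling worker side: "Pollster heartbeat update: <meter>"
    beats.put(meter)

def status_worker():
    # Status worker side: "Updated heartbeat for <meter> (<ts>)"
    while (meter := beats.get()) is not None:
        status[meter] = datetime.datetime.now(datetime.timezone.utc)

t = threading.Thread(target=status_worker)
t.start()
heartbeat("disk.device.write.bytes")
beats.put(None)   # stop the worker for this demo
t.join()
print(status)
```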
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.299 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.299 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.298 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:25:42.299539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.300 2 INFO nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Creating image(s)
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.323 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.344 2 DEBUG nova.storage.rbd_utils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] rbd image f7465889-4aed-4799-835b-1c604f730144_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
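The nova_compute line above shows nova's rbd_utils probing Ceph for the instance's root disk before creating it; with the python-rbd bindings, "does not exist" corresponds to rbd.ImageNotFound being raised on open. A sketch of such a probe (the pool name and conffile path are assumptions for illustration):

```python
# Sketch of an RBD existence probe like the one behind
# "rbd image f7465889-..._disk does not exist": opening a missing
# image raises rbd.ImageNotFound. Pool and conffile are assumed values.
import rados
import rbd

def rbd_image_exists(pool, name, conffile="/etc/ceph/ceph.conf"):
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    ioctx = cluster.open_ioctx(pool)
    try:
        with rbd.Image(ioctx, name, read_only=True):
            return True
    except rbd.ImageNotFound:
        return False
    finally:
        ioctx.close()
        cluster.shutdown()

# e.g. rbd_image_exists("vms", "f7465889-4aed-4799-835b-1c604f730144_disk")
```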
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.355 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
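Both instances report power.state volume 1, which is the raw libvirt domain state; in libvirt's public virDomainState enumeration, 1 is VIR_DOMAIN_RUNNING. The mapping, for reference:

```python
# libvirt virDomainState values carried by the power.state meter;
# volume 1 in the samples above means the guest is running.
LIBVIRT_POWER_STATE = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",     # being shut down
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}
print(LIBVIRT_POWER_STATE[1])  # -> running
```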
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.356 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.357 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.358 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.359 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.359 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.360 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.361 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:25:42.357727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.362 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.363 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.363 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.364 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.364 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.365 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:25:42.363374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.366 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:25:42.366751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.367 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.367 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.368 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.369 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.369 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.369 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-833269917>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-833269917>]
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-03T11:25:42.369046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
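The ERROR above is ceilometer's permanent-blacklist path: the Libvirt inspector has no rate data, so the pollster raises PollsterPermanentError and the manager drops those resources from this source for the rest of the agent's life instead of retrying every interval. A hedged sketch of that behaviour (the attribute and loop names are illustrative, not the exact ceilometer internals):

```python
# Hedged sketch of the blacklist path behind "Prevent pollster ...
# from polling [...] anymore!". Names are illustrative.
class PollsterPermanentError(Exception):
    def __init__(self, resources):
        self.failed_resources = resources

def poll_once(pollster, resources, blacklist):
    todo = [r for r in resources if r not in blacklist]
    try:
        return list(pollster(todo))
    except PollsterPermanentError as err:
        blacklist.update(err.failed_resources)   # never retried
        return []

def rate_pollster(resources):
    # The Libvirt inspector provides no rate data at all.
    raise PollsterPermanentError(resources)

blacklist = set()
poll_once(rate_pollster,
          ["tempest-AttachInterfacesUnderV243Test-server-833269917"],
          blacklist)
assert blacklist  # later cycles skip this server on this source
```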
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.370 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.370 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:25:42.370707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.371 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.372 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:25:42.372189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.372 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.373 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.373 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.374 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:25:42.374225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.375 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.375 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.375 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.376 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:25:42.375838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.378 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.378 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.378 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/cpu volume: 29330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.379 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 106430000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
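The cpu meter is cumulative guest CPU time in nanoseconds, so the two volumes above correspond to roughly 29.3 s and 106.4 s of CPU time consumed by the two instances:

```python
# The cpu meter counts cumulative CPU time in nanoseconds.
for ns in (29_330_000_000, 106_430_000_000):
    print(f"{ns} ns = {ns / 1e9:.2f} s of guest CPU time")
# 29330000000 ns = 29.33 s of guest CPU time
# 106430000000 ns = 106.43 s of guest CPU time
```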
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:25:42.378657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.381 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.381 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.381 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:25:42.381121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.383 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:25:42.383423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.384 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.385 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.385 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.385 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.386 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.387 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:25:42.385680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.388 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.388 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.388 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance b5df7002-5185-4a75-ae2e-e8a44a0be062: ceilometer.compute.pollsters.NoVolumeException
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:25:42.388127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.389 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
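For instance b5df7002 the inspector returned no memory statistic ("volume: Unavailable"), so sample conversion raised NoVolumeException and the pollster logged the WARNING instead of emitting a sample, while b43db93c produced a normal 48.8 MB reading. A minimal sketch of that branch, with illustrative names:

```python
# Minimal sketch of the "volume: Unavailable" branch: a missing
# statistic raises NoVolumeException instead of producing a sample.
# Names are illustrative, not the exact ceilometer internals.
class NoVolumeException(Exception):
    pass

def to_volume(stats, field):
    value = getattr(stats, field, None)
    if value is None:                 # inspector had nothing to report
        raise NoVolumeException(field)
    return value

class InstanceStats:
    memory_usage = None               # b5df7002...: no balloon stats yet

try:
    to_volume(InstanceStats(), "memory_usage")
except NoVolumeException:
    print("memory.usage statistic is not available for this instance")
```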
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.390 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.390 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.390 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.390 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.390 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.391 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-833269917>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-833269917>]
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.391 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-03T11:25:42.390551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.392 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.392 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.392 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.392 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.incoming.bytes volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.393 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:25:42.392624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.394 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.395 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.395 14 DEBUG ceilometer.compute.pollsters [-] b5df7002-5185-4a75-ae2e-e8a44a0be062/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.395 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.396 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.397 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:25:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:25:42.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:25:42.394942) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.412 2 DEBUG nova.storage.rbd_utils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] rbd image f7465889-4aed-4799-835b-1c604f730144_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.450 2 DEBUG nova.storage.rbd_utils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] rbd image f7465889-4aed-4799-835b-1c604f730144_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.456 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.477 2 DEBUG nova.policy [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a98b98aa35184e41a4ae6e74ba3a32e6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8ac8b91115c2483686f9dc31c58b49fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.481 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759490727.3125153, 6ca9e72e-4023-411a-93fb-b137c664f8f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.482 2 INFO nova.compute.manager [-] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] VM Stopped (Lifecycle Event)#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.512 2 DEBUG nova.compute.manager [None req-a2169c39-d5d0-4fde-b57e-ec537063c897 - - - - - -] [instance: 6ca9e72e-4023-411a-93fb-b137c664f8f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.521 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
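The two lines above record Nova probing its cached base image with qemu-img, wrapped in oslo_concurrency.prlimit to cap address space (1 GiB) and CPU time, with the probe returning 0 in 0.065s. A minimal standalone sketch of the same probe, dropping the prlimit wrapper for brevity and assuming qemu-img is on PATH; the image path is taken verbatim from the log:

    import json
    import os
    import subprocess

    BASE = "/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8"

    # Same locale pinning as the logged command; merge with the parent env
    # so PATH is still available for locating qemu-img.
    env = {**os.environ, "LC_ALL": "C", "LANG": "C"}
    out = subprocess.run(
        ["qemu-img", "info", BASE, "--force-share", "--output=json"],
        env=env, capture_output=True, check=True, text=True,
    ).stdout
    info = json.loads(out)
    print(info.get("format"), info.get("virtual-size"))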
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.522 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.523 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.523 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.564 2 DEBUG nova.storage.rbd_utils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] rbd image f7465889-4aed-4799-835b-1c604f730144_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.577 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 f7465889-4aed-4799-835b-1c604f730144_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:42 compute-0 nova_compute[351685]: 2025-10-03 11:25:42.930 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 f7465889-4aed-4799-835b-1c604f730144_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.095 2 DEBUG nova.storage.rbd_utils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] resizing rbd image f7465889-4aed-4799-835b-1c604f730144_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
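The lines above show the image landing in Ceph: rbd reports the target image absent, Nova imports the cached base file into the vms pool, and rbd_utils resizes the new image to 1073741824 bytes (1 GiB, matching the m1.nano flavor's root_gb=1 seen further down). A sketch of the same sequence via the rbd CLI, reusing the pool, client id, and ceph.conf exactly as logged; note Nova performs the resize through librbd rather than the CLI, so the resize call here is an equivalent stand-in:

    import subprocess

    base = "/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8"
    image = "f7465889-4aed-4799-835b-1c604f730144_disk"
    ceph_args = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the flat base file as a format-2 RBD image, as in the log.
    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *ceph_args], check=True)
    # Grow it to the flavor's root disk: rbd sizes default to MiB, and
    # 1024 MiB = 1073741824 bytes, the value logged by rbd_utils.resize.
    subprocess.run(["rbd", "resize", "--pool", "vms", image,
                    "--size", "1024", *ceph_args], check=True)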
Oct  3 11:25:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.286 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.299 2 DEBUG nova.objects.instance [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lazy-loading 'migration_context' on Instance uuid f7465889-4aed-4799-835b-1c604f730144 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.331 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.332 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.333 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.334 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Ensure instance console log exists: /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.334 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.335 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.335 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
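The acquire/wait/hold triplet above is oslo.concurrency's standard named-lock logging: the libvirt driver serializes mdev allocation behind a "vgpu_resources" lock, and the "inner" wrapper in lockutils emits the Acquiring/acquired/released debug lines with the waited and held durations. A minimal sketch of the same pattern, assuming a process-local lock (external=False), which matches the absence of any file-lock path in the log; illustrative only, not Nova's code:

    from oslo_concurrency import lockutils

    # Calling the decorated function produces "Acquiring"/"acquired" debug
    # lines like those above; returning from it produces the "released" line.
    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        pass  # critical section, e.g. mdev allocation in the libvirt driver

    allocate_mdevs()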
Oct  3 11:25:43 compute-0 nova_compute[351685]: 2025-10-03 11:25:43.537 2 DEBUG nova.network.neutron [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Successfully created port: d444b4b5-5243-48c2-80dd-3074b56d4277 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  3 11:25:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 201 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.7 MiB/s wr, 37 op/s
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.211 2 DEBUG nova.network.neutron [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Successfully updated port: fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.228 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.228 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquired lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.229 2 DEBUG nova.network.neutron [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.301 2 DEBUG nova.network.neutron [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Successfully updated port: d444b4b5-5243-48c2-80dd-3074b56d4277 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.317 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.318 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquired lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.319 2 DEBUG nova.network.neutron [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.465 2 DEBUG nova.compute.manager [req-53b55071-66aa-4691-81ca-a0a31d29a898 req-fd14ed9c-efa3-4008-b858-b6e0a8584961 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Received event network-changed-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.467 2 DEBUG nova.compute.manager [req-53b55071-66aa-4691-81ca-a0a31d29a898 req-fd14ed9c-efa3-4008-b858-b6e0a8584961 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Refreshing instance network info cache due to event network-changed-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.467 2 DEBUG oslo_concurrency.lockutils [req-53b55071-66aa-4691-81ca-a0a31d29a898 req-fd14ed9c-efa3-4008-b858-b6e0a8584961 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.572 2 DEBUG nova.compute.manager [req-436bf232-ffb1-4d88-8df6-2bd1157985bc req-ceeda165-a616-44a7-b37f-137a71e6c378 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-changed-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.573 2 DEBUG nova.compute.manager [req-436bf232-ffb1-4d88-8df6-2bd1157985bc req-ceeda165-a616-44a7-b37f-137a71e6c378 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Refreshing instance network info cache due to event network-changed-d444b4b5-5243-48c2-80dd-3074b56d4277. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.573 2 DEBUG oslo_concurrency.lockutils [req-436bf232-ffb1-4d88-8df6-2bd1157985bc req-ceeda165-a616-44a7-b37f-137a71e6c378 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.605 2 DEBUG nova.network.neutron [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.635 2 DEBUG nova.network.neutron [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 11:25:44 compute-0 nova_compute[351685]: 2025-10-03 11:25:44.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.736 2 DEBUG nova.network.neutron [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Updating instance_info_cache with network_info: [{"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.753 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Releasing lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.754 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Instance network_info: |[{"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  3 11:25:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 257 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.5 MiB/s wr, 44 op/s
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.756 2 DEBUG oslo_concurrency.lockutils [req-436bf232-ffb1-4d88-8df6-2bd1157985bc req-ceeda165-a616-44a7-b37f-137a71e6c378 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.756 2 DEBUG nova.network.neutron [req-436bf232-ffb1-4d88-8df6-2bd1157985bc req-ceeda165-a616-44a7-b37f-137a71e6c378 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Refreshing network info cache for port d444b4b5-5243-48c2-80dd-3074b56d4277 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.760 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Start _get_guest_xml network_info=[{"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.768 2 WARNING nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.775 2 DEBUG nova.virt.libvirt.host [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.776 2 DEBUG nova.virt.libvirt.host [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.781 2 DEBUG nova.virt.libvirt.host [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.782 2 DEBUG nova.virt.libvirt.host [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.782 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.783 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.783 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.784 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.784 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.785 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.785 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.786 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.786 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.787 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.787 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.787 2 DEBUG nova.virt.hardware [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
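The run of lines above shows nova.virt.hardware searching for guest CPU topologies: with no flavor or image constraints everything defaults to 0 (unset), the limits collapse to 65536 per dimension, and for a 1-vCPU flavor the only factorization is sockets=1, cores=1, threads=1. A toy enumeration of that search space, keeping only the product constraint and the per-dimension cap from the log; this is illustrative, not the actual nova.virt.hardware algorithm:

    import itertools

    # Yield (sockets, cores, threads) triples whose product equals vcpus and
    # whose dimensions respect the cap, mirroring the limits logged above.
    def possible_topologies(vcpus, max_each=65536):
        bound = min(vcpus, max_each)
        for s, c, t in itertools.product(range(1, bound + 1), repeat=3):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as in the log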
Oct  3 11:25:45 compute-0 nova_compute[351685]: 2025-10-03 11:25:45.790 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:25:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:25:46 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/758470555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.290 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
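As part of setting up its RBD image backend, Nova shells out for the monitor map, and the mon audit lines above show the command arriving as client.openstack before it returns in 0.500s. A sketch that re-runs the same query and extracts monitor addresses from the JSON; the key names ("mons", "name", "addr") are assumptions based on typical mon dump output and may differ across Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        print(mon.get("name"), mon.get("addr"))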
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.332 2 DEBUG nova.storage.rbd_utils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] rbd image f7465889-4aed-4799-835b-1c604f730144_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.342 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:25:46
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'images']
Oct  3 11:25:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.418 2 DEBUG nova.network.neutron [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Updating instance_info_cache with network_info: [{"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.439 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Releasing lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.441 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Instance network_info: |[{"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.442 2 DEBUG oslo_concurrency.lockutils [req-53b55071-66aa-4691-81ca-a0a31d29a898 req-fd14ed9c-efa3-4008-b858-b6e0a8584961 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.443 2 DEBUG nova.network.neutron [req-53b55071-66aa-4691-81ca-a0a31d29a898 req-fd14ed9c-efa3-4008-b858-b6e0a8584961 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Refreshing network info cache for port fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.446 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Start _get_guest_xml network_info=[{"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.457 2 WARNING nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.462 2 DEBUG nova.virt.libvirt.host [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.464 2 DEBUG nova.virt.libvirt.host [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.476 2 DEBUG nova.virt.libvirt.host [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.477 2 DEBUG nova.virt.libvirt.host [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.478 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.479 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.480 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.480 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.481 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.481 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.481 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.482 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.483 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.483 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.484 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.484 2 DEBUG nova.virt.hardware [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.489 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:25:46 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071068035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.877 2 DEBUG nova.network.neutron [req-436bf232-ffb1-4d88-8df6-2bd1157985bc req-ceeda165-a616-44a7-b37f-137a71e6c378 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Updated VIF entry in instance network info cache for port d444b4b5-5243-48c2-80dd-3074b56d4277. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.878 2 DEBUG nova.network.neutron [req-436bf232-ffb1-4d88-8df6-2bd1157985bc req-ceeda165-a616-44a7-b37f-137a71e6c378 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Updating instance_info_cache with network_info: [{"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.895 2 DEBUG oslo_concurrency.lockutils [req-436bf232-ffb1-4d88-8df6-2bd1157985bc req-ceeda165-a616-44a7-b37f-137a71e6c378 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.914 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.915 2 DEBUG nova.virt.libvirt.vif [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:25:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1342038803',display_name='tempest-ServerActionsTestJSON-server-1342038803',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1342038803',id=10,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAI0U91Y0bWTV/Ghqu9u2/YJCcMsSuIqJJqou9MoVKU4IwCcE850tanFrdmeQ7ELHWboekr6vOg1XXLEEvVERh2sZ+QqKxSzY5UgETm25CB7b1mAR5wQF+48QlfVG8JFgw==',key_name='tempest-keypair-881855979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8ac8b91115c2483686f9dc31c58b49fc',ramdisk_id='',reservation_id='r-5st1uthq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-136578470',owner_user_name='tempest-ServerActionsTestJSON-136578470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:25:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a98b98aa35184e41a4ae6e74ba3a32e6',uuid=f7465889-4aed-4799-835b-1c604f730144,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.915 2 DEBUG nova.network.os_vif_util [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converting VIF {"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.916 2 DEBUG nova.network.os_vif_util [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.917 2 DEBUG nova.objects.instance [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lazy-loading 'pci_devices' on Instance uuid f7465889-4aed-4799-835b-1c604f730144 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.949 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <uuid>f7465889-4aed-4799-835b-1c604f730144</uuid>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <name>instance-0000000a</name>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <nova:name>tempest-ServerActionsTestJSON-server-1342038803</nova:name>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:25:45</nova:creationTime>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <nova:user uuid="a98b98aa35184e41a4ae6e74ba3a32e6">tempest-ServerActionsTestJSON-136578470-project-member</nova:user>
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <nova:project uuid="8ac8b91115c2483686f9dc31c58b49fc">tempest-ServerActionsTestJSON-136578470</nova:project>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <nova:port uuid="d444b4b5-5243-48c2-80dd-3074b56d4277">
Oct  3 11:25:46 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <system>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <entry name="serial">f7465889-4aed-4799-835b-1c604f730144</entry>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <entry name="uuid">f7465889-4aed-4799-835b-1c604f730144</entry>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </system>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <os>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  </os>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <features>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  </features>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/f7465889-4aed-4799-835b-1c604f730144_disk">
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      </source>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/f7465889-4aed-4799-835b-1c604f730144_disk.config">
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      </source>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:25:46 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:5d:8a:bc"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <target dev="tapd444b4b5-52"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/console.log" append="off"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <video>
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </video>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:25:46 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:25:46 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:25:46 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:25:46 compute-0 nova_compute[351685]: </domain>
Oct  3 11:25:46 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.952 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Preparing to wait for external event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.952 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.953 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.953 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.954 2 DEBUG nova.virt.libvirt.vif [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:25:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1342038803',display_name='tempest-ServerActionsTestJSON-server-1342038803',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1342038803',id=10,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAI0U91Y0bWTV/Ghqu9u2/YJCcMsSuIqJJqou9MoVKU4IwCcE850tanFrdmeQ7ELHWboekr6vOg1XXLEEvVERh2sZ+QqKxSzY5UgETm25CB7b1mAR5wQF+48QlfVG8JFgw==',key_name='tempest-keypair-881855979',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8ac8b91115c2483686f9dc31c58b49fc',ramdisk_id='',reservation_id='r-5st1uthq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-136578470',owner_user_name='tempest-ServerActionsTestJSON-136578470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:25:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a98b98aa35184e41a4ae6e74ba3a32e6',uuid=f7465889-4aed-4799-835b-1c604f730144,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.955 2 DEBUG nova.network.os_vif_util [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converting VIF {"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.956 2 DEBUG nova.network.os_vif_util [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:25:46 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1422949783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.957 2 DEBUG os_vif [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.964 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.971 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd444b4b5-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.972 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd444b4b5-52, col_values=(('external_ids', {'iface-id': 'd444b4b5-5243-48c2-80dd-3074b56d4277', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:8a:bc', 'vm-uuid': 'f7465889-4aed-4799-835b-1c604f730144'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:46 compute-0 NetworkManager[45015]: <info>  [1759490746.9746] manager: (tapd444b4b5-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:25:46 compute-0 nova_compute[351685]: 2025-10-03 11:25:46.984 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.021 2 DEBUG nova.storage.rbd_utils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] rbd image 50697870-0565-414d-a9e6-5262e3e25e3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.028 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.051 2 INFO os_vif [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52')#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.130 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.131 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.132 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] No VIF found with MAC fa:16:3e:5d:8a:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.133 2 INFO nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Using config drive#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.180 2 DEBUG nova.storage.rbd_utils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] rbd image f7465889-4aed-4799-835b-1c604f730144_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:25:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:25:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1910954813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.492 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.493 2 DEBUG nova.virt.libvirt.vif [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2096745416',display_name='tempest-ServersTestManualDisk-server-2096745416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2096745416',id=9,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkUAVtkK2DWE2ZSkCwh/VInl1MTjNJIyl/GDnX7teszWsOy84W8+U9NLrpaz94mseNsY/tUmKpBVPmL61vRlPv7fCR411/LHp0ptZ7+pRTSF8SDQS5PzqVwycZCJmLRvA==',key_name='tempest-keypair-368348861',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aeefabefe92a4b9a95b28bf43d68c1f5',ramdisk_id='',reservation_id='r-fs9p23qy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1050468926',owner_user_name='tempest-ServersTestManualDisk-1050468926-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:25:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='018e43ba13984d3cbaef2cef945dfb9e',uuid=50697870-0565-414d-a9e6-5262e3e25e3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.494 2 DEBUG nova.network.os_vif_util [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Converting VIF {"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.495 2 DEBUG nova.network.os_vif_util [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:e4,bridge_name='br-int',has_traffic_filtering=True,id=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a,network=Network(1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd6b0308-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.497 2 DEBUG nova.objects.instance [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50697870-0565-414d-a9e6-5262e3e25e3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.516 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <uuid>50697870-0565-414d-a9e6-5262e3e25e3c</uuid>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <name>instance-00000009</name>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <nova:name>tempest-ServersTestManualDisk-server-2096745416</nova:name>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:25:46</nova:creationTime>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <nova:user uuid="018e43ba13984d3cbaef2cef945dfb9e">tempest-ServersTestManualDisk-1050468926-project-member</nova:user>
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <nova:project uuid="aeefabefe92a4b9a95b28bf43d68c1f5">tempest-ServersTestManualDisk-1050468926</nova:project>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <nova:port uuid="fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a">
Oct  3 11:25:47 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <system>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <entry name="serial">50697870-0565-414d-a9e6-5262e3e25e3c</entry>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <entry name="uuid">50697870-0565-414d-a9e6-5262e3e25e3c</entry>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </system>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <os>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  </os>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <features>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  </features>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/50697870-0565-414d-a9e6-5262e3e25e3c_disk">
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      </source>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/50697870-0565-414d-a9e6-5262e3e25e3c_disk.config">
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      </source>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:25:47 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:a4:e9:e4"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <target dev="tapfd6b0308-e6"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c/console.log" append="off"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <video>
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </video>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:25:47 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:25:47 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:25:47 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:25:47 compute-0 nova_compute[351685]: </domain>
Oct  3 11:25:47 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.517 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Preparing to wait for external event network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.517 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.517 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.517 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.518 2 DEBUG nova.virt.libvirt.vif [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2096745416',display_name='tempest-ServersTestManualDisk-server-2096745416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2096745416',id=9,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkUAVtkK2DWE2ZSkCwh/VInl1MTjNJIyl/GDnX7teszWsOy84W8+U9NLrpaz94mseNsY/tUmKpBVPmL61vRlPv7fCR411/LHp0ptZ7+pRTSF8SDQS5PzqVwycZCJmLRvA==',key_name='tempest-keypair-368348861',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aeefabefe92a4b9a95b28bf43d68c1f5',ramdisk_id='',reservation_id='r-fs9p23qy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1050468926',owner_user_name='tempest-ServersTestManualDisk-1050468926-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:25:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='018e43ba13984d3cbaef2cef945dfb9e',uuid=50697870-0565-414d-a9e6-5262e3e25e3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.518 2 DEBUG nova.network.os_vif_util [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Converting VIF {"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.519 2 DEBUG nova.network.os_vif_util [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:e4,bridge_name='br-int',has_traffic_filtering=True,id=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a,network=Network(1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd6b0308-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.519 2 DEBUG os_vif [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:e4,bridge_name='br-int',has_traffic_filtering=True,id=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a,network=Network(1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd6b0308-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.524 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd6b0308-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.525 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd6b0308-e6, col_values=(('external_ids', {'iface-id': 'fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:e9:e4', 'vm-uuid': '50697870-0565-414d-a9e6-5262e3e25e3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:47 compute-0 NetworkManager[45015]: <info>  [1759490747.5283] manager: (tapfd6b0308-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.540 2 INFO nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Creating config drive at /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/disk.config#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.548 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2smi61xq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.582 2 INFO os_vif [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:e4,bridge_name='br-int',has_traffic_filtering=True,id=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a,network=Network(1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd6b0308-e6')#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.666 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.667 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.667 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] No VIF found with MAC fa:16:3e:a4:e9:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.668 2 INFO nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Using config drive#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.724 2 DEBUG nova.storage.rbd_utils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] rbd image 50697870-0565-414d-a9e6-5262e3e25e3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.742 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2smi61xq" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 257 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.5 MiB/s wr, 44 op/s
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.809 2 DEBUG nova.storage.rbd_utils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] rbd image f7465889-4aed-4799-835b-1c604f730144_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:47 compute-0 nova_compute[351685]: 2025-10-03 11:25:47.826 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/disk.config f7465889-4aed-4799-835b-1c604f730144_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.137 2 DEBUG oslo_concurrency.processutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/disk.config f7465889-4aed-4799-835b-1c604f730144_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.138 2 INFO nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Deleting local config drive /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/disk.config because it was imported into RBD.#033[00m
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.2016] manager: (tapd444b4b5-52): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct  3 11:25:48 compute-0 kernel: tapd444b4b5-52: entered promiscuous mode
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00090|binding|INFO|Claiming lport d444b4b5-5243-48c2-80dd-3074b56d4277 for this chassis.
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00091|binding|INFO|d444b4b5-5243-48c2-80dd-3074b56d4277: Claiming fa:16:3e:5d:8a:bc 10.100.0.7
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.235 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:8a:bc 10.100.0.7'], port_security=['fa:16:3e:5d:8a:bc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f7465889-4aed-4799-835b-1c604f730144', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8ac8b91115c2483686f9dc31c58b49fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c15d67bc-31ac-4909-a6df-d8296b99758d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2b7eff5-cbee-4a08-96a7-16ae54234c96, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d444b4b5-5243-48c2-80dd-3074b56d4277) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.237 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d444b4b5-5243-48c2-80dd-3074b56d4277 in datapath 527efcd5-9efe-47de-97ae-4c1c2ca2b999 bound to our chassis#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.238 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527efcd5-9efe-47de-97ae-4c1c2ca2b999#033[00m
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00092|binding|INFO|Setting lport d444b4b5-5243-48c2-80dd-3074b56d4277 ovn-installed in OVS
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00093|binding|INFO|Setting lport d444b4b5-5243-48c2-80dd-3074b56d4277 up in Southbound
Oct  3 11:25:48 compute-0 systemd-udevd[528263]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.255 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[73fb76e5-e539-4822-8450-b8f64f81f10a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 systemd-machined[137653]: New machine qemu-9-instance-0000000a.
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.257 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527efcd5-91 in ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.259 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527efcd5-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.259 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f78c381c-04a9-4a93-8818-cf2df8689316]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.260 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[70d391e5-02aa-4d42-ba6e-544eb9137cd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.2692] device (tapd444b4b5-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:25:48 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-0000000a.
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.2737] device (tapd444b4b5-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.274 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7fa451-ff85-42f3-b452-a75cd6c6f27f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.290 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[d6da0367-476b-4b5d-ae76-86b0ce89a410]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.323 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[10d23783-c573-480c-83e3-3d625caa7aa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.3362] manager: (tap527efcd5-90): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.335 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[a567ed34-c8f6-4e65-99d7-b33a65776e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.377 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[7dba613c-0bd0-4f73-9708-46cd1f7d16c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.381 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[cab5e35f-9df5-4a05-9d3e-ab20a3836f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.4094] device (tap527efcd5-90): carrier: link connected
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.418 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[44b1f85c-e318-4e9b-9553-0620c632c6ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.429 2 INFO nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Creating config drive at /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c/disk.config#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.443 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3e6435-2359-4c15-84ef-dedea8e4c5d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527efcd5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:5d:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 993100, 'reachable_time': 36429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 528302, 'error': None, 'target': 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.444 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpin4rmxdc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.469 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2ef544-caed-416e-b8ac-65817ddeb5fc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:5d1f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 993100, 'tstamp': 993100}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 528303, 'error': None, 'target': 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.500 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[dba9ca1f-860b-408e-bdf6-bff954188c6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527efcd5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:5d:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 993100, 'reachable_time': 36429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 528305, 'error': None, 'target': 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.544 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7c5726-a4d8-469e-acdf-27d8e0c352b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.597 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpin4rmxdc" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.629 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[6893f3f7-fbe4-4579-8720-15666d1c6f27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.631 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527efcd5-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.631 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.632 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527efcd5-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:48 compute-0 kernel: tap527efcd5-90: entered promiscuous mode
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.638 2 DEBUG nova.storage.rbd_utils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] rbd image 50697870-0565-414d-a9e6-5262e3e25e3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.6401] manager: (tap527efcd5-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.646 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527efcd5-90, col_values=(('external_ids', {'iface-id': '1eb40ea8-53b0-46a1-bf82-85a3448330ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00094|binding|INFO|Releasing lport 1eb40ea8-53b0-46a1-bf82-85a3448330ac from this chassis (sb_readonly=0)
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.656 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c/disk.config 50697870-0565-414d-a9e6-5262e3e25e3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.673 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527efcd5-9efe-47de-97ae-4c1c2ca2b999.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527efcd5-9efe-47de-97ae-4c1c2ca2b999.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.675 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbc545e-0399-47f2-8041-c7d245d06db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.677 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-527efcd5-9efe-47de-97ae-4c1c2ca2b999
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/527efcd5-9efe-47de-97ae-4c1c2ca2b999.pid.haproxy
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 527efcd5-9efe-47de-97ae-4c1c2ca2b999
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.678 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'env', 'PROCESS_TAG=haproxy-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527efcd5-9efe-47de-97ae-4c1c2ca2b999.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.862 2 DEBUG oslo_concurrency.processutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c/disk.config 50697870-0565-414d-a9e6-5262e3e25e3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.863 2 INFO nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Deleting local config drive /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c/disk.config because it was imported into RBD.#033[00m
Oct  3 11:25:48 compute-0 kernel: tapfd6b0308-e6: entered promiscuous mode
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.9330] manager: (tapfd6b0308-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Oct  3 11:25:48 compute-0 systemd-udevd[528296]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00095|binding|INFO|Claiming lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a for this chassis.
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00096|binding|INFO|fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a: Claiming fa:16:3e:a4:e9:e4 10.100.0.7
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:48.944 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:e9:e4 10.100.0.7'], port_security=['fa:16:3e:a4:e9:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '50697870-0565-414d-a9e6-5262e3e25e3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aeefabefe92a4b9a95b28bf43d68c1f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78d9cfd8-4cde-4d5f-bbcc-237a7e6b4364', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bda35277-322e-4200-96ed-1dcc2349f348, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.9485] device (tapfd6b0308-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:25:48 compute-0 NetworkManager[45015]: <info>  [1759490748.9491] device (tapfd6b0308-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00097|binding|INFO|Setting lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a ovn-installed in OVS
Oct  3 11:25:48 compute-0 ovn_controller[88471]: 2025-10-03T11:25:48Z|00098|binding|INFO|Setting lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a up in Southbound
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:48 compute-0 nova_compute[351685]: 2025-10-03 11:25:48.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:49 compute-0 systemd-machined[137653]: New machine qemu-10-instance-00000009.
Oct  3 11:25:49 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000009.
Oct  3 11:25:49 compute-0 podman[528416]: 2025-10-03 11:25:49.115784161 +0000 UTC m=+0.139639446 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:25:49 compute-0 podman[528419]: 2025-10-03 11:25:49.131593807 +0000 UTC m=+0.139717839 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 11:25:49 compute-0 podman[528418]: 2025-10-03 11:25:49.137821477 +0000 UTC m=+0.123133147 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  3 11:25:49 compute-0 podman[528417]: 2025-10-03 11:25:49.152038113 +0000 UTC m=+0.174370430 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:25:49 compute-0 podman[528420]: 2025-10-03 11:25:49.161219176 +0000 UTC m=+0.162052814 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct  3 11:25:49 compute-0 podman[528438]: 2025-10-03 11:25:49.169188162 +0000 UTC m=+0.161330972 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
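The health_status events above all share one wire format: a 64-hex container ID followed by a parenthesized key=value list carrying image, name, health_status, health_failing_streak and the full config_data dict. A minimal Python sketch for pulling the verdict out of such a line; the field order (image, name, health_status) is assumed from these samples only, so a production parser should split on key=value pairs rather than rely on it:

    import re

    # Matches podman "container health_status" journal lines as seen above.
    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]{64}) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
        r"health_status=(?P<status>[^,]+)"
    )

    def parse_health_event(line):
        m = HEALTH_RE.search(line)
        return m.groupdict() if m else None  # e.g. status='healthy'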
Oct  3 11:25:49 compute-0 podman[528504]: 2025-10-03 11:25:49.195761314 +0000 UTC m=+0.112985163 container create 30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.198 2 DEBUG nova.network.neutron [req-53b55071-66aa-4691-81ca-a0a31d29a898 req-fd14ed9c-efa3-4008-b858-b6e0a8584961 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Updated VIF entry in instance network info cache for port fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.199 2 DEBUG nova.network.neutron [req-53b55071-66aa-4691-81ca-a0a31d29a898 req-fd14ed9c-efa3-4008-b858-b6e0a8584961 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Updating instance_info_cache with network_info: [{"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
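The instance_info_cache payload above is a plain JSON list of VIFs, with the fixed address nested under network.subnets[].ips[]. A small helper, assuming the network_info list has already been cut out of the log line and passed through json.loads():

    def fixed_ips(network_info):
        # network_info: the parsed VIF list as logged above
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield vif["id"], ip["address"]

    # For the entry above this yields
    # ("fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "10.100.0.7").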
Oct  3 11:25:49 compute-0 podman[528437]: 2025-10-03 11:25:49.208704868 +0000 UTC m=+0.204726711 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.220 2 DEBUG oslo_concurrency.lockutils [req-53b55071-66aa-4691-81ca-a0a31d29a898 req-fd14ed9c-efa3-4008-b858-b6e0a8584961 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:49 compute-0 podman[528504]: 2025-10-03 11:25:49.146163524 +0000 UTC m=+0.063387383 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:25:49 compute-0 systemd[1]: Started libpod-conmon-30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081.scope.
Oct  3 11:25:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4921de8be30ede5a84cf1958a003657de1f2781f2e8195583c65d2fd26e866/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:49 compute-0 podman[528504]: 2025-10-03 11:25:49.303556739 +0000 UTC m=+0.220780598 container init 30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:25:49 compute-0 podman[528504]: 2025-10-03 11:25:49.313736205 +0000 UTC m=+0.230960054 container start 30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:25:49 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[528588]: [NOTICE]   (528592) : New worker (528594) forked
Oct  3 11:25:49 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[528588]: [NOTICE]   (528592) : Loading success.
Oct  3 11:25:49 compute-0 ovn_controller[88471]: 2025-10-03T11:25:49Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:16:9e 10.100.0.12
Oct  3 11:25:49 compute-0 ovn_controller[88471]: 2025-10-03T11:25:49Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:16:9e 10.100.0.12
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.435 284328 INFO neutron.agent.ovn.metadata.agent [-] Port fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a in datapath 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 unbound from our chassis#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.437 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.453 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[a9248dab-f870-455d-b9fd-273452cd2e81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.454 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c8bf64c-91 in ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.456 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c8bf64c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.457 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[6265adc6-3b42-4288-89df-3df3703c7fd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.461 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f659d1dd-3942-46a6-b259-24c7328be0a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.468 2 DEBUG nova.compute.manager [req-4f4d6b3f-9e1c-43f3-9f04-40100650fba2 req-ac70972e-853c-4790-99e3-85f5599c8ed0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.470 2 DEBUG oslo_concurrency.lockutils [req-4f4d6b3f-9e1c-43f3-9f04-40100650fba2 req-ac70972e-853c-4790-99e3-85f5599c8ed0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.471 2 DEBUG oslo_concurrency.lockutils [req-4f4d6b3f-9e1c-43f3-9f04-40100650fba2 req-ac70972e-853c-4790-99e3-85f5599c8ed0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.471 2 DEBUG oslo_concurrency.lockutils [req-4f4d6b3f-9e1c-43f3-9f04-40100650fba2 req-ac70972e-853c-4790-99e3-85f5599c8ed0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
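The Acquiring/acquired/released triple above is the standard trace emitted by oslo.concurrency's lock helpers; nova serializes per-instance event handling on a "<uuid>-events" lock. A sketch of the pattern, with the lock name copied from the log:

    from oslo_concurrency import lockutils

    # Produces the same acquire/release DEBUG lines as above; the body
    # runs with the per-instance events lock held.
    with lockutils.lock('f7465889-4aed-4799-835b-1c604f730144-events'):
        pass  # nova's _pop_event() work happens here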
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.471 2 DEBUG nova.compute.manager [req-4f4d6b3f-9e1c-43f3-9f04-40100650fba2 req-ac70972e-853c-4790-99e3-85f5599c8ed0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Processing event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.481 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a16ff2-d81f-414b-8e9e-8bd598bc41f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.512 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[09ec679f-7316-49a7-8100-72bdd99f0c7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.539 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[5acd8e16-870b-45d5-b6f4-2ee29cfe4632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.554 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d3edee-1983-42de-a154-fad71cffaf7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 NetworkManager[45015]: <info>  [1759490749.5553] manager: (tap1c8bf64c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.594 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[db4a843c-7a7d-4036-ab43-a40ee9959ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.598 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[25a7b162-cbdb-42f8-a0ac-5f80480da0c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 NetworkManager[45015]: <info>  [1759490749.6255] device (tap1c8bf64c-90): carrier: link connected
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.631 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[91541269-d0de-4fcb-b881-7e6d1c51f6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.649 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[908f4abb-9cdf-48c0-84a4-2d6de7979844]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c8bf64c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:7c:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 993222, 'reachable_time': 23241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 528614, 'error': None, 'target': 'ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.675 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[c6813d33-e82d-4e99-9182-f841357c3be6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:7c17'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 993222, 'tstamp': 993222}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 528615, 'error': None, 'target': 'ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.698 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[578618a0-f806-4f58-88b4-7592cd23ec1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c8bf64c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:7c:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 993222, 'reachable_time': 23241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 528616, 'error': None, 'target': 'ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
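The two RTM_NEWLINK replies above are pyroute2 netlink messages that the privsep daemon fetched from inside the ovnmeta- namespace (note the 'target' field in each header). The equivalent standalone query, as a sketch that assumes the pyroute2 package and root access, with the namespace name copied from the log:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89') as ns:
        for msg in ns.get_links():
            print(msg.get_attr('IFLA_IFNAME'),     # e.g. tap1c8bf64c-91
                  msg.get_attr('IFLA_OPERSTATE'),  # UP
                  msg.get_attr('IFLA_ADDRESS'))    # fa:16:3e:fc:7c:17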
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.729 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490749.728772, f7465889-4aed-4799-835b-1c604f730144 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.729 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] VM Started (Lifecycle Event)#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.731 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.734 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecbdc93-0a53-4175-86e7-ef05fbca9876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.737 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.742 2 INFO nova.virt.libvirt.driver [-] [instance: f7465889-4aed-4799-835b-1c604f730144] Instance spawned successfully.#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.742 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.748 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.753 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
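The comparison "DB power_state: 0, VM power_state: 1" above uses nova.compute.power_state constants: the database still records NOSTATE while libvirt already reports RUNNING, and because task_state is still "spawning" the sync is skipped (see the "Skip" line further down). For reference:

    # nova.compute.power_state values referenced in the line above
    POWER_STATES = {
        0: 'NOSTATE',    # DB value: instance not yet started
        1: 'RUNNING',    # VM value reported by the hypervisor
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }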
Oct  3 11:25:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3559: 321 pgs: 321 active+clean; 263 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 4.2 MiB/s wr, 76 op/s
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.763 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.763 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.763 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.764 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.764 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.764 2 DEBUG nova.virt.libvirt.driver [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.769 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.769 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490749.7289052, f7465889-4aed-4799-835b-1c604f730144 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.770 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] VM Paused (Lifecycle Event)#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.797 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.802 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490749.7350287, f7465889-4aed-4799-835b-1c604f730144 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.802 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] VM Resumed (Lifecycle Event)#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.818 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.819 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f50c5d7e-61c1-465c-a559-3ada5c2e51b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.821 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c8bf64c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.821 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.821 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c8bf64c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.823 2 INFO nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Took 7.53 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.824 2 DEBUG nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:49 compute-0 NetworkManager[45015]: <info>  [1759490749.8250] manager: (tap1c8bf64c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct  3 11:25:49 compute-0 kernel: tap1c8bf64c-90: entered promiscuous mode
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.829 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c8bf64c-90, col_values=(('external_ids', {'iface-id': 'ef402e1b-40ed-4955-853a-cc6d0ccb646a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
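The DelPortCommand, AddPortCommand and DbSetCommand transactions above are ovsdbapp operations: the agent removes tap1c8bf64c-90 from br-ex if it is there, plugs it into br-int, and stamps the OVN iface-id on the interface. A sketch of the same calls through ovsdbapp's public API; the socket path is an assumption, since neutron builds this connection internally:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap1c8bf64c-90', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap1c8bf64c-90', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap1c8bf64c-90',
            ('external_ids',
             {'iface-id': 'ef402e1b-40ed-4955-853a-cc6d0ccb646a'})))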
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:49 compute-0 ovn_controller[88471]: 2025-10-03T11:25:49Z|00099|binding|INFO|Releasing lport ef402e1b-40ed-4955-853a-cc6d0ccb646a from this chassis (sb_readonly=0)
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.837 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.854 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
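The "Unable to access ... pid.haproxy" DEBUG above is the agent's routine pre-check for an already-running proxy, not a failure: the missing-file error is swallowed and the agent proceeds to render a fresh config. A sketch of that defensive read, assuming the same return-None-on-absence contract as the logged get_value_from_file:

    def get_value_from_file(path, converter=None):
        # A missing pidfile is expected on first provisioning and
        # simply yields None, as in the DEBUG line above.
        try:
            with open(path) as f:
                data = f.read().strip()
            return converter(data) if converter else data
        except (IOError, ValueError):
            return None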
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.856 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5af47675-38f6-41c5-b9a6-db136ded3a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.857 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89.pid.haproxy
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 11:25:49 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:49.858 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'env', 'PROCESS_TAG=haproxy-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
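With the config rendered and haproxy spawned inside the namespace by the rootwrap command above, the proxy binds 169.254.169.254:80 and relays requests to the /var/lib/neutron/metadata_proxy UNIX socket, adding the X-OVN-Network-ID header from the config. A quick smoke test from the compute node, as a sketch (requires root; namespace name copied from the log):

    import subprocess

    # Hit the freshly provisioned metadata proxy from inside its namespace.
    subprocess.run(
        ['ip', 'netns', 'exec',
         'ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89',
         'curl', '-sS', 'http://169.254.169.254/'],
        check=False)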
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.863 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.904 2 INFO nova.compute.manager [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Took 8.60 seconds to build instance.#033[00m
Oct  3 11:25:49 compute-0 nova_compute[351685]: 2025-10-03 11:25:49.925 2 DEBUG oslo_concurrency.lockutils [None req-3ad1534e-0213-499e-b013-f66ecac64cf7 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.049 2 DEBUG nova.compute.manager [req-81c6ce3b-cc4e-4551-9b54-bac02c890231 req-d0d17116-32cf-4951-9855-9eb42aedbb6a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Received event network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.050 2 DEBUG oslo_concurrency.lockutils [req-81c6ce3b-cc4e-4551-9b54-bac02c890231 req-d0d17116-32cf-4951-9855-9eb42aedbb6a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.050 2 DEBUG oslo_concurrency.lockutils [req-81c6ce3b-cc4e-4551-9b54-bac02c890231 req-d0d17116-32cf-4951-9855-9eb42aedbb6a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.050 2 DEBUG oslo_concurrency.lockutils [req-81c6ce3b-cc4e-4551-9b54-bac02c890231 req-d0d17116-32cf-4951-9855-9eb42aedbb6a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.050 2 DEBUG nova.compute.manager [req-81c6ce3b-cc4e-4551-9b54-bac02c890231 req-d0d17116-32cf-4951-9855-9eb42aedbb6a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Processing event network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.328 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:25:50 compute-0 podman[528690]: 2025-10-03 11:25:50.363893792 +0000 UTC m=+0.086950878 container create 050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:25:50 compute-0 podman[528690]: 2025-10-03 11:25:50.320335005 +0000 UTC m=+0.043392091 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:25:50 compute-0 systemd[1]: Started libpod-conmon-050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1.scope.
Oct  3 11:25:50 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ea616b0eca54b29b6140fbfa22045fdd856eb7cdc14e00fe52a6b7c409bd8b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:25:50 compute-0 podman[528690]: 2025-10-03 11:25:50.511025617 +0000 UTC m=+0.234082693 container init 050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:25:50 compute-0 podman[528690]: 2025-10-03 11:25:50.522424772 +0000 UTC m=+0.245481888 container start 050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  3 11:25:50 compute-0 neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89[528705]: [NOTICE]   (528709) : New worker (528711) forked
Oct  3 11:25:50 compute-0 neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89[528705]: [NOTICE]   (528709) : Loading success.
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.852 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490750.851941, 50697870-0565-414d-a9e6-5262e3e25e3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.853 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] VM Started (Lifecycle Event)
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.855 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.859 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.865 2 INFO nova.virt.libvirt.driver [-] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Instance spawned successfully.
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.865 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.880 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.889 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.893 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.894 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.894 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.895 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.895 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.896 2 DEBUG nova.virt.libvirt.driver [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.923 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.923 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490750.8520489, 50697870-0565-414d-a9e6-5262e3e25e3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.924 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] VM Paused (Lifecycle Event)
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.958 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.963 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490750.8584378, 50697870-0565-414d-a9e6-5262e3e25e3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.964 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] VM Resumed (Lifecycle Event)
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.968 2 INFO nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Took 12.04 seconds to spawn the instance on the hypervisor.
Oct  3 11:25:50 compute-0 nova_compute[351685]: 2025-10-03 11:25:50.969 2 DEBUG nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:51 compute-0 nova_compute[351685]: 2025-10-03 11:25:51.128 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:25:51 compute-0 nova_compute[351685]: 2025-10-03 11:25:51.140 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
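The two "Synchronizing instance power state" lines compare the DB power_state 0 against libvirt's 1, but the pending task_state 'spawning' produces the "Skip." line instead of a sync. An illustrative sketch of that decision, with constants that mirror nova.compute.power_state (the helper itself is not nova code):

```python
# NOSTATE=0 and RUNNING=1 mirror nova.compute.power_state.
NOSTATE, RUNNING = 0, 1

def should_sync_power_state(db_state: int, vm_state: int, task_state) -> bool:
    if task_state is not None:   # e.g. 'spawning' while vm_state is building
        return False             # -> "... has a pending task (spawning). Skip."
    return db_state != vm_state  # otherwise reconcile the DB with the hypervisor

assert should_sync_power_state(NOSTATE, RUNNING, "spawning") is False
assert should_sync_power_state(NOSTATE, RUNNING, None) is True
```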
Oct  3 11:25:51 compute-0 nova_compute[351685]: 2025-10-03 11:25:51.183 2 INFO nova.compute.manager [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Took 13.21 seconds to build instance.
Oct  3 11:25:51 compute-0 nova_compute[351685]: 2025-10-03 11:25:51.199 2 DEBUG oslo_concurrency.lockutils [None req-b0787132-1fda-4138-94e2-69df4bad4292 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
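The Acquiring / acquired :: waited / "released" :: held triplets throughout this log come from oslo.concurrency's synchronized wrapper (the inner frames at lockutils.py:404/409/423). A minimal sketch of the pattern, assuming oslo.concurrency is installed and reusing the instance UUID above as the lock name:

```python
from oslo_concurrency import lockutils

# The decorator emits the same DEBUG triplet seen above around the wrapped
# call, including the held time (13.314s in the log). The body is a
# placeholder, not nova's actual build logic.
@lockutils.synchronized("50697870-0565-414d-a9e6-5262e3e25e3c")
def _locked_do_build_and_run_instance():
    pass  # guarded build work happens here

_locked_do_build_and_run_instance()
```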
Oct  3 11:25:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 274 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 238 KiB/s rd, 5.0 MiB/s wr, 104 op/s
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.538 2 DEBUG nova.compute.manager [req-e2f3344a-efb8-4ea0-8e85-d8db680302d0 req-b28a3064-9fd5-4c30-8812-a8d69390ff3c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.538 2 DEBUG oslo_concurrency.lockutils [req-e2f3344a-efb8-4ea0-8e85-d8db680302d0 req-b28a3064-9fd5-4c30-8812-a8d69390ff3c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.538 2 DEBUG oslo_concurrency.lockutils [req-e2f3344a-efb8-4ea0-8e85-d8db680302d0 req-b28a3064-9fd5-4c30-8812-a8d69390ff3c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.539 2 DEBUG oslo_concurrency.lockutils [req-e2f3344a-efb8-4ea0-8e85-d8db680302d0 req-b28a3064-9fd5-4c30-8812-a8d69390ff3c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.539 2 DEBUG nova.compute.manager [req-e2f3344a-efb8-4ea0-8e85-d8db680302d0 req-b28a3064-9fd5-4c30-8812-a8d69390ff3c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] No waiting events found dispatching network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.540 2 WARNING nova.compute.manager [req-e2f3344a-efb8-4ea0-8e85-d8db680302d0 req-b28a3064-9fd5-4c30-8812-a8d69390ff3c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received unexpected event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 for instance with vm_state active and task_state None.
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.555 2 DEBUG nova.compute.manager [req-69a99db5-e826-491a-9466-1b78fc24a02b req-864c0130-fea8-4f6c-9c19-9e2e5cc6ef7b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Received event network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.556 2 DEBUG oslo_concurrency.lockutils [req-69a99db5-e826-491a-9466-1b78fc24a02b req-864c0130-fea8-4f6c-9c19-9e2e5cc6ef7b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.557 2 DEBUG oslo_concurrency.lockutils [req-69a99db5-e826-491a-9466-1b78fc24a02b req-864c0130-fea8-4f6c-9c19-9e2e5cc6ef7b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.557 2 DEBUG oslo_concurrency.lockutils [req-69a99db5-e826-491a-9466-1b78fc24a02b req-864c0130-fea8-4f6c-9c19-9e2e5cc6ef7b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.559 2 DEBUG nova.compute.manager [req-69a99db5-e826-491a-9466-1b78fc24a02b req-864c0130-fea8-4f6c-9c19-9e2e5cc6ef7b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] No waiting events found dispatching network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.559 2 WARNING nova.compute.manager [req-69a99db5-e826-491a-9466-1b78fc24a02b req-864c0130-fea8-4f6c-9c19-9e2e5cc6ef7b 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Received unexpected event network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a for instance with vm_state active and task_state None.
Oct  3 11:25:52 compute-0 nova_compute[351685]: 2025-10-03 11:25:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:25:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 284 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.4 MiB/s wr, 144 op/s
Oct  3 11:25:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:25:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2999182327' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:25:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:25:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2999182327' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:25:54 compute-0 nova_compute[351685]: 2025-10-03 11:25:54.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:54 compute-0 nova_compute[351685]: 2025-10-03 11:25:54.663 2 DEBUG nova.compute.manager [req-2d12f949-8e6b-4966-be03-6b1a47227758 req-c7e2dd29-7bb3-44e3-97e5-ec7fa9791ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-changed-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:54 compute-0 nova_compute[351685]: 2025-10-03 11:25:54.664 2 DEBUG nova.compute.manager [req-2d12f949-8e6b-4966-be03-6b1a47227758 req-c7e2dd29-7bb3-44e3-97e5-ec7fa9791ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Refreshing instance network info cache due to event network-changed-d444b4b5-5243-48c2-80dd-3074b56d4277. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 11:25:54 compute-0 nova_compute[351685]: 2025-10-03 11:25:54.664 2 DEBUG oslo_concurrency.lockutils [req-2d12f949-8e6b-4966-be03-6b1a47227758 req-c7e2dd29-7bb3-44e3-97e5-ec7fa9791ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:25:54 compute-0 nova_compute[351685]: 2025-10-03 11:25:54.664 2 DEBUG oslo_concurrency.lockutils [req-2d12f949-8e6b-4966-be03-6b1a47227758 req-c7e2dd29-7bb3-44e3-97e5-ec7fa9791ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:25:54 compute-0 nova_compute[351685]: 2025-10-03 11:25:54.665 2 DEBUG nova.network.neutron [req-2d12f949-8e6b-4966-be03-6b1a47227758 req-c7e2dd29-7bb3-44e3-97e5-ec7fa9791ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Refreshing network info cache for port d444b4b5-5243-48c2-80dd-3074b56d4277 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 11:25:54 compute-0 nova_compute[351685]: 2025-10-03 11:25:54.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:55 compute-0 nova_compute[351685]: 2025-10-03 11:25:55.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:25:55 compute-0 nova_compute[351685]: 2025-10-03 11:25:55.750 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:55 compute-0 nova_compute[351685]: 2025-10-03 11:25:55.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:55 compute-0 nova_compute[351685]: 2025-10-03 11:25:55.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:55 compute-0 nova_compute[351685]: 2025-10-03 11:25:55.752 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:25:55 compute-0 nova_compute[351685]: 2025-10-03 11:25:55.752 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 291 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 199 op/s
Oct  3 11:25:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:25:56 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371949742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.302 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
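The resource audit shells out to the ceph df command shown in the CMD line above (0.549s here). A minimal stand-in, with subprocess in place of oslo_concurrency.processutils.execute; the 'stats' block and its total_bytes/total_avail_bytes fields are assumed from ceph's JSON output schema:

```python
import json
import subprocess

# Same command line as the log; requires a reachable cluster and the
# client.openstack keyring referenced by /etc/ceph/ceph.conf.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    text=True,
)
stats = json.loads(out)["stats"]
print("total: %.1f GiB, avail: %.1f GiB"
      % (stats["total_bytes"] / 2**30, stats["total_avail_bytes"] / 2**30))
```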
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.439 2 DEBUG nova.network.neutron [req-2d12f949-8e6b-4966-be03-6b1a47227758 req-c7e2dd29-7bb3-44e3-97e5-ec7fa9791ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Updated VIF entry in instance network info cache for port d444b4b5-5243-48c2-80dd-3074b56d4277. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.440 2 DEBUG nova.network.neutron [req-2d12f949-8e6b-4966-be03-6b1a47227758 req-c7e2dd29-7bb3-44e3-97e5-ec7fa9791ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Updating instance_info_cache with network_info: [{"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
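The network_info entry dumped above nests subnets, fixed IPs, and floating IPs several levels deep. A small sketch that walks one such entry ('vif' stands for the dict logged for port d444b4b5-...; only keys present in that line are touched):

```python
def addresses(vif: dict):
    """Yield (fixed_ip, [floating_ips]) pairs from one network_info entry."""
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            yield ip["address"], [f["address"] for f in ip.get("floating_ips", [])]

# For the entry above this yields ("10.100.0.7", ["192.168.122.185"]).
```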
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.448 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.449 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.458 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.459 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.461 2 DEBUG oslo_concurrency.lockutils [req-2d12f949-8e6b-4966-be03-6b1a47227758 req-c7e2dd29-7bb3-44e3-97e5-ec7fa9791ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.471 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.471 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.480 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.480 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.481 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00200635042244579 of space, bias 1.0, pg target 0.6019051267337371 quantized to 32 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:25:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
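The pg_autoscaler targets above follow used_ratio × bias × (mon_target_pg_per_osd × OSD count). Assuming the default mon_target_pg_per_osd=100 and a 3-OSD cluster (both assumptions, giving a factor of 300), the logged raw targets reproduce exactly:

```python
# Worked check against the pg_autoscaler lines above.
def raw_pg_target(used_ratio: float, bias: float,
                  osds: int = 3, target_per_osd: int = 100) -> float:
    return used_ratio * bias * target_per_osd * osds

print(raw_pg_target(0.00200635042244579, 1.0))    # 0.6019051... (pool 'vms')
print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.00061047... ('cephfs.cephfs.meta')
# The raw target is then rounded to a power of two and clamped by pg_num_min
# and change-threshold hysteresis, yielding "quantized to 32 (current 32)".
```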
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.974 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.976 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3242MB free_disk=59.867923736572266GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.976 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:56 compute-0 nova_compute[351685]: 2025-10-03 11:25:56.977 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.073 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.073 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b5df7002-5185-4a75-ae2e-e8a44a0be062 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.074 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 50697870-0565-414d-a9e6-5262e3e25e3c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.074 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance f7465889-4aed-4799-835b-1c604f730144 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.075 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.075 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.164 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.453 2 DEBUG nova.compute.manager [req-65f9f1b6-450e-4ad0-a18f-0f71f0782e3b req-cf0d15a0-4008-4542-8b10-259fbce0e39f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Received event network-changed-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.454 2 DEBUG nova.compute.manager [req-65f9f1b6-450e-4ad0-a18f-0f71f0782e3b req-cf0d15a0-4008-4542-8b10-259fbce0e39f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Refreshing instance network info cache due to event network-changed-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.455 2 DEBUG oslo_concurrency.lockutils [req-65f9f1b6-450e-4ad0-a18f-0f71f0782e3b req-cf0d15a0-4008-4542-8b10-259fbce0e39f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.455 2 DEBUG oslo_concurrency.lockutils [req-65f9f1b6-450e-4ad0-a18f-0f71f0782e3b req-cf0d15a0-4008-4542-8b10-259fbce0e39f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.455 2 DEBUG nova.network.neutron [req-65f9f1b6-450e-4ad0-a18f-0f71f0782e3b req-cf0d15a0-4008-4542-8b10-259fbce0e39f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Refreshing network info cache for port fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:25:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4107331883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.631 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.640 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.656 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
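Placement turns that inventory into schedulable capacity per resource class as (total - reserved) × allocation_ratio. A worked check against the values logged for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a:

```python
# Inventory values copied from the log line above (non-sizing keys omitted).
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```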
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.686 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:25:57 compute-0 nova_compute[351685]: 2025-10-03 11:25:57.686 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 291 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 192 op/s
Oct  3 11:25:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.385 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "50697870-0565-414d-a9e6-5262e3e25e3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.387 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.388 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.389 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.390 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.393 2 INFO nova.compute.manager [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Terminating instance
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.396 2 DEBUG nova.compute.manager [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  3 11:25:58 compute-0 kernel: tapfd6b0308-e6 (unregistering): left promiscuous mode
Oct  3 11:25:58 compute-0 NetworkManager[45015]: <info>  [1759490758.4800] device (tapfd6b0308-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00100|binding|INFO|Releasing lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a from this chassis (sb_readonly=0)
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00101|binding|INFO|Setting lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a down in Southbound
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00102|binding|INFO|Removing iface tapfd6b0308-e6 ovn-installed in OVS
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.511 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:e9:e4 10.100.0.7'], port_security=['fa:16:3e:a4:e9:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '50697870-0565-414d-a9e6-5262e3e25e3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aeefabefe92a4b9a95b28bf43d68c1f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78d9cfd8-4cde-4d5f-bbcc-237a7e6b4364', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bda35277-322e-4200-96ed-1dcc2349f348, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.512 284328 INFO neutron.agent.ovn.metadata.agent [-] Port fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a in datapath 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 unbound from our chassis
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.514 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.515 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[eb899a1e-264f-41ad-89c4-e7c4a2a83a82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.516 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 namespace which is not needed anymore
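The "Matched UPDATE: PortBindingUpdatedEvent(...)" dumps come from an ovsdbapp RowEvent watching the Port_Binding table. A skeletal version whose constructor arguments mirror the repr in the log (events=('update',), table='Port_Binding', conditions=None); neutron's real handler does the namespace and chassis bookkeeping seen above:

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def run(self, event, row, old):
        # Called when a Port_Binding row changes, e.g. when lport
        # fd6b0308-e61b-... is bound to or released from this chassis.
        print("Port_Binding update:", row.logical_port)
```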
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:58 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct  3 11:25:58 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000009.scope: Consumed 9.299s CPU time.
Oct  3 11:25:58 compute-0 systemd-machined[137653]: Machine qemu-10-instance-00000009 terminated.
Oct  3 11:25:58 compute-0 kernel: tapfd6b0308-e6: entered promiscuous mode
Oct  3 11:25:58 compute-0 NetworkManager[45015]: <info>  [1759490758.6193] manager: (tapfd6b0308-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Oct  3 11:25:58 compute-0 kernel: tapfd6b0308-e6 (unregistering): left promiscuous mode
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00103|binding|INFO|Claiming lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a for this chassis.
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00104|binding|INFO|fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a: Claiming fa:16:3e:a4:e9:e4 10.100.0.7
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.641 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:e9:e4 10.100.0.7'], port_security=['fa:16:3e:a4:e9:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '50697870-0565-414d-a9e6-5262e3e25e3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aeefabefe92a4b9a95b28bf43d68c1f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78d9cfd8-4cde-4d5f-bbcc-237a7e6b4364', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bda35277-322e-4200-96ed-1dcc2349f348, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.655 2 INFO nova.virt.libvirt.driver [-] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Instance destroyed successfully.
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.655 2 DEBUG nova.objects.instance [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lazy-loading 'resources' on Instance uuid 50697870-0565-414d-a9e6-5262e3e25e3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00105|binding|INFO|Setting lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a ovn-installed in OVS
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00106|binding|INFO|Setting lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a up in Southbound
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00107|binding|INFO|Releasing lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a from this chassis (sb_readonly=1)
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00108|if_status|INFO|Not setting lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a down as sb is readonly
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00109|binding|INFO|Removing iface tapfd6b0308-e6 ovn-installed in OVS
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00110|binding|INFO|Releasing lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a from this chassis (sb_readonly=0)
Oct  3 11:25:58 compute-0 ovn_controller[88471]: 2025-10-03T11:25:58Z|00111|binding|INFO|Setting lport fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a down in Southbound
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.688 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:e9:e4 10.100.0.7'], port_security=['fa:16:3e:a4:e9:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '50697870-0565-414d-a9e6-5262e3e25e3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aeefabefe92a4b9a95b28bf43d68c1f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78d9cfd8-4cde-4d5f-bbcc-237a7e6b4364', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bda35277-322e-4200-96ed-1dcc2349f348, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.689 2 DEBUG nova.virt.libvirt.vif [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:25:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2096745416',display_name='tempest-ServersTestManualDisk-server-2096745416',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2096745416',id=9,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPkUAVtkK2DWE2ZSkCwh/VInl1MTjNJIyl/GDnX7teszWsOy84W8+U9NLrpaz94mseNsY/tUmKpBVPmL61vRlPv7fCR411/LHp0ptZ7+pRTSF8SDQS5PzqVwycZCJmLRvA==',key_name='tempest-keypair-368348861',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:25:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aeefabefe92a4b9a95b28bf43d68c1f5',ramdisk_id='',reservation_id='r-fs9p23qy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1050468926',owner_user_name='tempest-ServersTestManualDisk-1050468926-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:25:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='018e43ba13984d3cbaef2cef945dfb9e',uuid=50697870-0565-414d-a9e6-5262e3e25e3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.690 2 DEBUG nova.network.os_vif_util [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Converting VIF {"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.690 2 DEBUG nova.network.os_vif_util [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:e4,bridge_name='br-int',has_traffic_filtering=True,id=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a,network=Network(1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd6b0308-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.691 2 DEBUG os_vif [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:e4,bridge_name='br-int',has_traffic_filtering=True,id=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a,network=Network(1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd6b0308-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.694 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd6b0308-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.702 2 INFO os_vif [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:e4,bridge_name='br-int',has_traffic_filtering=True,id=fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a,network=Network(1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd6b0308-e6')#033[00m
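The unplug that just completed is the public os-vif entry point: nova converts its VIF dict to a VIFOpenVSwitch object (the "Converting VIF"/"Converted object" lines above) and hands it to os_vif.unplug(). A minimal sketch under that assumption, with field values copied from the log record:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others
    net = network.Network(id='1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89',
                          bridge='br-int')
    my_vif = vif.VIFOpenVSwitch(
        id='fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a',
        address='fa:16:3e:a4:e9:e4',
        bridge_name='br-int',
        vif_name='tapfd6b0308-e6',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='50697870-0565-414d-a9e6-5262e3e25e3c',
        name='tempest-ServersTestManualDisk-server-2096745416')
    os_vif.unplug(my_vif, inst)  # logs "Successfully unplugged vif" on success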
Oct  3 11:25:58 compute-0 neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89[528705]: [NOTICE]   (528709) : haproxy version is 2.8.14-c23fe91
Oct  3 11:25:58 compute-0 neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89[528705]: [NOTICE]   (528709) : path to executable is /usr/sbin/haproxy
Oct  3 11:25:58 compute-0 neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89[528705]: [WARNING]  (528709) : Exiting Master process...
Oct  3 11:25:58 compute-0 neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89[528705]: [ALERT]    (528709) : Current worker (528711) exited with code 143 (Terminated)
Oct  3 11:25:58 compute-0 neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89[528705]: [WARNING]  (528709) : All workers exited. Exiting... (0)
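Exit code 143 in the ALERT above follows the usual 128+N fatal-signal convention, so the worker died from signal 15 (SIGTERM): an orderly shutdown of the metadata haproxy, not a crash. Decoded:

    import signal

    exit_code = 143                              # from the haproxy ALERT above
    print(signal.Signals(exit_code - 128).name)  # -> SIGTERM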
Oct  3 11:25:58 compute-0 systemd[1]: libpod-050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1.scope: Deactivated successfully.
Oct  3 11:25:58 compute-0 conmon[528705]: conmon 050527ed5a550777da89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1.scope/container/memory.events
Oct  3 11:25:58 compute-0 podman[528790]: 2025-10-03 11:25:58.780703283 +0000 UTC m=+0.072755123 container died 050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1-userdata-shm.mount: Deactivated successfully.
Oct  3 11:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-80ea616b0eca54b29b6140fbfa22045fdd856eb7cdc14e00fe52a6b7c409bd8b-merged.mount: Deactivated successfully.
Oct  3 11:25:58 compute-0 podman[528790]: 2025-10-03 11:25:58.852037649 +0000 UTC m=+0.144089489 container cleanup 050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  3 11:25:58 compute-0 systemd[1]: libpod-conmon-050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1.scope: Deactivated successfully.
Oct  3 11:25:58 compute-0 podman[528833]: 2025-10-03 11:25:58.959278257 +0000 UTC m=+0.076818604 container remove 050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.970 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[ce9a72a8-7e02-405c-b6a6-0c74cc3a5474]: (4, ('Fri Oct  3 11:25:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 (050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1)\n050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1\nFri Oct  3 11:25:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 (050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1)\n050527ed5a550777da89e61665e1cac5aa19f766059c3b84e69e0cc175455ec1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.985 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[42cebfb8-b6a8-402a-8013-e99113db72f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:58 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:58.989 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c8bf64c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
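The DelPortCommand transactions here (and the one nova ran for tapfd6b0308-e6 above) are ovsdbapp's del_port() against the local OVS database. A minimal sketch, assuming the conventional ovsdb socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # socket path is an assumption for a typical host
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # bridge=None lets OVS locate the port itself, matching the logged command
    api.del_port('tap1c8bf64c-90', bridge=None, if_exists=True).execute(
        check_error=True)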
Oct  3 11:25:58 compute-0 nova_compute[351685]: 2025-10-03 11:25:58.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:59 compute-0 kernel: tap1c8bf64c-90: left promiscuous mode
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.025 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[49d919e3-798b-47ad-87b6-749cf2956dd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.049 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[258513c0-c0af-404c-882b-0462f3295a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.050 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5e3ab2-d80e-493f-b305-dc90117cbc6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.067 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[a944acfc-1f66-4ec7-a77d-de242de5de62]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 993213, 'reachable_time': 37862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 528849, 'error': None, 'target': 'ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d1c8bf64c\x2d9e9f\x2d4039\x2dbb96\x2dcd2eda3e3a89.mount: Deactivated successfully.
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.073 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.073 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[033522a3-8ee2-44b7-a0ff-533a52575e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
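remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2 namespace management; the equivalent standalone operation (root required) looks roughly like this:

    from pyroute2 import netns

    ns = 'ovnmeta-1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89'
    if ns in netns.listnetns():  # tolerate an already-deleted namespace
        netns.remove(ns)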
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.074 284328 INFO neutron.agent.ovn.metadata.agent [-] Port fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a in datapath 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 unbound from our chassis#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.076 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.078 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[1433cfca-8198-45ab-a203-8e73c317848e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.080 284328 INFO neutron.agent.ovn.metadata.agent [-] Port fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a in datapath 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89 unbound from our chassis#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.084 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:25:59 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:25:59.085 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3f26a9-e04b-46fc-8b7c-5c8d397e90a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.350 2 INFO nova.virt.libvirt.driver [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Deleting instance files /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c_del#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.351 2 INFO nova.virt.libvirt.driver [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Deletion of /var/lib/nova/instances/50697870-0565-414d-a9e6-5262e3e25e3c_del complete#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.376 2 DEBUG nova.network.neutron [req-65f9f1b6-450e-4ad0-a18f-0f71f0782e3b req-cf0d15a0-4008-4542-8b10-259fbce0e39f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Updated VIF entry in instance network info cache for port fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.377 2 DEBUG nova.network.neutron [req-65f9f1b6-450e-4ad0-a18f-0f71f0782e3b req-cf0d15a0-4008-4542-8b10-259fbce0e39f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Updating instance_info_cache with network_info: [{"id": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "address": "fa:16:3e:a4:e9:e4", "network": {"id": "1c8bf64c-9e9f-4039-bb96-cd2eda3e3a89", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-726477615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aeefabefe92a4b9a95b28bf43d68c1f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd6b0308-e6", "ovs_interfaceid": "fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.414 2 DEBUG oslo_concurrency.lockutils [req-65f9f1b6-450e-4ad0-a18f-0f71f0782e3b req-cf0d15a0-4008-4542-8b10-259fbce0e39f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-50697870-0565-414d-a9e6-5262e3e25e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.424 2 INFO nova.compute.manager [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.425 2 DEBUG oslo.service.loopingcall [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.425 2 DEBUG nova.compute.manager [-] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.425 2 DEBUG nova.network.neutron [-] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.687 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.688 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
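_reclaim_queued_deletes is a no-op here because reclaim_instance_interval is at its default of 0; with a positive value, deletes pass through SOFT_DELETED and are only reclaimed after the interval. A sketch of the guard with oslo.config (option name and default match nova's):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.IntOpt('reclaim_instance_interval', default=0))

    if CONF.reclaim_instance_interval <= 0:
        # matches the "skipping..." DEBUG line above
        print('CONF.reclaim_instance_interval <= 0, skipping...')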
Oct  3 11:25:59 compute-0 podman[157165]: time="2025-10-03T11:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:25:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3564: 321 pgs: 321 active+clean; 264 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 224 op/s
Oct  3 11:25:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 48733 "" "Go-http-client/1.1"
Oct  3 11:25:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10046 "" "Go-http-client/1.1"
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:25:59 compute-0 nova_compute[351685]: 2025-10-03 11:25:59.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.566 2 DEBUG nova.network.neutron [-] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.588 2 DEBUG nova.compute.manager [req-bcb3e9b6-9eb4-4f66-be3d-230b24278baa req-e75dec26-408c-41cc-a54c-20d1b85102db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Received event network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.589 2 DEBUG oslo_concurrency.lockutils [req-bcb3e9b6-9eb4-4f66-be3d-230b24278baa req-e75dec26-408c-41cc-a54c-20d1b85102db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.590 2 DEBUG oslo_concurrency.lockutils [req-bcb3e9b6-9eb4-4f66-be3d-230b24278baa req-e75dec26-408c-41cc-a54c-20d1b85102db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.590 2 DEBUG oslo_concurrency.lockutils [req-bcb3e9b6-9eb4-4f66-be3d-230b24278baa req-e75dec26-408c-41cc-a54c-20d1b85102db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
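The Acquiring / acquired / "released" triple around the per-instance event lock is oslo.concurrency's lockutils, which emits those lines itself. The two common forms, sketched:

    from oslo_concurrency import lockutils

    # context-manager form, as used for the "<uuid>-events" lock above
    with lockutils.lock('50697870-0565-414d-a9e6-5262e3e25e3c-events'):
        pass  # pop the waiting instance event here

    # decorator form, equivalent for a whole function
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass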
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.591 2 DEBUG nova.compute.manager [req-bcb3e9b6-9eb4-4f66-be3d-230b24278baa req-e75dec26-408c-41cc-a54c-20d1b85102db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] No waiting events found dispatching network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.594 2 WARNING nova.compute.manager [req-bcb3e9b6-9eb4-4f66-be3d-230b24278baa req-e75dec26-408c-41cc-a54c-20d1b85102db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Received unexpected event network-vif-plugged-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a for instance with vm_state active and task_state deleting.#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.596 2 INFO nova.compute.manager [-] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Took 1.17 seconds to deallocate network for instance.#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.669 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.670 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.676 2 DEBUG nova.compute.manager [req-a02748c3-1367-4b0b-b3f3-de5dcbed5c4c req-64c76a83-834c-4bfd-bfe2-98c3f76ca0e1 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Received event network-vif-deleted-fd6b0308-e61b-4bf6-abbf-d31fa7c9ea7a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:00 compute-0 nova_compute[351685]: 2025-10-03 11:26:00.798 2 DEBUG oslo_concurrency.processutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:00 compute-0 podman[528850]: 2025-10-03 11:26:00.856751089 +0000 UTC m=+0.106791694 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:26:00 compute-0 podman[528851]: 2025-10-03 11:26:00.882378101 +0000 UTC m=+0.125830884 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Oct  3 11:26:00 compute-0 podman[528852]: 2025-10-03 11:26:00.887301568 +0000 UTC m=+0.124415349 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3)
Oct  3 11:26:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:26:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514459493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:26:01 compute-0 nova_compute[351685]: 2025-10-03 11:26:01.259 2 DEBUG oslo_concurrency.processutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
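The "Running cmd" / "returned: 0 in 0.461s" pair is oslo.concurrency's processutils.execute(), which logs both lines around the fork. A minimal reproduction of the logged ceph call:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)  # pool/cluster usage, consumed by nova's rbd driver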
Oct  3 11:26:01 compute-0 nova_compute[351685]: 2025-10-03 11:26:01.271 2 DEBUG nova.compute.provider_tree [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:26:01 compute-0 nova_compute[351685]: 2025-10-03 11:26:01.296 2 DEBUG nova.scheduler.client.report [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
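Placement capacity from that inventory dict is (total - reserved) * allocation_ratio per resource class: 32 schedulable VCPUs, 7167 MB of RAM and 52.2 GB of disk for this host. Worked out:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2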
Oct  3 11:26:01 compute-0 nova_compute[351685]: 2025-10-03 11:26:01.320 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:01 compute-0 nova_compute[351685]: 2025-10-03 11:26:01.343 2 INFO nova.scheduler.client.report [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Deleted allocations for instance 50697870-0565-414d-a9e6-5262e3e25e3c#033[00m
Oct  3 11:26:01 compute-0 openstack_network_exporter[367524]: ERROR   11:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:26:01 compute-0 openstack_network_exporter[367524]: ERROR   11:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:26:01 compute-0 openstack_network_exporter[367524]: ERROR   11:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:26:01 compute-0 openstack_network_exporter[367524]: ERROR   11:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:26:01 compute-0 openstack_network_exporter[367524]: ERROR   11:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:26:01 compute-0 nova_compute[351685]: 2025-10-03 11:26:01.427 2 DEBUG oslo_concurrency.lockutils [None req-cb66593e-68dd-48fd-9321-51d46f735cc1 018e43ba13984d3cbaef2cef945dfb9e aeefabefe92a4b9a95b28bf43d68c1f5 - - default default] Lock "50697870-0565-414d-a9e6-5262e3e25e3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:01 compute-0 nova_compute[351685]: 2025-10-03 11:26:01.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 260 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.5 MiB/s wr, 209 op/s
Oct  3 11:26:02 compute-0 nova_compute[351685]: 2025-10-03 11:26:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:02 compute-0 nova_compute[351685]: 2025-10-03 11:26:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:03 compute-0 nova_compute[351685]: 2025-10-03 11:26:03.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:03 compute-0 nova_compute[351685]: 2025-10-03 11:26:03.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 244 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 749 KiB/s wr, 189 op/s
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.564 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "939bb9dc-5fcb-4b53-adc4-df36f016d404" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.565 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.581 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.649 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.649 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.656 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.657 2 INFO nova.compute.claims [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.822 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:04 compute-0 nova_compute[351685]: 2025-10-03 11:26:04.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:26:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695091702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.333 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.344 2 DEBUG nova.compute.provider_tree [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.363 2 DEBUG nova.scheduler.client.report [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.398 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.399 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.454 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.455 2 DEBUG nova.network.neutron [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.487 2 INFO nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.508 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.621 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.634 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.635 2 INFO nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Creating image(s)#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.680 2 DEBUG nova.storage.rbd_utils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] rbd image 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.734 2 DEBUG nova.storage.rbd_utils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] rbd image 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 244 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 75 KiB/s wr, 137 op/s
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.774 2 DEBUG nova.storage.rbd_utils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] rbd image 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.781 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.848 2 DEBUG nova.policy [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '842e5fed9314415e8fc9c491dd9efc11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '673cf473bf374c91b11ac2de62d239fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.857 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
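The oslo_concurrency.prlimit wrapper above caps the qemu-img probe at 1 GiB of address space and 30 s of CPU, so a malformed or hostile image cannot wedge the compute service. A sketch of the same invocation through oslo.concurrency, with the limit values copied from the log (treat the exact kwargs as an assumption about this oslo.concurrency release):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # matches --as=1073741824 in the log
        cpu_time=30)               # matches --cpu=30

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)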
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.858 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.858 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.859 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.899 2 DEBUG nova.storage.rbd_utils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] rbd image 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:05 compute-0 nova_compute[351685]: 2025-10-03 11:26:05.916 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:06 compute-0 ovn_controller[88471]: 2025-10-03T11:26:06Z|00112|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:26:06 compute-0 ovn_controller[88471]: 2025-10-03T11:26:06Z|00113|binding|INFO|Releasing lport 9360fd43-509e-48cf-868f-65a2768ca52b from this chassis (sb_readonly=0)
Oct  3 11:26:06 compute-0 ovn_controller[88471]: 2025-10-03T11:26:06Z|00114|binding|INFO|Releasing lport 1eb40ea8-53b0-46a1-bf82-85a3448330ac from this chassis (sb_readonly=0)
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.381 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.494 2 DEBUG nova.storage.rbd_utils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] resizing rbd image 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
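With the base image cached locally, Nova shells out to rbd import to push it into the vms pool (the 0.465s command above) and then grows the result to the flavor's 1 GiB root disk. The resize step can be reproduced with the python-rbd bindings; a sketch, reusing the connection details from the earlier example:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '939bb9dc-5fcb-4b53-adc4-df36f016d404_disk') as image:
            image.resize(1073741824)  # 1 GiB, matching the flavor's root_gb=1
    finally:
        ioctx.close()
        cluster.shutdown()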
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.689 2 DEBUG nova.network.neutron [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Successfully created port: 5390e6f6-6c7c-44df-a141-60b229fbae5b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.701 2 DEBUG nova.objects.instance [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lazy-loading 'migration_context' on Instance uuid 939bb9dc-5fcb-4b53-adc4-df36f016d404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.717 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.718 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Ensure instance console log exists: /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.718 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.719 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:06 compute-0 nova_compute[351685]: 2025-10-03 11:26:06.719 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 244 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 947 KiB/s rd, 13 KiB/s wr, 56 op/s
Oct  3 11:26:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:08 compute-0 nova_compute[351685]: 2025-10-03 11:26:08.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:08 compute-0 nova_compute[351685]: 2025-10-03 11:26:08.859 2 DEBUG nova.network.neutron [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Successfully updated port: 5390e6f6-6c7c-44df-a141-60b229fbae5b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 11:26:08 compute-0 nova_compute[351685]: 2025-10-03 11:26:08.878 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "refresh_cache-939bb9dc-5fcb-4b53-adc4-df36f016d404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:08 compute-0 nova_compute[351685]: 2025-10-03 11:26:08.879 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquired lock "refresh_cache-939bb9dc-5fcb-4b53-adc4-df36f016d404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:08 compute-0 nova_compute[351685]: 2025-10-03 11:26:08.879 2 DEBUG nova.network.neutron [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:26:09 compute-0 nova_compute[351685]: 2025-10-03 11:26:09.044 2 DEBUG nova.compute.manager [req-8d49de21-5f8a-4810-994f-a891ed895fe0 req-3f0b751a-0102-4cab-ad87-9420f7cd6c7e 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received event network-changed-5390e6f6-6c7c-44df-a141-60b229fbae5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:09 compute-0 nova_compute[351685]: 2025-10-03 11:26:09.044 2 DEBUG nova.compute.manager [req-8d49de21-5f8a-4810-994f-a891ed895fe0 req-3f0b751a-0102-4cab-ad87-9420f7cd6c7e 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Refreshing instance network info cache due to event network-changed-5390e6f6-6c7c-44df-a141-60b229fbae5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:26:09 compute-0 nova_compute[351685]: 2025-10-03 11:26:09.044 2 DEBUG oslo_concurrency.lockutils [req-8d49de21-5f8a-4810-994f-a891ed895fe0 req-3f0b751a-0102-4cab-ad87-9420f7cd6c7e 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-939bb9dc-5fcb-4b53-adc4-df36f016d404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:26:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:26:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:26:09 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:26:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:26:09 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:26:09 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 25c68507-0f54-4384-a52f-fbb33f65ed91 does not exist
Oct  3 11:26:09 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f3b3591b-1ef4-457d-a094-cd983c47aa5c does not exist
Oct  3 11:26:09 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev be7ad9f6-c39e-43b4-b012-d5ac226004fc does not exist
Oct  3 11:26:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:26:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:26:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:26:09 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:26:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:26:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
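The burst of mon_command dispatches above is the cephadm mgr module doing routine housekeeping: regenerating a minimal ceph.conf, re-reading the admin and bootstrap-osd keyrings, persisting its OSD-removal queue, and checking for destroyed OSDs. The same queries can be issued from the standard ceph CLI; a sketch via subprocess, keeping one language for the examples here:

    import json
    import subprocess

    def ceph(*args):
        return subprocess.check_output(('ceph',) + args, text=True)

    minimal_conf = ceph('config', 'generate-minimal-conf')
    admin_key    = ceph('auth', 'get', 'client.admin')
    bootstrap    = ceph('auth', 'get', 'client.bootstrap-osd')
    destroyed    = json.loads(ceph('osd', 'tree', 'destroyed', '--format', 'json'))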
Oct  3 11:26:09 compute-0 nova_compute[351685]: 2025-10-03 11:26:09.360 2 DEBUG nova.network.neutron [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 11:26:09 compute-0 nova_compute[351685]: 2025-10-03 11:26:09.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 275 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 963 KiB/s rd, 652 KiB/s wr, 80 op/s
Oct  3 11:26:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:26:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:26:09 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:26:09 compute-0 nova_compute[351685]: 2025-10-03 11:26:09.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:10 compute-0 podman[529383]: 2025-10-03 11:26:10.013937711 +0000 UTC m=+0.063327731 container create 9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 11:26:10 compute-0 podman[529383]: 2025-10-03 11:26:09.987650458 +0000 UTC m=+0.037040478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:26:10 compute-0 systemd[1]: Started libpod-conmon-9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1.scope.
Oct  3 11:26:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:10 compute-0 podman[529383]: 2025-10-03 11:26:10.158132492 +0000 UTC m=+0.207522512 container init 9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 11:26:10 compute-0 podman[529383]: 2025-10-03 11:26:10.16929667 +0000 UTC m=+0.218686670 container start 9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:26:10 compute-0 podman[529383]: 2025-10-03 11:26:10.174487237 +0000 UTC m=+0.223877267 container attach 9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:26:10 compute-0 peaceful_mclaren[529399]: 167 167
Oct  3 11:26:10 compute-0 systemd[1]: libpod-9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1.scope: Deactivated successfully.
Oct  3 11:26:10 compute-0 podman[529383]: 2025-10-03 11:26:10.179702424 +0000 UTC m=+0.229092424 container died 9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-278e360b83f2b24f0bd90197c81b961fdd024c2d276ba91d047875a9db9ea5ed-merged.mount: Deactivated successfully.
Oct  3 11:26:10 compute-0 podman[529383]: 2025-10-03 11:26:10.236569526 +0000 UTC m=+0.285959516 container remove 9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mclaren, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:26:10 compute-0 systemd[1]: libpod-conmon-9f0aeda517bafc0e12ce35d9259b49eefc7c3cd1f9038693a683413b379708b1.scope: Deactivated successfully.
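The create/init/start/attach/died/remove sequence for peaceful_mclaren is one short-lived cephadm helper container: podman runs a single command inside the pinned ceph image, streams its output (the "167 167" line looks like a uid/gid probe; 167 is the ceph user on this platform), and removes the container as soon as it exits. A sketch of an equivalent --rm invocation; the stat probe is a guess at how that output is produced, since the log does not show cephadm's actual arguments:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    # --rm yields exactly the create/start/attach/died/remove pattern above.
    out = subprocess.check_output(
        ['podman', 'run', '--rm', '--entrypoint', 'stat',
         image, '-c', '%u %g', '/var/lib/ceph'],
        text=True)
    print(out.strip())  # e.g. "167 167"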
Oct  3 11:26:10 compute-0 podman[529422]: 2025-10-03 11:26:10.489580265 +0000 UTC m=+0.072464884 container create a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nightingale, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:26:10 compute-0 podman[529422]: 2025-10-03 11:26:10.454367297 +0000 UTC m=+0.037251956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:26:10 compute-0 systemd[1]: Started libpod-conmon-a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924.scope.
Oct  3 11:26:10 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd0b9b1962c2e798fb761035dbdb08405b3230b249279062f5a4be064a54b6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd0b9b1962c2e798fb761035dbdb08405b3230b249279062f5a4be064a54b6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd0b9b1962c2e798fb761035dbdb08405b3230b249279062f5a4be064a54b6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd0b9b1962c2e798fb761035dbdb08405b3230b249279062f5a4be064a54b6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd0b9b1962c2e798fb761035dbdb08405b3230b249279062f5a4be064a54b6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:10 compute-0 podman[529422]: 2025-10-03 11:26:10.646557156 +0000 UTC m=+0.229441765 container init a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:26:10 compute-0 podman[529422]: 2025-10-03 11:26:10.662533568 +0000 UTC m=+0.245418167 container start a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nightingale, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 11:26:10 compute-0 podman[529422]: 2025-10-03 11:26:10.668103776 +0000 UTC m=+0.250988395 container attach a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nightingale, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.934 2 DEBUG nova.network.neutron [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Updating instance_info_cache with network_info: [{"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.957 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Releasing lock "refresh_cache-939bb9dc-5fcb-4b53-adc4-df36f016d404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.957 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Instance network_info: |[{"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
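The network_info blob cached above carries everything the later VIF plugging and guest XML steps need: port id, MAC, tap device name, bridge and MTU. A short sketch of pulling those fields out, using a literal trimmed down to the values visible in the log:

    network_info = [{
        "id": "5390e6f6-6c7c-44df-a141-60b229fbae5b",
        "address": "fa:16:3e:09:3c:79",
        "devname": "tap5390e6f6-6c",
        "type": "ovs",
        "details": {"bridge_name": "br-int"},
        "network": {"meta": {"mtu": 1442},
                    "subnets": [{"ips": [{"address": "10.100.0.12"}]}]},
    }]

    vif = network_info[0]
    assert vif["type"] == "ovs"
    port_id  = vif["id"]                                          # Neutron port UUID
    mac      = vif["address"]                                     # fa:16:3e:09:3c:79
    devname  = vif["devname"]                                     # tap5390e6f6-6c
    bridge   = vif["details"]["bridge_name"]                      # br-int
    mtu      = vif["network"]["meta"]["mtu"]                      # 1442
    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]  # 10.100.0.12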
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.958 2 DEBUG oslo_concurrency.lockutils [req-8d49de21-5f8a-4810-994f-a891ed895fe0 req-3f0b751a-0102-4cab-ad87-9420f7cd6c7e 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-939bb9dc-5fcb-4b53-adc4-df36f016d404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.958 2 DEBUG nova.network.neutron [req-8d49de21-5f8a-4810-994f-a891ed895fe0 req-3f0b751a-0102-4cab-ad87-9420f7cd6c7e 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Refreshing network info cache for port 5390e6f6-6c7c-44df-a141-60b229fbae5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.964 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Start _get_guest_xml network_info=[{"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.973 2 WARNING nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.984 2 DEBUG nova.virt.libvirt.host [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.986 2 DEBUG nova.virt.libvirt.host [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.993 2 DEBUG nova.virt.libvirt.host [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.994 2 DEBUG nova.virt.libvirt.host [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
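The two "Searching host ... CPU controller" probes reflect how the driver decides whether CPU shares/quota can be applied to guests: no cpu controller is mounted under cgroup v1 on this host, but the unified v2 hierarchy advertises one. A minimal sketch of the v2 check, reading the standard unified mount point:

    CGROUP_V2_CONTROLLERS = '/sys/fs/cgroup/cgroup.controllers'

    def has_cgroupsv2_cpu_controller():
        try:
            with open(CGROUP_V2_CONTROLLERS) as f:
                # The file lists controllers as tokens, e.g. "cpuset cpu io memory".
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroup v2 (unified) host

    print(has_cgroupsv2_cpu_controller())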
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.994 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.995 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.996 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.996 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.996 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.997 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.997 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.998 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.998 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.998 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.999 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:26:10 compute-0 nova_compute[351685]: 2025-10-03 11:26:10.999 2 DEBUG nova.virt.hardware [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
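The topology walk above is fully determined by its inputs: flavor and image set no limits or preferences (all 0:0:0), the limits default to 65536 each, and the only (sockets, cores, threads) triple whose product equals the flavor's 1 vCPU is 1:1:1. A simplified sketch of that enumeration (Nova's actual ordering and filtering logic is more involved):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) triple whose product is exactly vcpus.
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log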
Oct  3 11:26:11 compute-0 nova_compute[351685]: 2025-10-03 11:26:11.003 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:11 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:26:11 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816107625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:26:11 compute-0 nova_compute[351685]: 2025-10-03 11:26:11.499 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
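The `ceph mon dump --format=json` calls are how the RBD driver discovers monitor addresses: the parsed list becomes the <host name="192.168.122.100" port="6789"/> elements in the disk XML emitted further down. A sketch of that extraction; the mon map field names are an assumption based on the standard mon dump JSON layout:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        text=True)
    monmap = json.loads(raw)
    # Each mon entry carries an address like "192.168.122.100:6789/0";
    # strip the trailing nonce to get host:port pairs for the libvirt XML.
    hosts = [m['addr'].rsplit('/', 1)[0] for m in monmap['mons']]
    print(hosts)  # e.g. ['192.168.122.100:6789']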
Oct  3 11:26:11 compute-0 nova_compute[351685]: 2025-10-03 11:26:11.532 2 DEBUG nova.storage.rbd_utils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] rbd image 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:11 compute-0 nova_compute[351685]: 2025-10-03 11:26:11.539 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 291 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Oct  3 11:26:11 compute-0 zealous_nightingale[529439]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:26:11 compute-0 zealous_nightingale[529439]: --> relative data size: 1.0
Oct  3 11:26:11 compute-0 zealous_nightingale[529439]: --> All data devices are unavailable
Oct  3 11:26:11 compute-0 systemd[1]: libpod-a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924.scope: Deactivated successfully.
Oct  3 11:26:11 compute-0 systemd[1]: libpod-a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924.scope: Consumed 1.068s CPU time.
Oct  3 11:26:11 compute-0 podman[529422]: 2025-10-03 11:26:11.836458342 +0000 UTC m=+1.419342941 container died a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 11:26:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fd0b9b1962c2e798fb761035dbdb08405b3230b249279062f5a4be064a54b6d-merged.mount: Deactivated successfully.
Oct  3 11:26:11 compute-0 podman[529422]: 2025-10-03 11:26:11.93189561 +0000 UTC m=+1.514780219 container remove a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:26:11 compute-0 systemd[1]: libpod-conmon-a4dd38154d66923a9dcd990b25ed0bb3345ae97de481668239f0c40a52204924.scope: Deactivated successfully.
Oct  3 11:26:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:26:12 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2717476108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.080 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.082 2 DEBUG nova.virt.libvirt.vif [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:26:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-494597138',display_name='tempest-ServerAddressesTestJSON-server-494597138',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-494597138',id=11,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='673cf473bf374c91b11ac2de62d239fc',ramdisk_id='',reservation_id='r-azyqxn9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-209133408',owner_user_name='tempest-ServerAddressesTestJSON-209133408-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:26:05Z,user_data=None,user_id='842e5fed9314415e8fc9c491dd9efc11',uuid=939bb9dc-5fcb-4b53-adc4-df36f016d404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.083 2 DEBUG nova.network.os_vif_util [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Converting VIF {"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.083 2 DEBUG nova.network.os_vif_util [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:79,bridge_name='br-int',has_traffic_filtering=True,id=5390e6f6-6c7c-44df-a141-60b229fbae5b,network=Network(85197338-73a4-414f-bcee-412a119e5ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5390e6f6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.085 2 DEBUG nova.objects.instance [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 939bb9dc-5fcb-4b53-adc4-df36f016d404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.107 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <uuid>939bb9dc-5fcb-4b53-adc4-df36f016d404</uuid>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <name>instance-0000000b</name>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <nova:name>tempest-ServerAddressesTestJSON-server-494597138</nova:name>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:26:10</nova:creationTime>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <nova:user uuid="842e5fed9314415e8fc9c491dd9efc11">tempest-ServerAddressesTestJSON-209133408-project-member</nova:user>
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <nova:project uuid="673cf473bf374c91b11ac2de62d239fc">tempest-ServerAddressesTestJSON-209133408</nova:project>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <nova:port uuid="5390e6f6-6c7c-44df-a141-60b229fbae5b">
Oct  3 11:26:12 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <system>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <entry name="serial">939bb9dc-5fcb-4b53-adc4-df36f016d404</entry>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <entry name="uuid">939bb9dc-5fcb-4b53-adc4-df36f016d404</entry>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </system>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <os>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  </os>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <features>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  </features>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/939bb9dc-5fcb-4b53-adc4-df36f016d404_disk">
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      </source>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/939bb9dc-5fcb-4b53-adc4-df36f016d404_disk.config">
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      </source>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:26:12 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:09:3c:79"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <target dev="tap5390e6f6-6c"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/console.log" append="off"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <video>
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </video>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:26:12 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:26:12 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:26:12 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:26:12 compute-0 nova_compute[351685]: </domain>
Oct  3 11:26:12 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
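
The block above is the complete guest definition Nova hands to libvirt for instance-0000000b: q35 machine type, host-model CPU, both disks served from the Ceph "vms" pool over RBD, and the tap device that os-vif wires up next. As a cross-check, the same XML can be read back from libvirt once the domain is running; a minimal sketch, assuming the libvirt-python bindings and the local qemu:///system URI:

    # Sketch: read back the libvirt domain XML for comparison with the
    # _get_guest_xml output above. Assumes libvirt-python is installed and
    # qemu:///system is reachable; the UUID is copied from the log.
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString("939bb9dc-5fcb-4b53-adc4-df36f016d404")
        print(dom.XMLDesc(0))  # XMLDesc(0) returns the live definition
    finally:
        conn.close()
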
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.108 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Preparing to wait for external event network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.108 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.108 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.109 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
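
The three lockutils lines bracket Nova registering a waiter for the network-vif-plugged event before it starts the guest: a named lock scoped to the instance's event list is held just long enough to create-or-get the event object. A minimal sketch of that pattern using the real oslo.concurrency lock API; the registry dict and helper name are illustrative stand-ins for Nova's InstanceEvents bookkeeping, not its actual code:

    # Sketch of the lock-guarded event registration visible above.
    import threading
    from oslo_concurrency import lockutils

    _events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        # The "<uuid>-events" lock name mirrors the log lines.
        with lockutils.lock(f"{instance_uuid}-events"):
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    waiter = prepare_for_event(
        "939bb9dc-5fcb-4b53-adc4-df36f016d404",
        "network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b")
    # The Neutron-driven callback later calls waiter.set(); spawn blocks
    # on waiter.wait(timeout) until the vif-plugged event arrives.
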
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.109 2 DEBUG nova.virt.libvirt.vif [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:26:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-494597138',display_name='tempest-ServerAddressesTestJSON-server-494597138',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-494597138',id=11,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='673cf473bf374c91b11ac2de62d239fc',ramdisk_id='',reservation_id='r-azyqxn9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-209133408',owner_user_name='tempest-ServerAddressesTestJSON-209133408-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:26:05Z,user_data=None,user_id='842e5fed9314415e8fc9c491dd9efc11',uuid=939bb9dc-5fcb-4b53-adc4-df36f016d404,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.109 2 DEBUG nova.network.os_vif_util [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Converting VIF {"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.110 2 DEBUG nova.network.os_vif_util [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:79,bridge_name='br-int',has_traffic_filtering=True,id=5390e6f6-6c7c-44df-a141-60b229fbae5b,network=Network(85197338-73a4-414f-bcee-412a119e5ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5390e6f6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.110 2 DEBUG os_vif [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:79,bridge_name='br-int',has_traffic_filtering=True,id=5390e6f6-6c7c-44df-a141-60b229fbae5b,network=Network(85197338-73a4-414f-bcee-412a119e5ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5390e6f6-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5390e6f6-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5390e6f6-6c, col_values=(('external_ids', {'iface-id': '5390e6f6-6c7c-44df-a141-60b229fbae5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:3c:79', 'vm-uuid': '939bb9dc-5fcb-4b53-adc4-df36f016d404'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:12 compute-0 NetworkManager[45015]: <info>  [1759490772.1182] manager: (tap5390e6f6-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.126 2 INFO os_vif [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:79,bridge_name='br-int',has_traffic_filtering=True,id=5390e6f6-6c7c-44df-a141-60b229fbae5b,network=Network(85197338-73a4-414f-bcee-412a119e5ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5390e6f6-6c')#033[00m
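
Plugging the VIF translates into two OVSDB transactions against the local switch: an idempotent AddBridgeCommand for br-int (which reports "Transaction caused no change" since the bridge already exists), then AddPortCommand plus a DbSetCommand stamping the Neutron port UUID and MAC into the interface's external_ids. That external_ids entry is what ovn-controller matches when it claims the port further down. Roughly the same calls through ovsdbapp's Open_vSwitch schema, with the socket path and timeout as assumptions:

    # Sketch of the two OVSDB transactions logged above (AddBridgeCommand,
    # then AddPortCommand + DbSetCommand) via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))

    api.add_br("br-int", may_exist=True, datapath_type="system").execute(
        check_error=True)  # no-op when br-int already exists

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap5390e6f6-6c", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap5390e6f6-6c",
            ("external_ids", {
                "iface-id": "5390e6f6-6c7c-44df-a141-60b229fbae5b",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:09:3c:79",
                "vm-uuid": "939bb9dc-5fcb-4b53-adc4-df36f016d404"})))
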
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.199 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.200 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.200 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] No VIF found with MAC fa:16:3e:09:3c:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.200 2 INFO nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Using config drive#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.245 2 DEBUG nova.storage.rbd_utils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] rbd image 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.773 2 INFO nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Creating config drive at /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/disk.config#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.777 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6y22qjfh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.803 2 DEBUG nova.network.neutron [req-8d49de21-5f8a-4810-994f-a891ed895fe0 req-3f0b751a-0102-4cab-ad87-9420f7cd6c7e 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Updated VIF entry in instance network info cache for port 5390e6f6-6c7c-44df-a141-60b229fbae5b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.804 2 DEBUG nova.network.neutron [req-8d49de21-5f8a-4810-994f-a891ed895fe0 req-3f0b751a-0102-4cab-ad87-9420f7cd6c7e 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Updating instance_info_cache with network_info: [{"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.834 2 DEBUG oslo_concurrency.lockutils [req-8d49de21-5f8a-4810-994f-a891ed895fe0 req-3f0b751a-0102-4cab-ad87-9420f7cd6c7e 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-939bb9dc-5fcb-4b53-adc4-df36f016d404" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:12 compute-0 podman[529705]: 2025-10-03 11:26:12.886598838 +0000 UTC m=+0.066633686 container create ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.905 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6y22qjfh" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
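
The config drive is an ISO 9660 image packed from a temporary directory of metadata files and labelled config-2 so cloud-init can find it. A condensed sketch of the mkisofs invocation above; the staging directory is a hypothetical stand-in for Nova's /tmp/tmp6y22qjfh:

    # Sketch of the config-drive build logged above: mkisofs packs a
    # staging directory into an ISO with the "config-2" volume label.
    import subprocess

    def build_config_drive(iso_path, staging_dir, publisher):
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher, "-quiet", "-J", "-r",
             "-V", "config-2",  # the label cloud-init searches for
             staging_dir],
            check=True)

    build_config_drive(
        "/var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/disk.config",
        "/tmp/metadata-staging",  # hypothetical staging dir
        "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9")
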
Oct  3 11:26:12 compute-0 podman[529705]: 2025-10-03 11:26:12.857626489 +0000 UTC m=+0.037661327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.958 2 DEBUG nova.storage.rbd_utils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] rbd image 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:12 compute-0 nova_compute[351685]: 2025-10-03 11:26:12.973 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/disk.config 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:12 compute-0 systemd[1]: Started libpod-conmon-ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8.scope.
Oct  3 11:26:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:13 compute-0 podman[529705]: 2025-10-03 11:26:13.066738531 +0000 UTC m=+0.246773399 container init ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 11:26:13 compute-0 podman[529705]: 2025-10-03 11:26:13.081569106 +0000 UTC m=+0.261603924 container start ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 11:26:13 compute-0 podman[529705]: 2025-10-03 11:26:13.086849485 +0000 UTC m=+0.266884333 container attach ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 11:26:13 compute-0 upbeat_newton[529741]: 167 167
Oct  3 11:26:13 compute-0 systemd[1]: libpod-ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8.scope: Deactivated successfully.
Oct  3 11:26:13 compute-0 podman[529705]: 2025-10-03 11:26:13.094480771 +0000 UTC m=+0.274515599 container died ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bace3c638a5df800bf08fe9412e659f4ee870dc15c4fe889d7ddb03747c483ea-merged.mount: Deactivated successfully.
Oct  3 11:26:13 compute-0 podman[529705]: 2025-10-03 11:26:13.158891115 +0000 UTC m=+0.338925933 container remove ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_newton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:26:13 compute-0 systemd[1]: libpod-conmon-ca14f872f469e2a20c830b7ee0932b1b612a4335b7cf9c9e67770bd313ac66c8.scope: Deactivated successfully.
Oct  3 11:26:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.256 2 DEBUG oslo_concurrency.processutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/disk.config 939bb9dc-5fcb-4b53-adc4-df36f016d404_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.259 2 INFO nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Deleting local config drive /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404/disk.config because it was imported into RBD.#033[00m
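
With this deployment keeping ephemeral storage in Ceph, the freshly built ISO is immediately imported into the vms pool as <uuid>_disk.config and the local copy removed. The same step, sketched with subprocess and the flags shown in the log:

    # Sketch of the "rbd import" + local cleanup sequence logged above.
    import os
    import subprocess

    src = ("/var/lib/nova/instances/"
           "939bb9dc-5fcb-4b53-adc4-df36f016d404/disk.config")
    subprocess.run(
        ["rbd", "import", "--pool", "vms", src,
         "939bb9dc-5fcb-4b53-adc4-df36f016d404_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.unlink(src)  # "Deleting local config drive ... imported into RBD"
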
Oct  3 11:26:13 compute-0 kernel: tap5390e6f6-6c: entered promiscuous mode
Oct  3 11:26:13 compute-0 ovn_controller[88471]: 2025-10-03T11:26:13Z|00115|binding|INFO|Claiming lport 5390e6f6-6c7c-44df-a141-60b229fbae5b for this chassis.
Oct  3 11:26:13 compute-0 ovn_controller[88471]: 2025-10-03T11:26:13Z|00116|binding|INFO|5390e6f6-6c7c-44df-a141-60b229fbae5b: Claiming fa:16:3e:09:3c:79 10.100.0.12
Oct  3 11:26:13 compute-0 NetworkManager[45015]: <info>  [1759490773.3397] manager: (tap5390e6f6-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.345 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:3c:79 10.100.0.12'], port_security=['fa:16:3e:09:3c:79 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '939bb9dc-5fcb-4b53-adc4-df36f016d404', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85197338-73a4-414f-bcee-412a119e5ac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '673cf473bf374c91b11ac2de62d239fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36f63af8-d42d-4323-8db6-4e4a7313e807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff320888-3cb9-4b8c-9a4a-fe5c523d569c, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=5390e6f6-6c7c-44df-a141-60b229fbae5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.346 284328 INFO neutron.agent.ovn.metadata.agent [-] Port 5390e6f6-6c7c-44df-a141-60b229fbae5b in datapath 85197338-73a4-414f-bcee-412a119e5ac4 bound to our chassis#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.348 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85197338-73a4-414f-bcee-412a119e5ac4#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.360 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0c00f739-70b2-4461-80c7-2b4ce2b4b0f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.361 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85197338-71 in ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.363 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85197338-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.363 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[46723663-5269-4eb0-a104-a56d66aafc81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.365 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0fb985-f999-49ef-956d-474c5982aebc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_controller[88471]: 2025-10-03T11:26:13Z|00117|binding|INFO|Setting lport 5390e6f6-6c7c-44df-a141-60b229fbae5b ovn-installed in OVS
Oct  3 11:26:13 compute-0 ovn_controller[88471]: 2025-10-03T11:26:13Z|00118|binding|INFO|Setting lport 5390e6f6-6c7c-44df-a141-60b229fbae5b up in Southbound
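
With the tap port on br-int carrying the right iface-id, ovn-controller claims the logical port for this chassis, marks it ovn-installed in OVS, and flips the Southbound Port_Binding to up; that is what lets Neutron emit the network-vif-plugged event Nova registered for earlier. One way to confirm the binding from the chassis, sketched around the standard ovn-sbctl client (default Southbound connection assumed):

    # Sketch: confirm the Port_Binding claim logged by ovn-controller above.
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=5390e6f6-6c7c-44df-a141-60b229fbae5b"],
        capture_output=True, text=True, check=True).stdout
    print(out)  # chassis should name compute-0 and up should read true
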
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.379 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[5d436620-0d3d-4eee-bc37-d2dfd13bb53b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 systemd-udevd[529805]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.394 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5f904c-cad5-44de-8a2b-f3d6d3af7967]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 systemd-machined[137653]: New machine qemu-11-instance-0000000b.
Oct  3 11:26:13 compute-0 NetworkManager[45015]: <info>  [1759490773.4022] device (tap5390e6f6-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:26:13 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct  3 11:26:13 compute-0 podman[529786]: 2025-10-03 11:26:13.404861508 +0000 UTC m=+0.079573922 container create 6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 11:26:13 compute-0 NetworkManager[45015]: <info>  [1759490773.4093] device (tap5390e6f6-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.427 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0f44e7-7431-49a6-a0c5-7275921dc554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.438 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d012c1-8dc1-4b23-8f20-f290961e8322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 NetworkManager[45015]: <info>  [1759490773.4419] manager: (tap85197338-70): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct  3 11:26:13 compute-0 systemd[1]: Started libpod-conmon-6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3.scope.
Oct  3 11:26:13 compute-0 podman[529786]: 2025-10-03 11:26:13.385980702 +0000 UTC m=+0.060693136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.475 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2438b0-0139-48cf-a1d6-bb58fcd771c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.492 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf6ff5c-b186-40d2-b3a9-22e995d3f028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1bc041891ca8dc5c0fa41798c91163073619e631fbd3aa5030e7aef47d2e8b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1bc041891ca8dc5c0fa41798c91163073619e631fbd3aa5030e7aef47d2e8b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1bc041891ca8dc5c0fa41798c91163073619e631fbd3aa5030e7aef47d2e8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1bc041891ca8dc5c0fa41798c91163073619e631fbd3aa5030e7aef47d2e8b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:13 compute-0 NetworkManager[45015]: <info>  [1759490773.5199] device (tap85197338-70): carrier: link connected
Oct  3 11:26:13 compute-0 podman[529786]: 2025-10-03 11:26:13.520583336 +0000 UTC m=+0.195295770 container init 6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.526 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[e26b950b-4a26-4b7d-8b3b-ea18596bee39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 podman[529786]: 2025-10-03 11:26:13.534903405 +0000 UTC m=+0.209615819 container start 6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:26:13 compute-0 podman[529786]: 2025-10-03 11:26:13.541222658 +0000 UTC m=+0.215935142 container attach 6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.542 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[1f208231-55c9-4517-9977-7f9a3fea0397]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85197338-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:db:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 995611, 'reachable_time': 38906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 529846, 'error': None, 'target': 'ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.556 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[99601d7e-5185-46bc-ab43-c9900f5b8dec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:dba9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 995611, 'tstamp': 995611}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 529848, 'error': None, 'target': 'ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.573 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[91db1e56-b174-4046-ac3e-eeef9fa2f035]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85197338-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:db:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 995611, 'reachable_time': 38906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 529849, 'error': None, 'target': 'ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.603 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[75312f13-e813-44f0-8be4-00f9188e11c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.648 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759490758.6468031, 50697870-0565-414d-a9e6-5262e3e25e3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.650 2 INFO nova.compute.manager [-] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] VM Stopped (Lifecycle Event)#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.667 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7bea0d-67cb-45a3-bf3d-88ead44af798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.668 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85197338-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.668 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.669 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85197338-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:13 compute-0 NetworkManager[45015]: <info>  [1759490773.6724] manager: (tap85197338-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct  3 11:26:13 compute-0 kernel: tap85197338-70: entered promiscuous mode
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.676 2 DEBUG nova.compute.manager [None req-5af90e39-6c40-4e60-9799-1a3b712a83cb - - - - - -] [instance: 50697870-0565-414d-a9e6-5262e3e25e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.678 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85197338-70, col_values=(('external_ids', {'iface-id': '10272b6a-246b-471b-b41b-0b65e8920fb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
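
The three ovsdbapp transactions above detach the metadata tap from br-ex (a no-op here, hence "Transaction caused no change"), plug it into br-int, and stamp the Interface row with external_ids:iface-id, which is how ovn-controller matches local OVS interfaces to southbound Port_Bindings. A sketch of the same sequence with ovsdbapp, batched into one transaction for brevity (the agent issues them one by one, as the txn n=1 lines show); the socket path is an assumption, the agent uses its configured ovsdb_connection:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

conn = connection.Connection(
    idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',  # assumed
                                        'Open_vSwitch'),
    timeout=10)
api = impl_idl.OvsdbIdl(conn)

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tap85197338-70', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap85197338-70', may_exist=True))
    txn.add(api.db_set('Interface', 'tap85197338-70',
                       ('external_ids',
                        {'iface-id': '10272b6a-246b-471b-b41b-0b65e8920fb3'})))
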
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:13 compute-0 ovn_controller[88471]: 2025-10-03T11:26:13Z|00119|binding|INFO|Releasing lport 10272b6a-246b-471b-b41b-0b65e8920fb3 from this chassis (sb_readonly=0)
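
ovn-controller releasing its claim on the Port_Binding row is expected while the interface is re-plugged; where the southbound DB currently thinks the lport is bound can be checked with ovn-sbctl (sketch, assumes ovn-sbctl can reach the local southbound connection):

import subprocess

LPORT = '10272b6a-246b-471b-b41b-0b65e8920fb3'  # lport UUID from the log
out = subprocess.run(
    ['ovn-sbctl', '--columns=chassis', 'find', 'Port_Binding',
     f'logical_port={LPORT}'],
    capture_output=True, text=True, check=True).stdout
print(out.strip() or 'no Port_Binding row found')
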
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.688 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85197338-73a4-414f-bcee-412a119e5ac4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85197338-73a4-414f-bcee-412a119e5ac4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.698 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[71a6a594-7ed9-455b-a5ef-7c5de78d08f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.699 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-85197338-73a4-414f-bcee-412a119e5ac4
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/85197338-73a4-414f-bcee-412a119e5ac4.pid.haproxy
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 85197338-73a4-414f-bcee-412a119e5ac4
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 11:26:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:13.700 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4', 'env', 'PROCESS_TAG=haproxy-85197338-73a4-414f-bcee-412a119e5ac4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85197338-73a4-414f-bcee-412a119e5ac4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
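
The rendered config binds 169.254.169.254:80 inside the namespace and proxies every request to the UNIX socket backend /var/lib/neutron/metadata_proxy (haproxy treats a server address starting with '/' as a UNIX socket), adding the X-OVN-Network-ID header so the metadata service can tell which network the request came from. Stripped of rootwrap, the spawn logged above amounts to (sketch, assumes root):

import subprocess

NET = '85197338-73a4-414f-bcee-412a119e5ac4'  # network UUID from the log
subprocess.run(
    ['ip', 'netns', 'exec', f'ovnmeta-{NET}',
     'env', f'PROCESS_TAG=haproxy-{NET}',
     'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{NET}.conf'],
    check=True)
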
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 291 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  3 11:26:13 compute-0 nova_compute[351685]: 2025-10-03 11:26:13.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:14 compute-0 podman[529915]: 2025-10-03 11:26:14.132319303 +0000 UTC m=+0.069496179 container create 2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:26:14 compute-0 systemd[1]: Started libpod-conmon-2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb.scope.
Oct  3 11:26:14 compute-0 podman[529915]: 2025-10-03 11:26:14.097779845 +0000 UTC m=+0.034956741 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:26:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b6f218d91fc860af74aca43f2c8b60f92321ecd386756feab483f1a7769935/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:14 compute-0 podman[529915]: 2025-10-03 11:26:14.253163326 +0000 UTC m=+0.190340242 container init 2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:26:14 compute-0 podman[529915]: 2025-10-03 11:26:14.261511133 +0000 UTC m=+0.198688009 container start 2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:26:14 compute-0 neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4[529937]: [NOTICE]   (529942) : New worker (529945) forked
Oct  3 11:26:14 compute-0 neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4[529937]: [NOTICE]   (529942) : Loading success.
Oct  3 11:26:14 compute-0 angry_pare[529838]: {
Oct  3 11:26:14 compute-0 angry_pare[529838]:    "0": [
Oct  3 11:26:14 compute-0 angry_pare[529838]:        {
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "devices": [
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "/dev/loop3"
Oct  3 11:26:14 compute-0 angry_pare[529838]:            ],
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_name": "ceph_lv0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_size": "21470642176",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "name": "ceph_lv0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "tags": {
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cluster_name": "ceph",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.crush_device_class": "",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.encrypted": "0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osd_id": "0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.type": "block",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.vdo": "0"
Oct  3 11:26:14 compute-0 angry_pare[529838]:            },
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "type": "block",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "vg_name": "ceph_vg0"
Oct  3 11:26:14 compute-0 angry_pare[529838]:        }
Oct  3 11:26:14 compute-0 angry_pare[529838]:    ],
Oct  3 11:26:14 compute-0 angry_pare[529838]:    "1": [
Oct  3 11:26:14 compute-0 angry_pare[529838]:        {
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "devices": [
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "/dev/loop4"
Oct  3 11:26:14 compute-0 angry_pare[529838]:            ],
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_name": "ceph_lv1",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_size": "21470642176",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "name": "ceph_lv1",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "tags": {
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cluster_name": "ceph",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.crush_device_class": "",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.encrypted": "0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osd_id": "1",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.type": "block",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.vdo": "0"
Oct  3 11:26:14 compute-0 angry_pare[529838]:            },
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "type": "block",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "vg_name": "ceph_vg1"
Oct  3 11:26:14 compute-0 angry_pare[529838]:        }
Oct  3 11:26:14 compute-0 angry_pare[529838]:    ],
Oct  3 11:26:14 compute-0 angry_pare[529838]:    "2": [
Oct  3 11:26:14 compute-0 angry_pare[529838]:        {
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "devices": [
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "/dev/loop5"
Oct  3 11:26:14 compute-0 angry_pare[529838]:            ],
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_name": "ceph_lv2",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_size": "21470642176",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "name": "ceph_lv2",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "tags": {
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.cluster_name": "ceph",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.crush_device_class": "",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.encrypted": "0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osd_id": "2",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.type": "block",
Oct  3 11:26:14 compute-0 angry_pare[529838]:                "ceph.vdo": "0"
Oct  3 11:26:14 compute-0 angry_pare[529838]:            },
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "type": "block",
Oct  3 11:26:14 compute-0 angry_pare[529838]:            "vg_name": "ceph_vg2"
Oct  3 11:26:14 compute-0 angry_pare[529838]:        }
Oct  3 11:26:14 compute-0 angry_pare[529838]:    ]
Oct  3 11:26:14 compute-0 angry_pare[529838]: }
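
The JSON printed by the one-shot angry_pare container is a ceph-volume lvm list style report keyed by OSD id: three bluestore OSDs (0-2) on loop-backed logical volumes of 21470642176 bytes (about 20 GiB) in cluster 9b4e8c9a-5555-5510-a631-4742a1182561. A sketch that flattens it into an OSD-to-device map, assuming the output above has been captured to a file (the filename is hypothetical):

import json

with open('ceph-volume-lvm-list.json') as f:  # hypothetical capture of the JSON above
    report = json.load(f)

for osd_id in sorted(report, key=int):
    for lv in report[osd_id]:
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
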
Oct  3 11:26:14 compute-0 systemd[1]: libpod-6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3.scope: Deactivated successfully.
Oct  3 11:26:14 compute-0 podman[529786]: 2025-10-03 11:26:14.429557979 +0000 UTC m=+1.104270413 container died 6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1bc041891ca8dc5c0fa41798c91163073619e631fbd3aa5030e7aef47d2e8b9-merged.mount: Deactivated successfully.
Oct  3 11:26:14 compute-0 podman[529786]: 2025-10-03 11:26:14.521545757 +0000 UTC m=+1.196258191 container remove 6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:26:14 compute-0 systemd[1]: libpod-conmon-6ad16a40ec087755c9d8a691c9231205eaa7463841c4ce014bfd98749d192fb3.scope: Deactivated successfully.
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.832 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490774.8318653, 939bb9dc-5fcb-4b53-adc4-df36f016d404 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.832 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] VM Started (Lifecycle Event)#033[00m
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.848 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.853 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490774.831996, 939bb9dc-5fcb-4b53-adc4-df36f016d404 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.853 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] VM Paused (Lifecycle Event)#033[00m
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.869 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.875 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.893 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
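
The Paused-during-spawn sequence is normal: the guest is created paused, nova sees hypervisor power_state 3 while the database still says 0, and skips the sync because task_state is still spawning; the guest is resumed once Neutron delivers network-vif-plugged (11:26:19 below). The numeric states decode per nova.compute.power_state:

# Values as defined in nova.compute.power_state.
POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
               4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

# "current DB power_state: 0, VM power_state: 3" in the line above:
print(POWER_STATE[0], '(DB) vs', POWER_STATE[3], '(hypervisor)')
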
Oct  3 11:26:14 compute-0 nova_compute[351685]: 2025-10-03 11:26:14.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:15 compute-0 podman[530109]: 2025-10-03 11:26:15.420283771 +0000 UTC m=+0.053210856 container create 4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:26:15 compute-0 podman[530109]: 2025-10-03 11:26:15.399158443 +0000 UTC m=+0.032085548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:26:15 compute-0 systemd[1]: Started libpod-conmon-4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9.scope.
Oct  3 11:26:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:15 compute-0 podman[530109]: 2025-10-03 11:26:15.585685912 +0000 UTC m=+0.218613077 container init 4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:26:15 compute-0 podman[530109]: 2025-10-03 11:26:15.598763921 +0000 UTC m=+0.231691006 container start 4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 11:26:15 compute-0 podman[530109]: 2025-10-03 11:26:15.603030457 +0000 UTC m=+0.235957562 container attach 4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 11:26:15 compute-0 youthful_moser[530125]: 167 167
Oct  3 11:26:15 compute-0 systemd[1]: libpod-4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9.scope: Deactivated successfully.
Oct  3 11:26:15 compute-0 podman[530109]: 2025-10-03 11:26:15.608856634 +0000 UTC m=+0.241783729 container died 4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f738b5c4cad33d095ed0ba594f71e9e136703219c0be8be03e12f9ee260422ac-merged.mount: Deactivated successfully.
Oct  3 11:26:15 compute-0 podman[530109]: 2025-10-03 11:26:15.663856967 +0000 UTC m=+0.296784052 container remove 4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:26:15 compute-0 systemd[1]: libpod-conmon-4cf7564f8265f6eac459ad753ee8766bcfce5ce3c6ef84b6abdc764a3ffc7db9.scope: Deactivated successfully.
Oct  3 11:26:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 291 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  3 11:26:15 compute-0 podman[530148]: 2025-10-03 11:26:15.922435034 +0000 UTC m=+0.051899004 container create b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:26:15 compute-0 systemd[1]: Started libpod-conmon-b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d.scope.
Oct  3 11:26:16 compute-0 podman[530148]: 2025-10-03 11:26:15.901447802 +0000 UTC m=+0.030911802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:26:16 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2583885faab4289f8aae91d0e353470d8c25fccc25f69647c23d76c071670/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2583885faab4289f8aae91d0e353470d8c25fccc25f69647c23d76c071670/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2583885faab4289f8aae91d0e353470d8c25fccc25f69647c23d76c071670/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63b2583885faab4289f8aae91d0e353470d8c25fccc25f69647c23d76c071670/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:16 compute-0 podman[530148]: 2025-10-03 11:26:16.104101816 +0000 UTC m=+0.233565806 container init b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brattain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:26:16 compute-0 podman[530148]: 2025-10-03 11:26:16.116207414 +0000 UTC m=+0.245671394 container start b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brattain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 11:26:16 compute-0 podman[530148]: 2025-10-03 11:26:16.121556626 +0000 UTC m=+0.251020626 container attach b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  3 11:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:26:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:26:16 compute-0 nova_compute[351685]: 2025-10-03 11:26:16.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:17 compute-0 nova_compute[351685]: 2025-10-03 11:26:17.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:17 compute-0 sharp_brattain[530164]: {
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "osd_id": 1,
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "type": "bluestore"
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:    },
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "osd_id": 2,
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "type": "bluestore"
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:    },
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "osd_id": 0,
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:        "type": "bluestore"
Oct  3 11:26:17 compute-0 sharp_brattain[530164]:    }
Oct  3 11:26:17 compute-0 sharp_brattain[530164]: }
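
The sharp_brattain container reports the same three OSDs from the other direction, keyed by OSD fsid with the device-mapper path, matching ceph-volume raw list formatting. Cross-checking it against the LVM report above (same hypothetical file-capture convention):

import json

with open('ceph-volume-raw-list.json') as f:  # hypothetical capture of the JSON above
    raw = json.load(f)

for osd_uuid, osd in sorted(raw.items(), key=lambda kv: kv[1]['osd_id']):
    assert osd['type'] == 'bluestore'
    print(f"osd.{osd['osd_id']} fsid={osd_uuid} -> {osd['device']} "
          f"(cluster {osd['ceph_fsid']})")
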
Oct  3 11:26:17 compute-0 systemd[1]: libpod-b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d.scope: Deactivated successfully.
Oct  3 11:26:17 compute-0 systemd[1]: libpod-b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d.scope: Consumed 1.162s CPU time.
Oct  3 11:26:17 compute-0 podman[530148]: 2025-10-03 11:26:17.297738352 +0000 UTC m=+1.427202332 container died b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-63b2583885faab4289f8aae91d0e353470d8c25fccc25f69647c23d76c071670-merged.mount: Deactivated successfully.
Oct  3 11:26:17 compute-0 podman[530148]: 2025-10-03 11:26:17.37663204 +0000 UTC m=+1.506096020 container remove b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_brattain, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:26:17 compute-0 systemd[1]: libpod-conmon-b4ad0380118c84004d893252e51316eef14b29e962d649bbe25c64ca1e7d2b4d.scope: Deactivated successfully.
Oct  3 11:26:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:26:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:26:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:26:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
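
These two config-key set calls are cephadm (running inside the mgr) persisting the device inventory it just gathered through those short-lived ceph-volume containers into the mon key/value store under mgr/cephadm/host.compute-0*. The stored blob can be read back with the ceph CLI (sketch, assumes a local admin keyring):

import subprocess

key = 'mgr/cephadm/host.compute-0.devices.0'  # key named in the audit line above
value = subprocess.run(['ceph', 'config-key', 'get', key],
                       capture_output=True, text=True, check=True).stdout
print(value[:200])  # cephadm stores JSON blobs under these keys
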
Oct  3 11:26:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f39cbc68-5b64-463f-a2fe-db25c97fb282 does not exist
Oct  3 11:26:17 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 1609d96b-a84e-4015-8a98-a215fccf53a6 does not exist
Oct  3 11:26:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 291 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct  3 11:26:18 compute-0 nova_compute[351685]: 2025-10-03 11:26:18.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:26:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.352 2 DEBUG nova.compute.manager [req-e6a0af37-b988-4195-b2fc-f66402563e33 req-3fef11da-11f9-4e18-8196-ba0391deb537 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received event network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.353 2 DEBUG oslo_concurrency.lockutils [req-e6a0af37-b988-4195-b2fc-f66402563e33 req-3fef11da-11f9-4e18-8196-ba0391deb537 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.354 2 DEBUG oslo_concurrency.lockutils [req-e6a0af37-b988-4195-b2fc-f66402563e33 req-3fef11da-11f9-4e18-8196-ba0391deb537 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.355 2 DEBUG oslo_concurrency.lockutils [req-e6a0af37-b988-4195-b2fc-f66402563e33 req-3fef11da-11f9-4e18-8196-ba0391deb537 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.355 2 DEBUG nova.compute.manager [req-e6a0af37-b988-4195-b2fc-f66402563e33 req-3fef11da-11f9-4e18-8196-ba0391deb537 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Processing event network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.356 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.362 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490779.361893, 939bb9dc-5fcb-4b53-adc4-df36f016d404 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.363 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] VM Resumed (Lifecycle Event)#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.365 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.373 2 INFO nova.virt.libvirt.driver [-] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Instance spawned successfully.#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.374 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.397 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.407 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.416 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.417 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.418 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.419 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.420 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.421 2 DEBUG nova.virt.libvirt.driver [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
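
The six "Found default for ..." lines record nova pinning the bus and model defaults it chose for this instance, so the choices persist for the instance even if configuration defaults change later. Collected from the lines above, the registered defaults for 939bb9dc-5fcb-4b53-adc4-df36f016d404 are:

# Defaults registered for the instance, per the driver lines above.
registered_defaults = {
    'hw_cdrom_bus': 'sata',
    'hw_disk_bus': 'virtio',
    'hw_input_bus': 'usb',
    'hw_pointer_model': 'usbtablet',
    'hw_video_model': 'virtio',
    'hw_vif_model': 'virtio',
}
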
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.427 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.478 2 INFO nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Took 13.86 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.479 2 DEBUG nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.541 2 INFO nova.compute.manager [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Took 14.92 seconds to build instance.#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.569 2 DEBUG oslo_concurrency.lockutils [None req-bff99d84-e110-4b5d-bd7a-433c5a68ac6e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 291 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct  3 11:26:19 compute-0 podman[530262]: 2025-10-03 11:26:19.874820346 +0000 UTC m=+0.124992107 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9)
Oct  3 11:26:19 compute-0 podman[530261]: 2025-10-03 11:26:19.903825095 +0000 UTC m=+0.143784898 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:26:19 compute-0 podman[530264]: 2025-10-03 11:26:19.91143891 +0000 UTC m=+0.146442475 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  3 11:26:19 compute-0 nova_compute[351685]: 2025-10-03 11:26:19.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:19 compute-0 podman[530265]: 2025-10-03 11:26:19.922342479 +0000 UTC m=+0.161298641 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:26:19 compute-0 podman[530263]: 2025-10-03 11:26:19.925572442 +0000 UTC m=+0.151496116 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:26:19 compute-0 podman[530275]: 2025-10-03 11:26:19.94578989 +0000 UTC m=+0.161988862 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  3 11:26:19 compute-0 podman[530291]: 2025-10-03 11:26:19.949918293 +0000 UTC m=+0.160544317 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
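The health_status=healthy events above come from podman executing each container's configured healthcheck (the 'healthcheck' key inside config_data). A minimal sketch of checking the same status out of band, assuming only the container names shown in the log:

    import subprocess

    def container_healthy(name: str) -> bool:
        # 'podman healthcheck run NAME' executes the container's configured
        # test command and exits 0 on success, non-zero on failure.
        result = subprocess.run(['podman', 'healthcheck', 'run', name],
                                capture_output=True)
        return result.returncode == 0

    for name in ('openstack_network_exporter', 'node_exporter', 'multipathd'):
        print(name, container_healthy(name))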
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.713 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "939bb9dc-5fcb-4b53-adc4-df36f016d404" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.713 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.714 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.714 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.714 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.715 2 INFO nova.compute.manager [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Terminating instance#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.716 2 DEBUG nova.compute.manager [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
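The Acquiring/acquired/released triplets above are oslo_concurrency's per-instance locking. A minimal sketch of the same pattern, simplified from how ComputeManager wraps do_terminate_instance (the bare decorator here is an illustrative stand-in, not nova's exact code):

    from oslo_concurrency import lockutils

    # Serialize all operations on one instance UUID, as in the
    # 'Lock "939bb9dc-..." acquired ... :: waited 0.000s' lines above.
    @lockutils.synchronized('939bb9dc-5fcb-4b53-adc4-df36f016d404')
    def do_terminate_instance():
        # Work here runs with the lock held; the ':: held N.NNNs'
        # release line is logged when this function returns.
        pass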
Oct  3 11:26:20 compute-0 kernel: tap5390e6f6-6c (unregistering): left promiscuous mode
Oct  3 11:26:20 compute-0 NetworkManager[45015]: <info>  [1759490780.7739] device (tap5390e6f6-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:26:20 compute-0 ovn_controller[88471]: 2025-10-03T11:26:20Z|00120|binding|INFO|Releasing lport 5390e6f6-6c7c-44df-a141-60b229fbae5b from this chassis (sb_readonly=0)
Oct  3 11:26:20 compute-0 ovn_controller[88471]: 2025-10-03T11:26:20Z|00121|binding|INFO|Setting lport 5390e6f6-6c7c-44df-a141-60b229fbae5b down in Southbound
Oct  3 11:26:20 compute-0 ovn_controller[88471]: 2025-10-03T11:26:20Z|00122|binding|INFO|Removing iface tap5390e6f6-6c ovn-installed in OVS
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:20 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:20.820 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:3c:79 10.100.0.12'], port_security=['fa:16:3e:09:3c:79 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '939bb9dc-5fcb-4b53-adc4-df36f016d404', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85197338-73a4-414f-bcee-412a119e5ac4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '673cf473bf374c91b11ac2de62d239fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36f63af8-d42d-4323-8db6-4e4a7313e807', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff320888-3cb9-4b8c-9a4a-fe5c523d569c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=5390e6f6-6c7c-44df-a141-60b229fbae5b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:26:20 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:20.824 284328 INFO neutron.agent.ovn.metadata.agent [-] Port 5390e6f6-6c7c-44df-a141-60b229fbae5b in datapath 85197338-73a4-414f-bcee-412a119e5ac4 unbound from our chassis#033[00m
Oct  3 11:26:20 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:20.828 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85197338-73a4-414f-bcee-412a119e5ac4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:26:20 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:20.830 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[866772f2-e89b-4e7e-9393-e47bcf988525]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:20 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:20.831 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4 namespace which is not needed anymore#033[00m
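The 'Matched UPDATE: PortBindingUpdatedEvent' line above shows the metadata agent's ovsdbapp event machinery matching a Port_Binding row whose up field flipped from [True] to [False]. A stripped-down sketch of such an event class, assuming ovsdbapp's RowEvent hooks (the real class lives in neutron.agent.ovn.metadata.agent and does more):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch only 'update' events on the Port_Binding table,
            # matching events=('update',), table='Port_Binding' in the log.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the binding's 'up' state actually changed.
            return hasattr(old, 'up') and old.up != row.up

        def run(self, event, row, old):
            print('port %s is now up=%s' % (row.logical_port, row.up))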
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:20 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct  3 11:26:20 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 2.466s CPU time.
Oct  3 11:26:20 compute-0 systemd-machined[137653]: Machine qemu-11-instance-0000000b terminated.
Oct  3 11:26:20 compute-0 systemd-udevd[530399]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:26:20 compute-0 kernel: tap5390e6f6-6c: entered promiscuous mode
Oct  3 11:26:20 compute-0 NetworkManager[45015]: <info>  [1759490780.9399] manager: (tap5390e6f6-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Oct  3 11:26:20 compute-0 kernel: tap5390e6f6-6c (unregistering): left promiscuous mode
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.967 2 INFO nova.virt.libvirt.driver [-] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Instance destroyed successfully.#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.968 2 DEBUG nova.objects.instance [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lazy-loading 'resources' on Instance uuid 939bb9dc-5fcb-4b53-adc4-df36f016d404 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.995 2 DEBUG nova.virt.libvirt.vif [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:26:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-494597138',display_name='tempest-ServerAddressesTestJSON-server-494597138',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-494597138',id=11,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:26:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='673cf473bf374c91b11ac2de62d239fc',ramdisk_id='',reservation_id='r-azyqxn9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-209133408',owner_user_name='tempest-ServerAddressesTestJSON-209133408-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:26:19Z,user_data=None,user_id='842e5fed9314415e8fc9c491dd9efc11',uuid=939bb9dc-5fcb-4b53-adc4-df36f016d404,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.996 2 DEBUG nova.network.os_vif_util [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Converting VIF {"id": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "address": "fa:16:3e:09:3c:79", "network": {"id": "85197338-73a4-414f-bcee-412a119e5ac4", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-610792588-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "673cf473bf374c91b11ac2de62d239fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5390e6f6-6c", "ovs_interfaceid": "5390e6f6-6c7c-44df-a141-60b229fbae5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.996 2 DEBUG nova.network.os_vif_util [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:79,bridge_name='br-int',has_traffic_filtering=True,id=5390e6f6-6c7c-44df-a141-60b229fbae5b,network=Network(85197338-73a4-414f-bcee-412a119e5ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5390e6f6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.996 2 DEBUG os_vif [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:79,bridge_name='br-int',has_traffic_filtering=True,id=5390e6f6-6c7c-44df-a141-60b229fbae5b,network=Network(85197338-73a4-414f-bcee-412a119e5ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5390e6f6-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:20 compute-0 nova_compute[351685]: 2025-10-03 11:26:20.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5390e6f6-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.006 2 INFO os_vif [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:3c:79,bridge_name='br-int',has_traffic_filtering=True,id=5390e6f6-6c7c-44df-a141-60b229fbae5b,network=Network(85197338-73a4-414f-bcee-412a119e5ac4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5390e6f6-6c')#033[00m
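The DelPortCommand transaction logged above is os-vif removing the tap port from br-int through ovsdbapp. A self-contained sketch of the equivalent call, assuming the default OVS database socket path (os-vif builds its own IDL connection internally):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Equivalent to the logged DelPortCommand(port=tap5390e6f6-6c,
    # bridge=br-int, if_exists=True).
    ovs.del_port('tap5390e6f6-6c', bridge='br-int', if_exists=True).execute(
        check_error=True)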
Oct  3 11:26:21 compute-0 neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4[529937]: [NOTICE]   (529942) : haproxy version is 2.8.14-c23fe91
Oct  3 11:26:21 compute-0 neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4[529937]: [NOTICE]   (529942) : path to executable is /usr/sbin/haproxy
Oct  3 11:26:21 compute-0 neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4[529937]: [WARNING]  (529942) : Exiting Master process...
Oct  3 11:26:21 compute-0 neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4[529937]: [WARNING]  (529942) : Exiting Master process...
Oct  3 11:26:21 compute-0 neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4[529937]: [ALERT]    (529942) : Current worker (529945) exited with code 143 (Terminated)
Oct  3 11:26:21 compute-0 neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4[529937]: [WARNING]  (529942) : All workers exited. Exiting... (0)
Oct  3 11:26:21 compute-0 systemd[1]: libpod-2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb.scope: Deactivated successfully.
Oct  3 11:26:21 compute-0 podman[530420]: 2025-10-03 11:26:21.030690111 +0000 UTC m=+0.081032608 container died 2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb-userdata-shm.mount: Deactivated successfully.
Oct  3 11:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8b6f218d91fc860af74aca43f2c8b60f92321ecd386756feab483f1a7769935-merged.mount: Deactivated successfully.
Oct  3 11:26:21 compute-0 podman[530420]: 2025-10-03 11:26:21.088446262 +0000 UTC m=+0.138788749 container cleanup 2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:26:21 compute-0 systemd[1]: libpod-conmon-2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb.scope: Deactivated successfully.
Oct  3 11:26:21 compute-0 podman[530468]: 2025-10-03 11:26:21.248877864 +0000 UTC m=+0.115693480 container remove 2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.268 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[981f48f7-439c-4c3c-ab53-0729dfd87486]: (4, ('Fri Oct  3 11:26:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4 (2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb)\n2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb\nFri Oct  3 11:26:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4 (2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb)\n2f68be4c8c0ba2d78fc0d81724d77f49bee8c903e5cde640daa85c89b5ec9cfb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.283 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d27de6-3f49-4f44-8d7a-7b0568f38523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.285 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85197338-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:21 compute-0 kernel: tap85197338-70: left promiscuous mode
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.303 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2446f800-b969-4aae-be2e-0ae479a6be89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.330 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[d9611d95-84a7-4543-964b-1eefc8023d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.331 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[02e044b3-53f6-4c04-b7f0-2ff4399b01c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.349 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0fed5f-c23a-43ac-aa84-b45785dc1677]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 995602, 'reachable_time': 43563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 530482, 'error': None, 'target': 'ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d85197338\x2d73a4\x2d414f\x2dbcee\x2d412a119e5ac4.mount: Deactivated successfully.
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.357 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:26:21 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:21.357 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8ef7c2-b05a-469f-9a37-f21fa0a69242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
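The privsep replies above (the RTM_NEWLINK dump showing only 'lo' left in the namespace, then the namespace removal from neutron's privileged ip_lib) correspond to pyroute2 calls. A minimal sketch under that assumption:

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-85197338-73a4-414f-bcee-412a119e5ac4'
    # Dump remaining links inside the namespace (the reply above shows
    # only the loopback was left), then delete the namespace itself.
    with NetNS(ns_name) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'))
    netns.remove(ns_name)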
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.474 2 DEBUG nova.compute.manager [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received event network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.474 2 DEBUG oslo_concurrency.lockutils [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.474 2 DEBUG oslo_concurrency.lockutils [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.475 2 DEBUG oslo_concurrency.lockutils [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.475 2 DEBUG nova.compute.manager [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] No waiting events found dispatching network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.475 2 WARNING nova.compute.manager [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received unexpected event network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b for instance with vm_state active and task_state deleting.#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.475 2 DEBUG nova.compute.manager [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received event network-vif-unplugged-5390e6f6-6c7c-44df-a141-60b229fbae5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.475 2 DEBUG oslo_concurrency.lockutils [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.476 2 DEBUG oslo_concurrency.lockutils [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.476 2 DEBUG oslo_concurrency.lockutils [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.477 2 DEBUG nova.compute.manager [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] No waiting events found dispatching network-vif-unplugged-5390e6f6-6c7c-44df-a141-60b229fbae5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.477 2 DEBUG nova.compute.manager [req-c1ce21af-f5e2-44c9-bf1c-686bced92c1c req-044e1332-48a3-456b-b149-35be8d9b40ab 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received event network-vif-unplugged-5390e6f6-6c7c-44df-a141-60b229fbae5b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
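Both external events above arrive while nothing in the compute manager is waiting on them, so the pop finds no waiter and only the 'No waiting events found dispatching ...' line is logged. A toy model of that pop, purely illustrative of the control flow (not nova's actual data structures):

    import collections

    # instance uuid -> {event name: waiter}; empty here, as during deletion.
    waiting_events = collections.defaultdict(dict)

    def pop_instance_event(instance_uuid, event_name):
        # Returns the waiter if something was waiting for this event,
        # else None -> the 'No waiting events found' path in the log.
        return waiting_events[instance_uuid].pop(event_name, None)

    uuid = '939bb9dc-5fcb-4b53-adc4-df36f016d404'
    assert pop_instance_event(uuid, 'network-vif-unplugged-5390e6f6') is None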
Oct  3 11:26:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 291 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.2 MiB/s wr, 12 op/s
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.799 2 INFO nova.virt.libvirt.driver [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Deleting instance files /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404_del#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.800 2 INFO nova.virt.libvirt.driver [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Deletion of /var/lib/nova/instances/939bb9dc-5fcb-4b53-adc4-df36f016d404_del complete#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.857 2 INFO nova.compute.manager [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.858 2 DEBUG oslo.service.loopingcall [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.858 2 DEBUG nova.compute.manager [-] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:26:21 compute-0 nova_compute[351685]: 2025-10-03 11:26:21.859 2 DEBUG nova.network.neutron [-] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:26:22 compute-0 nova_compute[351685]: 2025-10-03 11:26:22.231 2 DEBUG nova.objects.instance [None req-1f376267-bc6e-4ea0-9cb8-bf4b669eb7ad 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lazy-loading 'flavor' on Instance uuid b5df7002-5185-4a75-ae2e-e8a44a0be062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:22 compute-0 nova_compute[351685]: 2025-10-03 11:26:22.308 2 DEBUG oslo_concurrency.lockutils [None req-1f376267-bc6e-4ea0-9cb8-bf4b669eb7ad 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:22 compute-0 nova_compute[351685]: 2025-10-03 11:26:22.308 2 DEBUG oslo_concurrency.lockutils [None req-1f376267-bc6e-4ea0-9cb8-bf4b669eb7ad 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquired lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.484 2 DEBUG nova.network.neutron [-] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.509 2 INFO nova.compute.manager [-] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Took 1.65 seconds to deallocate network for instance.#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.571 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.572 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.679 2 DEBUG nova.compute.manager [req-39ccd396-25bf-4663-832b-ed1433842bff req-0ad104e6-f48f-493d-9e58-da50458dace8 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received event network-vif-deleted-5390e6f6-6c7c-44df-a141-60b229fbae5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.698 2 DEBUG oslo_concurrency.processutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 270 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 23 op/s
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.836 2 DEBUG nova.compute.manager [req-25f60a43-fd4f-4af0-bd5f-e848ba0cea27 req-aa771b79-1641-4289-849f-b388b4cd7811 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received event network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.837 2 DEBUG oslo_concurrency.lockutils [req-25f60a43-fd4f-4af0-bd5f-e848ba0cea27 req-aa771b79-1641-4289-849f-b388b4cd7811 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.838 2 DEBUG oslo_concurrency.lockutils [req-25f60a43-fd4f-4af0-bd5f-e848ba0cea27 req-aa771b79-1641-4289-849f-b388b4cd7811 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.838 2 DEBUG oslo_concurrency.lockutils [req-25f60a43-fd4f-4af0-bd5f-e848ba0cea27 req-aa771b79-1641-4289-849f-b388b4cd7811 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.839 2 DEBUG nova.compute.manager [req-25f60a43-fd4f-4af0-bd5f-e848ba0cea27 req-aa771b79-1641-4289-849f-b388b4cd7811 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] No waiting events found dispatching network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:26:23 compute-0 nova_compute[351685]: 2025-10-03 11:26:23.840 2 WARNING nova.compute.manager [req-25f60a43-fd4f-4af0-bd5f-e848ba0cea27 req-aa771b79-1641-4289-849f-b388b4cd7811 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Received unexpected event network-vif-plugged-5390e6f6-6c7c-44df-a141-60b229fbae5b for instance with vm_state deleted and task_state None.#033[00m
Oct  3 11:26:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:26:24 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427345403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:26:24 compute-0 nova_compute[351685]: 2025-10-03 11:26:24.212 2 DEBUG oslo_concurrency.processutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
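The ceph df call logged above is how the RBD-backed driver samples pool capacity for the resource tracker. The same command, reproduced directly (field names follow the standard ceph df JSON layout):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)
    # Cluster-wide totals; per-pool entries live under stats['pools'].
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])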
Oct  3 11:26:24 compute-0 nova_compute[351685]: 2025-10-03 11:26:24.221 2 DEBUG nova.compute.provider_tree [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:26:24 compute-0 nova_compute[351685]: 2025-10-03 11:26:24.258 2 DEBUG nova.scheduler.client.report [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
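Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio; worked out for the values logged above:

    # VCPU:      (8    - 0)   * 4.0 = 32.0  schedulable vCPUs
    # MEMORY_MB: (7679 - 512) * 1.0 = 7167  MB schedulable
    # DISK_GB:   (59   - 1)   * 0.9 = 52.2  GB schedulable
    vcpu_cap = (8 - 0) * 4.0
    ram_cap = (7679 - 512) * 1.0
    disk_cap = (59 - 1) * 0.9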
Oct  3 11:26:24 compute-0 nova_compute[351685]: 2025-10-03 11:26:24.289 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:24 compute-0 nova_compute[351685]: 2025-10-03 11:26:24.530 2 INFO nova.scheduler.client.report [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Deleted allocations for instance 939bb9dc-5fcb-4b53-adc4-df36f016d404#033[00m
Oct  3 11:26:24 compute-0 nova_compute[351685]: 2025-10-03 11:26:24.653 2 DEBUG oslo_concurrency.lockutils [None req-6e7c9942-b4fa-46dd-8e29-84fd19c1ba2e 842e5fed9314415e8fc9c491dd9efc11 673cf473bf374c91b11ac2de62d239fc - - default default] Lock "939bb9dc-5fcb-4b53-adc4-df36f016d404" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:24 compute-0 nova_compute[351685]: 2025-10-03 11:26:24.657 2 DEBUG nova.network.neutron [None req-1f376267-bc6e-4ea0-9cb8-bf4b669eb7ad 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:26:24 compute-0 nova_compute[351685]: 2025-10-03 11:26:24.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:25 compute-0 nova_compute[351685]: 2025-10-03 11:26:25.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 244 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 16 KiB/s wr, 39 op/s
Oct  3 11:26:25 compute-0 nova_compute[351685]: 2025-10-03 11:26:25.949 2 DEBUG nova.compute.manager [req-6fd742f5-3cdb-4ff0-a240-148ae594bcca req-3f079f76-1c71-41d8-a1a1-65287d01213a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-changed-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:25 compute-0 nova_compute[351685]: 2025-10-03 11:26:25.952 2 DEBUG nova.compute.manager [req-6fd742f5-3cdb-4ff0-a240-148ae594bcca req-3f079f76-1c71-41d8-a1a1-65287d01213a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Refreshing instance network info cache due to event network-changed-f7d0064f-83c7-44b3-839d-5811852ce687. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:26:25 compute-0 nova_compute[351685]: 2025-10-03 11:26:25.953 2 DEBUG oslo_concurrency.lockutils [req-6fd742f5-3cdb-4ff0-a240-148ae594bcca req-3f079f76-1c71-41d8-a1a1-65287d01213a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:26 compute-0 nova_compute[351685]: 2025-10-03 11:26:26.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:26 compute-0 nova_compute[351685]: 2025-10-03 11:26:26.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 244 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.4 KiB/s wr, 32 op/s
Oct  3 11:26:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:28 compute-0 nova_compute[351685]: 2025-10-03 11:26:28.234 2 DEBUG nova.network.neutron [None req-1f376267-bc6e-4ea0-9cb8-bf4b669eb7ad 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updating instance_info_cache with network_info: [{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
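The network_info blob above is the serialized VIF model nova caches per instance. Pulling addresses out of it is plain dict traversal; a sketch, where `vifs` stands in for the JSON list logged above:

```python
# Walk the cached network_info structure and collect each fixed IP plus
# any floating IPs attached to it.
def addresses(vifs):
    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                yield ip['address'], [f['address'] for f in ip.get('floating_ips', [])]

# With the cache entry above this yields:
#   ('10.100.0.12', ['192.168.122.181']) and ('10.100.0.4', [])
```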
Oct  3 11:26:28 compute-0 nova_compute[351685]: 2025-10-03 11:26:28.265 2 DEBUG oslo_concurrency.lockutils [None req-1f376267-bc6e-4ea0-9cb8-bf4b669eb7ad 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Releasing lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:28 compute-0 nova_compute[351685]: 2025-10-03 11:26:28.265 2 DEBUG nova.compute.manager [None req-1f376267-bc6e-4ea0-9cb8-bf4b669eb7ad 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct  3 11:26:28 compute-0 nova_compute[351685]: 2025-10-03 11:26:28.266 2 DEBUG nova.compute.manager [None req-1f376267-bc6e-4ea0-9cb8-bf4b669eb7ad 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] network_info to inject: |[{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct  3 11:26:28 compute-0 nova_compute[351685]: 2025-10-03 11:26:28.269 2 DEBUG oslo_concurrency.lockutils [req-6fd742f5-3cdb-4ff0-a240-148ae594bcca req-3f079f76-1c71-41d8-a1a1-65287d01213a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:28 compute-0 nova_compute[351685]: 2025-10-03 11:26:28.269 2 DEBUG nova.network.neutron [req-6fd742f5-3cdb-4ff0-a240-148ae594bcca req-3f079f76-1c71-41d8-a1a1-65287d01213a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Refreshing network info cache for port f7d0064f-83c7-44b3-839d-5811852ce687 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:26:29 compute-0 podman[157165]: time="2025-10-03T11:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:26:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 48733 "" "Go-http-client/1.1"
Oct  3 11:26:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10052 "" "Go-http-client/1.1"
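The two `GET /v4.9.3/libpod/...` lines are the libpod REST API being scraped over the podman socket (the Go-http-client user agent is most likely the prometheus-podman-exporter container seen below). A sketch of issuing the same containers/json query from Python over the unix socket (socket path taken from the exporter's config_data below):

```python
import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""
    def __init__(self, path):
        super().__init__('localhost')
        self._path = path
    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self._path)
        self.sock = s

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
print(len(json.loads(conn.getresponse().read())), 'containers')
```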
Oct  3 11:26:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 264 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 206 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Oct  3 11:26:29 compute-0 ovn_controller[88471]: 2025-10-03T11:26:29Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:8a:bc 10.100.0.7
Oct  3 11:26:29 compute-0 nova_compute[351685]: 2025-10-03 11:26:29.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:29 compute-0 ovn_controller[88471]: 2025-10-03T11:26:29Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:8a:bc 10.100.0.7
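DHCPOFFER/DHCPACK coming from pinctrl means ovn-controller itself answered the guest's DHCP request: with OVN there is no dnsmasq, and the lease data lives as DHCP_Options rows in the northbound DB. A hedged way to inspect them with standard ovn-nbctl subcommands, run from wherever the NB DB is reachable:

```python
import subprocess

# List the DHCP_Options rows backing the OFFER/ACK above; each entry's
# contents can then be dumped with `ovn-nbctl dhcp-options-get-options <uuid>`.
print(subprocess.check_output(['ovn-nbctl', 'dhcp-options-list'], text=True))
```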
Oct  3 11:26:30 compute-0 nova_compute[351685]: 2025-10-03 11:26:30.720 2 DEBUG nova.objects.instance [None req-f1e75ecb-a0aa-4876-b1d7-1b7fffce3ee5 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lazy-loading 'flavor' on Instance uuid b5df7002-5185-4a75-ae2e-e8a44a0be062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:30 compute-0 nova_compute[351685]: 2025-10-03 11:26:30.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:30 compute-0 nova_compute[351685]: 2025-10-03 11:26:30.755 2 DEBUG oslo_concurrency.lockutils [None req-f1e75ecb-a0aa-4876-b1d7-1b7fffce3ee5 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:30 compute-0 nova_compute[351685]: 2025-10-03 11:26:30.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:31 compute-0 nova_compute[351685]: 2025-10-03 11:26:31.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:31 compute-0 nova_compute[351685]: 2025-10-03 11:26:31.279 2 DEBUG nova.network.neutron [req-6fd742f5-3cdb-4ff0-a240-148ae594bcca req-3f079f76-1c71-41d8-a1a1-65287d01213a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updated VIF entry in instance network info cache for port f7d0064f-83c7-44b3-839d-5811852ce687. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:26:31 compute-0 nova_compute[351685]: 2025-10-03 11:26:31.280 2 DEBUG nova.network.neutron [req-6fd742f5-3cdb-4ff0-a240-148ae594bcca req-3f079f76-1c71-41d8-a1a1-65287d01213a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updating instance_info_cache with network_info: [{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:31 compute-0 nova_compute[351685]: 2025-10-03 11:26:31.301 2 DEBUG oslo_concurrency.lockutils [req-6fd742f5-3cdb-4ff0-a240-148ae594bcca req-3f079f76-1c71-41d8-a1a1-65287d01213a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:31 compute-0 nova_compute[351685]: 2025-10-03 11:26:31.302 2 DEBUG oslo_concurrency.lockutils [None req-f1e75ecb-a0aa-4876-b1d7-1b7fffce3ee5 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquired lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:31 compute-0 openstack_network_exporter[367524]: ERROR   11:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:26:31 compute-0 openstack_network_exporter[367524]: ERROR   11:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:26:31 compute-0 openstack_network_exporter[367524]: ERROR   11:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:26:31 compute-0 openstack_network_exporter[367524]: ERROR   11:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:26:31 compute-0 openstack_network_exporter[367524]: ERROR   11:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
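The exporter errors above are environmental rather than OVS failures: a compute node runs only ovn-controller, so there is no ovn-northd to query, and the dpif-netdev/pmd-* appctl calls only apply to userspace (DPDK) datapaths, while this node uses the kernel datapath ("datapath_type": "system" in the VIF details above). The appctl transport is a per-daemon control socket; a quick check for the sockets the exporter is looking for, assuming the stock runtime paths:

```python
import glob

# appctl needs a <daemon>.<pid>.ctl control socket; on this node only the
# ovs-vswitchd/ovsdb-server sockets should exist, and ovn-northd's is
# absent by design.
for pattern in ('/var/run/openvswitch/ovs-vswitchd.*.ctl',
                '/var/run/openvswitch/ovsdb-server.*.ctl',
                '/var/run/ovn/ovn-northd.*.ctl'):
    print(pattern, '->', glob.glob(pattern) or 'missing')
```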
Oct  3 11:26:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 266 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.0 MiB/s wr, 75 op/s
Oct  3 11:26:31 compute-0 podman[530508]: 2025-10-03 11:26:31.837554162 +0000 UTC m=+0.081824364 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:26:31 compute-0 podman[530506]: 2025-10-03 11:26:31.847891602 +0000 UTC m=+0.104290504 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:26:31 compute-0 podman[530507]: 2025-10-03 11:26:31.884363131 +0000 UTC m=+0.134511532 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_id=edpm)
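The three health_status records above are podman's periodic healthcheck events for the edpm-managed containers (ceilometer_agent_ipmi, podman_exporter, kepler), each running the healthcheck test listed in its config_data. The same state can be read back on the host; a sketch, with the Go template field name assumed from podman 4.x:

```python
import subprocess

# Read back the health state podman logged above (healthy/unhealthy).
for name in ('ceilometer_agent_ipmi', 'podman_exporter', 'kepler'):
    status = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}', name],
        text=True).strip()
    print(name, status)
```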
Oct  3 11:26:32 compute-0 nova_compute[351685]: 2025-10-03 11:26:32.687 2 DEBUG nova.network.neutron [None req-f1e75ecb-a0aa-4876-b1d7-1b7fffce3ee5 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:26:32 compute-0 nova_compute[351685]: 2025-10-03 11:26:32.807 2 DEBUG nova.compute.manager [req-6a875b6b-5e0c-4c7f-bc79-dad5d44cc547 req-99df35f1-b2ec-4a74-8fe2-60a88895580a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-changed-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:32 compute-0 nova_compute[351685]: 2025-10-03 11:26:32.808 2 DEBUG nova.compute.manager [req-6a875b6b-5e0c-4c7f-bc79-dad5d44cc547 req-99df35f1-b2ec-4a74-8fe2-60a88895580a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Refreshing instance network info cache due to event network-changed-f7d0064f-83c7-44b3-839d-5811852ce687. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:26:32 compute-0 nova_compute[351685]: 2025-10-03 11:26:32.808 2 DEBUG oslo_concurrency.lockutils [req-6a875b6b-5e0c-4c7f-bc79-dad5d44cc547 req-99df35f1-b2ec-4a74-8fe2-60a88895580a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:32 compute-0 nova_compute[351685]: 2025-10-03 11:26:32.862 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:32 compute-0 nova_compute[351685]: 2025-10-03 11:26:32.865 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:32 compute-0 nova_compute[351685]: 2025-10-03 11:26:32.890 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 11:26:32 compute-0 ovn_controller[88471]: 2025-10-03T11:26:32Z|00123|memory|INFO|peak resident set size grew 52% in last 7580.1 seconds, from 16512 kB to 25164 kB
Oct  3 11:26:32 compute-0 ovn_controller[88471]: 2025-10-03T11:26:32Z|00124|memory|INFO|idl-cells-OVN_Southbound:12071 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:424 lflow-cache-entries-cache-matches:310 lflow-cache-size-KB:1665 local_datapath_usage-KB:4 ofctrl_desired_flow_usage-KB:759 ofctrl_installed_flow_usage-KB:553 ofctrl_sb_flow_ref_usage-KB:284
Oct  3 11:26:32 compute-0 ovn_controller[88471]: 2025-10-03T11:26:32Z|00125|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:26:32 compute-0 ovn_controller[88471]: 2025-10-03T11:26:32Z|00126|binding|INFO|Releasing lport 9360fd43-509e-48cf-868f-65a2768ca52b from this chassis (sb_readonly=0)
Oct  3 11:26:32 compute-0 ovn_controller[88471]: 2025-10-03T11:26:32Z|00127|binding|INFO|Releasing lport 1eb40ea8-53b0-46a1-bf82-85a3448330ac from this chassis (sb_readonly=0)
Oct  3 11:26:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct  3 11:26:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct  3 11:26:32 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.037 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.038 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.045 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.045 2 INFO nova.compute.claims [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 11:26:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.228 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:26:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/313831651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.705 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.715 2 DEBUG nova.compute.provider_tree [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:26:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 267 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.5 MiB/s wr, 86 op/s
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.938 2 DEBUG nova.scheduler.client.report [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.983 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:33 compute-0 nova_compute[351685]: 2025-10-03 11:26:33.984 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.037 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.037 2 DEBUG nova.network.neutron [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.055 2 INFO nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.075 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.184 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.187 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.188 2 INFO nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Creating image(s)#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.232 2 DEBUG nova.storage.rbd_utils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.281 2 DEBUG nova.storage.rbd_utils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.326 2 DEBUG nova.storage.rbd_utils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.343 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.382 2 DEBUG nova.policy [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a62337822a774597b9068cf3aed6a92f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ea98f29bce64ae8ba81224645237ac7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.413 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
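nova wraps `qemu-img info` in oslo_concurrency.prlimit so probing a corrupt or malicious image cannot exceed 1 GiB of address space or 30 CPU seconds. Replaying the exact command from the two DEBUG lines above and parsing its JSON:

```python
import json, subprocess

base = '/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8'
# prlimit caps address space (--as, bytes) and CPU seconds before
# qemu-img ever touches the image.
out = subprocess.check_output(
    ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
     '--as=1073741824', '--cpu=30', '--',
     'env', 'LC_ALL=C', 'LANG=C',
     'qemu-img', 'info', base, '--force-share', '--output=json'])
info = json.loads(out)
print(info['format'], info['virtual-size'])  # standard qemu-img JSON keys
```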
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.415 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.416 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.416 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.452 2 DEBUG nova.storage.rbd_utils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.461 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.619 2 DEBUG nova.network.neutron [None req-f1e75ecb-a0aa-4876-b1d7-1b7fffce3ee5 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updating instance_info_cache with network_info: [{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.650 2 DEBUG oslo_concurrency.lockutils [None req-f1e75ecb-a0aa-4876-b1d7-1b7fffce3ee5 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Releasing lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.651 2 DEBUG nova.compute.manager [None req-f1e75ecb-a0aa-4876-b1d7-1b7fffce3ee5 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.652 2 DEBUG nova.compute.manager [None req-f1e75ecb-a0aa-4876-b1d7-1b7fffce3ee5 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] network_info to inject: |[{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.658 2 DEBUG oslo_concurrency.lockutils [req-6a875b6b-5e0c-4c7f-bc79-dad5d44cc547 req-99df35f1-b2ec-4a74-8fe2-60a88895580a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.659 2 DEBUG nova.network.neutron [req-6a875b6b-5e0c-4c7f-bc79-dad5d44cc547 req-99df35f1-b2ec-4a74-8fe2-60a88895580a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Refreshing network info cache for port f7d0064f-83c7-44b3-839d-5811852ce687 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:34 compute-0 nova_compute[351685]: 2025-10-03 11:26:34.948 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.085 2 DEBUG nova.storage.rbd_utils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] resizing rbd image fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
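The sequence above is the RBD imagebackend path: the cached base image is `rbd import`ed into the vms pool as <uuid>_disk, then grown to the flavor's 1 GiB root disk (1073741824 bytes). A hedged equivalent of that resize using the python-rbd bindings instead of the CLI:

```python
import rados, rbd

# Resize the freshly imported image to 1 GiB, matching root_gb=1.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk') as img:
            img.resize(1 * 1024 ** 3)  # 1073741824 bytes
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```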
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.269 2 DEBUG nova.objects.instance [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lazy-loading 'migration_context' on Instance uuid fd405fd5-7402-43b4-8ab3-a52c18493a6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.294 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.294 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Ensure instance console log exists: /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.296 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.296 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.296 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:35.377 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:35.379 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.632 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "b5df7002-5185-4a75-ae2e-e8a44a0be062" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.633 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.634 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.635 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.635 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.637 2 INFO nova.compute.manager [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Terminating instance#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.638 2 DEBUG nova.compute.manager [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.659 2 DEBUG nova.network.neutron [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Successfully created port: d84c98dc-8422-4b51-aaf4-2f9403a4649c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  3 11:26:35 compute-0 kernel: tapf7d0064f-83 (unregistering): left promiscuous mode
Oct  3 11:26:35 compute-0 NetworkManager[45015]: <info>  [1759490795.7278] device (tapf7d0064f-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:26:35 compute-0 ovn_controller[88471]: 2025-10-03T11:26:35Z|00128|binding|INFO|Releasing lport f7d0064f-83c7-44b3-839d-5811852ce687 from this chassis (sb_readonly=0)
Oct  3 11:26:35 compute-0 ovn_controller[88471]: 2025-10-03T11:26:35Z|00129|binding|INFO|Setting lport f7d0064f-83c7-44b3-839d-5811852ce687 down in Southbound
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:35 compute-0 ovn_controller[88471]: 2025-10-03T11:26:35Z|00130|binding|INFO|Removing iface tapf7d0064f-83 ovn-installed in OVS
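The teardown above is the normal unplug path when a VM is destroyed: the tap leaves promiscuous mode, NetworkManager drops the device, and ovn-controller releases the lport, sets it down in the Southbound DB, and clears ovn-installed on the OVS interface. To correlate a tap device with its Neutron port, the Interface row's external_ids:iface-id carries the port UUID; a sketch:

```python
import subprocess

# Map an OVS tap interface back to its Neutron/OVN port via iface-id.
out = subprocess.check_output(
    ['ovs-vsctl', '--bare', '--columns=name,external_ids',
     'find', 'Interface',
     'external_ids:iface-id=f7d0064f-83c7-44b3-839d-5811852ce687'],
    text=True)
print(out or 'no such interface (already unplugged)')
```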
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:35.756 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:16:9e 10.100.0.12'], port_security=['fa:16:3e:6c:16:9e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b5df7002-5185-4a75-ae2e-e8a44a0be062', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65d2d488-03e3-490e-9ad6-7948aea642e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '57f47db3919c4f3797a1434bfeebe880', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4a216c94-f665-4b11-9602-f5df2570ec89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b73237e9-0ef8-4014-9df4-22d8c589f3e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=f7d0064f-83c7-44b3-839d-5811852ce687) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:26:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:35.758 284328 INFO neutron.agent.ovn.metadata.agent [-] Port f7d0064f-83c7-44b3-839d-5811852ce687 in datapath 65d2d488-03e3-490e-9ad6-7948aea642e8 unbound from our chassis#033[00m
Oct  3 11:26:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:35.760 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65d2d488-03e3-490e-9ad6-7948aea642e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:26:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:35.761 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[31c76104-701b-4a47-a699-24e9ba7b9878]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:35 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:35.762 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8 namespace which is not needed anymore#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3583: 321 pgs: 321 active+clean; 304 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 408 KiB/s rd, 4.8 MiB/s wr, 97 op/s
Oct  3 11:26:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct  3 11:26:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 45.652s CPU time.
Oct  3 11:26:35 compute-0 systemd-machined[137653]: Machine qemu-7-instance-00000007 terminated.
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.882 2 INFO nova.virt.libvirt.driver [-] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Instance destroyed successfully.#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.883 2 DEBUG nova.objects.instance [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lazy-loading 'resources' on Instance uuid b5df7002-5185-4a75-ae2e-e8a44a0be062 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.903 2 DEBUG nova.virt.libvirt.vif [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:24:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-833269917',display_name='tempest-AttachInterfacesUnderV243Test-server-833269917',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-833269917',id=7,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBrQApX9rul7+6NfX14vBvGlk222SXAnUP+XRz92EwxKyAJLho/DMSF7rkjn3hLIOKcY5LDAzskko121CYX5fGFGZzKdCg2yvrWvMCpeTQcfG0+JouOcHB5AzC3ZJEn3+w==',key_name='tempest-keypair-1215622455',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:25:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='57f47db3919c4f3797a1434bfeebe880',ramdisk_id='',reservation_id='r-684bvxrt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1461178907',owner_user_name='tempest-AttachInterfacesUnderV243Test-1461178907-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:26:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7851dde78b9e4e9abf7463836db57a8e',uuid=b5df7002-5185-4a75-ae2e-e8a44a0be062,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.903 2 DEBUG nova.network.os_vif_util [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Converting VIF {"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.904 2 DEBUG nova.network.os_vif_util [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:16:9e,bridge_name='br-int',has_traffic_filtering=True,id=f7d0064f-83c7-44b3-839d-5811852ce687,network=Network(65d2d488-03e3-490e-9ad6-7948aea642e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7d0064f-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.904 2 DEBUG os_vif [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:16:9e,bridge_name='br-int',has_traffic_filtering=True,id=f7d0064f-83c7-44b3-839d-5811852ce687,network=Network(65d2d488-03e3-490e-9ad6-7948aea642e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7d0064f-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7d0064f-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.912 2 INFO os_vif [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:16:9e,bridge_name='br-int',has_traffic_filtering=True,id=f7d0064f-83c7-44b3-839d-5811852ce687,network=Network(65d2d488-03e3-490e-9ad6-7948aea642e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf7d0064f-83')#033[00m
Oct  3 11:26:35 compute-0 neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8[527083]: [NOTICE]   (527088) : haproxy version is 2.8.14-c23fe91
Oct  3 11:26:35 compute-0 neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8[527083]: [NOTICE]   (527088) : path to executable is /usr/sbin/haproxy
Oct  3 11:26:35 compute-0 neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8[527083]: [WARNING]  (527088) : Exiting Master process...
Oct  3 11:26:35 compute-0 neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8[527083]: [ALERT]    (527088) : Current worker (527090) exited with code 143 (Terminated)
Oct  3 11:26:35 compute-0 neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8[527083]: [WARNING]  (527088) : All workers exited. Exiting... (0)
Oct  3 11:26:35 compute-0 systemd[1]: libpod-2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d.scope: Deactivated successfully.
Oct  3 11:26:35 compute-0 podman[530776]: 2025-10-03 11:26:35.949154425 +0000 UTC m=+0.076494533 container died 2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.964 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759490780.9630313, 939bb9dc-5fcb-4b53-adc4-df36f016d404 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.964 2 INFO nova.compute.manager [-] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] VM Stopped (Lifecycle Event)#033[00m
Oct  3 11:26:35 compute-0 nova_compute[351685]: 2025-10-03 11:26:35.987 2 DEBUG nova.compute.manager [None req-2a721f95-eea2-4e20-8e9c-1ffbd1cef4fb - - - - - -] [instance: 939bb9dc-5fcb-4b53-adc4-df36f016d404] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d-userdata-shm.mount: Deactivated successfully.
Oct  3 11:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ed236628f85ac2baaaf6c7cf4a0a07b32e8f0472a23f78f03699e84fb302de2-merged.mount: Deactivated successfully.
Oct  3 11:26:36 compute-0 podman[530776]: 2025-10-03 11:26:36.013173527 +0000 UTC m=+0.140513645 container cleanup 2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  3 11:26:36 compute-0 systemd[1]: libpod-conmon-2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d.scope: Deactivated successfully.
Oct  3 11:26:36 compute-0 podman[530831]: 2025-10-03 11:26:36.104310977 +0000 UTC m=+0.065140118 container remove 2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.112 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ca3bea-9df8-451e-91ce-5809c6b8f703]: (4, ('Fri Oct  3 11:26:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8 (2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d)\n2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d\nFri Oct  3 11:26:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8 (2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d)\n2a5c8fd65e41008b6ca312ee44c77c5647571e7df3a5e3b77a055addc1987a8d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.114 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[677bb522-0b59-4425-aaa3-cc99f34b0bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.115 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65d2d488-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:36 compute-0 kernel: tap65d2d488-00: left promiscuous mode
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.137 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[9499af95-eb5c-4fe8-b32a-9d8f940763c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.149 2 DEBUG nova.compute.manager [req-9be80c7d-a15d-4160-89d3-19890a6e9f7f req-c0084053-e243-4d3f-aac7-a9ff06eb3365 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-vif-unplugged-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.150 2 DEBUG oslo_concurrency.lockutils [req-9be80c7d-a15d-4160-89d3-19890a6e9f7f req-c0084053-e243-4d3f-aac7-a9ff06eb3365 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.150 2 DEBUG oslo_concurrency.lockutils [req-9be80c7d-a15d-4160-89d3-19890a6e9f7f req-c0084053-e243-4d3f-aac7-a9ff06eb3365 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.151 2 DEBUG oslo_concurrency.lockutils [req-9be80c7d-a15d-4160-89d3-19890a6e9f7f req-c0084053-e243-4d3f-aac7-a9ff06eb3365 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.151 2 DEBUG nova.compute.manager [req-9be80c7d-a15d-4160-89d3-19890a6e9f7f req-c0084053-e243-4d3f-aac7-a9ff06eb3365 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] No waiting events found dispatching network-vif-unplugged-f7d0064f-83c7-44b3-839d-5811852ce687 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.152 2 DEBUG nova.compute.manager [req-9be80c7d-a15d-4160-89d3-19890a6e9f7f req-c0084053-e243-4d3f-aac7-a9ff06eb3365 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-vif-unplugged-f7d0064f-83c7-44b3-839d-5811852ce687 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.157 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5f06fb8c-035d-4319-a06b-6db513194d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.159 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[87d8bef8-5679-45fc-8d85-059816dbd0ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.180 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[685746b6-2081-49e4-b642-1123081a5c04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 988999, 'reachable_time': 28443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 530845, 'error': None, 'target': 'ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d65d2d488\x2d03e3\x2d490e\x2d9ad6\x2d7948aea642e8.mount: Deactivated successfully.
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.185 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65d2d488-03e3-490e-9ad6-7948aea642e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:26:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:36.186 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[05d24016-ae7e-4dae-8484-58017422c88c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.635 2 INFO nova.virt.libvirt.driver [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Deleting instance files /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062_del#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.636 2 INFO nova.virt.libvirt.driver [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Deletion of /var/lib/nova/instances/b5df7002-5185-4a75-ae2e-e8a44a0be062_del complete#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.709 2 INFO nova.compute.manager [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Took 1.07 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.710 2 DEBUG oslo.service.loopingcall [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.710 2 DEBUG nova.compute.manager [-] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:26:36 compute-0 nova_compute[351685]: 2025-10-03 11:26:36.710 2 DEBUG nova.network.neutron [-] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.382 2 DEBUG nova.network.neutron [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Successfully updated port: d84c98dc-8422-4b51-aaf4-2f9403a4649c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.399 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.399 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquired lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.399 2 DEBUG nova.network.neutron [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.595 2 DEBUG nova.compute.manager [req-94425328-ee99-4d5b-bb5c-747431f50f3e req-604c67d3-9c13-48d7-aec9-4a948f92028a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-changed-d84c98dc-8422-4b51-aaf4-2f9403a4649c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.596 2 DEBUG nova.compute.manager [req-94425328-ee99-4d5b-bb5c-747431f50f3e req-604c67d3-9c13-48d7-aec9-4a948f92028a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Refreshing instance network info cache due to event network-changed-d84c98dc-8422-4b51-aaf4-2f9403a4649c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.596 2 DEBUG oslo_concurrency.lockutils [req-94425328-ee99-4d5b-bb5c-747431f50f3e req-604c67d3-9c13-48d7-aec9-4a948f92028a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.755 2 DEBUG nova.network.neutron [req-6a875b6b-5e0c-4c7f-bc79-dad5d44cc547 req-99df35f1-b2ec-4a74-8fe2-60a88895580a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updated VIF entry in instance network info cache for port f7d0064f-83c7-44b3-839d-5811852ce687. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.756 2 DEBUG nova.network.neutron [req-6a875b6b-5e0c-4c7f-bc79-dad5d44cc547 req-99df35f1-b2ec-4a74-8fe2-60a88895580a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updating instance_info_cache with network_info: [{"id": "f7d0064f-83c7-44b3-839d-5811852ce687", "address": "fa:16:3e:6c:16:9e", "network": {"id": "65d2d488-03e3-490e-9ad6-7948aea642e8", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1607624435-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "57f47db3919c4f3797a1434bfeebe880", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf7d0064f-83", "ovs_interfaceid": "f7d0064f-83c7-44b3-839d-5811852ce687", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:37 compute-0 nova_compute[351685]: 2025-10-03 11:26:37.783 2 DEBUG oslo_concurrency.lockutils [req-6a875b6b-5e0c-4c7f-bc79-dad5d44cc547 req-99df35f1-b2ec-4a74-8fe2-60a88895580a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-b5df7002-5185-4a75-ae2e-e8a44a0be062" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 304 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 408 KiB/s rd, 4.8 MiB/s wr, 97 op/s
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.213 2 DEBUG nova.network.neutron [-] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.235 2 INFO nova.compute.manager [-] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Took 1.52 seconds to deallocate network for instance.#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.307 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.307 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.339 2 DEBUG nova.compute.manager [req-d5b8c908-d113-4cae-8728-f065f12ca587 req-c14807a5-3cb6-4157-8712-ed782733e366 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.342 2 DEBUG oslo_concurrency.lockutils [req-d5b8c908-d113-4cae-8728-f065f12ca587 req-c14807a5-3cb6-4157-8712-ed782733e366 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.342 2 DEBUG oslo_concurrency.lockutils [req-d5b8c908-d113-4cae-8728-f065f12ca587 req-c14807a5-3cb6-4157-8712-ed782733e366 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.343 2 DEBUG oslo_concurrency.lockutils [req-d5b8c908-d113-4cae-8728-f065f12ca587 req-c14807a5-3cb6-4157-8712-ed782733e366 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.343 2 DEBUG nova.compute.manager [req-d5b8c908-d113-4cae-8728-f065f12ca587 req-c14807a5-3cb6-4157-8712-ed782733e366 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] No waiting events found dispatching network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.344 2 WARNING nova.compute.manager [req-d5b8c908-d113-4cae-8728-f065f12ca587 req-c14807a5-3cb6-4157-8712-ed782733e366 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received unexpected event network-vif-plugged-f7d0064f-83c7-44b3-839d-5811852ce687 for instance with vm_state deleted and task_state None.#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.438 2 DEBUG oslo_concurrency.processutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.462 2 DEBUG nova.network.neutron [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 11:26:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:26:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565950007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.887 2 DEBUG oslo_concurrency.processutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.897 2 DEBUG nova.compute.provider_tree [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:26:38 compute-0 nova_compute[351685]: 2025-10-03 11:26:38.955 2 DEBUG nova.scheduler.client.report [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:26:39 compute-0 nova_compute[351685]: 2025-10-03 11:26:39.003 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:39 compute-0 nova_compute[351685]: 2025-10-03 11:26:39.041 2 INFO nova.scheduler.client.report [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Deleted allocations for instance b5df7002-5185-4a75-ae2e-e8a44a0be062#033[00m
Oct  3 11:26:39 compute-0 nova_compute[351685]: 2025-10-03 11:26:39.141 2 DEBUG oslo_concurrency.lockutils [None req-99331bcc-a6bd-427c-a5ac-8899916c91fd 7851dde78b9e4e9abf7463836db57a8e 57f47db3919c4f3797a1434bfeebe880 - - default default] Lock "b5df7002-5185-4a75-ae2e-e8a44a0be062" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 282 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 4.8 MiB/s wr, 132 op/s
Oct  3 11:26:39 compute-0 nova_compute[351685]: 2025-10-03 11:26:39.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.477 2 DEBUG nova.compute.manager [req-096dfb25-1b54-4112-8108-865ce19d6555 req-f7c5fca3-1f57-4abc-a3bf-8e45ee0da44a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Received event network-vif-deleted-f7d0064f-83c7-44b3-839d-5811852ce687 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.732 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.737 2 DEBUG nova.network.neutron [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Updating instance_info_cache with network_info: [{"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.797 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.798 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Releasing lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.799 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Instance network_info: |[{"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.800 2 DEBUG oslo_concurrency.lockutils [req-94425328-ee99-4d5b-bb5c-747431f50f3e req-604c67d3-9c13-48d7-aec9-4a948f92028a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.800 2 DEBUG nova.network.neutron [req-94425328-ee99-4d5b-bb5c-747431f50f3e req-604c67d3-9c13-48d7-aec9-4a948f92028a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Refreshing network info cache for port d84c98dc-8422-4b51-aaf4-2f9403a4649c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.803 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Start _get_guest_xml network_info=[{"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.812 2 WARNING nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.825 2 DEBUG nova.virt.libvirt.host [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.826 2 DEBUG nova.virt.libvirt.host [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.834 2 DEBUG nova.virt.libvirt.host [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.835 2 DEBUG nova.virt.libvirt.host [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.836 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.836 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.837 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.837 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.838 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.838 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.838 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.839 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.839 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.840 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.840 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.840 2 DEBUG nova.virt.hardware [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
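[annotation] The nova.virt.hardware lines above walk the whole topology decision for this m1.nano guest: no flavor or image constraints (limits and preferences all 0:0:0, maxima 65536), one vCPU, so the only possible and therefore chosen topology is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration step, with illustrative names rather than nova's actual code:

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count and which respect the configured maxima -- the same search
    # the _get_possible_cpu_topologies lines above describe. Illustrative only.
    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    found.append(VirtCPUTopology(s, c, t))
        return found

    print(possible_topologies(1))  # [VirtCPUTopology(sockets=1, cores=1, threads=1)]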
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.845 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:40 compute-0 nova_compute[351685]: 2025-10-03 11:26:40.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:26:41 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175292885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.329 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
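[annotation] Before it can build the RBD disk definitions, nova shells out to ceph to learn the monitor addresses, as the processutils lines above show (the call took 0.484s and was audited by the mon). A hedged sketch of the same lookup, assuming the cluster and the client.openstack keyring from the log are reachable:

    # Run the exact command from the log and extract monitor host:port pairs;
    # these end up as <host> elements in the libvirt <source protocol="rbd">.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    mon_map = json.loads(out)
    # Each entry's "addr" typically looks like "192.168.122.100:6789/0";
    # drop the trailing nonce.
    hosts = [m["addr"].split("/")[0] for m in mon_map.get("mons", [])]
    print(hosts)  # expected here: ['192.168.122.100:6789']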
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.357 2 DEBUG nova.storage.rbd_utils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.364 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:41.704 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:41.704 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:41.705 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 265 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 4.3 MiB/s wr, 107 op/s
Oct  3 11:26:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:26:41 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392657404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.827 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.829 2 DEBUG nova.virt.libvirt.vif [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:26:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-447198342',display_name='tempest-TestNetworkBasicOps-server-447198342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-447198342',id=12,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPPFGiS/9H/NKZhedR671AmR5bc0vWONukTWlO3x050R+mQUBddzuqLnrqAfqEtxXZBullE/O5LHjMA86f5AxVTlzZ9Lb5pyHDZ0PNtZgTOmhM6mUFx3RhorO28GwrSoFw==',key_name='tempest-TestNetworkBasicOps-176805461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ea98f29bce64ae8ba81224645237ac7',ramdisk_id='',reservation_id='r-u5uhlazy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-975938423',owner_user_name='tempest-TestNetworkBasicOps-975938423-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:26:34Z,user_data=None,user_id='a62337822a774597b9068cf3aed6a92f',uuid=fd405fd5-7402-43b4-8ab3-a52c18493a6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.830 2 DEBUG nova.network.os_vif_util [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converting VIF {"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.831 2 DEBUG nova.network.os_vif_util [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=d84c98dc-8422-4b51-aaf4-2f9403a4649c,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd84c98dc-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
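[annotation] The Converting/Converted pair above is nova translating its legacy VIF dict into an os-vif VIFOpenVSwitch object for the ovs plugin. Purely as illustration (this is a hypothetical helper, not nova's converter), the field mapping visible in the two log lines is roughly:

    # Map the legacy VIF dict keys from the "Converting VIF" line onto the
    # VIFOpenVSwitch fields shown in the "Converted object" line.
    def nova_vif_to_osvif_fields(vif):
        return {
            "id": vif["id"],                                   # port UUID
            "address": vif["address"],                         # MAC address
            "bridge_name": vif["details"]["bridge_name"],      # br-int
            "has_traffic_filtering": vif["details"]["port_filter"],
            "vif_name": vif["devname"],                        # tapd84c98dc-84
            "preserve_on_delete": vif["preserve_on_delete"],
            "active": vif["active"],
            "network_id": vif["network"]["id"],
        }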
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.833 2 DEBUG nova.objects.instance [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lazy-loading 'pci_devices' on Instance uuid fd405fd5-7402-43b4-8ab3-a52c18493a6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.850 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <uuid>fd405fd5-7402-43b4-8ab3-a52c18493a6e</uuid>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <name>instance-0000000c</name>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <nova:name>tempest-TestNetworkBasicOps-server-447198342</nova:name>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:26:40</nova:creationTime>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <nova:user uuid="a62337822a774597b9068cf3aed6a92f">tempest-TestNetworkBasicOps-975938423-project-member</nova:user>
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <nova:project uuid="5ea98f29bce64ae8ba81224645237ac7">tempest-TestNetworkBasicOps-975938423</nova:project>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <nova:port uuid="d84c98dc-8422-4b51-aaf4-2f9403a4649c">
Oct  3 11:26:41 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <system>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <entry name="serial">fd405fd5-7402-43b4-8ab3-a52c18493a6e</entry>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <entry name="uuid">fd405fd5-7402-43b4-8ab3-a52c18493a6e</entry>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </system>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <os>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  </os>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <features>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  </features>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk">
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      </source>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk.config">
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      </source>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:26:41 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:f9:87:f3"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:26:41 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <target dev="tapd84c98dc-84"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e/console.log" append="off"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <video>
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </video>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:26:41 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:26:41 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:26:41 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:26:41 compute-0 nova_compute[351685]: </domain>
Oct  3 11:26:41 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
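[annotation] The block above is the complete domain XML nova generated for instance-0000000c: one rbd-backed virtio root disk (vda), an rbd-backed config-drive CD-ROM (sda), and a single virtio interface on tapd84c98dc-84. A short standard-library sketch for pulling those devices back out of such a document, assuming it has been saved to domain.xml:

    # List the disks and interfaces of the generated guest, mirroring what
    # the XML dump above declares.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(disk.get("device"), tgt.get("dev"), src.get("protocol"), src.get("name"))
    for iface in root.findall("./devices/interface"):
        print(iface.get("type"), iface.find("mac").get("address"),
              iface.find("target").get("dev"))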
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.851 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Preparing to wait for external event network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.851 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.851 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.852 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.853 2 DEBUG nova.virt.libvirt.vif [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:26:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-447198342',display_name='tempest-TestNetworkBasicOps-server-447198342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-447198342',id=12,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPPFGiS/9H/NKZhedR671AmR5bc0vWONukTWlO3x050R+mQUBddzuqLnrqAfqEtxXZBullE/O5LHjMA86f5AxVTlzZ9Lb5pyHDZ0PNtZgTOmhM6mUFx3RhorO28GwrSoFw==',key_name='tempest-TestNetworkBasicOps-176805461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ea98f29bce64ae8ba81224645237ac7',ramdisk_id='',reservation_id='r-u5uhlazy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-975938423',owner_user_name='tempest-TestNetworkBasicOps-975938423-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:26:34Z,user_data=None,user_id='a62337822a774597b9068cf3aed6a92f',uuid=fd405fd5-7402-43b4-8ab3-a52c18493a6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.853 2 DEBUG nova.network.os_vif_util [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converting VIF {"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.854 2 DEBUG nova.network.os_vif_util [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=d84c98dc-8422-4b51-aaf4-2f9403a4649c,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd84c98dc-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.854 2 DEBUG os_vif [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=d84c98dc-8422-4b51-aaf4-2f9403a4649c,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd84c98dc-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd84c98dc-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.861 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd84c98dc-84, col_values=(('external_ids', {'iface-id': 'd84c98dc-8422-4b51-aaf4-2f9403a4649c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:87:f3', 'vm-uuid': 'fd405fd5-7402-43b4-8ab3-a52c18493a6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
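[annotation] The transaction above (AddPortCommand plus a DbSetCommand on the Interface row) is what wires the tap device into br-int; the external_ids it sets, in particular iface-id, are what ovn-controller later matches against the logical port. An assumed one-shot CLI equivalent, sketched via subprocess with values taken from the log:

    # ovs-vsctl equivalent of the ovsdbapp transaction: add the port
    # (idempotently) and set the same external_ids on its Interface row.
    import subprocess

    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tapd84c98dc-84",
        "--", "set", "Interface", "tapd84c98dc-84",
        "external_ids:iface-id=d84c98dc-8422-4b51-aaf4-2f9403a4649c",
        "external_ids:iface-status=active",
        'external_ids:attached-mac="fa:16:3e:f9:87:f3"',
        "external_ids:vm-uuid=fd405fd5-7402-43b4-8ab3-a52c18493a6e",
    ])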
Oct  3 11:26:41 compute-0 NetworkManager[45015]: <info>  [1759490801.8649] manager: (tapd84c98dc-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.877 2 INFO os_vif [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=d84c98dc-8422-4b51-aaf4-2f9403a4649c,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd84c98dc-84')#033[00m
Oct  3 11:26:41 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.954 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.955 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.955 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] No VIF found with MAC fa:16:3e:f9:87:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.956 2 INFO nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Using config drive#033[00m
Oct  3 11:26:41 compute-0 nova_compute[351685]: 2025-10-03 11:26:41.992 2 DEBUG nova.storage.rbd_utils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:42 compute-0 nova_compute[351685]: 2025-10-03 11:26:42.773 2 INFO nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Creating config drive at /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.config#033[00m
Oct  3 11:26:42 compute-0 nova_compute[351685]: 2025-10-03 11:26:42.778 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0i7svbp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:42 compute-0 nova_compute[351685]: 2025-10-03 11:26:42.907 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0i7svbp" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:42 compute-0 nova_compute[351685]: 2025-10-03 11:26:42.958 2 DEBUG nova.storage.rbd_utils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:42 compute-0 nova_compute[351685]: 2025-10-03 11:26:42.970 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.config fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
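[annotation] The lines from 11:26:42.773 onward show the config-drive path end to end: nova builds an ISO9660 image with mkisofs, imports it into the vms pool as <uuid>_disk.config (the earlier rbd_utils lines confirmed it did not exist yet), and then deletes the local file once the import returns. A compact sketch of that sequence, with the -publisher flag trimmed and the temp directory name taken verbatim from the log:

    # Build the config drive, import it into RBD, drop the local copy --
    # the same three steps the surrounding log lines record.
    import os
    import subprocess

    inst = "fd405fd5-7402-43b4-8ab3-a52c18493a6e"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpb0i7svbp",  # staged metadata directory (name from the log)
    ])
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    os.unlink(iso)  # nova deletes the local copy once the import succeeds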
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.017 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.018 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.042 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.174 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.175 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.176 2 DEBUG nova.network.neutron [req-94425328-ee99-4d5b-bb5c-747431f50f3e req-604c67d3-9c13-48d7-aec9-4a948f92028a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Updated VIF entry in instance network info cache for port d84c98dc-8422-4b51-aaf4-2f9403a4649c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.177 2 DEBUG nova.network.neutron [req-94425328-ee99-4d5b-bb5c-747431f50f3e req-604c67d3-9c13-48d7-aec9-4a948f92028a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Updating instance_info_cache with network_info: [{"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.186 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.186 2 INFO nova.compute.claims [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.221 2 DEBUG oslo_concurrency.lockutils [req-94425328-ee99-4d5b-bb5c-747431f50f3e req-604c67d3-9c13-48d7-aec9-4a948f92028a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.225 2 DEBUG oslo_concurrency.processutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.config fd405fd5-7402-43b4-8ab3-a52c18493a6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.226 2 INFO nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Deleting local config drive /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.config because it was imported into RBD.#033[00m
Oct  3 11:26:43 compute-0 kernel: tapd84c98dc-84: entered promiscuous mode
Oct  3 11:26:43 compute-0 NetworkManager[45015]: <info>  [1759490803.3093] manager: (tapd84c98dc-84): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:43 compute-0 ovn_controller[88471]: 2025-10-03T11:26:43Z|00131|binding|INFO|Claiming lport d84c98dc-8422-4b51-aaf4-2f9403a4649c for this chassis.
Oct  3 11:26:43 compute-0 ovn_controller[88471]: 2025-10-03T11:26:43Z|00132|binding|INFO|d84c98dc-8422-4b51-aaf4-2f9403a4649c: Claiming fa:16:3e:f9:87:f3 10.100.0.5
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.324 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:87:f3 10.100.0.5'], port_security=['fa:16:3e:f9:87:f3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fd405fd5-7402-43b4-8ab3-a52c18493a6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ea98f29bce64ae8ba81224645237ac7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea5a8a77-844c-46ab-af9d-4b5c2ea8e4c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eb2c44-9631-42b8-a4d9-ab8785ccd098, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d84c98dc-8422-4b51-aaf4-2f9403a4649c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.325 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d84c98dc-8422-4b51-aaf4-2f9403a4649c in datapath 0cae90f5-24f0-45af-a3e3-a77dbb0a12af bound to our chassis#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.328 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cae90f5-24f0-45af-a3e3-a77dbb0a12af#033[00m
Oct  3 11:26:43 compute-0 ovn_controller[88471]: 2025-10-03T11:26:43Z|00133|binding|INFO|Setting lport d84c98dc-8422-4b51-aaf4-2f9403a4649c ovn-installed in OVS
Oct  3 11:26:43 compute-0 ovn_controller[88471]: 2025-10-03T11:26:43Z|00134|binding|INFO|Setting lport d84c98dc-8422-4b51-aaf4-2f9403a4649c up in Southbound
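[annotation] The four ovn_controller lines above are the OVN side of the plug: the chassis claims the logical port (its requested-chassis matches this host and the OVS interface now carries the matching external_ids:iface-id), marks it ovn-installed in OVS, and sets it up in the Southbound database. One way to inspect the row being acted on, assuming ovn-sbctl is available on the chassis:

    # Query the Southbound Port_Binding row for the logical port in the log.
    import subprocess

    print(subprocess.check_output([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=d84c98dc-8422-4b51-aaf4-2f9403a4649c",
    ]).decode())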
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.349 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[91371e05-9f8a-4da7-bd42-7593384f560d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 systemd-udevd[531006]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.352 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cae90f5-21 in ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.354 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cae90f5-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.354 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[18a57207-3d08-4d51-9a37-e12564d15199]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.356 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[537b9cfd-c227-4070-a53b-8ab5d6babbc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
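[annotation] Here the metadata agent provisions the per-network metadata plumbing: a namespace named after the datapath (ovnmeta-0cae90f5-...) and a veth pair whose inner end, tap0cae90f5-21, is moved into it (the outer end, tap0cae90f5-20, shows up as a new Veth device a few lines below). A rough root-only sketch with pyroute2, the library behind neutron's privsep helpers; names are taken from the log:

    # Create the namespace and veth pair, then move the inner end into the
    # namespace -- the provisioning step the agent logs above.
    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af"
    netns.create(ns)

    ip = IPRoute()
    ip.link("add", ifname="tap0cae90f5-20", kind="veth", peer="tap0cae90f5-21")
    idx = ip.link_lookup(ifname="tap0cae90f5-21")[0]
    ip.link("set", index=idx, net_ns_fd=ns)  # inner end goes into the namespace
    ip.close()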
Oct  3 11:26:43 compute-0 NetworkManager[45015]: <info>  [1759490803.3715] device (tapd84c98dc-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:26:43 compute-0 systemd-machined[137653]: New machine qemu-12-instance-0000000c.
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.375 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[9361cd70-7fdb-46c0-81a0-03efdee418a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 NetworkManager[45015]: <info>  [1759490803.3777] device (tapd84c98dc-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.378 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:43 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.393 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea4bb8a-44a7-4146-8caf-1c12180f61ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.433 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2ef60c-ff54-4c07-8406-8367ad6197b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.440 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[70fc3c87-c8a7-4139-9d16-c417baa9c9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 NetworkManager[45015]: <info>  [1759490803.4416] manager: (tap0cae90f5-20): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.480 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[54b9aa1b-e703-4b50-9c83-18e0e53eda68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.484 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[fcef1ae5-fc21-4ef0-adc6-48f474c6dbc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 NetworkManager[45015]: <info>  [1759490803.5120] device (tap0cae90f5-20): carrier: link connected
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.527 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[d54aa0bf-03a4-4953-83a1-a90229736032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.544 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2d114bf8-34de-4b5e-b08a-7490bd9a8077]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cae90f5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:a3:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 998611, 'reachable_time': 19686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 531059, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.561 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[73cbf1ed-b029-450b-8e29-3494575dca11]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:a3ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 998611, 'tstamp': 998611}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 531060, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.584 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[fd528c75-fa4d-4115-90de-664139c1b219]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cae90f5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:a3:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 998611, 'reachable_time': 19686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 531061, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
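The privsep replies above are netlink dumps that the agent's privileged helper returns after probing the freshly plugged tap device inside the ovnmeta- namespace: two RTM_NEWLINK link dumps (flags, MTU, MAC, counters) and one RTM_NEWADDR dump carrying the link-local fe80:: address. A minimal sketch of the same queries issued directly with pyroute2, the library behind these payloads, rather than through the oslo.privsep channel:

    from pyroute2 import NetNS

    # Attach to the metadata namespace named in the replies above.
    ns = NetNS('ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af')
    try:
        # RTM_NEWLINK dump: the same attrs seen above (IFLA_IFNAME, IFLA_MTU, ...).
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))
        # RTM_NEWADDR dump for ifindex 2: yields the fe80::f816:3eff:feb9:a3ec address.
        for addr in ns.get_addr(index=2):
            print(addr.get_attr('IFA_ADDRESS'))
    finally:
        ns.close()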
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.629 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d1d5ce-f3d1-46b5-b806-75feb53a88ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.727 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f1dd24cd-27e0-4924-8fe1-f6957f4f52b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.729 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cae90f5-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.729 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.730 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cae90f5-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:43 compute-0 kernel: tap0cae90f5-20: entered promiscuous mode
Oct  3 11:26:43 compute-0 NetworkManager[45015]: <info>  [1759490803.7340] manager: (tap0cae90f5-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.743 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cae90f5-20, col_values=(('external_ids', {'iface-id': 'e51b3658-d946-4608-953e-6b26039ed1fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
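The three ovsdbapp transactions above replug the metadata tap: drop it from br-ex if present, add it to br-int, then stamp the Interface row with the iface-id that ovn-controller matches against the logical switch port. A rough equivalent with ovsdbapp's Open vSwitch API (the ovsdb socket path and timeout are assumptions, not taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')  # assumed local socket
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap0cae90f5-20', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap0cae90f5-20', may_exist=True))
        txn.add(api.db_set('Interface', 'tap0cae90f5-20',
                           ('external_ids',
                            {'iface-id': 'e51b3658-d946-4608-953e-6b26039ed1fd'})))

The agent actually commits each command as its own transaction, which is why the DelPortCommand against br-ex is logged as "Transaction caused no change" when the port was never on that bridge.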
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:43 compute-0 ovn_controller[88471]: 2025-10-03T11:26:43Z|00135|binding|INFO|Releasing lport e51b3658-d946-4608-953e-6b26039ed1fd from this chassis (sb_readonly=0)
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.747 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cae90f5-24f0-45af-a3e3-a77dbb0a12af.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cae90f5-24f0-45af-a3e3-a77dbb0a12af.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.748 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[fae68c43-7573-48d2-9a2b-a2ba1d0bb913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.749 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-0cae90f5-24f0-45af-a3e3-a77dbb0a12af
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/0cae90f5-24f0-45af-a3e3-a77dbb0a12af.pid.haproxy
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 0cae90f5-24f0-45af-a3e3-a77dbb0a12af
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 11:26:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:43.750 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'env', 'PROCESS_TAG=haproxy-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cae90f5-24f0-45af-a3e3-a77dbb0a12af.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
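Putting the two preceding steps together: create_config_file writes the rendered haproxy configuration dumped above, and create_process launches haproxy inside the per-network namespace (through sudo/neutron-rootwrap in the logged command). A condensed sketch assuming direct root access instead of the rootwrap indirection; haproxy_cfg is an assumed variable holding the config text from the log:

    import subprocess

    network_id = '0cae90f5-24f0-45af-a3e3-a77dbb0a12af'
    cfg_path = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'

    # create_config_file: persist the rendered template.
    with open(cfg_path, 'w') as f:
        f.write(haproxy_cfg)  # assumed: the config text logged above

    # create_process: run haproxy in the ovnmeta namespace, tagged for later lookup.
    subprocess.check_call([
        'ip', 'netns', 'exec', f'ovnmeta-{network_id}',
        'env', f'PROCESS_TAG=haproxy-{network_id}',
        'haproxy', '-f', cfg_path,
    ])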
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3587: 321 pgs: 321 active+clean; 265 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 4.0 MiB/s wr, 98 op/s
Oct  3 11:26:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:26:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311211049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.918 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.927 2 DEBUG nova.compute.provider_tree [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.948 2 DEBUG nova.scheduler.client.report [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
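The inventory dictionary above is what the resource tracker reports to placement; the capacity the scheduler can consume per resource class follows placement's formula (total - reserved) * allocation_ratio. A quick check of the numbers logged for this hypervisor:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, int(capacity))  # VCPU 32, MEMORY_MB 7167, DISK_GB 52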
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.975 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:43 compute-0 nova_compute[351685]: 2025-10-03 11:26:43.977 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.021 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.021 2 DEBUG nova.network.neutron [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.070 2 INFO nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.090 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 11:26:44 compute-0 podman[531094]: 2025-10-03 11:26:44.186124673 +0000 UTC m=+0.084542421 container create f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.212 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.213 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.214 2 INFO nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Creating image(s)#033[00m
Oct  3 11:26:44 compute-0 podman[531094]: 2025-10-03 11:26:44.139945632 +0000 UTC m=+0.038363430 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:26:44 compute-0 systemd[1]: Started libpod-conmon-f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada.scope.
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.273 2 DEBUG nova.storage.rbd_utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e913b6595b5ff98fbc8d5b5132e43c4498aecbcaa5829b9968753d6c93f43c32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:44 compute-0 podman[531094]: 2025-10-03 11:26:44.341464821 +0000 UTC m=+0.239882569 container init f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.345 2 DEBUG nova.storage.rbd_utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:44 compute-0 podman[531094]: 2025-10-03 11:26:44.357489354 +0000 UTC m=+0.255907072 container start f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:26:44 compute-0 neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af[531137]: [NOTICE]   (531186) : New worker (531204) forked
Oct  3 11:26:44 compute-0 neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af[531137]: [NOTICE]   (531186) : Loading success.
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.386 2 DEBUG nova.storage.rbd_utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.395 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "54e4a806ae3db3ffd1941099a5274840605d8464" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.396 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "54e4a806ae3db3ffd1941099a5274840605d8464" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.502 2 DEBUG nova.policy [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8990c210ba8740dc9714739f27702391', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ebbd19d68501417398b05a6a4b7c22de', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
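The policy failure above is benign: nova checks network:attach_external_network while allocating the port, and this request carries only the reader and member roles with is_admin False, so the check fails and the build continues without external-network privileges. A minimal oslo.policy reproduction (the 'is_admin:True' default rule is an assumption based on nova's stock policy, not read from this host):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'is_admin:True'))

    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'project_id': 'ebbd19d68501417398b05a6a4b7c22de'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False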
Oct  3 11:26:44 compute-0 nova_compute[351685]: 2025-10-03 11:26:44.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.109 2 DEBUG nova.compute.manager [req-ef985b4a-8a2d-480f-a81d-af3e3fce8ef5 req-04191cdb-2483-4191-800e-897b40c8cc91 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.109 2 DEBUG oslo_concurrency.lockutils [req-ef985b4a-8a2d-480f-a81d-af3e3fce8ef5 req-04191cdb-2483-4191-800e-897b40c8cc91 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.109 2 DEBUG oslo_concurrency.lockutils [req-ef985b4a-8a2d-480f-a81d-af3e3fce8ef5 req-04191cdb-2483-4191-800e-897b40c8cc91 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.109 2 DEBUG oslo_concurrency.lockutils [req-ef985b4a-8a2d-480f-a81d-af3e3fce8ef5 req-04191cdb-2483-4191-800e-897b40c8cc91 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.110 2 DEBUG nova.compute.manager [req-ef985b4a-8a2d-480f-a81d-af3e3fce8ef5 req-04191cdb-2483-4191-800e-897b40c8cc91 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Processing event network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.164 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490805.1644819, fd405fd5-7402-43b4-8ab3-a52c18493a6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.165 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] VM Started (Lifecycle Event)#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.167 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.172 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.177 2 INFO nova.virt.libvirt.driver [-] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Instance spawned successfully.#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.177 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.185 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.190 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
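In the sync message above, "DB power_state: 0, VM power_state: 1" reads as NOSTATE versus RUNNING: the database row still predates the guest while libvirt already reports it running. Nova encodes power states as small integers in nova.compute.power_state:

    # Values as defined in nova/compute/power_state.py
    NOSTATE   = 0x00  # what the DB row still holds while spawning
    RUNNING   = 0x01  # what libvirt reports for the new guest
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07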
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.198 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.198 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.201 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.201 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.201 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.202 2 DEBUG nova.virt.libvirt.driver [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.210 2 DEBUG nova.virt.libvirt.imagebackend [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Image locations are: [{'url': 'rbd://9b4e8c9a-5555-5510-a631-4742a1182561/images/b9c8e0cc-ecf1-4fa8-92ee-328b108123cd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://9b4e8c9a-5555-5510-a631-4742a1182561/images/b9c8e0cc-ecf1-4fa8-92ee-328b108123cd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.212 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.213 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490805.1646187, fd405fd5-7402-43b4-8ab3-a52c18493a6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.213 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] VM Paused (Lifecycle Event)#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.237 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.242 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490805.171885, fd405fd5-7402-43b4-8ab3-a52c18493a6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.242 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] VM Resumed (Lifecycle Event)#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.261 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.267 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.295 2 INFO nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Took 11.11 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.296 2 DEBUG nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.296 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.378 2 INFO nova.compute.manager [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Took 12.41 seconds to build instance.#033[00m
Oct  3 11:26:45 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:45.383 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:45 compute-0 nova_compute[351685]: 2025-10-03 11:26:45.405 2 DEBUG oslo_concurrency.lockutils [None req-eec77f92-122d-49bf-bf9d-e8ba31ffef75 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3588: 321 pgs: 321 active+clean; 265 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.165 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.238 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464.part --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.240 2 DEBUG nova.virt.images [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] b9c8e0cc-ecf1-4fa8-92ee-328b108123cd was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.242 2 DEBUG nova.privsep.utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.243 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464.part /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:26:46
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta']
Oct  3 11:26:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.481 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464.part /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464.converted" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.486 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.588 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464.converted --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.589 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "54e4a806ae3db3ffd1941099a5274840605d8464" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
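The commands above are nova's fetch_to_raw sequence for the image cache: inspect the downloaded .part file with qemu-img info under oslo.concurrency's prlimit guard (1 GiB address space, 30 s of CPU, per the logged flags), convert qcow2 to raw with the host page cache bypassed, then re-inspect the result. A compact sketch of the first two steps with processutils:

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    base = '/var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464'

    out, _ = processutils.execute('qemu-img', 'info', base + '.part',
                                  '--force-share', '--output=json', prlimit=limits)
    if json.loads(out)['format'] == 'qcow2':
        # -t none bypasses the host page cache, matching the logged command.
        processutils.execute('qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                             '-f', 'qcow2', base + '.part', base + '.converted')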
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.631 2 DEBUG nova.storage.rbd_utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.640 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.683 2 DEBUG nova.network.neutron [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Successfully created port: 226590bd-fa92-4e26-8879-8782d015ad61 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  3 11:26:46 compute-0 nova_compute[351685]: 2025-10-03 11:26:46.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:47 compute-0 nova_compute[351685]: 2025-10-03 11:26:47.029 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:47 compute-0 nova_compute[351685]: 2025-10-03 11:26:47.140 2 DEBUG nova.storage.rbd_utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] resizing rbd image 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
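With the base image in raw format, nova imports it into the vms pool and then grows the copy to the flavor's 1 GiB root disk; the resize logged by rbd_utils goes through the rbd Python bindings rather than the CLI. A minimal equivalent using the pool, client id, and size from the log (error handling omitted):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            image = rbd.Image(ioctx, '83fc22ce-d2e4-468a-b166-04f2743fa68d_disk')
            try:
                image.resize(1073741824)  # bytes, matching the logged target size
            finally:
                image.close()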
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:26:47 compute-0 nova_compute[351685]: 2025-10-03 11:26:47.319 2 DEBUG nova.objects.instance [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lazy-loading 'migration_context' on Instance uuid 83fc22ce-d2e4-468a-b166-04f2743fa68d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:26:47 compute-0 nova_compute[351685]: 2025-10-03 11:26:47.335 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 11:26:47 compute-0 nova_compute[351685]: 2025-10-03 11:26:47.335 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Ensure instance console log exists: /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 11:26:47 compute-0 nova_compute[351685]: 2025-10-03 11:26:47.336 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:47 compute-0 nova_compute[351685]: 2025-10-03 11:26:47.337 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:47 compute-0 nova_compute[351685]: 2025-10-03 11:26:47.337 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 265 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.6 MiB/s wr, 56 op/s
Oct  3 11:26:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:48 compute-0 nova_compute[351685]: 2025-10-03 11:26:48.793 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:48 compute-0 nova_compute[351685]: 2025-10-03 11:26:48.843 2 DEBUG nova.compute.manager [req-55c9a1cf-e4e2-481d-8f7c-ade7476834e1 req-819dec45-f61c-4a5e-95e8-1b488a8c728d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:48 compute-0 nova_compute[351685]: 2025-10-03 11:26:48.844 2 DEBUG oslo_concurrency.lockutils [req-55c9a1cf-e4e2-481d-8f7c-ade7476834e1 req-819dec45-f61c-4a5e-95e8-1b488a8c728d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:48 compute-0 nova_compute[351685]: 2025-10-03 11:26:48.844 2 DEBUG oslo_concurrency.lockutils [req-55c9a1cf-e4e2-481d-8f7c-ade7476834e1 req-819dec45-f61c-4a5e-95e8-1b488a8c728d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:48 compute-0 nova_compute[351685]: 2025-10-03 11:26:48.844 2 DEBUG oslo_concurrency.lockutils [req-55c9a1cf-e4e2-481d-8f7c-ade7476834e1 req-819dec45-f61c-4a5e-95e8-1b488a8c728d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:48 compute-0 nova_compute[351685]: 2025-10-03 11:26:48.844 2 DEBUG nova.compute.manager [req-55c9a1cf-e4e2-481d-8f7c-ade7476834e1 req-819dec45-f61c-4a5e-95e8-1b488a8c728d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] No waiting events found dispatching network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:26:48 compute-0 nova_compute[351685]: 2025-10-03 11:26:48.844 2 WARNING nova.compute.manager [req-55c9a1cf-e4e2-481d-8f7c-ade7476834e1 req-819dec45-f61c-4a5e-95e8-1b488a8c728d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received unexpected event network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c for instance with vm_state active and task_state None.#033[00m
Oct  3 11:26:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3590: 321 pgs: 321 active+clean; 296 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.8 MiB/s wr, 147 op/s
Oct  3 11:26:49 compute-0 nova_compute[351685]: 2025-10-03 11:26:49.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.173 2 DEBUG nova.network.neutron [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Successfully updated port: 226590bd-fa92-4e26-8879-8782d015ad61 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.190 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.190 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquired lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.191 2 DEBUG nova.network.neutron [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.776 2 DEBUG nova.network.neutron [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 11:26:50 compute-0 ovn_controller[88471]: 2025-10-03T11:26:50Z|00136|binding|INFO|Releasing lport e51b3658-d946-4608-953e-6b26039ed1fd from this chassis (sb_readonly=0)
Oct  3 11:26:50 compute-0 ovn_controller[88471]: 2025-10-03T11:26:50Z|00137|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:26:50 compute-0 ovn_controller[88471]: 2025-10-03T11:26:50Z|00138|binding|INFO|Releasing lport 1eb40ea8-53b0-46a1-bf82-85a3448330ac from this chassis (sb_readonly=0)
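Annotation: ovn-controller releases logical-port bindings from this chassis as the corresponding workloads go away. One hedged way to see which ports remain bound here, assuming ovn-sbctl is on PATH and can reach the southbound DB with its default connection settings:

    # Sketch: list remaining Port_Binding rows and their chassis (assumed env).
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis', 'list', 'Port_Binding'],
        capture_output=True, text=True, check=True)
    print(out.stdout)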
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.878 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759490795.8774173, b5df7002-5185-4a75-ae2e-e8a44a0be062 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.879 2 INFO nova.compute.manager [-] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] VM Stopped (Lifecycle Event)#033[00m
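Annotation: the "VM Stopped (Lifecycle Event)" record is nova's translation of a libvirt domain lifecycle event. A minimal sketch of subscribing to the same underlying events with libvirt-python, assuming a local libvirt reachable at qemu:///system (this is not nova's event loop, just the raw mechanism):

    # Sketch: receive libvirt lifecycle events like the Stopped one above.
    import libvirt

    def on_lifecycle(conn, dom, event, detail, opaque):
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            print(f'{dom.UUIDString()} => Stopped')

    libvirt.virEventRegisterDefaultImpl()   # must precede opening the connection
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()    # blocks, dispatching callbacks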
Oct  3 11:26:50 compute-0 podman[531346]: 2025-10-03 11:26:50.892224308 +0000 UTC m=+0.140152382 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:26:50 compute-0 podman[531352]: 2025-10-03 11:26:50.899650527 +0000 UTC m=+0.124337916 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 11:26:50 compute-0 podman[531365]: 2025-10-03 11:26:50.903962275 +0000 UTC m=+0.129283915 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.902 2 DEBUG nova.compute.manager [None req-dd3496c2-9705-4ad7-9ac7-837a0e4f1d5a - - - - - -] [instance: b5df7002-5185-4a75-ae2e-e8a44a0be062] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:50 compute-0 podman[531348]: 2025-10-03 11:26:50.918340066 +0000 UTC m=+0.149924907 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:26:50 compute-0 nova_compute[351685]: 2025-10-03 11:26:50.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:50 compute-0 podman[531347]: 2025-10-03 11:26:50.921118685 +0000 UTC m=+0.156504507 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, version=9.6, io.openshift.expose-services=)
Oct  3 11:26:50 compute-0 podman[531349]: 2025-10-03 11:26:50.922600422 +0000 UTC m=+0.158566153 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:26:50 compute-0 podman[531358]: 2025-10-03 11:26:50.945068502 +0000 UTC m=+0.171899270 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
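Annotation: the podman[...] records above are the periodic health checks for the edpm-managed containers, each reporting health_status=healthy with a zero failing streak. The same state can be read back directly from podman, assuming these container names exist on the host as logged:

    # Sketch: read back the health state podman logs above (assumed names).
    import json, subprocess

    for name in ('node_exporter', 'ceilometer_agent_compute', 'iscsid',
                 'ovn_metadata_agent', 'openstack_network_exporter',
                 'multipathd', 'ovn_controller'):
        out = subprocess.run(
            ['podman', 'inspect', '--format', '{{json .State.Health}}', name],
            capture_output=True, text=True, check=True)
        health = json.loads(out.stdout)
        print(name, health.get('Status'), health.get('FailingStreak'))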
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.100 2 DEBUG nova.compute.manager [req-f9b86581-516b-49d7-a72d-a03e84715034 req-21e2a698-9e0d-446d-aece-886f6766cd48 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received event network-changed-226590bd-fa92-4e26-8879-8782d015ad61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.101 2 DEBUG nova.compute.manager [req-f9b86581-516b-49d7-a72d-a03e84715034 req-21e2a698-9e0d-446d-aece-886f6766cd48 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Refreshing instance network info cache due to event network-changed-226590bd-fa92-4e26-8879-8782d015ad61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.104 2 DEBUG oslo_concurrency.lockutils [req-f9b86581-516b-49d7-a72d-a03e84715034 req-21e2a698-9e0d-446d-aece-886f6766cd48 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.105 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.106 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.126 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.203 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.203 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.212 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.212 2 INFO nova.compute.claims [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.416 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 311 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:26:51 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100555302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.902 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.916 2 DEBUG nova.compute.provider_tree [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.934 2 DEBUG nova.scheduler.client.report [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
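Annotation: the inventory dict above is what the resource tracker reports to placement (the DISK_GB total of 59 comes from the ceph df call a few lines earlier). Schedulable capacity per resource class follows placement's usual formula, capacity = (total - reserved) * allocation_ratio; worked out with the logged numbers:

    # Effective capacities implied by the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(cap, 1))  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2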
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.958 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:51 compute-0 nova_compute[351685]: 2025-10-03 11:26:51.960 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.021 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.021 2 DEBUG nova.network.neutron [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.047 2 INFO nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.069 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.178 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.179 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.180 2 INFO nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Creating image(s)#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.211 2 DEBUG nova.storage.rbd_utils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] rbd image 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.240 2 DEBUG nova.storage.rbd_utils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] rbd image 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.271 2 DEBUG nova.storage.rbd_utils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] rbd image 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.278 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.350 2 DEBUG nova.policy [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e95897c85bf04672a829b11af6ed10c1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76485b7490844f9181c1821d135ade02', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
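Annotation: this failed policy check is informational, not an error: with only the reader and member roles the request is simply not permitted to attach external networks, and the build proceeds without them. A generic oslo.policy sketch of that kind of check (the 'role:admin' rule string here is an assumption for illustration, not nova's shipped default):

    # Sketch: an oslo.policy check analogous to the one logged above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # assumed rule
    creds = {'roles': ['reader', 'member'],
             'project_id': '76485b7490844f9181c1821d135ade02'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False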
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.365 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
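Annotation: image inspection is wrapped in oslo.concurrency's prlimit helper so a malformed qcow2 cannot exhaust memory (1 GiB address-space cap) or CPU (30 s). The equivalent invocation, reconstructed from the command line logged above:

    # Sketch: the prlimit-guarded qemu-img probe logged above.
    import json, subprocess

    cmd = [
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8',
        '--force-share', '--output=json',
    ]
    info = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                     check=True).stdout)
    print(info['format'], info['virtual-size'])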
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.367 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.368 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.369 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.406 2 DEBUG nova.storage.rbd_utils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] rbd image 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.417 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.466 2 DEBUG nova.network.neutron [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updating instance_info_cache with network_info: [{"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.489 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Releasing lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.490 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Instance network_info: |[{"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.491 2 DEBUG oslo_concurrency.lockutils [req-f9b86581-516b-49d7-a72d-a03e84715034 req-21e2a698-9e0d-446d-aece-886f6766cd48 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.492 2 DEBUG nova.network.neutron [req-f9b86581-516b-49d7-a72d-a03e84715034 req-21e2a698-9e0d-446d-aece-886f6766cd48 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Refreshing network info cache for port 226590bd-fa92-4e26-8879-8782d015ad61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.495 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Start _get_guest_xml network_info=[{"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:26:31Z,direct_url=<?>,disk_format='qcow2',id=b9c8e0cc-ecf1-4fa8-92ee-328b108123cd,min_disk=0,min_ram=0,name='tempest-scenario-img--982789236',owner='ebbd19d68501417398b05a6a4b7c22de',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:26:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.504 2 WARNING nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.513 2 DEBUG nova.virt.libvirt.host [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.514 2 DEBUG nova.virt.libvirt.host [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.524 2 DEBUG nova.virt.libvirt.host [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.525 2 DEBUG nova.virt.libvirt.host [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
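Annotation: the two probes above first look for a cgroup v1 cpu controller, miss, and then find one under cgroup v2, which is why CPU limits are usable on this host. A minimal version of the v2 check, assuming the unified hierarchy is mounted at /sys/fs/cgroup as it is here:

    # Sketch: detect a cgroup v2 'cpu' controller like the probe above.
    from pathlib import Path

    controllers = Path('/sys/fs/cgroup/cgroup.controllers')
    has_cpu = controllers.is_file() and 'cpu' in controllers.read_text().split()
    print('CPU controller found' if has_cpu else 'CPU controller missing')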
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.526 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.527 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:26:31Z,direct_url=<?>,disk_format='qcow2',id=b9c8e0cc-ecf1-4fa8-92ee-328b108123cd,min_disk=0,min_ram=0,name='tempest-scenario-img--982789236',owner='ebbd19d68501417398b05a6a4b7c22de',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:26:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.529 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.529 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.530 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.531 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.531 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.532 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.533 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.534 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.535 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.535 2 DEBUG nova.virt.hardware [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
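Annotation: with no flavor or image constraints (all limits default to 65536 and all preferences to 0), the only factorization of a single vCPU is sockets=1, cores=1, threads=1, which is exactly what the lines above select. A small enumeration in the same spirit (a sketch, not nova's exact algorithm):

    # Sketch: enumerate (sockets, cores, threads) triples for N vcpus,
    # capped by the logged limits of 65536 each.
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            if vcpus % sockets:
                continue
            rem = vcpus // sockets
            for cores in range(1, min(rem, max_each) + 1):
                if rem % cores:
                    continue
                threads = rem // cores
                if threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]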
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.540 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.746 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:52 compute-0 nova_compute[351685]: 2025-10-03 11:26:52.841 2 DEBUG nova.storage.rbd_utils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] resizing rbd image 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
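Annotation: the root disk is materialized by importing the cached base image into the vms pool and then resizing it to the flavor's 1 GiB root disk (m1.nano, root_gb=1, hence the 1073741824 above). The same two steps as commands, with the import taken verbatim from the log; the CLI resize is an assumption for illustration, since nova's rbd_utils resizes through librbd rather than the rbd binary:

    # Sketch: the import-then-resize flow logged above.
    import subprocess

    base = '/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8'
    image = '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk'
    common = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.run(['rbd', 'import', '--pool', 'vms', base, image,
                    '--image-format=2', *common], check=True)
    subprocess.run(['rbd', 'resize', '--pool', 'vms', '--size', '1G',
                    image, *common], check=True)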
Oct  3 11:26:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:26:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3234461148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.033 2 DEBUG nova.objects.instance [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lazy-loading 'migration_context' on Instance uuid 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.035 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.075 2 DEBUG nova.storage.rbd_utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.084 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.107 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.108 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Ensure instance console log exists: /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.108 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.108 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.109 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.184 2 DEBUG nova.network.neutron [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Successfully created port: ff068d12-ba56-4465-a024-881b428d0ad9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  3 11:26:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:26:53 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/515768642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.534 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.537 2 DEBUG nova.virt.libvirt.vif [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz',id=13,image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ebbd19d68501417398b05a6a4b7c22de',ramdisk_id='',reservation_id='r-j0c04m1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-298349364',owner_user_name='tempest-PrometheusGabbiTest-298349364-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:26:44Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8990c210ba8740dc9714739f27702391',uuid=83fc22ce-d2e4-468a-b166-04f2743fa68d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.537 2 DEBUG nova.network.os_vif_util [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converting VIF {"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.539 2 DEBUG nova.network.os_vif_util [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:36:62,bridge_name='br-int',has_traffic_filtering=True,id=226590bd-fa92-4e26-8879-8782d015ad61,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap226590bd-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
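The two entries above show nova_to_osvif_vif reducing the Neutron VIF dict to the handful of fields os-vif needs for an OVS plug. A stdlib-only sketch of that extraction, using field names from the logged JSON (the abbreviated payload here is illustrative):

    # Sketch: pull the os-vif-relevant fields out of a logged VIF dict.
    import json

    vif_json = '''{"id": "226590bd-fa92-4e26-8879-8782d015ad61",
                   "address": "fa:16:3e:c0:36:62",
                   "details": {"bridge_name": "br-int", "port_filter": true},
                   "devname": "tap226590bd-fa", "active": false}'''
    vif = json.loads(vif_json)
    osvif_fields = {
        'id': vif['id'],
        'address': vif['address'],
        'bridge_name': vif['details']['bridge_name'],
        'has_traffic_filtering': vif['details']['port_filter'],
        'vif_name': vif['devname'],
        'active': vif['active'],
    }
    print(osvif_fields)   # matches the VIFOpenVSwitch(...) fields logged above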
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.541 2 DEBUG nova.objects.instance [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lazy-loading 'pci_devices' on Instance uuid 83fc22ce-d2e4-468a-b166-04f2743fa68d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.557 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <uuid>83fc22ce-d2e4-468a-b166-04f2743fa68d</uuid>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <name>instance-0000000d</name>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <nova:name>te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz</nova:name>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:26:52</nova:creationTime>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <nova:user uuid="8990c210ba8740dc9714739f27702391">tempest-PrometheusGabbiTest-298349364-project-member</nova:user>
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <nova:project uuid="ebbd19d68501417398b05a6a4b7c22de">tempest-PrometheusGabbiTest-298349364</nova:project>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="b9c8e0cc-ecf1-4fa8-92ee-328b108123cd"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <nova:port uuid="226590bd-fa92-4e26-8879-8782d015ad61">
Oct  3 11:26:53 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.1.141" ipVersion="4"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <system>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <entry name="serial">83fc22ce-d2e4-468a-b166-04f2743fa68d</entry>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <entry name="uuid">83fc22ce-d2e4-468a-b166-04f2743fa68d</entry>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </system>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <os>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  </os>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <features>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  </features>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/83fc22ce-d2e4-468a-b166-04f2743fa68d_disk">
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      </source>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/83fc22ce-d2e4-468a-b166-04f2743fa68d_disk.config">
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      </source>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:26:53 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:c0:36:62"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <target dev="tap226590bd-fa"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d/console.log" append="off"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <video>
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </video>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:26:53 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:26:53 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:26:53 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:26:53 compute-0 nova_compute[351685]: </domain>
Oct  3 11:26:53 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
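The XML block above is the complete guest definition nova hands to libvirt. A small sketch that inspects it with the stdlib parser, assuming the XML has been saved to a local file named domain.xml:

    # Sketch: extract sizing and the RBD disk sources from the guest XML.
    import xml.etree.ElementTree as ET

    dom = ET.parse('domain.xml').getroot()   # file name is an assumption
    print('memory KiB:', dom.findtext('memory'))
    print('vcpus:', dom.findtext('vcpu'))
    for disk in dom.findall('./devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        print(tgt.get('dev'), src.get('protocol'), src.get('name'))
    # prints the vda and sda entries, both with protocol "rbd"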
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.560 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Preparing to wait for external event network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.561 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.562 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.562 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
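The three lock lines above are nova registering the network-vif-plugged event it will later block on until Neutron confirms the plug. A minimal sketch of that prepare-then-wait pattern using plain threading; the helpers are hypothetical, not nova's InstanceEvents code:

    # Hypothetical sketch of the prepare/deliver/wait external-event pattern.
    import threading

    _events = {}

    def prepare(name, tag):
        return _events.setdefault((name, tag), threading.Event())

    def deliver(name, tag):
        ev = _events.get((name, tag))
        if ev:
            ev.set()

    ev = prepare('network-vif-plugged', '226590bd-fa92-4e26-8879-8782d015ad61')
    # ... plug the VIF and start the guest ...
    deliver('network-vif-plugged', '226590bd-fa92-4e26-8879-8782d015ad61')
    ev.wait(timeout=300)   # nova's vif_plugging_timeout defaults to 300s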
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.564 2 DEBUG nova.virt.libvirt.vif [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz',id=13,image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ebbd19d68501417398b05a6a4b7c22de',ramdisk_id='',reservation_id='r-j0c04m1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-298349364',owner_user_name='tempest-PrometheusGabbiTest-298349364-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:26:44Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8990c210ba8740dc9714739f27702391',uuid=83fc22ce-d2e4-468a-b166-04f2743fa68d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.564 2 DEBUG nova.network.os_vif_util [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converting VIF {"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.566 2 DEBUG nova.network.os_vif_util [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:36:62,bridge_name='br-int',has_traffic_filtering=True,id=226590bd-fa92-4e26-8879-8782d015ad61,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap226590bd-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.566 2 DEBUG os_vif [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:36:62,bridge_name='br-int',has_traffic_filtering=True,id=226590bd-fa92-4e26-8879-8782d015ad61,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap226590bd-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap226590bd-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap226590bd-fa, col_values=(('external_ids', {'iface-id': '226590bd-fa92-4e26-8879-8782d015ad61', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:36:62', 'vm-uuid': '83fc22ce-d2e4-468a-b166-04f2743fa68d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
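The two ovsdbapp commands above (AddPortCommand plus DbSetCommand on the Interface row) have a direct ovs-vsctl equivalent. A sketch of the same transaction from Python, assuming ovs-vsctl is on PATH:

    # Sketch: CLI equivalent of the AddPortCommand/DbSetCommand txn above.
    import subprocess

    port = 'tap226590bd-fa'
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port, '--',
         'set', 'Interface', port,
         'external_ids:iface-id=226590bd-fa92-4e26-8879-8782d015ad61',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:c0:36:62',
         'external_ids:vm-uuid=83fc22ce-d2e4-468a-b166-04f2743fa68d'],
        check=True)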
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:53 compute-0 NetworkManager[45015]: <info>  [1759490813.5826] manager: (tap226590bd-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.591 2 INFO os_vif [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:36:62,bridge_name='br-int',has_traffic_filtering=True,id=226590bd-fa92-4e26-8879-8782d015ad61,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap226590bd-fa')#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.682 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.682 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.683 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] No VIF found with MAC fa:16:3e:c0:36:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.684 2 INFO nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Using config drive#033[00m
Oct  3 11:26:53 compute-0 nova_compute[351685]: 2025-10-03 11:26:53.722 2 DEBUG nova.storage.rbd_utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 327 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.4 MiB/s wr, 121 op/s
Oct  3 11:26:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:26:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918623298' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:26:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:26:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3918623298' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
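The two audited mon commands above ("df" and "osd pool get-quota" from client.openstack) look like a routine capacity poll by one of the OpenStack Ceph clients. A sketch issuing the same queries, assuming the ceph CLI and the client.openstack keyring:

    # Sketch: the "df" and "osd pool get-quota volumes" queries seen above.
    import json, subprocess

    def ceph(*args):
        out = subprocess.check_output(
            ('ceph', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
             '--format', 'json') + args)
        return json.loads(out)

    df = ceph('df')
    quota = ceph('osd', 'pool', 'get-quota', 'volumes')
    print(df['stats']['total_avail_bytes'], quota.get('quota_max_bytes'))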
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.394 2 DEBUG nova.network.neutron [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Successfully updated port: ff068d12-ba56-4465-a024-881b428d0ad9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.429 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.429 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquired lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.430 2 DEBUG nova.network.neutron [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.439 2 INFO nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Creating config drive at /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.config#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.449 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb53bb_u2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.507 2 DEBUG nova.compute.manager [req-83d5879b-df90-4ec8-8ffe-01df88a2d9af req-d6d99ecd-7357-4381-9b45-4fc9cf57b507 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-changed-d84c98dc-8422-4b51-aaf4-2f9403a4649c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.508 2 DEBUG nova.compute.manager [req-83d5879b-df90-4ec8-8ffe-01df88a2d9af req-d6d99ecd-7357-4381-9b45-4fc9cf57b507 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Refreshing instance network info cache due to event network-changed-d84c98dc-8422-4b51-aaf4-2f9403a4649c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.509 2 DEBUG oslo_concurrency.lockutils [req-83d5879b-df90-4ec8-8ffe-01df88a2d9af req-d6d99ecd-7357-4381-9b45-4fc9cf57b507 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.509 2 DEBUG oslo_concurrency.lockutils [req-83d5879b-df90-4ec8-8ffe-01df88a2d9af req-d6d99ecd-7357-4381-9b45-4fc9cf57b507 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.509 2 DEBUG nova.network.neutron [req-83d5879b-df90-4ec8-8ffe-01df88a2d9af req-d6d99ecd-7357-4381-9b45-4fc9cf57b507 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Refreshing network info cache for port d84c98dc-8422-4b51-aaf4-2f9403a4649c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.531 2 DEBUG nova.compute.manager [req-a4d30fea-3aa2-4f57-b225-4ae59050bc79 req-eed30ff3-b7f2-458b-bc5f-a2ea86b86057 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received event network-changed-ff068d12-ba56-4465-a024-881b428d0ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.532 2 DEBUG nova.compute.manager [req-a4d30fea-3aa2-4f57-b225-4ae59050bc79 req-eed30ff3-b7f2-458b-bc5f-a2ea86b86057 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Refreshing instance network info cache due to event network-changed-ff068d12-ba56-4465-a024-881b428d0ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.532 2 DEBUG oslo_concurrency.lockutils [req-a4d30fea-3aa2-4f57-b225-4ae59050bc79 req-eed30ff3-b7f2-458b-bc5f-a2ea86b86057 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.579 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb53bb_u2" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
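The mkisofs run above packs nova's staged metadata directory into an ISO9660 volume labeled config-2, which is what makes it discoverable as a config drive inside the guest. A sketch of the same invocation, with placeholder paths standing in for the per-instance ones in the log:

    # Sketch: nova's config-drive mkisofs invocation; paths are placeholders.
    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/staged-metadata'],   # stand-in for the tmp dir in the log
        check=True)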
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.614 2 DEBUG nova.storage.rbd_utils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.622 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.config 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.854 2 DEBUG nova.network.neutron [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 11:26:54 compute-0 nova_compute[351685]: 2025-10-03 11:26:54.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.079 2 DEBUG oslo_concurrency.processutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.config 83fc22ce-d2e4-468a-b166-04f2743fa68d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.081 2 INFO nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Deleting local config drive /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.config because it was imported into RBD.#033[00m
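Because this deployment stores instance disks in RBD, the freshly built ISO is imported into the vms pool and the local copy removed, as the two lines above record. A sketch of that step, reusing the arguments from the logged command (the local path is shortened here):

    # Sketch: import the config drive into RBD, then drop the local file.
    import os, subprocess

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', 'disk.config',
         '83fc22ce-d2e4-468a-b166-04f2743fa68d_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink('disk.config')   # mirrors nova deleting the local copy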
Oct  3 11:26:55 compute-0 kernel: tap226590bd-fa: entered promiscuous mode
Oct  3 11:26:55 compute-0 ovn_controller[88471]: 2025-10-03T11:26:55Z|00139|binding|INFO|Claiming lport 226590bd-fa92-4e26-8879-8782d015ad61 for this chassis.
Oct  3 11:26:55 compute-0 ovn_controller[88471]: 2025-10-03T11:26:55Z|00140|binding|INFO|226590bd-fa92-4e26-8879-8782d015ad61: Claiming fa:16:3e:c0:36:62 10.100.1.141
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 NetworkManager[45015]: <info>  [1759490815.1671] manager: (tap226590bd-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.184 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:36:62 10.100.1.141'], port_security=['fa:16:3e:c0:36:62 10.100.1.141'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.141/16', 'neutron:device_id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9844dad0-501d-443c-9110-8dd633c460de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebbd19d68501417398b05a6a4b7c22de', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6c689562-b70d-4f38-a6f1-f8491b7c8598', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=557eeff1-fb6f-4cc0-9427-7ac15dbf385b, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=226590bd-fa92-4e26-8879-8782d015ad61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.187 284328 INFO neutron.agent.ovn.metadata.agent [-] Port 226590bd-fa92-4e26-8879-8782d015ad61 in datapath 9844dad0-501d-443c-9110-8dd633c460de bound to our chassis#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.190 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9844dad0-501d-443c-9110-8dd633c460de#033[00m
Oct  3 11:26:55 compute-0 systemd-machined[137653]: New machine qemu-13-instance-0000000d.
Oct  3 11:26:55 compute-0 systemd-udevd[531802]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.210 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[a4dec1a9-252e-46b8-836e-e969af7a2238]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.211 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9844dad0-51 in ovnmeta-9844dad0-501d-443c-9110-8dd633c460de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
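The agent is building the per-network metadata namespace here: a veth pair whose inner end (tap9844dad0-51) lives in the ovnmeta namespace and whose outer end (tap9844dad0-50) gets plugged into br-int a few lines below. A sketch of the same provisioning expressed with the ip(8) CLI rather than the pyroute2-via-privsep calls neutron actually makes:

    # Sketch: namespace plus veth pair; names taken from the log lines.
    import subprocess

    ns = 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de'
    for cmd in (
        ['ip', 'netns', 'add', ns],
        ['ip', 'link', 'add', 'tap9844dad0-50', 'type', 'veth',
         'peer', 'name', 'tap9844dad0-51'],
        ['ip', 'link', 'set', 'tap9844dad0-51', 'netns', ns],
        ['ip', '-n', ns, 'link', 'set', 'tap9844dad0-51', 'up'],
    ):
        subprocess.run(cmd, check=True)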
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.216 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9844dad0-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.217 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd11f81-7146-4e16-9625-08a96a3f89ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 NetworkManager[45015]: <info>  [1759490815.2188] device (tap226590bd-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:26:55 compute-0 NetworkManager[45015]: <info>  [1759490815.2195] device (tap226590bd-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.220 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef2d916-a857-4bbb-8072-cb11f19be3f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.236 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[f73e4e80-563c-48af-bb83-dd7e5e2d49a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 ovn_controller[88471]: 2025-10-03T11:26:55Z|00141|binding|INFO|Setting lport 226590bd-fa92-4e26-8879-8782d015ad61 ovn-installed in OVS
Oct  3 11:26:55 compute-0 ovn_controller[88471]: 2025-10-03T11:26:55Z|00142|binding|INFO|Setting lport 226590bd-fa92-4e26-8879-8782d015ad61 up in Southbound
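Setting the lport up in the Southbound DB is the signal that ultimately feeds the network-vif-plugged notification nova is waiting on. A speculative sketch that polls the same Port_Binding column with ovn-sbctl; SB DB access from this host is an assumption:

    # Sketch: poll Port_Binding.up for the lport ovn-controller just claimed.
    import subprocess, time

    lport = '226590bd-fa92-4e26-8879-8782d015ad61'
    while True:
        out = subprocess.check_output(
            ['ovn-sbctl', '--bare', '--columns=up',
             'find', 'Port_Binding', 'logical_port=%s' % lport])
        if out.strip() == b'true':
            break
        time.sleep(1)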
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.264 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5c13152b-0026-4399-8728-ef134c6a3809]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.297 2 DEBUG nova.network.neutron [req-f9b86581-516b-49d7-a72d-a03e84715034 req-21e2a698-9e0d-446d-aece-886f6766cd48 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updated VIF entry in instance network info cache for port 226590bd-fa92-4e26-8879-8782d015ad61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.298 2 DEBUG nova.network.neutron [req-f9b86581-516b-49d7-a72d-a03e84715034 req-21e2a698-9e0d-446d-aece-886f6766cd48 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updating instance_info_cache with network_info: [{"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.307 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc4a89c-1712-4ca2-aca9-7b51fd846052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 NetworkManager[45015]: <info>  [1759490815.3177] manager: (tap9844dad0-50): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.316 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[cf589582-5c40-44e3-892b-f272f07c6446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.323 2 DEBUG oslo_concurrency.lockutils [req-f9b86581-516b-49d7-a72d-a03e84715034 req-21e2a698-9e0d-446d-aece-886f6766cd48 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.361 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[c77e43fd-6790-4a91-9587-02ffb4458b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.365 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[f3da67cf-f9ff-43d5-b14a-e954c1634745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 NetworkManager[45015]: <info>  [1759490815.3898] device (tap9844dad0-50): carrier: link connected
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.397 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[218dd54c-b7c4-4b70-9820-6985a1292e5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.423 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[ed513a51-5735-46ca-9cda-ca8cf7a698e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9844dad0-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:82:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 999798, 'reachable_time': 18746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 531834, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.448 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[03c122f2-a257-452e-991a-3d2514a59809]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:82ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 999798, 'tstamp': 999798}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 531835, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.468 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[50b506dc-4453-43f5-94b1-b6d22933750a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9844dad0-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:82:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 999798, 'reachable_time': 18746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 531836, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
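The two privsep replies above are pyroute2 netlink messages (an RTM_NEWADDR for the tap's link-local address, then an RTM_NEWLINK dump for tap9844dad0-51) fetched inside the ovnmeta- namespace on the agent's behalf. A minimal sketch of reading the same state directly with pyroute2, assuming root privileges and the namespace name from the log (the agent itself routes this through oslo.privsep rather than calling pyroute2 in-process):

    # Sketch, not the agent's code path: query link and address state
    # inside the ovnmeta namespace with pyroute2.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-9844dad0-501d-443c-9110-8dd633c460de')
    try:
        for link in ns.get_links():
            # IFLA_* attributes mirror the RTM_NEWLINK payload logged above
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr():
            # family 10 is AF_INET6; the fe80:: address above is the tap's
            # link-local address
            print(addr['family'], addr.get_attr('IFA_ADDRESS'))
    finally:
        ns.close()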
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.506 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[e01de516-c3c9-49f0-a55c-345814dea1d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.612 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8270f3-75e7-4f20-ae56-c967fe6d2c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.617 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9844dad0-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.618 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.619 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9844dad0-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 NetworkManager[45015]: <info>  [1759490815.6228] manager: (tap9844dad0-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct  3 11:26:55 compute-0 kernel: tap9844dad0-50: entered promiscuous mode
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.629 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9844dad0-50, col_values=(('external_ids', {'iface-id': '71787e87-58e9-457f-840d-4d7e879d0280'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
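Taken together, the three ovsdbapp commands above re-home the metadata tap: drop any stale tap9844dad0-50 port from br-ex, add it to br-int, and set external_ids:iface-id so ovn-controller can bind the port. A sketch of the same sequence through ovsdbapp's Open_vSwitch schema API, assuming the default local ovsdb-server socket (the method names follow the command names in the log, but treat the wiring here as illustrative):

    # Sketch: the DelPort/AddPort/DbSet sequence from the log, issued
    # through ovsdbapp. The socket path is an assumption.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap9844dad0-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap9844dad0-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap9844dad0-50',
            ('external_ids',
             {'iface-id': '71787e87-58e9-457f-840d-4d7e879d0280'})))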
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 ovn_controller[88471]: 2025-10-03T11:26:55Z|00143|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.672 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9844dad0-501d-443c-9110-8dd633c460de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9844dad0-501d-443c-9110-8dd633c460de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
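The ENOENT above is the normal first-run case: the agent probes for a pid file left by an earlier haproxy instance and tolerates its absence. A small sketch of that tolerant read (the get_pid helper is hypothetical, standing in for neutron's get_value_from_file):

    # Sketch: tolerate a missing or garbled pid file, as the log line
    # above does.
    def get_pid(path):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except (OSError, ValueError):
            return None  # no previous proxy instance

    pid = get_pid('/var/lib/neutron/external/pids/'
                  '9844dad0-501d-443c-9110-8dd633c460de.pid.haproxy')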
Oct  3 11:26:55 compute-0 nova_compute[351685]: 2025-10-03 11:26:55.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.674 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2abeebb7-c997-40e6-b008-237760f0b02e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.676 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-9844dad0-501d-443c-9110-8dd633c460de
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/9844dad0-501d-443c-9110-8dd633c460de.pid.haproxy
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 9844dad0-501d-443c-9110-8dd633c460de
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 11:26:55 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:26:55.680 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'env', 'PROCESS_TAG=haproxy-9844dad0-501d-443c-9110-8dd633c460de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9844dad0-501d-443c-9110-8dd633c460de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
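The rendered haproxy_cfg above binds the metadata IP 169.254.169.254:80 inside the network namespace, forwards to the agent's unix-socket backend, and stamps each request with X-OVN-Network-ID before the rootwrap command launches haproxy against it. If the proxy fails to come up, the config can be syntax-checked first; a sketch reusing the paths from the log (haproxy's -c flag only parses and validates the configuration):

    # Sketch: pre-flight check of the generated proxy config in the
    # same network namespace.
    import subprocess

    ns = 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            '9844dad0-501d-443c-9110-8dd633c460de.conf')
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-c', '-f', conf],
                   check=True)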
Oct  3 11:26:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.144 2 DEBUG nova.network.neutron [req-83d5879b-df90-4ec8-8ffe-01df88a2d9af req-d6d99ecd-7357-4381-9b45-4fc9cf57b507 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Updated VIF entry in instance network info cache for port d84c98dc-8422-4b51-aaf4-2f9403a4649c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.145 2 DEBUG nova.network.neutron [req-83d5879b-df90-4ec8-8ffe-01df88a2d9af req-d6d99ecd-7357-4381-9b45-4fc9cf57b507 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Updating instance_info_cache with network_info: [{"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:56 compute-0 podman[531868]: 2025-10-03 11:26:56.117129964 +0000 UTC m=+0.037723901 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.211 2 DEBUG oslo_concurrency.lockutils [req-83d5879b-df90-4ec8-8ffe-01df88a2d9af req-d6d99ecd-7357-4381-9b45-4fc9cf57b507 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:56 compute-0 podman[531868]: 2025-10-03 11:26:56.223456151 +0000 UTC m=+0.144050038 container create 2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  3 11:26:56 compute-0 systemd[1]: Started libpod-conmon-2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a.scope.
Oct  3 11:26:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7cc04b6a4e1214966669a7868dca073ef1348bef424041b383eb294ac96b35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:26:56 compute-0 podman[531868]: 2025-10-03 11:26:56.324932644 +0000 UTC m=+0.245526551 container init 2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  3 11:26:56 compute-0 podman[531868]: 2025-10-03 11:26:56.335140461 +0000 UTC m=+0.255734348 container start 2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:26:56 compute-0 neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de[531917]: [NOTICE]   (531926) : New worker (531930) forked
Oct  3 11:26:56 compute-0 neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de[531917]: [NOTICE]   (531926) : Loading success.
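The proxy itself runs as a podman container named after the network UUID (neutron-haproxy-ovnmeta-...), created, inited and started in the records above; the two NOTICE lines are haproxy's master process confirming the worker fork. A quick way to confirm the container stayed up, sketched with the name from the log:

    # Sketch: check the per-network proxy container status via podman.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de'
    out = subprocess.run(
        ['podman', 'ps', '--filter', 'name=' + name, '--format', '{{.Status}}'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "Up 2 seconds"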
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0023541915942241324 of space, bias 1.0, pg target 0.7062574782672397 quantized to 32 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:26:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
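Each pg_autoscaler pair above is the same arithmetic: the pool's share of raw space times its bias times the cluster-wide PG budget, then quantized to a power of two (and left at the current value when the change is too small to act on). The logged targets are consistent with a budget of 300 PGs, e.g. mon_target_pg_per_osd=100 across 3 OSDs; that budget is an inference, since the OSD count does not appear in this excerpt:

    # Sketch: reproduce the "pg target" numbers from the autoscaler lines.
    # BUDGET = 300 is an inferred value (e.g. 100 target PGs/OSD * 3 OSDs).
    BUDGET = 300

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * BUDGET

    print(pg_target(0.0023541915942241324, 1.0))  # vms -> 0.7062574782672397
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta
                                                  # -> 0.0006104707950771635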
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.595 2 DEBUG nova.compute.manager [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received event network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.595 2 DEBUG oslo_concurrency.lockutils [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.596 2 DEBUG oslo_concurrency.lockutils [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.596 2 DEBUG oslo_concurrency.lockutils [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.596 2 DEBUG nova.compute.manager [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Processing event network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.596 2 DEBUG nova.compute.manager [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received event network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.596 2 DEBUG oslo_concurrency.lockutils [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.597 2 DEBUG oslo_concurrency.lockutils [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.597 2 DEBUG oslo_concurrency.lockutils [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.597 2 DEBUG nova.compute.manager [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] No waiting events found dispatching network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.597 2 WARNING nova.compute.manager [req-3f1bb454-b01c-4cf6-b77b-a09a7f3e091f req-8f9a794c-4694-4289-b45b-553e65a8a562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received unexpected event network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 for instance with vm_state building and task_state spawning.#033[00m
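The sequence above is nova's external-event handshake: a build thread registers interest in network-vif-plugged under the per-instance events lock, and incoming Neutron notifications pop the matching entry; with nothing registered (the instance is still spawning), the pop misses and nova logs the "unexpected event" warning instead. A schematic of that register-then-dispatch pattern, reduced to plain threading primitives (nova's real implementation lives in nova.compute.manager and is eventlet-based, so this is an analogy, not its code):

    # Sketch: the register/dispatch pattern behind pop_instance_event.
    import threading

    _events = {}            # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()

    def prepare(uuid, name):
        with _lock:
            _events[(uuid, name)] = threading.Event()

    def dispatch(uuid, name):
        with _lock:
            waiter = _events.pop((uuid, name), None)
        if waiter is None:
            print('Received unexpected event %s for instance %s' % (name, uuid))
        else:
            waiter.set()    # wakes the thread blocked in waiter.wait()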
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.715 2 DEBUG nova.network.neutron [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Updating instance_info_cache with network_info: [{"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.806 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Releasing lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.807 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Instance network_info: |[{"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.807 2 DEBUG oslo_concurrency.lockutils [req-a4d30fea-3aa2-4f57-b225-4ae59050bc79 req-eed30ff3-b7f2-458b-bc5f-a2ea86b86057 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.807 2 DEBUG nova.network.neutron [req-a4d30fea-3aa2-4f57-b225-4ae59050bc79 req-eed30ff3-b7f2-458b-bc5f-a2ea86b86057 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Refreshing network info cache for port ff068d12-ba56-4465-a024-881b428d0ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.810 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Start _get_guest_xml network_info=[{"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.822 2 WARNING nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.832 2 DEBUG nova.virt.libvirt.host [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.833 2 DEBUG nova.virt.libvirt.host [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.837 2 DEBUG nova.virt.libvirt.host [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.838 2 DEBUG nova.virt.libvirt.host [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.838 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.838 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.839 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.839 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.839 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.840 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.840 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.840 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.840 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.841 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.841 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.841 2 DEBUG nova.virt.hardware [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
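With every flavor and image constraint unset (the 0:0:0 lines above) the search space collapses: for 1 vCPU the only sockets*cores*threads factorization is 1:1:1, which is exactly what nova reports. A simplified sketch of the enumeration (the real logic is nova.virt.hardware._get_possible_cpu_topologies; this version only shows the factorization under the default 65536 limits):

    # Sketch: enumerate CPU topologies whose product equals the vCPU count.
    def possible_topologies(vcpus, limit=65536):
        bound = min(vcpus, limit)
        return [(s, c, t)
                for s in range(1, bound + 1)
                for c in range(1, bound + 1)
                for t in range(1, bound + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches the log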
Oct  3 11:26:56 compute-0 nova_compute[351685]: 2025-10-03 11:26:56.844 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.138 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490817.1384652, 83fc22ce-d2e4-468a-b166-04f2743fa68d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.139 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] VM Started (Lifecycle Event)#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.141 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.148 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.155 2 INFO nova.virt.libvirt.driver [-] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Instance spawned successfully.#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.156 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.173 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.180 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.203 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.204 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.204 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.205 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.205 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.206 2 DEBUG nova.virt.libvirt.driver [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.310 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.311 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490817.1385372, 83fc22ce-d2e4-468a-b166-04f2743fa68d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.311 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] VM Paused (Lifecycle Event)#033[00m
Oct  3 11:26:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:26:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430216519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.368 2 INFO nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Took 13.16 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.368 2 DEBUG nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.370 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
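The half-second "ceph mon dump" round trips above are nova discovering monitor addresses before attaching RBD disks (the matching audit dispatch shows up on the co-located ceph-mon). A sketch of the same call and the fields of interest, assuming the JSON monmap layout of recent Ceph releases (nova does the equivalent in nova.storage.rbd_utils):

    # Sketch: fetch and parse the monmap the way the logged command does.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True)
    for mon in json.loads(out.stdout)['mons']:
        # public_addrs/addrvec carries both v2 and legacy v1 endpoints
        print(mon['name'], [a['addr'] for a in mon['public_addrs']['addrvec']])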
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.370 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.399 2 DEBUG nova.storage.rbd_utils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] rbd image 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.413 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.451 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490817.1469078, 83fc22ce-d2e4-468a-b166-04f2743fa68d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.452 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] VM Resumed (Lifecycle Event)#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.513 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.518 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.558 2 INFO nova.compute.manager [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Took 14.41 seconds to build instance.#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.616 2 DEBUG oslo_concurrency.lockutils [None req-8fc03fe2-ef1a-4d4b-8716-c7bc7db07ad2 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:26:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.830 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.830 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.831 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
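The acquire/release pairs around "compute_resources" come from oslo.concurrency: the resource tracker serializes every audit and claim on that one named semaphore, which is why the waited/held durations are logged. A minimal sketch of the same pattern in application code:

    # Sketch: the lockutils pattern behind the "Acquiring lock" DEBUG lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # body runs with the named semaphore held; entry and exit produce
        # acquired/released DEBUG lines like those above
        pass

    clean_compute_node_cache()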
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.831 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.831 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:26:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294657652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.963 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.966 2 DEBUG nova.virt.libvirt.vif [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:26:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1706208204',display_name='tempest-TestServerBasicOps-server-1706208204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1706208204',id=14,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4IdgRom8xciift3CqBxkOwzbGRh74MT9xo6gBBaoPMGhzW4Bc2FU4s1cpGhIUHp6nZ3hiaNmmCb8/mcUU5OJ7lzr0gs5Z8XEvCqTH1rwJMNTBbNYbyTpSWsIk/mk2Mng==',key_name='tempest-TestServerBasicOps-1309063488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76485b7490844f9181c1821d135ade02',ramdisk_id='',reservation_id='r-x1lujgxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-533300983',owner_user_name='tempest-TestServerBasicOps-533300983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:26:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e95897c85bf04672a829b11af6ed10c1',uuid=218fdfd8-b66b-4ba2-90b0-5eb27dcacddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.967 2 DEBUG nova.network.os_vif_util [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Converting VIF {"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.968 2 DEBUG nova.network.os_vif_util [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:b6:fb,bridge_name='br-int',has_traffic_filtering=True,id=ff068d12-ba56-4465-a024-881b428d0ad9,network=Network(cbf38614-3700-41ae-a5fa-3eef08992fc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff068d12-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
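The "Converted object" line above is the typed os-vif model that nova_to_osvif_vif builds out of the legacy VIF dict. A minimal sketch of constructing the same object directly, assuming only the os_vif library is installed; every field value below is copied from this log:

    # Hedged sketch: build the VIFOpenVSwitch shown in the "Converted object"
    # line by hand (nova does this internally via nova_to_osvif_vif).
    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id='ff068d12-ba56-4465-a024-881b428d0ad9',
        address='fa:16:3e:f8:b6:fb',
        bridge_name='br-int',
        vif_name='tapff068d12-ba',
        has_traffic_filtering=True,   # from details["port_filter"] in the dict
        preserve_on_delete=False,
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='ff068d12-ba56-4465-a024-881b428d0ad9'),
    )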
Oct  3 11:26:57 compute-0 nova_compute[351685]: 2025-10-03 11:26:57.970 2 DEBUG nova.objects.instance [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lazy-loading 'pci_devices' on Instance uuid 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.009 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <uuid>218fdfd8-b66b-4ba2-90b0-5eb27dcacddf</uuid>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <name>instance-0000000e</name>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <nova:name>tempest-TestServerBasicOps-server-1706208204</nova:name>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:26:56</nova:creationTime>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <nova:user uuid="e95897c85bf04672a829b11af6ed10c1">tempest-TestServerBasicOps-533300983-project-member</nova:user>
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <nova:project uuid="76485b7490844f9181c1821d135ade02">tempest-TestServerBasicOps-533300983</nova:project>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <nova:port uuid="ff068d12-ba56-4465-a024-881b428d0ad9">
Oct  3 11:26:58 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <system>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <entry name="serial">218fdfd8-b66b-4ba2-90b0-5eb27dcacddf</entry>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <entry name="uuid">218fdfd8-b66b-4ba2-90b0-5eb27dcacddf</entry>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </system>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <os>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  </os>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <features>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  </features>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk">
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      </source>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk.config">
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      </source>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:26:58 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:f8:b6:fb"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <target dev="tapff068d12-ba"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/console.log" append="off"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <video>
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </video>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:26:58 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:26:58 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:26:58 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:26:58 compute-0 nova_compute[351685]: </domain>
Oct  3 11:26:58 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
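With _get_guest_xml finished, the driver hands this <domain> document to libvirtd, which defines and launches the guest. A minimal sketch of the underlying libvirt-python calls, assuming a local qemu:///system connection and the XML above bound to domain_xml (nova itself goes through its Guest/Host wrappers rather than calling libvirt this directly):

    # Hedged sketch of the define-then-launch sequence behind this log.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(domain_xml)   # domain_xml = the <domain> text above
    dom.create()                       # boots instance-0000000e
    print(dom.name(), dom.UUIDString())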
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.012 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Preparing to wait for external event network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.012 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.012 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.013 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
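The three lockutils lines above show the per-instance "...-events" lock being taken and released in under a millisecond while the network-vif-plugged callback is registered. The same pattern is available as a decorator; a sketch assuming only oslo.concurrency, with the lock name taken from the log:

    # Hedged sketch of the oslo.concurrency locking pattern logged above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events')
    def _create_or_get_event():
        # critical section: register the network-vif-plugged event exactly once
        ...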
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.016 2 DEBUG nova.virt.libvirt.vif [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:26:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1706208204',display_name='tempest-TestServerBasicOps-server-1706208204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1706208204',id=14,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4IdgRom8xciift3CqBxkOwzbGRh74MT9xo6gBBaoPMGhzW4Bc2FU4s1cpGhIUHp6nZ3hiaNmmCb8/mcUU5OJ7lzr0gs5Z8XEvCqTH1rwJMNTBbNYbyTpSWsIk/mk2Mng==',key_name='tempest-TestServerBasicOps-1309063488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76485b7490844f9181c1821d135ade02',ramdisk_id='',reservation_id='r-x1lujgxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-533300983',owner_user_name='tempest-TestServerBasicOps-533300983-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:26:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e95897c85bf04672a829b11af6ed10c1',uuid=218fdfd8-b66b-4ba2-90b0-5eb27dcacddf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.017 2 DEBUG nova.network.os_vif_util [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Converting VIF {"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.020 2 DEBUG nova.network.os_vif_util [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:b6:fb,bridge_name='br-int',has_traffic_filtering=True,id=ff068d12-ba56-4465-a024-881b428d0ad9,network=Network(cbf38614-3700-41ae-a5fa-3eef08992fc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff068d12-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.022 2 DEBUG os_vif [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:b6:fb,bridge_name='br-int',has_traffic_filtering=True,id=ff068d12-ba56-4465-a024-881b428d0ad9,network=Network(cbf38614-3700-41ae-a5fa-3eef08992fc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff068d12-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.032 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.034 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff068d12-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff068d12-ba, col_values=(('external_ids', {'iface-id': 'ff068d12-ba56-4465-a024-881b428d0ad9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:b6:fb', 'vm-uuid': '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
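The two transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) are os-vif's ovsdbapp calls that wire the tap device into br-int and stamp it with the Neutron port id. A hedged sketch of the same commands, assuming ovsdbapp is installed and OVS listens on its default unix socket:

    # Sketch mirroring the AddBridge/AddPort/DbSet commands in this log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))

    api.add_br('br-int', may_exist=True, datapath_type='system').execute(
        check_error=True)              # "Transaction caused no change" here
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapff068d12-ba', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapff068d12-ba',
            ('external_ids', {
                'iface-id': 'ff068d12-ba56-4465-a024-881b428d0ad9',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:f8:b6:fb',
                'vm-uuid': '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf'})))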
Oct  3 11:26:58 compute-0 NetworkManager[45015]: <info>  [1759490818.0511] manager: (tapff068d12-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.062 2 INFO os_vif [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:b6:fb,bridge_name='br-int',has_traffic_filtering=True,id=ff068d12-ba56-4465-a024-881b428d0ad9,network=Network(cbf38614-3700-41ae-a5fa-3eef08992fc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff068d12-ba')#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.208 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.208 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.208 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] No VIF found with MAC fa:16:3e:f8:b6:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.209 2 INFO nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Using config drive#033[00m
Oct  3 11:26:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.244 2 DEBUG nova.storage.rbd_utils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] rbd image 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:26:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697615271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.411 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
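The update_available_resource periodic task sizes the Ceph-backed disk pool by shelling out to ceph df, as logged above. A small sketch that runs the same probe and reads the per-pool stats from its JSON output (key names as emitted by ceph df --format=json):

    # Sketch of the `ceph df` probe: same CLI arguments as the log line.
    import json, subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    for pool in json.loads(raw)['pools']:
        print(pool['name'], pool['stats']['bytes_used'], pool['stats']['max_avail'])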
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.639 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.639 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.644 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.645 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.649 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.649 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.654 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.654 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.654 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.658 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:58 compute-0 nova_compute[351685]: 2025-10-03 11:26:58.658 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.101 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.103 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3252MB free_disk=59.8470573425293GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.103 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.104 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.271 2 INFO nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Creating config drive at /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.config#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.280 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpym28ic0s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.312 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.313 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance f7465889-4aed-4799-835b-1c604f730144 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.313 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance fd405fd5-7402-43b4-8ab3-a52c18493a6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.314 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.316 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.317 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.318 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
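This final view is the sum of the five placement allocations listed above plus the host reservation: 512 + (512 + 4 x 128) = 1536 MB of RAM, 5 vCPUs, and 2 + 4 x 1 = 6 GB of disk. A quick cross-check (the 512 MB reservation is nova's reserved_host_memory_mb default):

    # Cross-check of the "Final resource view" against the allocations above.
    allocations = [
        {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1},  # b43db93c-...
        {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1},  # f7465889-...
        {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1},  # fd405fd5-...
        {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1},  # 83fc22ce-...
        {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1},  # 218fdfd8-... (this boot)
    ]
    used_ram = 512 + sum(a['MEMORY_MB'] for a in allocations)   # 1536 MB
    used_vcpus = sum(a['VCPU'] for a in allocations)            # 5
    used_disk = sum(a['DISK_GB'] for a in allocations)          # 6 GB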
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.426 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpym28ic0s" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
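The config drive is an ISO 9660 image built with mkisofs through oslo.concurrency's processutils, exactly as the two CMD lines show. A sketch of that invocation; /tmp/tmpym28ic0s is the per-run staging directory from this log and would differ on every boot:

    # Sketch of the mkisofs call from this log via processutils.execute().
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpym28ic0s')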
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.473 2 DEBUG nova.storage.rbd_utils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] rbd image 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.493 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.config 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.542 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.640 2 DEBUG nova.network.neutron [req-a4d30fea-3aa2-4f57-b225-4ae59050bc79 req-eed30ff3-b7f2-458b-bc5f-a2ea86b86057 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Updated VIF entry in instance network info cache for port ff068d12-ba56-4465-a024-881b428d0ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.641 2 DEBUG nova.network.neutron [req-a4d30fea-3aa2-4f57-b225-4ae59050bc79 req-eed30ff3-b7f2-458b-bc5f-a2ea86b86057 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Updating instance_info_cache with network_info: [{"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.679 2 DEBUG oslo_concurrency.lockutils [req-a4d30fea-3aa2-4f57-b225-4ae59050bc79 req-eed30ff3-b7f2-458b-bc5f-a2ea86b86057 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:26:59 compute-0 podman[157165]: time="2025-10-03T11:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:26:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 49966 "" "Go-http-client/1.1"
Oct  3 11:26:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10511 "" "Go-http-client/1.1"
Oct  3 11:26:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 357 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Oct  3 11:26:59 compute-0 nova_compute[351685]: 2025-10-03 11:26:59.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:27:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1862832860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.179 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.189 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.201 2 DEBUG oslo_concurrency.processutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.config 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.202 2 INFO nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Deleting local config drive /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.config because it was imported into RBD.#033[00m
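Because this deployment uses RBD-backed instance storage, the freshly built ISO is imported into the vms pool and the local file deleted. The CLI import above could also be expressed with the python-rados/python-rbd bindings; a hedged sketch (image-format handling simplified; defaults vary by librbd version):

    # Hedged sketch of `rbd import` using the python bindings instead of the CLI.
    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    name = '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_disk.config'
    with open('/var/lib/nova/instances/'
              '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.config', 'rb') as f:
        data = f.read()
    rbd.RBD().create(ioctx, name, len(data))       # image sized to the ISO
    with rbd.Image(ioctx, name) as image:
        image.write(data, 0)
    ioctx.close()
    cluster.shutdown()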
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.238 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
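Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above advertises 32 VCPU, 7167 MEMORY_MB and about 52.2 DISK_GB. Worked out:

    # Worked example: capacity = (total - reserved) * allocation_ratio,
    # using the inventory data from the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2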
Oct  3 11:27:00 compute-0 kernel: tapff068d12-ba: entered promiscuous mode
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:00 compute-0 ovn_controller[88471]: 2025-10-03T11:27:00Z|00144|binding|INFO|Claiming lport ff068d12-ba56-4465-a024-881b428d0ad9 for this chassis.
Oct  3 11:27:00 compute-0 ovn_controller[88471]: 2025-10-03T11:27:00Z|00145|binding|INFO|ff068d12-ba56-4465-a024-881b428d0ad9: Claiming fa:16:3e:f8:b6:fb 10.100.0.10
Oct  3 11:27:00 compute-0 NetworkManager[45015]: <info>  [1759490820.2812] manager: (tapff068d12-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Oct  3 11:27:00 compute-0 ovn_controller[88471]: 2025-10-03T11:27:00Z|00146|binding|INFO|Setting lport ff068d12-ba56-4465-a024-881b428d0ad9 ovn-installed in OVS
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:00 compute-0 systemd-machined[137653]: New machine qemu-14-instance-0000000e.
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.338 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:b6:fb 10.100.0.10'], port_security=['fa:16:3e:f8:b6:fb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbf38614-3700-41ae-a5fa-3eef08992fc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76485b7490844f9181c1821d135ade02', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a82c35dd-6ba0-4f5d-9ad8-15c7e30c4b0b feca681c-a11c-4324-8e50-0b4af75046f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24941024-5cd7-42f4-b8b5-41479ac2ff8e, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=ff068d12-ba56-4465-a024-881b428d0ad9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.339 284328 INFO neutron.agent.ovn.metadata.agent [-] Port ff068d12-ba56-4465-a024-881b428d0ad9 in datapath cbf38614-3700-41ae-a5fa-3eef08992fc4 bound to our chassis#033[00m
Oct  3 11:27:00 compute-0 ovn_controller[88471]: 2025-10-03T11:27:00Z|00147|binding|INFO|Setting lport ff068d12-ba56-4465-a024-881b428d0ad9 up in Southbound
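ovn-controller has now claimed the logical port for this chassis and flipped it up in the Southbound database, which is what will release Nova's network-vif-plugged wait. One way to inspect the resulting Port_Binding row is a thin wrapper over the ovn-sbctl CLI; show_port_binding below is a hypothetical helper, run wherever the SB DB is reachable:

    # Hypothetical helper around the real `ovn-sbctl find Port_Binding ...`.
    import subprocess

    def show_port_binding(logical_port):
        return subprocess.run(
            ['ovn-sbctl', 'find', 'Port_Binding',
             'logical_port=%s' % logical_port],
            capture_output=True, text=True, check=True).stdout

    print(show_port_binding('ff068d12-ba56-4465-a024-881b428d0ad9'))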
Oct  3 11:27:00 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.343 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cbf38614-3700-41ae-a5fa-3eef08992fc4#033[00m
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.346 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.346 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.355 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[83844581-b900-4a21-bd6d-e154f969a681]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.356 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcbf38614-31 in ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
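To serve metadata, the agent creates one network namespace per datapath and a veth pair linking it back to br-int; the names in these lines follow a fixed scheme. A hedged reconstruction of that scheme from the identifiers in this log (prefix lengths inferred from the tapcbf38614-30/-31 pair):

    # Hedged reconstruction of the ovn metadata agent naming scheme.
    network_id = 'cbf38614-3700-41ae-a5fa-3eef08992fc4'
    namespace = 'ovnmeta-' + network_id
    veth_outside, veth_inside = ('tap' + network_id[:10] + s for s in ('0', '1'))
    print(namespace)      # ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4
    print(veth_outside)   # tapcbf38614-30 (plugged into br-int)
    print(veth_inside)    # tapcbf38614-31 (moved into the namespace)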
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.358 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcbf38614-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.358 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[3aadb8ca-c903-4459-921d-0b74f9d922e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.359 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[9c86f905-4fc0-4d6e-aa56-fe23a86a3f27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 systemd-udevd[532122]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.373 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[8e466ee6-b6e2-488b-a7b2-929572cca9aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 NetworkManager[45015]: <info>  [1759490820.3878] device (tapff068d12-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:27:00 compute-0 NetworkManager[45015]: <info>  [1759490820.3890] device (tapff068d12-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.405 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[ad763722-661a-40d9-b8eb-1cc64e0337f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.443 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[3b128719-1175-4c0f-a470-fdd6d0639d1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 NetworkManager[45015]: <info>  [1759490820.4518] manager: (tapcbf38614-30): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.451 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[62ae5739-3be4-428c-93c3-9ee86cd51069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.499 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[e0038870-3bd3-4305-8a85-af8aaa5da1e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.503 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfbe780-8079-4e3d-be3f-e41c594299ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 NetworkManager[45015]: <info>  [1759490820.5341] device (tapcbf38614-30): carrier: link connected
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.539 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[e430bee8-5745-4293-be1a-a577688ad235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.562 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb7ceb5-d558-489a-8242-907201f6a0d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbf38614-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:09:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1000313, 'reachable_time': 28244, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 532153, 'error': None, 'target': 'ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.581 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[60d81af9-a7bc-4219-bfcc-bcd4637bacec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecc:98d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1000313, 'tstamp': 1000313}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 532154, 'error': None, 'target': 'ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.602 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3e9c7d-36dc-470d-a54f-183c19ffccc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcbf38614-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:09:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1000313, 'reachable_time': 28244, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 532155, 'error': None, 'target': 'ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
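
The two RTM_NEWLINK replies above (and the RTM_NEWADDR reply between them) are pyroute2 netlink messages that the privsep helper fetched from inside the ovnmeta- namespace while verifying the freshly plugged tap device. A minimal sketch of the same query, assuming pyroute2 is available and the namespace from the log still exists:

    # Hypothetical reproduction of the privsep netlink queries above;
    # namespace and interface names are copied from the log.
    from pyroute2 import NetNS

    NS = 'ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4'
    with NetNS(NS) as ns:
        idx = ns.link_lookup(ifname='tapcbf38614-31')[0]
        link = ns.get_links(idx)[0]              # -> RTM_NEWLINK message
        print(link.get_attr('IFLA_ADDRESS'))     # fa:16:3e:cc:09:8d
        print(link.get_attr('IFLA_OPERSTATE'))   # UP
        for addr in ns.get_addr(index=idx):      # -> RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'))  # fe80::f816:3eff:fecc:98d
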
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.645 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[838d6331-b7a0-4231-9f40-280678d98006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.729 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d79237-2bfa-4f4a-87a8-10fa0ad13920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.732 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf38614-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.732 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.733 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbf38614-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:00 compute-0 NetworkManager[45015]: <info>  [1759490820.7379] manager: (tapcbf38614-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct  3 11:27:00 compute-0 kernel: tapcbf38614-30: entered promiscuous mode
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.748 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcbf38614-30, col_values=(('external_ids', {'iface-id': 'bef064e6-45ec-48bd-af03-741f51d8edf0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
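
The DelPortCommand/AddPortCommand/DbSetCommand transactions above move the tap off br-ex, attach it to br-int, and stamp the Interface row with the Neutron port ID so ovn-controller can bind it. A minimal sketch of the same three calls through the public ovsdbapp API, assuming the default local ovsdb-server socket (not the agent's actual code):

    # Sketch of the equivalent ovsdbapp transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed endpoint
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=30))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapcbf38614-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapcbf38614-30', may_exist=True))
        txn.add(api.db_set('Interface', 'tapcbf38614-30',
                           ('external_ids',
                            {'iface-id': 'bef064e6-45ec-48bd-af03-741f51d8edf0'})))
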
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:00 compute-0 ovn_controller[88471]: 2025-10-03T11:27:00Z|00148|binding|INFO|Releasing lport bef064e6-45ec-48bd-af03-741f51d8edf0 from this chassis (sb_readonly=0)
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.753 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cbf38614-3700-41ae-a5fa-3eef08992fc4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cbf38614-3700-41ae-a5fa-3eef08992fc4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
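
The "Unable to access ... .pid.haproxy" debug line is the expected first-run probe: a missing pidfile simply means no proxy is running for this network yet, so the agent proceeds to spawn one. A rough stdlib equivalent of that tolerant read (the helper name is illustrative, not neutron's):

    # Illustrative re-implementation of the tolerant pidfile read.
    def read_pid(path):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None  # the ENOENT case logged above

    pid = read_pid('/var/lib/neutron/external/pids/'
                   'cbf38614-3700-41ae-a5fa-3eef08992fc4.pid.haproxy')
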
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.754 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[154d62bb-fae6-48ab-8f6d-93bc543d6196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.755 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-cbf38614-3700-41ae-a5fa-3eef08992fc4
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/cbf38614-3700-41ae-a5fa-3eef08992fc4.pid.haproxy
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID cbf38614-3700-41ae-a5fa-3eef08992fc4
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
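
The rendered config binds the metadata IP 169.254.169.254:80 inside the namespace and forwards requests to the unix-socket backend at /var/lib/neutron/metadata_proxy, tagging each request with the X-OVN-Network-ID header. Before a spawn like the one on the next line, the file can be sanity-checked with haproxy's own parser; a minimal sketch using the path from the log:

    # Syntax-check the rendered proxy config with haproxy's check mode.
    import subprocess

    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           'cbf38614-3700-41ae-a5fa-3eef08992fc4.conf')
    res = subprocess.run(['haproxy', '-c', '-f', CFG],
                         capture_output=True, text=True)
    print(res.returncode, res.stderr.strip())  # 0 on a valid config
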
Oct  3 11:27:00 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:00.755 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4', 'env', 'PROCESS_TAG=haproxy-cbf38614-3700-41ae-a5fa-3eef08992fc4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cbf38614-3700-41ae-a5fa-3eef08992fc4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  3 11:27:00 compute-0 nova_compute[351685]: 2025-10-03 11:27:00.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:01 compute-0 podman[532227]: 2025-10-03 11:27:01.256196297 +0000 UTC m=+0.086870396 container create 4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 11:27:01 compute-0 podman[532227]: 2025-10-03 11:27:01.206557176 +0000 UTC m=+0.037231285 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:27:01 compute-0 systemd[1]: Started libpod-conmon-4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665.scope.
Oct  3 11:27:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b42a0c634f03ce93d659d50d301d351476d4bdcfe08da6f33503a849101b78a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:01 compute-0 podman[532227]: 2025-10-03 11:27:01.375761449 +0000 UTC m=+0.206435538 container init 4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  3 11:27:01 compute-0 podman[532227]: 2025-10-03 11:27:01.384941563 +0000 UTC m=+0.215615642 container start 4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:27:01 compute-0 openstack_network_exporter[367524]: ERROR   11:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:27:01 compute-0 openstack_network_exporter[367524]: ERROR   11:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:27:01 compute-0 openstack_network_exporter[367524]: ERROR   11:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:27:01 compute-0 openstack_network_exporter[367524]: ERROR   11:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:27:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:27:01 compute-0 openstack_network_exporter[367524]: ERROR   11:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:27:01 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:27:01 compute-0 neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4[532242]: [NOTICE]   (532246) : New worker (532248) forked
Oct  3 11:27:01 compute-0 neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4[532242]: [NOTICE]   (532246) : Loading success.
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.601 2 DEBUG nova.compute.manager [req-67590891-ccfa-497a-b665-eafd983965dc req-0fe431e1-7277-4a3b-854c-25fc6aec8c3a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received event network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.601 2 DEBUG oslo_concurrency.lockutils [req-67590891-ccfa-497a-b665-eafd983965dc req-0fe431e1-7277-4a3b-854c-25fc6aec8c3a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.602 2 DEBUG oslo_concurrency.lockutils [req-67590891-ccfa-497a-b665-eafd983965dc req-0fe431e1-7277-4a3b-854c-25fc6aec8c3a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.602 2 DEBUG oslo_concurrency.lockutils [req-67590891-ccfa-497a-b665-eafd983965dc req-0fe431e1-7277-4a3b-854c-25fc6aec8c3a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.603 2 DEBUG nova.compute.manager [req-67590891-ccfa-497a-b665-eafd983965dc req-0fe431e1-7277-4a3b-854c-25fc6aec8c3a 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Processing event network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
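
The Acquiring/acquired/released triple around "218fdfd8-...-events" is oslo.concurrency's standard lock tracing; the waiting network-vif-plugged event is popped from the per-instance event table inside that critical section. The same pattern in miniature (the body here is illustrative):

    # Sketch of the oslo.concurrency lock pattern traced above.
    from oslo_concurrency import lockutils

    instance_uuid = '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf'
    with lockutils.lock(f'{instance_uuid}-events'):
        # pop the waiting network-vif-plugged event, if any
        pass
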
Oct  3 11:27:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 80 op/s
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.912 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490821.9113026, 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.912 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] VM Started (Lifecycle Event)#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.915 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.927 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.933 2 INFO nova.virt.libvirt.driver [-] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Instance spawned successfully.#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.933 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.938 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.947 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.970 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.971 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.971 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.972 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.973 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.973 2 DEBUG nova.virt.libvirt.driver [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.980 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.981 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490821.9114869, 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.981 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] VM Paused (Lifecycle Event)#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.983 2 DEBUG oslo_concurrency.lockutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.983 2 DEBUG oslo_concurrency.lockutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:27:01 compute-0 nova_compute[351685]: 2025-10-03 11:27:01.983 2 INFO nova.compute.manager [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Rebooting instance#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.014 2 DEBUG oslo_concurrency.lockutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.015 2 DEBUG oslo_concurrency.lockutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquired lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.015 2 DEBUG nova.network.neutron [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.019 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.026 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490821.9228935, 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.027 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] VM Resumed (Lifecycle Event)#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.055 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.062 2 INFO nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Took 9.88 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.062 2 DEBUG nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.067 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.099 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.145 2 INFO nova.compute.manager [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Took 10.97 seconds to build instance.#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.163 2 DEBUG oslo_concurrency.lockutils [None req-053d1b58-7f2c-4bcd-8a85-a6fc3d748ce5 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.346 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.347 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:27:02 compute-0 nova_compute[351685]: 2025-10-03 11:27:02.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:27:02 compute-0 podman[532257]: 2025-10-03 11:27:02.835469271 +0000 UTC m=+0.094628613 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:27:02 compute-0 podman[532259]: 2025-10-03 11:27:02.846830586 +0000 UTC m=+0.096454012 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:27:02 compute-0 podman[532258]: 2025-10-03 11:27:02.920071313 +0000 UTC m=+0.176193108 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, config_id=edpm, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Oct  3 11:27:03 compute-0 nova_compute[351685]: 2025-10-03 11:27:03.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:03 compute-0 nova_compute[351685]: 2025-10-03 11:27:03.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:27:03 compute-0 nova_compute[351685]: 2025-10-03 11:27:03.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:27:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.075 2 DEBUG nova.compute.manager [req-ac4abdf8-9591-4701-ad28-9428920f14a1 req-0fd36e1a-28d7-490d-9a9b-740fc0ba3360 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received event network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.075 2 DEBUG oslo_concurrency.lockutils [req-ac4abdf8-9591-4701-ad28-9428920f14a1 req-0fd36e1a-28d7-490d-9a9b-740fc0ba3360 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.076 2 DEBUG oslo_concurrency.lockutils [req-ac4abdf8-9591-4701-ad28-9428920f14a1 req-0fd36e1a-28d7-490d-9a9b-740fc0ba3360 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.076 2 DEBUG oslo_concurrency.lockutils [req-ac4abdf8-9591-4701-ad28-9428920f14a1 req-0fd36e1a-28d7-490d-9a9b-740fc0ba3360 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.077 2 DEBUG nova.compute.manager [req-ac4abdf8-9591-4701-ad28-9428920f14a1 req-0fd36e1a-28d7-490d-9a9b-740fc0ba3360 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] No waiting events found dispatching network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.078 2 WARNING nova.compute.manager [req-ac4abdf8-9591-4701-ad28-9428920f14a1 req-0fd36e1a-28d7-490d-9a9b-740fc0ba3360 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received unexpected event network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 for instance with vm_state active and task_state None.#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.473 2 DEBUG nova.network.neutron [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Updating instance_info_cache with network_info: [{"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
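
The instance_info_cache update above carries the full network_info structure; fixed and floating addresses nest under network.subnets[].ips[]. A minimal walk over a blob of that shape (trimmed to just the fields used here):

    # Extract fixed/floating address pairs from a nova network_info entry.
    network_info = [{
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.7",
                     "floating_ips": [{"address": "192.168.122.185"}]}],
        }]},
    }]

    for vif_entry in network_info:
        for subnet in vif_entry["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], floats)  # 10.100.0.7 ['192.168.122.185']
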
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.494 2 DEBUG oslo_concurrency.lockutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Releasing lock "refresh_cache-f7465889-4aed-4799-835b-1c604f730144" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.496 2 DEBUG nova.compute.manager [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:27:04 compute-0 kernel: tapd444b4b5-52 (unregistering): left promiscuous mode
Oct  3 11:27:04 compute-0 NetworkManager[45015]: <info>  [1759490824.7271] device (tapd444b4b5-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:27:04 compute-0 ovn_controller[88471]: 2025-10-03T11:27:04Z|00149|binding|INFO|Releasing lport d444b4b5-5243-48c2-80dd-3074b56d4277 from this chassis (sb_readonly=0)
Oct  3 11:27:04 compute-0 ovn_controller[88471]: 2025-10-03T11:27:04Z|00150|binding|INFO|Setting lport d444b4b5-5243-48c2-80dd-3074b56d4277 down in Southbound
Oct  3 11:27:04 compute-0 ovn_controller[88471]: 2025-10-03T11:27:04Z|00151|binding|INFO|Removing iface tapd444b4b5-52 ovn-installed in OVS
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:04 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:04.760 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:8a:bc 10.100.0.7'], port_security=['fa:16:3e:5d:8a:bc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f7465889-4aed-4799-835b-1c604f730144', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8ac8b91115c2483686f9dc31c58b49fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c15d67bc-31ac-4909-a6df-d8296b99758d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2b7eff5-cbee-4a08-96a7-16ae54234c96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d444b4b5-5243-48c2-80dd-3074b56d4277) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:27:04 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:04.761 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d444b4b5-5243-48c2-80dd-3074b56d4277 in datapath 527efcd5-9efe-47de-97ae-4c1c2ca2b999 unbound from our chassis#033[00m
Oct  3 11:27:04 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:04.763 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527efcd5-9efe-47de-97ae-4c1c2ca2b999, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:27:04 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:04.764 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae215d9-5f60-48cc-b422-03994c56492a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:04 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:04.765 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 namespace which is not needed anymore#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:04 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  3 11:27:04 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000a.scope: Consumed 46.881s CPU time.
Oct  3 11:27:04 compute-0 systemd-machined[137653]: Machine qemu-9-instance-0000000a terminated.
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.853 2 INFO nova.virt.libvirt.driver [-] [instance: f7465889-4aed-4799-835b-1c604f730144] Instance destroyed successfully.#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.855 2 DEBUG nova.objects.instance [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lazy-loading 'resources' on Instance uuid f7465889-4aed-4799-835b-1c604f730144 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.877 2 DEBUG nova.virt.libvirt.vif [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:25:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1342038803',display_name='tempest-ServerActionsTestJSON-server-1342038803',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1342038803',id=10,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAI0U91Y0bWTV/Ghqu9u2/YJCcMsSuIqJJqou9MoVKU4IwCcE850tanFrdmeQ7ELHWboekr6vOg1XXLEEvVERh2sZ+QqKxSzY5UgETm25CB7b1mAR5wQF+48QlfVG8JFgw==',key_name='tempest-keypair-881855979',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:25:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8ac8b91115c2483686f9dc31c58b49fc',ramdisk_id='',reservation_id='r-5st1uthq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-136578470',owner_user_name='tempest-ServerActionsTestJSON-136578470-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:27:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a98b98aa35184e41a4ae6e74ba3a32e6',uuid=f7465889-4aed-4799-835b-1c604f730144,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.878 2 DEBUG nova.network.os_vif_util [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converting VIF {"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.879 2 DEBUG nova.network.os_vif_util [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.880 2 DEBUG os_vif [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.883 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd444b4b5-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.889 2 INFO os_vif [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52')#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.898 2 DEBUG nova.virt.libvirt.driver [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Start _get_guest_xml network_info=[{"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.905 2 WARNING nova.virt.libvirt.driver [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.911 2 DEBUG nova.virt.libvirt.host [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.912 2 DEBUG nova.virt.libvirt.host [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.917 2 DEBUG nova.virt.libvirt.host [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.918 2 DEBUG nova.virt.libvirt.host [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.918 2 DEBUG nova.virt.libvirt.driver [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.919 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.919 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.919 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.920 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.920 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.920 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.920 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.920 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.921 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.921 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.921 2 DEBUG nova.virt.hardware [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.921 2 DEBUG nova.objects.instance [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lazy-loading 'vcpu_model' on Instance uuid f7465889-4aed-4799-835b-1c604f730144 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:04 compute-0 nova_compute[351685]: 2025-10-03 11:27:04.956 2 DEBUG oslo_concurrency.processutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:27:05 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[528588]: [NOTICE]   (528592) : haproxy version is 2.8.14-c23fe91
Oct  3 11:27:05 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[528588]: [NOTICE]   (528592) : path to executable is /usr/sbin/haproxy
Oct  3 11:27:05 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[528588]: [WARNING]  (528592) : Exiting Master process...
Oct  3 11:27:05 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[528588]: [ALERT]    (528592) : Current worker (528594) exited with code 143 (Terminated)
Oct  3 11:27:05 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[528588]: [WARNING]  (528592) : All workers exited. Exiting... (0)
Oct  3 11:27:05 compute-0 systemd[1]: libpod-30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081.scope: Deactivated successfully.
Oct  3 11:27:05 compute-0 podman[532348]: 2025-10-03 11:27:05.022439683 +0000 UTC m=+0.091992540 container died 30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 11:27:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081-userdata-shm.mount: Deactivated successfully.
Oct  3 11:27:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a4921de8be30ede5a84cf1958a003657de1f2781f2e8195583c65d2fd26e866-merged.mount: Deactivated successfully.
Oct  3 11:27:05 compute-0 podman[532348]: 2025-10-03 11:27:05.09317152 +0000 UTC m=+0.162724367 container cleanup 30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:27:05 compute-0 systemd[1]: libpod-conmon-30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081.scope: Deactivated successfully.
Oct  3 11:27:05 compute-0 podman[532373]: 2025-10-03 11:27:05.183300438 +0000 UTC m=+0.062475963 container remove 30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.192 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab50263-23b6-4af1-8aa9-afdb7eedb1b9]: (4, ('Fri Oct  3 11:27:04 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 (30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081)\n30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081\nFri Oct  3 11:27:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 (30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081)\n30e7694129c25a42fd92444750bd9b97df2f96181aed88b0a5c4f46e23a1e081\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.195 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb3edf0-c5d2-4411-80b7-34381c33073e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.196 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527efcd5-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:05 compute-0 kernel: tap527efcd5-90: left promiscuous mode
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.212 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[33ee06b5-dfbf-4983-8751-397f549fdb57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.237 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[66078681-ce68-468d-8a63-5cc741304d2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.240 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[6dad101d-7cb0-4de7-8eb8-58b72196ca79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.262 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac307f9-2d4d-412b-ace0-8a58e8533e81]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 993091, 'reachable_time': 42866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 532407, 'error': None, 'target': 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d527efcd5\x2d9efe\x2d47de\x2d97ae\x2d4c1c2ca2b999.mount: Deactivated successfully.
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.268 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:27:05 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:05.268 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[2215b4ca-84e1-4c67-8279-b65bf1133ec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.317 2 DEBUG nova.compute.manager [req-73ed4fac-5336-4999-8413-7a45c4190864 req-f06e3663-50f1-4337-9232-8c2792d46e65 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-unplugged-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.317 2 DEBUG oslo_concurrency.lockutils [req-73ed4fac-5336-4999-8413-7a45c4190864 req-f06e3663-50f1-4337-9232-8c2792d46e65 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.318 2 DEBUG oslo_concurrency.lockutils [req-73ed4fac-5336-4999-8413-7a45c4190864 req-f06e3663-50f1-4337-9232-8c2792d46e65 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.318 2 DEBUG oslo_concurrency.lockutils [req-73ed4fac-5336-4999-8413-7a45c4190864 req-f06e3663-50f1-4337-9232-8c2792d46e65 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.319 2 DEBUG nova.compute.manager [req-73ed4fac-5336-4999-8413-7a45c4190864 req-f06e3663-50f1-4337-9232-8c2792d46e65 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] No waiting events found dispatching network-vif-unplugged-d444b4b5-5243-48c2-80dd-3074b56d4277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.319 2 WARNING nova.compute.manager [req-73ed4fac-5336-4999-8413-7a45c4190864 req-f06e3663-50f1-4337-9232-8c2792d46e65 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received unexpected event network-vif-unplugged-d444b4b5-5243-48c2-80dd-3074b56d4277 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  3 11:27:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:27:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1375627053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.481 2 DEBUG oslo_concurrency.processutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:27:05 compute-0 nova_compute[351685]: 2025-10-03 11:27:05.550 2 DEBUG oslo_concurrency.processutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:27:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 134 op/s
Oct  3 11:27:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:27:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756353306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.065 2 DEBUG oslo_concurrency.processutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.068 2 DEBUG nova.virt.libvirt.vif [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:25:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1342038803',display_name='tempest-ServerActionsTestJSON-server-1342038803',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1342038803',id=10,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAI0U91Y0bWTV/Ghqu9u2/YJCcMsSuIqJJqou9MoVKU4IwCcE850tanFrdmeQ7ELHWboekr6vOg1XXLEEvVERh2sZ+QqKxSzY5UgETm25CB7b1mAR5wQF+48QlfVG8JFgw==',key_name='tempest-keypair-881855979',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:25:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8ac8b91115c2483686f9dc31c58b49fc',ramdisk_id='',reservation_id='r-5st1uthq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-136578470',owner_user_name='tempest-ServerActionsTestJSON-136578470-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:27:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a98b98aa35184e41a4ae6e74ba3a32e6',uuid=f7465889-4aed-4799-835b-1c604f730144,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.069 2 DEBUG nova.network.os_vif_util [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converting VIF {"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.071 2 DEBUG nova.network.os_vif_util [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.073 2 DEBUG nova.objects.instance [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lazy-loading 'pci_devices' on Instance uuid f7465889-4aed-4799-835b-1c604f730144 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.097 2 DEBUG nova.virt.libvirt.driver [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <uuid>f7465889-4aed-4799-835b-1c604f730144</uuid>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <name>instance-0000000a</name>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <nova:name>tempest-ServerActionsTestJSON-server-1342038803</nova:name>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:27:04</nova:creationTime>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <nova:user uuid="a98b98aa35184e41a4ae6e74ba3a32e6">tempest-ServerActionsTestJSON-136578470-project-member</nova:user>
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <nova:project uuid="8ac8b91115c2483686f9dc31c58b49fc">tempest-ServerActionsTestJSON-136578470</nova:project>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <nova:port uuid="d444b4b5-5243-48c2-80dd-3074b56d4277">
Oct  3 11:27:06 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <system>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <entry name="serial">f7465889-4aed-4799-835b-1c604f730144</entry>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <entry name="uuid">f7465889-4aed-4799-835b-1c604f730144</entry>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </system>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <os>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  </os>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <features>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  </features>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/f7465889-4aed-4799-835b-1c604f730144_disk">
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      </source>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/f7465889-4aed-4799-835b-1c604f730144_disk.config">
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      </source>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:27:06 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:5d:8a:bc"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <target dev="tapd444b4b5-52"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144/console.log" append="off"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <video>
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </video>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <input type="keyboard" bus="usb"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:27:06 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:27:06 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:27:06 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:27:06 compute-0 nova_compute[351685]: </domain>
Oct  3 11:27:06 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.099 2 DEBUG nova.virt.libvirt.driver [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.099 2 DEBUG nova.virt.libvirt.driver [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.100 2 DEBUG nova.virt.libvirt.vif [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:25:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1342038803',display_name='tempest-ServerActionsTestJSON-server-1342038803',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1342038803',id=10,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAI0U91Y0bWTV/Ghqu9u2/YJCcMsSuIqJJqou9MoVKU4IwCcE850tanFrdmeQ7ELHWboekr6vOg1XXLEEvVERh2sZ+QqKxSzY5UgETm25CB7b1mAR5wQF+48QlfVG8JFgw==',key_name='tempest-keypair-881855979',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:25:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='8ac8b91115c2483686f9dc31c58b49fc',ramdisk_id='',reservation_id='r-5st1uthq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-136578470',owner_user_name='tempest-ServerActionsTestJSON-136578470-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:27:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a98b98aa35184e41a4ae6e74ba3a32e6',uuid=f7465889-4aed-4799-835b-1c604f730144,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.100 2 DEBUG nova.network.os_vif_util [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converting VIF {"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.101 2 DEBUG nova.network.os_vif_util [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.102 2 DEBUG os_vif [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.103 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.104 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd444b4b5-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd444b4b5-52, col_values=(('external_ids', {'iface-id': 'd444b4b5-5243-48c2-80dd-3074b56d4277', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:8a:bc', 'vm-uuid': 'f7465889-4aed-4799-835b-1c604f730144'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 NetworkManager[45015]: <info>  [1759490826.1121] manager: (tapd444b4b5-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.120 2 INFO os_vif [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52')#033[00m
Oct  3 11:27:06 compute-0 kernel: tapd444b4b5-52: entered promiscuous mode
Oct  3 11:27:06 compute-0 NetworkManager[45015]: <info>  [1759490826.2182] manager: (tapd444b4b5-52): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Oct  3 11:27:06 compute-0 systemd-udevd[532318]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:27:06 compute-0 ovn_controller[88471]: 2025-10-03T11:27:06Z|00152|binding|INFO|Claiming lport d444b4b5-5243-48c2-80dd-3074b56d4277 for this chassis.
Oct  3 11:27:06 compute-0 ovn_controller[88471]: 2025-10-03T11:27:06Z|00153|binding|INFO|d444b4b5-5243-48c2-80dd-3074b56d4277: Claiming fa:16:3e:5d:8a:bc 10.100.0.7
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.226 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:8a:bc 10.100.0.7'], port_security=['fa:16:3e:5d:8a:bc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f7465889-4aed-4799-835b-1c604f730144', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8ac8b91115c2483686f9dc31c58b49fc', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c15d67bc-31ac-4909-a6df-d8296b99758d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2b7eff5-cbee-4a08-96a7-16ae54234c96, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d444b4b5-5243-48c2-80dd-3074b56d4277) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.228 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d444b4b5-5243-48c2-80dd-3074b56d4277 in datapath 527efcd5-9efe-47de-97ae-4c1c2ca2b999 bound to our chassis#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.231 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527efcd5-9efe-47de-97ae-4c1c2ca2b999#033[00m
Oct  3 11:27:06 compute-0 NetworkManager[45015]: <info>  [1759490826.2466] device (tapd444b4b5-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:27:06 compute-0 NetworkManager[45015]: <info>  [1759490826.2475] device (tapd444b4b5-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:27:06 compute-0 ovn_controller[88471]: 2025-10-03T11:27:06Z|00154|binding|INFO|Setting lport d444b4b5-5243-48c2-80dd-3074b56d4277 ovn-installed in OVS
Oct  3 11:27:06 compute-0 ovn_controller[88471]: 2025-10-03T11:27:06Z|00155|binding|INFO|Setting lport d444b4b5-5243-48c2-80dd-3074b56d4277 up in Southbound
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.262 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[d99954fc-51c5-4f80-bc78-29193c748e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.263 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527efcd5-91 in ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.266 412583 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527efcd5-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.266 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[de174b8c-6841-4e54-b635-20e3e20192dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.267 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[99bc661c-6d5a-4f95-b25b-4f5d3aa212ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 systemd-machined[137653]: New machine qemu-15-instance-0000000a.
Oct  3 11:27:06 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000a.
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.308 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a94d39-5d5b-4b79-8b9c-32f1c0336fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.335 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0c09b0d7-958e-4f8d-8354-371193f09f2e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.371 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[3f84d860-5b58-40c4-b0fe-d65211397474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 NetworkManager[45015]: <info>  [1759490826.3957] manager: (tap527efcd5-90): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.397 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[53c3f08a-0e32-441a-94f2-d8ccf34aa579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.433 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0d9c48-d779-4011-b369-f782b5440db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.438 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[ea144ecb-c85e-4b02-89df-4363671582be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 NetworkManager[45015]: <info>  [1759490826.4669] device (tap527efcd5-90): carrier: link connected
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.475 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[49c43e01-70f1-4d70-a3e4-0b271671579f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.491 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[9eac7ae2-4552-4a01-a885-6fa41881eb5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527efcd5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:5d:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1000906, 'reachable_time': 41297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 532497, 'error': None, 'target': 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.508 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[fad4813a-3452-4973-906d-82c358e56a60]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:5d1f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1000906, 'tstamp': 1000906}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 532498, 'error': None, 'target': 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.528 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cf1980-ff75-49d8-b181-e39710b1ebf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527efcd5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:5d:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1000906, 'reachable_time': 41297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 532499, 'error': None, 'target': 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.562 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc96454-5eaf-4b34-88bd-ad4fbc0fd355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.623 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[81dbdcbe-8b60-40b0-a45b-e43cab6257a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.625 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527efcd5-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.626 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.626 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527efcd5-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 NetworkManager[45015]: <info>  [1759490826.6307] manager: (tap527efcd5-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct  3 11:27:06 compute-0 kernel: tap527efcd5-90: entered promiscuous mode
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.639 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527efcd5-90, col_values=(('external_ids', {'iface-id': '1eb40ea8-53b0-46a1-bf82-85a3448330ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:06 compute-0 ovn_controller[88471]: 2025-10-03T11:27:06Z|00156|binding|INFO|Releasing lport 1eb40ea8-53b0-46a1-bf82-85a3448330ac from this chassis (sb_readonly=0)
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 nova_compute[351685]: 2025-10-03 11:27:06.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.660 284328 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527efcd5-9efe-47de-97ae-4c1c2ca2b999.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527efcd5-9efe-47de-97ae-4c1c2ca2b999.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.662 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[81d5d890-2d0d-4eb1-b33a-74e679cb1eeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.662 284328 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: global
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    log         /dev/log local0 debug
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    log-tag     haproxy-metadata-proxy-527efcd5-9efe-47de-97ae-4c1c2ca2b999
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    user        root
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    group       root
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    maxconn     1024
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    pidfile     /var/lib/neutron/external/pids/527efcd5-9efe-47de-97ae-4c1c2ca2b999.pid.haproxy
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    daemon
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: defaults
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    log global
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    mode http
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    option httplog
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    option dontlognull
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    option http-server-close
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    option forwardfor
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    retries                 3
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    timeout http-request    30s
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    timeout connect         30s
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    timeout client          32s
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    timeout server          32s
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    timeout http-keep-alive 30s
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: listen listener
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    bind 169.254.169.254:80
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    server metadata /var/lib/neutron/metadata_proxy
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]:    http-request add-header X-OVN-Network-ID 527efcd5-9efe-47de-97ae-4c1c2ca2b999
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  3 11:27:06 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:06.663 284328 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'env', 'PROCESS_TAG=haproxy-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527efcd5-9efe-47de-97ae-4c1c2ca2b999.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  3 11:27:07 compute-0 podman[532570]: 2025-10-03 11:27:07.21929649 +0000 UTC m=+0.090478190 container create 56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 11:27:07 compute-0 podman[532570]: 2025-10-03 11:27:07.175981972 +0000 UTC m=+0.047163702 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  3 11:27:07 compute-0 systemd[1]: Started libpod-conmon-56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607.scope.
Oct  3 11:27:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c85e5eab452bcdb4fdfcea344b6bd7ee04c9b4cdf375378d98574f1c7b8d180/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:07 compute-0 podman[532570]: 2025-10-03 11:27:07.336406273 +0000 UTC m=+0.207588003 container init 56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  3 11:27:07 compute-0 podman[532570]: 2025-10-03 11:27:07.344846524 +0000 UTC m=+0.216028224 container start 56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:27:07 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[532585]: [NOTICE]   (532589) : New worker (532591) forked
Oct  3 11:27:07 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[532585]: [NOTICE]   (532589) : Loading success.
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.430 2 DEBUG nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received event network-changed-ff068d12-ba56-4465-a024-881b428d0ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.430 2 DEBUG nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Refreshing instance network info cache due to event network-changed-ff068d12-ba56-4465-a024-881b428d0ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.431 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.431 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.432 2 DEBUG nova.network.neutron [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Refreshing network info cache for port ff068d12-ba56-4465-a024-881b428d0ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.745 2 DEBUG nova.virt.libvirt.host [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Removed pending event for f7465889-4aed-4799-835b-1c604f730144 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.745 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490827.7441528, f7465889-4aed-4799-835b-1c604f730144 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.746 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] VM Resumed (Lifecycle Event)#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.747 2 DEBUG nova.compute.manager [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.752 2 INFO nova.virt.libvirt.driver [-] [instance: f7465889-4aed-4799-835b-1c604f730144] Instance rebooted successfully.#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.752 2 DEBUG nova.compute.manager [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.780 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.784 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:27:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 118 op/s
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.815 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.815 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490827.745265, f7465889-4aed-4799-835b-1c604f730144 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.816 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] VM Started (Lifecycle Event)#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.821 2 DEBUG oslo_concurrency.lockutils [None req-fd848824-1e23-4c65-8a20-878fa140101e a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.842 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:27:07 compute-0 nova_compute[351685]: 2025-10-03 11:27:07.848 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:27:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.204 2 DEBUG nova.network.neutron [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Updated VIF entry in instance network info cache for port ff068d12-ba56-4465-a024-881b428d0ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.205 2 DEBUG nova.network.neutron [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Updating instance_info_cache with network_info: [{"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.235 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.235 2 DEBUG nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.236 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.236 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.237 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.238 2 DEBUG nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] No waiting events found dispatching network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.238 2 WARNING nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received unexpected event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.238 2 DEBUG nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.239 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.239 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.240 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.240 2 DEBUG nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] No waiting events found dispatching network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.240 2 WARNING nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received unexpected event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.241 2 DEBUG nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.241 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.242 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.242 2 DEBUG oslo_concurrency.lockutils [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.242 2 DEBUG nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] No waiting events found dispatching network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.243 2 WARNING nova.compute.manager [req-0f547795-edd7-4be6-a2ac-0badeb59068c req-ca99b7a9-567d-4641-8777-d16f7d09eb26 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received unexpected event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  3 11:27:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 151 op/s
Oct  3 11:27:09 compute-0 nova_compute[351685]: 2025-10-03 11:27:09.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:11 compute-0 nova_compute[351685]: 2025-10-03 11:27:11.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:11 compute-0 nova_compute[351685]: 2025-10-03 11:27:11.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:27:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3601: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 152 op/s
Oct  3 11:27:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 13 KiB/s wr, 165 op/s
Oct  3 11:27:14 compute-0 nova_compute[351685]: 2025-10-03 11:27:14.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 9.8 KiB/s wr, 169 op/s
Oct  3 11:27:16 compute-0 nova_compute[351685]: 2025-10-03 11:27:16.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:27:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:27:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 85 B/s wr, 100 op/s
Oct  3 11:27:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:27:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:27:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:27:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:27:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:27:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:27:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8e051a67-5405-43d0-aa75-feb44ad63b73 does not exist
Oct  3 11:27:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b6cdeea6-0d13-440b-bd62-eb584aef06dd does not exist
Oct  3 11:27:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c6f099fe-ffdb-439d-8fce-ad17d5ab73ac does not exist
Oct  3 11:27:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:27:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:27:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:27:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:27:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:27:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:27:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:27:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:27:19 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:27:19 compute-0 podman[532868]: 2025-10-03 11:27:19.725681612 +0000 UTC m=+0.081813783 container create a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 11:27:19 compute-0 podman[532868]: 2025-10-03 11:27:19.696881449 +0000 UTC m=+0.053013620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:27:19 compute-0 systemd[1]: Started libpod-conmon-a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306.scope.
Oct  3 11:27:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3605: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 85 B/s wr, 100 op/s
Oct  3 11:27:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:27:19 compute-0 podman[532868]: 2025-10-03 11:27:19.845912855 +0000 UTC m=+0.202045056 container init a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 11:27:19 compute-0 podman[532868]: 2025-10-03 11:27:19.863341374 +0000 UTC m=+0.219473545 container start a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:27:19 compute-0 podman[532868]: 2025-10-03 11:27:19.868948814 +0000 UTC m=+0.225080995 container attach a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:27:19 compute-0 thirsty_babbage[532884]: 167 167
Oct  3 11:27:19 compute-0 systemd[1]: libpod-a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306.scope: Deactivated successfully.
Oct  3 11:27:19 compute-0 conmon[532884]: conmon a761ade22df264cdf1f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306.scope/container/memory.events
Oct  3 11:27:19 compute-0 podman[532868]: 2025-10-03 11:27:19.901338711 +0000 UTC m=+0.257470902 container died a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 11:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e548c371ac7344a87ef314bc22e590f676aa480f4ebff5bdccfa1a0c0d751fe-merged.mount: Deactivated successfully.
Oct  3 11:27:19 compute-0 nova_compute[351685]: 2025-10-03 11:27:19.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:19 compute-0 podman[532868]: 2025-10-03 11:27:19.954155484 +0000 UTC m=+0.310287655 container remove a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:27:19 compute-0 systemd[1]: libpod-conmon-a761ade22df264cdf1f5303be53f547cc065b4c85b4708e63b0ff9e2c3640306.scope: Deactivated successfully.
Oct  3 11:27:20 compute-0 podman[532907]: 2025-10-03 11:27:20.269542813 +0000 UTC m=+0.071697470 container create 82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:27:20 compute-0 systemd[1]: Started libpod-conmon-82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0.scope.
Oct  3 11:27:20 compute-0 podman[532907]: 2025-10-03 11:27:20.24046657 +0000 UTC m=+0.042621277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:27:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e2d919560f93aa534daa5bba6b7efea8494c105b96a41780f8f389761a841/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e2d919560f93aa534daa5bba6b7efea8494c105b96a41780f8f389761a841/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e2d919560f93aa534daa5bba6b7efea8494c105b96a41780f8f389761a841/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e2d919560f93aa534daa5bba6b7efea8494c105b96a41780f8f389761a841/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8e2d919560f93aa534daa5bba6b7efea8494c105b96a41780f8f389761a841/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:20 compute-0 podman[532907]: 2025-10-03 11:27:20.370227239 +0000 UTC m=+0.172381896 container init 82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:27:20 compute-0 podman[532907]: 2025-10-03 11:27:20.387214244 +0000 UTC m=+0.189368901 container start 82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:27:20 compute-0 podman[532907]: 2025-10-03 11:27:20.39117997 +0000 UTC m=+0.193334627 container attach 82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:27:21 compute-0 nova_compute[351685]: 2025-10-03 11:27:21.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:21 compute-0 elegant_elgamal[532922]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:27:21 compute-0 elegant_elgamal[532922]: --> relative data size: 1.0
Oct  3 11:27:21 compute-0 elegant_elgamal[532922]: --> All data devices are unavailable
Oct  3 11:27:21 compute-0 systemd[1]: libpod-82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0.scope: Deactivated successfully.
Oct  3 11:27:21 compute-0 systemd[1]: libpod-82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0.scope: Consumed 1.143s CPU time.
Oct  3 11:27:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 67 op/s
Oct  3 11:27:21 compute-0 podman[532951]: 2025-10-03 11:27:21.842544896 +0000 UTC m=+0.120939757 container died 82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b8e2d919560f93aa534daa5bba6b7efea8494c105b96a41780f8f389761a841-merged.mount: Deactivated successfully.
Oct  3 11:27:21 compute-0 podman[532970]: 2025-10-03 11:27:21.907326172 +0000 UTC m=+0.155518445 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 11:27:21 compute-0 podman[532962]: 2025-10-03 11:27:21.955167996 +0000 UTC m=+0.203704181 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:27:21 compute-0 podman[532960]: 2025-10-03 11:27:21.957630195 +0000 UTC m=+0.201538971 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  3 11:27:21 compute-0 podman[532963]: 2025-10-03 11:27:21.958823962 +0000 UTC m=+0.203652007 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:27:21 compute-0 podman[532959]: 2025-10-03 11:27:21.961689364 +0000 UTC m=+0.221828060 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Oct  3 11:27:21 compute-0 podman[532952]: 2025-10-03 11:27:21.96372317 +0000 UTC m=+0.229093804 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:27:21 compute-0 podman[532951]: 2025-10-03 11:27:21.982216882 +0000 UTC m=+0.260611723 container remove 82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_elgamal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 11:27:21 compute-0 podman[532969]: 2025-10-03 11:27:21.991286613 +0000 UTC m=+0.235370324 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 11:27:21 compute-0 systemd[1]: libpod-conmon-82996d6fbd6879b16e4437ed6a4f3551999fff58becf71e98117659a545be6b0.scope: Deactivated successfully.
Oct  3 11:27:22 compute-0 podman[533231]: 2025-10-03 11:27:22.871010988 +0000 UTC m=+0.034038072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:27:23 compute-0 podman[533231]: 2025-10-03 11:27:23.006833851 +0000 UTC m=+0.169860975 container create 6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 11:27:23 compute-0 systemd[1]: Started libpod-conmon-6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb.scope.
Oct  3 11:27:23 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:27:23 compute-0 podman[533231]: 2025-10-03 11:27:23.225027214 +0000 UTC m=+0.388054318 container init 6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  3 11:27:23 compute-0 podman[533231]: 2025-10-03 11:27:23.236893414 +0000 UTC m=+0.399920498 container start 6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  3 11:27:23 compute-0 sad_taussig[533246]: 167 167
Oct  3 11:27:23 compute-0 systemd[1]: libpod-6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb.scope: Deactivated successfully.
Oct  3 11:27:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:23 compute-0 podman[533231]: 2025-10-03 11:27:23.296542886 +0000 UTC m=+0.459569990 container attach 6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:27:23 compute-0 podman[533231]: 2025-10-03 11:27:23.296966829 +0000 UTC m=+0.459993903 container died 6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-34460cce845c440cddb602e6b954a5055c9ddef21c6e7ed6f9941b7dc1cc54fc-merged.mount: Deactivated successfully.
Oct  3 11:27:23 compute-0 podman[533231]: 2025-10-03 11:27:23.57312354 +0000 UTC m=+0.736150624 container remove 6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:27:23 compute-0 systemd[1]: libpod-conmon-6d5e48373732d73be9b26967d1dc88561294a7b5273428cbb0f5b3f81388edfb.scope: Deactivated successfully.
Oct  3 11:27:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 358 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 46 op/s
Oct  3 11:27:23 compute-0 podman[533269]: 2025-10-03 11:27:23.879814929 +0000 UTC m=+0.104535881 container create f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:27:23 compute-0 podman[533269]: 2025-10-03 11:27:23.810079584 +0000 UTC m=+0.034800566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:27:23 compute-0 systemd[1]: Started libpod-conmon-f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854.scope.
Oct  3 11:27:24 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde8d3317dfb1e4a0b8bd6033b9bfccddda4659c5e95e17d23322c8680972e1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde8d3317dfb1e4a0b8bd6033b9bfccddda4659c5e95e17d23322c8680972e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde8d3317dfb1e4a0b8bd6033b9bfccddda4659c5e95e17d23322c8680972e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde8d3317dfb1e4a0b8bd6033b9bfccddda4659c5e95e17d23322c8680972e1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:24 compute-0 podman[533269]: 2025-10-03 11:27:24.176319372 +0000 UTC m=+0.401040364 container init f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:27:24 compute-0 podman[533269]: 2025-10-03 11:27:24.19093643 +0000 UTC m=+0.415657382 container start f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:27:24 compute-0 podman[533269]: 2025-10-03 11:27:24.265876742 +0000 UTC m=+0.490597684 container attach f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 11:27:24 compute-0 nova_compute[351685]: 2025-10-03 11:27:24.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]: {
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:    "0": [
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:        {
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "devices": [
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "/dev/loop3"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            ],
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_name": "ceph_lv0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_size": "21470642176",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "name": "ceph_lv0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "tags": {
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cluster_name": "ceph",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.crush_device_class": "",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.encrypted": "0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osd_id": "0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.type": "block",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.vdo": "0"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            },
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "type": "block",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "vg_name": "ceph_vg0"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:        }
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:    ],
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:    "1": [
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:        {
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "devices": [
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "/dev/loop4"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            ],
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_name": "ceph_lv1",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_size": "21470642176",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "name": "ceph_lv1",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "tags": {
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cluster_name": "ceph",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.crush_device_class": "",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.encrypted": "0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osd_id": "1",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.type": "block",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.vdo": "0"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            },
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "type": "block",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "vg_name": "ceph_vg1"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:        }
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:    ],
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:    "2": [
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:        {
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "devices": [
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "/dev/loop5"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            ],
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_name": "ceph_lv2",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_size": "21470642176",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "name": "ceph_lv2",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "tags": {
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.cluster_name": "ceph",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.crush_device_class": "",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.encrypted": "0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osd_id": "2",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.type": "block",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:                "ceph.vdo": "0"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            },
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "type": "block",
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:            "vg_name": "ceph_vg2"
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:        }
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]:    ]
Oct  3 11:27:25 compute-0 nostalgic_brown[533283]: }
Oct  3 11:27:25 compute-0 systemd[1]: libpod-f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854.scope: Deactivated successfully.
Oct  3 11:27:25 compute-0 podman[533269]: 2025-10-03 11:27:25.231991515 +0000 UTC m=+1.456712467 container died f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-bde8d3317dfb1e4a0b8bd6033b9bfccddda4659c5e95e17d23322c8680972e1f-merged.mount: Deactivated successfully.
Oct  3 11:27:25 compute-0 podman[533269]: 2025-10-03 11:27:25.584460732 +0000 UTC m=+1.809181684 container remove f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brown, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:27:25 compute-0 systemd[1]: libpod-conmon-f5224b78384cc48ffaaa84d9e7409c2c523dcab05c36bb32b58c87dd82248854.scope: Deactivated successfully.
Oct  3 11:27:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 360 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 553 KiB/s rd, 169 KiB/s wr, 23 op/s
Oct  3 11:27:26 compute-0 nova_compute[351685]: 2025-10-03 11:27:26.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:26 compute-0 podman[533440]: 2025-10-03 11:27:26.378039925 +0000 UTC m=+0.036862272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:27:26 compute-0 podman[533440]: 2025-10-03 11:27:26.489421885 +0000 UTC m=+0.148244232 container create 93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 11:27:26 compute-0 systemd[1]: Started libpod-conmon-93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd.scope.
Oct  3 11:27:26 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:27:26 compute-0 podman[533440]: 2025-10-03 11:27:26.811579219 +0000 UTC m=+0.470401656 container init 93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:27:26 compute-0 podman[533440]: 2025-10-03 11:27:26.826059234 +0000 UTC m=+0.484881581 container start 93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  3 11:27:26 compute-0 podman[533440]: 2025-10-03 11:27:26.831993044 +0000 UTC m=+0.490815441 container attach 93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:27:26 compute-0 sharp_elion[533456]: 167 167
Oct  3 11:27:26 compute-0 systemd[1]: libpod-93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd.scope: Deactivated successfully.
Oct  3 11:27:26 compute-0 podman[533440]: 2025-10-03 11:27:26.839153283 +0000 UTC m=+0.497975630 container died 93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-11347153dcc42af4111d5baeeda9a6d7b4a789d95aef3858047c8d0cd5cfbd84-merged.mount: Deactivated successfully.
Oct  3 11:27:27 compute-0 podman[533440]: 2025-10-03 11:27:27.567716503 +0000 UTC m=+1.226538890 container remove 93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:27:27 compute-0 systemd[1]: libpod-conmon-93d0e25ed183d7e5b4b1c642bb9d28c3343a951e5b4f715a03fa652a5bca38fd.scope: Deactivated successfully.
Oct  3 11:27:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 360 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 169 KiB/s wr, 6 op/s
Oct  3 11:27:27 compute-0 ovn_controller[88471]: 2025-10-03T11:27:27Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f9:87:f3 10.100.0.5
Oct  3 11:27:27 compute-0 ovn_controller[88471]: 2025-10-03T11:27:27Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f9:87:f3 10.100.0.5
Oct  3 11:27:27 compute-0 podman[533481]: 2025-10-03 11:27:27.837762287 +0000 UTC m=+0.045650133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:27:27 compute-0 podman[533481]: 2025-10-03 11:27:27.901909774 +0000 UTC m=+0.109797620 container create 5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:27:27 compute-0 systemd[1]: Started libpod-conmon-5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42.scope.
Oct  3 11:27:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581d2c78146191c17b17add50f2d6b4eb381f5baf04b99d1b90469169405eb73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581d2c78146191c17b17add50f2d6b4eb381f5baf04b99d1b90469169405eb73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581d2c78146191c17b17add50f2d6b4eb381f5baf04b99d1b90469169405eb73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581d2c78146191c17b17add50f2d6b4eb381f5baf04b99d1b90469169405eb73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:27:28 compute-0 podman[533481]: 2025-10-03 11:27:28.082961716 +0000 UTC m=+0.290849582 container init 5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:27:28 compute-0 podman[533481]: 2025-10-03 11:27:28.10054677 +0000 UTC m=+0.308434616 container start 5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:27:28 compute-0 podman[533481]: 2025-10-03 11:27:28.170553954 +0000 UTC m=+0.378441810 container attach 5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:27:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:29 compute-0 frosty_nash[533495]: {
Oct  3 11:27:29 compute-0 frosty_nash[533495]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "osd_id": 1,
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "type": "bluestore"
Oct  3 11:27:29 compute-0 frosty_nash[533495]:    },
Oct  3 11:27:29 compute-0 frosty_nash[533495]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "osd_id": 2,
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "type": "bluestore"
Oct  3 11:27:29 compute-0 frosty_nash[533495]:    },
Oct  3 11:27:29 compute-0 frosty_nash[533495]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "osd_id": 0,
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:27:29 compute-0 frosty_nash[533495]:        "type": "bluestore"
Oct  3 11:27:29 compute-0 frosty_nash[533495]:    }
Oct  3 11:27:29 compute-0 frosty_nash[533495]: }
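The JSON block above, emitted by the short-lived frosty_nash container, maps each OSD uuid to its backing logical volume (osd.0-osd.2 on ceph_vg0/1/2, all bluestore, same ceph_fsid). The shape matches what `ceph-volume raw list` prints, though the log does not show the exact command cephadm ran. A minimal Python sketch, assuming the JSON has been saved to a file named osd_inventory.json (a hypothetical name), that reduces it to an osd_id -> device map:

    import json

    # Structure taken from the log above:
    # {osd_uuid: {ceph_fsid, device, osd_id, osd_uuid, type}}
    with open("osd_inventory.json") as f:
        inventory = json.load(f)

    # Map each OSD id to its backing device, e.g. 1 -> /dev/mapper/ceph_vg1-ceph_lv1
    devices = {entry["osd_id"]: entry["device"] for entry in inventory.values()}
    for osd_id, device in sorted(devices.items()):
        print(f"osd.{osd_id}: {device}")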
Oct  3 11:27:29 compute-0 systemd[1]: libpod-5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42.scope: Deactivated successfully.
Oct  3 11:27:29 compute-0 systemd[1]: libpod-5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42.scope: Consumed 1.063s CPU time.
Oct  3 11:27:29 compute-0 podman[533481]: 2025-10-03 11:27:29.311862091 +0000 UTC m=+1.519749967 container died 5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 11:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-581d2c78146191c17b17add50f2d6b4eb381f5baf04b99d1b90469169405eb73-merged.mount: Deactivated successfully.
Oct  3 11:27:29 compute-0 podman[533481]: 2025-10-03 11:27:29.401528625 +0000 UTC m=+1.609416471 container remove 5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:27:29 compute-0 systemd[1]: libpod-conmon-5a5e944fa6a164a6726089ccfd36fec3c988f5e3d203352879dd5a7a353c7d42.scope: Deactivated successfully.
Oct  3 11:27:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:27:29 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:27:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:27:29 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:27:29 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 5effb883-c642-4800-aeca-368dc319ec74 does not exist
Oct  3 11:27:29 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 34264384-b088-494c-bd1d-c64697773702 does not exist
Oct  3 11:27:29 compute-0 podman[157165]: time="2025-10-03T11:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:27:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 51199 "" "Go-http-client/1.1"
Oct  3 11:27:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10988 "" "Go-http-client/1.1"
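The two GETs against /v4.9.3/libpod/... are the podman system service answering REST calls over its unix socket; the Go-http-client user agent and the containers/json plus containers/stats pair are consistent with the podman_exporter container polling it. A rough equivalent of the first call using the podman-py bindings (podman-py itself is an assumption; the socket path matches the CONTAINER_HOST value in the exporter config logged further down):

    from podman import PodmanClient  # podman-py, assumed installed

    # unix:///run/podman/podman.sock is the path the exporter is configured with.
    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        # Mirrors GET /libpod/containers/json?all=true
        for ctr in client.containers.list(all=True):
            print(ctr.id[:12], ctr.status)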
Oct  3 11:27:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 382 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 1.9 MiB/s wr, 49 op/s
Oct  3 11:27:29 compute-0 nova_compute[351685]: 2025-10-03 11:27:29.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:30 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:27:30 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:27:31 compute-0 nova_compute[351685]: 2025-10-03 11:27:31.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:31 compute-0 nova_compute[351685]: 2025-10-03 11:27:31.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:31 compute-0 openstack_network_exporter[367524]: ERROR   11:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:27:31 compute-0 openstack_network_exporter[367524]: ERROR   11:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:27:31 compute-0 openstack_network_exporter[367524]: ERROR   11:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:27:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:27:31 compute-0 openstack_network_exporter[367524]: ERROR   11:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:27:31 compute-0 openstack_network_exporter[367524]: ERROR   11:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:27:31 compute-0 openstack_network_exporter[367524]: 
Oct  3 11:27:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 388 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  3 11:27:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:33 compute-0 nova_compute[351685]: 2025-10-03 11:27:33.568 2 INFO nova.compute.manager [None req-af8f3d5d-7fd3-443d-afc6-31b73a365b71 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Get console output
Oct  3 11:27:33 compute-0 nova_compute[351685]: 2025-10-03 11:27:33.683 4814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
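The privsep line above records a swallowed TypeError: a console-pty read returned None and nova tried to append it to a bytes buffer. The wording is CPython's standard message for that operation, as the toy below reproduces (illustration only, not nova's code):

    # Reproduces the exact error text nova.privsep.libvirt logs above.
    buf = b""
    chunk = None  # stand-in for a pty read that returned nothing
    try:
        buf += chunk
    except TypeError as exc:
        print(exc)  # -> can't concat NoneType to bytes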
Oct  3 11:27:33 compute-0 nova_compute[351685]: 2025-10-03 11:27:33.745 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:27:33 compute-0 nova_compute[351685]: 2025-10-03 11:27:33.746 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 11:27:33 compute-0 nova_compute[351685]: 2025-10-03 11:27:33.791 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 11:27:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 391 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 397 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct  3 11:27:33 compute-0 podman[533591]: 2025-10-03 11:27:33.900129322 +0000 UTC m=+0.148551162 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:27:33 compute-0 podman[533592]: 2025-10-03 11:27:33.906111013 +0000 UTC m=+0.145394620 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, version=9.4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, name=ubi9)
Oct  3 11:27:33 compute-0 podman[533593]: 2025-10-03 11:27:33.909771571 +0000 UTC m=+0.150462064 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 11:27:34 compute-0 nova_compute[351685]: 2025-10-03 11:27:34.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:35 compute-0 ovn_controller[88471]: 2025-10-03T11:27:35Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:36:62 10.100.1.141
Oct  3 11:27:35 compute-0 ovn_controller[88471]: 2025-10-03T11:27:35Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:36:62 10.100.1.141
Oct  3 11:27:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 410 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 637 KiB/s rd, 4.1 MiB/s wr, 111 op/s
Oct  3 11:27:36 compute-0 nova_compute[351685]: 2025-10-03 11:27:36.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:36 compute-0 nova_compute[351685]: 2025-10-03 11:27:36.637 2 DEBUG nova.compute.manager [req-a5a59a7a-e4de-4dbf-bf9a-8bc6f9be9f69 req-b551711c-0b97-4995-9acd-b6e30c0f836f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-changed-d84c98dc-8422-4b51-aaf4-2f9403a4649c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:27:36 compute-0 nova_compute[351685]: 2025-10-03 11:27:36.638 2 DEBUG nova.compute.manager [req-a5a59a7a-e4de-4dbf-bf9a-8bc6f9be9f69 req-b551711c-0b97-4995-9acd-b6e30c0f836f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Refreshing instance network info cache due to event network-changed-d84c98dc-8422-4b51-aaf4-2f9403a4649c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 11:27:36 compute-0 nova_compute[351685]: 2025-10-03 11:27:36.638 2 DEBUG oslo_concurrency.lockutils [req-a5a59a7a-e4de-4dbf-bf9a-8bc6f9be9f69 req-b551711c-0b97-4995-9acd-b6e30c0f836f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:27:36 compute-0 nova_compute[351685]: 2025-10-03 11:27:36.638 2 DEBUG oslo_concurrency.lockutils [req-a5a59a7a-e4de-4dbf-bf9a-8bc6f9be9f69 req-b551711c-0b97-4995-9acd-b6e30c0f836f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:27:36 compute-0 nova_compute[351685]: 2025-10-03 11:27:36.638 2 DEBUG nova.network.neutron [req-a5a59a7a-e4de-4dbf-bf9a-8bc6f9be9f69 req-b551711c-0b97-4995-9acd-b6e30c0f836f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Refreshing network info cache for port d84c98dc-8422-4b51-aaf4-2f9403a4649c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 11:27:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:36.688 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 11:27:36 compute-0 nova_compute[351685]: 2025-10-03 11:27:36.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:36 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:36.690 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  3 11:27:37 compute-0 nova_compute[351685]: 2025-10-03 11:27:37.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:37.692 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 11:27:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3614: 321 pgs: 321 active+clean; 410 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 625 KiB/s rd, 4.0 MiB/s wr, 105 op/s
Oct  3 11:27:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:38 compute-0 nova_compute[351685]: 2025-10-03 11:27:38.596 2 DEBUG nova.network.neutron [req-a5a59a7a-e4de-4dbf-bf9a-8bc6f9be9f69 req-b551711c-0b97-4995-9acd-b6e30c0f836f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Updated VIF entry in instance network info cache for port d84c98dc-8422-4b51-aaf4-2f9403a4649c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  3 11:27:38 compute-0 nova_compute[351685]: 2025-10-03 11:27:38.597 2 DEBUG nova.network.neutron [req-a5a59a7a-e4de-4dbf-bf9a-8bc6f9be9f69 req-b551711c-0b97-4995-9acd-b6e30c0f836f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Updating instance_info_cache with network_info: [{"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
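The network_info payload written to the cache above is a JSON list of VIFs, each carrying its port id, MAC, and per-subnet fixed IPs. A small sketch pulling the fixed addresses out of that structure (field names and sample values are taken from the log line; the payload is abbreviated here for illustration):

    import json

    # Trimmed to the fields used below; in practice this is the full
    # network_info list from the instance_info_cache update above.
    network_info = json.loads('''[{"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.5", "type": "fixed"}]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    print(vif["id"], ip["address"])  # -> d84c98dc-... 10.100.0.5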
Oct  3 11:27:38 compute-0 nova_compute[351685]: 2025-10-03 11:27:38.629 2 DEBUG oslo_concurrency.lockutils [req-a5a59a7a-e4de-4dbf-bf9a-8bc6f9be9f69 req-b551711c-0b97-4995-9acd-b6e30c0f836f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-fd405fd5-7402-43b4-8ab3-a52c18493a6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:27:39 compute-0 ovn_controller[88471]: 2025-10-03T11:27:39Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:b6:fb 10.100.0.10
Oct  3 11:27:39 compute-0 ovn_controller[88471]: 2025-10-03T11:27:39Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:b6:fb 10.100.0.10
Oct  3 11:27:39 compute-0 nova_compute[351685]: 2025-10-03 11:27:39.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:27:39 compute-0 nova_compute[351685]: 2025-10-03 11:27:39.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 11:27:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 442 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 816 KiB/s rd, 5.6 MiB/s wr, 148 op/s
Oct  3 11:27:39 compute-0 nova_compute[351685]: 2025-10-03 11:27:39.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.906 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.907 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.919 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f7465889-4aed-4799-835b-1c604f730144 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.920 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f7465889-4aed-4799-835b-1c604f730144 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}05c71207458c74aad35f0b171d3453ab31f2036fb50a6b94fe7b4d338da45aed" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.922 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.923 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.923 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.923 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.924 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.924 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.924 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:27:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:40.925 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
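The run of "Registering pollster ... executed via executor" lines shows ceilometer's polling manager handing every pollster in the [pollsters] source to one shared ThreadPoolExecutor, and the 11:27:40.906 message notes there are more pollsters than worker threads ([1] thread), so they queue. A minimal sketch of that fan-out pattern, not ceilometer's actual code (pollster names are illustrative):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for a pollster's sample collection.
        return f"{name}: polled"

    pollsters = ["network.outgoing.packets.drop", "cpu", "memory.usage"]  # illustrative

    # max_workers=1 mirrors "Processing pollsters for [pollsters] with [1] threads":
    # tasks serialize on the single worker, which is why the manager warns the
    # polling cycle may take longer than expected.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)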
Oct  3 11:27:41 compute-0 nova_compute[351685]: 2025-10-03 11:27:41.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:41.471 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1979 Content-Type: application/json Date: Fri, 03 Oct 2025 11:27:40 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-cd242126-59a5-496a-8322-8ecf41eb8558 x-openstack-request-id: req-cd242126-59a5-496a-8322-8ecf41eb8558 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 11:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:41.471 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f7465889-4aed-4799-835b-1c604f730144", "name": "tempest-ServerActionsTestJSON-server-1342038803", "status": "ACTIVE", "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "user_id": "a98b98aa35184e41a4ae6e74ba3a32e6", "metadata": {}, "hostId": "d15290f5359ce5902473e696829acd6570ec24a05780540c59fa2286", "image": {"id": "6a34ed8d-90df-4a16-a968-c59b7cafa2f1", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/6a34ed8d-90df-4a16-a968-c59b7cafa2f1"}]}, "flavor": {"id": "b93eb926-1d95-406e-aec3-a907be067084", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b93eb926-1d95-406e-aec3-a907be067084"}]}, "created": "2025-10-03T11:25:40Z", "updated": "2025-10-03T11:27:07Z", "addresses": {"tempest-ServerActionsTestJSON-2109595368-network": [{"version": 4, "addr": "10.100.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5d:8a:bc"}, {"version": 4, "addr": "192.168.122.185", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5d:8a:bc"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f7465889-4aed-4799-835b-1c604f730144"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f7465889-4aed-4799-835b-1c604f730144"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-881855979", "OS-SRV-USG:launched_at": "2025-10-03T11:25:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--564302520"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 11:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:41.471 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f7465889-4aed-4799-835b-1c604f730144 used request id req-cd242126-59a5-496a-8322-8ecf41eb8558 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct  3 11:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:41.472 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f7465889-4aed-4799-835b-1c604f730144', 'name': 'tempest-ServerActionsTestJSON-server-1342038803', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '8ac8b91115c2483686f9dc31c58b49fc', 'user_id': 'a98b98aa35184e41a4ae6e74ba3a32e6', 'hostId': 'd15290f5359ce5902473e696829acd6570ec24a05780540c59fa2286', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:41.474 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 11:27:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:41.475 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}05c71207458c74aad35f0b171d3453ab31f2036fb50a6b94fe7b4d338da45aed" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
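The REQ lines log each lookup as a curl equivalent; under the hood this is python-novaclient issuing GET /v2.1/servers/<uuid> with a Keystone token. A hedged sketch of the same call (auth URL, credentials, and domain names are placeholders; only the nova endpoint and server uuid appear in the log):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    # Placeholder credentials; the log only shows the nova-internal endpoint.
    auth = v3.Password(auth_url="https://keystone-internal.openstack.svc:5000/v3",
                       username="ceilometer", password="...",
                       project_name="service",
                       user_domain_name="Default", project_domain_name="Default")
    nova = client.Client("2.1", session=session.Session(auth=auth))

    # Equivalent of the GET /v2.1/servers/218fdfd8-... request above.
    server = nova.servers.get("218fdfd8-b66b-4ba2-90b0-5eb27dcacddf")
    print(server.name, server.status)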
Oct  3 11:27:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:41.704 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:41.705 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:41.707 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:27:41 compute-0 nova_compute[351685]: 2025-10-03 11:27:41.750 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:27:41 compute-0 nova_compute[351685]: 2025-10-03 11:27:41.750 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:27:41 compute-0 nova_compute[351685]: 2025-10-03 11:27:41.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:27:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 447 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 746 KiB/s rd, 4.4 MiB/s wr, 125 op/s
Oct  3 11:27:41 compute-0 nova_compute[351685]: 2025-10-03 11:27:41.980 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:27:41 compute-0 nova_compute[351685]: 2025-10-03 11:27:41.981 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:27:41 compute-0 nova_compute[351685]: 2025-10-03 11:27:41.981 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:27:41 compute-0 nova_compute[351685]: 2025-10-03 11:27:41.981 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
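Unlike the decorator form above, the refresh_cache-<uuid> lock is taken with lockutils' context-manager form, which is what logs the Acquiring/Acquired/Releasing triplet around the cache refresh. Roughly:

    from oslo_concurrency import lockutils

    instance_uuid = 'b43db93c-a4fe-46e9-8418-eedf4f5c135a'
    # Context-manager form behind the lock lines above; the body is where
    # nova forcefully refreshes the instance's network info cache.
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        pass  # refresh network info cache here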
Oct  3 11:27:42 compute-0 ovn_controller[88471]: 2025-10-03T11:27:42Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:8a:bc 10.100.0.7
Oct  3 11:27:42 compute-0 ovn_controller[88471]: 2025-10-03T11:27:42Z|00157|binding|INFO|Releasing lport e51b3658-d946-4608-953e-6b26039ed1fd from this chassis (sb_readonly=0)
Oct  3 11:27:42 compute-0 ovn_controller[88471]: 2025-10-03T11:27:42Z|00158|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:27:42 compute-0 ovn_controller[88471]: 2025-10-03T11:27:42Z|00159|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:27:42 compute-0 ovn_controller[88471]: 2025-10-03T11:27:42Z|00160|binding|INFO|Releasing lport bef064e6-45ec-48bd-af03-741f51d8edf0 from this chassis (sb_readonly=0)
Oct  3 11:27:42 compute-0 ovn_controller[88471]: 2025-10-03T11:27:42Z|00161|binding|INFO|Releasing lport 1eb40ea8-53b0-46a1-bf82-85a3448330ac from this chassis (sb_readonly=0)
Oct  3 11:27:42 compute-0 nova_compute[351685]: 2025-10-03 11:27:42.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:42.625 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2086 Content-Type: application/json Date: Fri, 03 Oct 2025 11:27:41 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-02936628-3e2b-418b-b93b-bbff697fa230 x-openstack-request-id: req-02936628-3e2b-418b-b93b-bbff697fa230 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 11:27:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:42.625 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf", "name": "tempest-TestServerBasicOps-server-1706208204", "status": "ACTIVE", "tenant_id": "76485b7490844f9181c1821d135ade02", "user_id": "e95897c85bf04672a829b11af6ed10c1", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "1102819022e86bb910b8740f3fae48444ef31e50236f55075294e6da", "image": {"id": "6a34ed8d-90df-4a16-a968-c59b7cafa2f1", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/6a34ed8d-90df-4a16-a968-c59b7cafa2f1"}]}, "flavor": {"id": "b93eb926-1d95-406e-aec3-a907be067084", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b93eb926-1d95-406e-aec3-a907be067084"}]}, "created": "2025-10-03T11:26:50Z", "updated": "2025-10-03T11:27:02Z", "addresses": {"tempest-TestServerBasicOps-1070704057-network": [{"version": 4, "addr": "10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f8:b6:fb"}, {"version": 4, "addr": "192.168.122.249", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f8:b6:fb"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-1309063488", "OS-SRV-USG:launched_at": "2025-10-03T11:27:02.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1040088715"}, {"name": "tempest-securitygroup--1964665437"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 11:27:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:42.625 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf used request id req-02936628-3e2b-418b-b93b-bbff697fa230 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct  3 11:27:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:42.626 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf', 'name': 'tempest-TestServerBasicOps-server-1706208204', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '76485b7490844f9181c1821d135ade02', 'user_id': 'e95897c85bf04672a829b11af6ed10c1', 'hostId': '1102819022e86bb910b8740f3fae48444ef31e50236f55075294e6da', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
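Ceilometer's compute discovery falls back to the Nova API for metadata it cannot read locally; the REQ/RESP pair above is python-novaclient doing a single servers.get(), with keystoneauth1 logging the curl-style request line. A hedged stand-alone equivalent (the auth URL and credentials are placeholders, not values from this deployment):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder
        username='ceilometer', password='secret', project_name='service',
        user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # Same call as the logged GET /v2.1/servers/<uuid>.
    server = nova.servers.get('218fdfd8-b66b-4ba2-90b0-5eb27dcacddf')
    print(server.name, server.status)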
Oct  3 11:27:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:42.629 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance fd405fd5-7402-43b4-8ab3-a52c18493a6e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 11:27:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:42.630 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/fd405fd5-7402-43b4-8ab3-a52c18493a6e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}05c71207458c74aad35f0b171d3453ab31f2036fb50a6b94fe7b4d338da45aed" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.207 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1852 Content-Type: application/json Date: Fri, 03 Oct 2025 11:27:42 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fbde3b73-d471-47cd-a82b-776ff57e2b6b x-openstack-request-id: req-fbde3b73-d471-47cd-a82b-776ff57e2b6b _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.207 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "fd405fd5-7402-43b4-8ab3-a52c18493a6e", "name": "tempest-TestNetworkBasicOps-server-447198342", "status": "ACTIVE", "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "user_id": "a62337822a774597b9068cf3aed6a92f", "metadata": {}, "hostId": "2d0ee2c51728fb976d2a2b13a2bac84a4e3968fe038d0ab8b79597d3", "image": {"id": "6a34ed8d-90df-4a16-a968-c59b7cafa2f1", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/6a34ed8d-90df-4a16-a968-c59b7cafa2f1"}]}, "flavor": {"id": "b93eb926-1d95-406e-aec3-a907be067084", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b93eb926-1d95-406e-aec3-a907be067084"}]}, "created": "2025-10-03T11:26:31Z", "updated": "2025-10-03T11:26:45Z", "addresses": {"tempest-network-smoke--1012052952": [{"version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f9:87:f3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/fd405fd5-7402-43b4-8ab3-a52c18493a6e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/fd405fd5-7402-43b4-8ab3-a52c18493a6e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-176805461", "OS-SRV-USG:launched_at": "2025-10-03T11:26:45.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1312730502"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.208 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/fd405fd5-7402-43b4-8ab3-a52c18493a6e used request id req-fbde3b73-d471-47cd-a82b-776ff57e2b6b request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.209 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fd405fd5-7402-43b4-8ab3-a52c18493a6e', 'name': 'tempest-TestNetworkBasicOps-server-447198342', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '5ea98f29bce64ae8ba81224645237ac7', 'user_id': 'a62337822a774597b9068cf3aed6a92f', 'hostId': '2d0ee2c51728fb976d2a2b13a2bac84a4e3968fe038d0ab8b79597d3', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.212 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.214 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 83fc22ce-d2e4-468a-b166-04f2743fa68d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.214 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/83fc22ce-d2e4-468a-b166-04f2743fa68d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}05c71207458c74aad35f0b171d3453ab31f2036fb50a6b94fe7b4d338da45aed" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct  3 11:27:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.471 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.473 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.491 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.570 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.571 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.579 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.580 2 INFO nova.compute.claims [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Claim successful on node compute-0.ctlplane.example.com
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.732 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.751 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.752 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
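The network_info payload logged two lines up is a JSON list of VIF dicts; when reading such a line by hand, the useful fields are the port id, MAC, tap device, and the nested subnet IPs. A sketch on a trimmed copy of that payload:

    # Trimmed copy of the network_info entry logged above.
    network_info = [{
        "id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d",
        "address": "fa:16:3e:a9:40:5c",
        "type": "ovs",
        "devname": "tapa8897fbc-9f",
        "network": {"bridge": "br-int",
                    "subnets": [{"cidr": "192.168.0.0/24",
                                 "ips": [{"address": "192.168.0.158"}]}]},
    }]
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], vif["devname"], ips)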
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.811 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Fri, 03 Oct 2025 11:27:43 GMT Keep-Alive: timeout=5, max=97 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e4556d4b-c42a-4593-9dff-c4df4f41570f x-openstack-request-id: req-e4556d4b-c42a-4593-9dff-c4df4f41570f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.812 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "83fc22ce-d2e4-468a-b166-04f2743fa68d", "name": "te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz", "status": "ACTIVE", "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "user_id": "8990c210ba8740dc9714739f27702391", "metadata": {"metering.server_group": "0f5ccd31-0ab5-424c-9868-9c1f9b1ba831"}, "hostId": "68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364", "image": {"id": "b9c8e0cc-ecf1-4fa8-92ee-328b108123cd", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b9c8e0cc-ecf1-4fa8-92ee-328b108123cd"}]}, "flavor": {"id": "b93eb926-1d95-406e-aec3-a907be067084", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b93eb926-1d95-406e-aec3-a907be067084"}]}, "created": "2025-10-03T11:26:41Z", "updated": "2025-10-03T11:26:57Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.141", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c0:36:62"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/83fc22ce-d2e4-468a-b166-04f2743fa68d"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/83fc22ce-d2e4-468a-b166-04f2743fa68d"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-03T11:26:57.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.812 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/83fc22ce-d2e4-468a-b166-04f2743fa68d used request id req-e4556d4b-c42a-4593-9dff-c4df4f41570f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.813 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'name': 'te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.813 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.813 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.814 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:27:43.813705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
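Every pollster pass repeats the same coordination gate seen above: only pollsters whose polling source names a coordination group get partitioned across agents through a tooz hash ring; with the group name None, as here, the agent keeps the full local instance set. A hedged sketch of that decision (names illustrative, not the ceilometer source):

    def keep_all_resources(group_name, hashrings):
        # Group name None (as logged above) means no partitioning.
        return group_name is None or group_name not in hashrings

    resources = ['instance-a', 'instance-b']
    if keep_all_resources(None, {}):
        polled = resources  # this agent polls everything locally
        print(polled)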
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.818 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f7465889-4aed-4799-835b-1c604f730144 / tapd444b4b5-52 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.819 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.823 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf / tapff068d12-ba inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.823 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 454 MiB data, 519 MiB used, 59 GiB / 60 GiB avail; 889 KiB/s rd, 4.2 MiB/s wr, 138 op/s
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.826 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for fd405fd5-7402-43b4-8ab3-a52c18493a6e / tapd84c98dc-84 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.826 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.830 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 nova_compute[351685]: 2025-10-03 11:27:43.830 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
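While building instance 1cd61d6b, nova's RBD image backend checks pool capacity by shelling out to ceph df, the command logged above. A rough stand-alone equivalent using plain subprocess (the oslo processutils wrapper adds the logging and error handling; the JSON keys below match current Ceph releases and are an assumption here):

    import json
    import subprocess

    # Mirrors: ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
    print('%d GiB free of %d GiB' % (avail >> 30, total >> 30))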
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.833 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 83fc22ce-d2e4-468a-b166-04f2743fa68d / tap226590bd-fa inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.833 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
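The network.*.packets.drop/error meters are delta meters: each sample is the change since the previous libvirt reading for that instance/interface pair, and a first reading has no predecessor, hence the "No delta meter predecessor" lines followed by volume 0 above. A hedged sketch of the bookkeeping (not the ceilometer source):

    _prev = {}

    def delta_sample(instance_id, dev, counter):
        key = (instance_id, dev)
        prev, _prev[key] = _prev.get(key), counter
        if prev is None:
            # "No delta meter predecessor for <uuid> / <tap...>"
            return 0
        return max(counter - prev, 0)

    print(delta_sample('f7465889', 'tapd444b4b5-52', 12))  # 0: first reading
    print(delta_sample('f7465889', 'tapd444b4b5-52', 15))  # 3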
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.834 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.834 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.835 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.835 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.835 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:27:43.834702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.835 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.836 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.836 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:27:43.836570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.853 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.854 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.869 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.869 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.882 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.882 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.903 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.904 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.904 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.917 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.917 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
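The disk.device.capacity samples are in bytes, one per block device: 1073741824 B is exactly the 1 GiB flavor root disk (b43db93c reports it twice, matching m1.small's 1 GiB root plus 1 GiB ephemeral in the instance data above), and the small ~0.5 MiB device is most plausibly the config drive, consistent with "config_drive": "True" in the server bodies, though that mapping is an inference, not something the log states:

    # Samples are raw bytes; 1073741824 B == 1 GiB root disk.
    assert 1073741824 == 1 << 30
    print(509952 / 1024, 485376 / 1024)  # 498.0 and 474.0 KiB, likely config drives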
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.918 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:27:43.918742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.952 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.read.bytes volume: 32016384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.952 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.978 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.read.bytes volume: 31013376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:43 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:43.979 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.001 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.read.bytes volume: 31861248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.001 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.049 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.075 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 30612480 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.076 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.077 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.read.latency volume: 2068716002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.077 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.read.latency volume: 143655001 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.077 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.read.latency volume: 2293896541 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.077 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.read.latency volume: 166581728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.078 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.read.latency volume: 2404334564 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.078 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.read.latency volume: 155493014 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.078 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.079 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 2539266810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.079 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 146824610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
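The read.latency volumes mirror libvirt's cumulative per-device read-time counter, which is reported in nanoseconds (an assumption based on the libvirt block-stats counter these pollsters wrap), so ~2.07e9 for f7465889 is roughly 2.1 s of accumulated read wait since the domain started:

    ns = 2068716002       # sample volume from the log above
    print(ns / 1e9, 's')  # ~2.07 seconds of cumulative read time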
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.080 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.080 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.read.requests volume: 1207 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.080 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.080 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.read.requests volume: 1132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.081 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.081 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.read.requests volume: 1166 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.081 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.081 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.082 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.082 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.082 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.083 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.084 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.085 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.085 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.086 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.086 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:27:44.077020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:27:44.080375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.087 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:27:44.085206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.087 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.087 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.087 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.088 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.089 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
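The block above is one complete pass of the cycle this agent repeats for every meter: a discovery call via local_instances, a coordination check against the (empty) hash rings, a heartbeat update, one volume line per instance disk device, and a closing INFO line, while a second worker (the lines tagged 12) persists each heartbeat timestamp. A minimal sketch of that cycle in Python, with poll_one_meter and its arguments as illustrative stand-ins rather than the real ceilometer API:

import datetime

def poll_one_meter(meter, discover, needs_coordination, get_stats):
    # "Executing discovery process for pollsters [...] and discovery method [local_instances]"
    resources = discover()
    # "Checking if we need coordination for pollster [...]"; none of the meters
    # in this log belong to a coordinated source, so this branch never fires here
    if needs_coordination(meter):
        raise NotImplementedError("no hash ring is configured in this log")
    # "Pollster heartbeat update: <meter>"
    heartbeat = datetime.datetime.now(datetime.timezone.utc)
    samples = []
    for resource in resources:
        for volume in get_stats(resource):
            # one "<uuid>/<meter> volume: <n>" line per device stat
            samples.append((resource, meter, volume))
    # "Finished polling pollster <meter> in the context of pollsters"
    return heartbeat, samples

# e.g. replaying the disk.device.usage cycle for one instance:
poll_one_meter("disk.device.usage",
               lambda: ["f7465889-4aed-4799-835b-1c604f730144"],
               lambda meter: False,
               lambda resource: [1073741824, 509952])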
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.090 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.090 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.090 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.090 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.091 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.091 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.091 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.091 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.092 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.092 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.092 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.092 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.093 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.093 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
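Each instance emits one volume line per block device for these meters: the 1073741824-byte values are 1 GiB root disks, the 509952- and 485376-byte values look like small secondary devices such as config drives, and b43db93c, with three lines, appears to carry an extra volume. The byte arithmetic as a plain-Python check:

assert 1073741824 == 1024 ** 3          # exactly 1 GiB
print(509952 / 1024, 485376 / 1024)     # 498.0 KiB and 474.0 KiB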
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.094 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.094 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.094 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.write.bytes volume: 147456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.094 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.095 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.write.bytes volume: 72761344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.095 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.095 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.write.bytes volume: 72945664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:27:44.090465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:27:44.094611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.096 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.096 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.097 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.097 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.097 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 72781824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.098 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.099 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.099 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.099 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:27:44.099224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.128 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.147 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.170 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.190 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.211 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
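All five instances report power.state volume 1, which in libvirt's virDomainState enumeration is VIR_DOMAIN_RUNNING. A hypothetical decoder for these samples; the numeric mapping is libvirt's, the helper name is illustrative:

LIBVIRT_DOMAIN_STATES = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}

def describe_power_state(volume: int) -> str:
    # the numeric sample is libvirt's virDomainState value
    return LIBVIRT_DOMAIN_STATES.get(volume, "unknown")

assert describe_power_state(1) == "running"   # every instance above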
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.212 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.212 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.212 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.write.latency volume: 797035381 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.212 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.213 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.write.latency volume: 10949393824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.213 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.213 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.write.latency volume: 96828004976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.214 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.214 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.214 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.215 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:27:44.212475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.215 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 9706688989 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.215 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.216 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.216 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.216 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.write.requests volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:27:44.216820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.217 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.217 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.write.requests volume: 292 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.217 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.218 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.218 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.218 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.218 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.219 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.219 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.219 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.220 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.220 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.221 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.221 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.221 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:27:44.220938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.222 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.222 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.222 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.222 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.223 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.223 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.223 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1342038803>, <NovaLikeServer: tempest-TestServerBasicOps-server-1706208204>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-447198342>, <NovaLikeServer: te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1342038803>, <NovaLikeServer: tempest-TestServerBasicOps-server-1706208204>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-447198342>, <NovaLikeServer: te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz>]
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-03T11:27:44.223223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
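The ERROR above is the permanent-failure path: the libvirt inspector has no data for the rate variant of the meter, the pollster raises PollsterPermanentError listing the affected servers, and the manager drops them from future network.incoming.bytes.rate polls on this source. A simplified sketch of that behaviour; only the exception name comes from the log, run_pollster and everything else is a stand-in:

class PollsterPermanentError(Exception):
    # same exception name as in the log; this definition is a stand-in
    def __init__(self, resources):
        super().__init__(resources)
        self.resources = resources

def run_pollster(name, get_samples, resources, blacklist):
    candidates = [r for r in resources if r not in blacklist]
    try:
        return list(get_samples(candidates))
    except PollsterPermanentError as err:
        # "Prevent pollster <name> from polling [...] anymore!"
        blacklist.update(err.resources)
        return []

blacklist = set()

def failing(servers):
    raise PollsterPermanentError(servers)

run_pollster("network.incoming.bytes.rate", failing, ["srv-1"], blacklist)
assert "srv-1" in blacklist   # never polled for this meter again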
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.224 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.224 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.224 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.224 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.225 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:27:44.224398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.226 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.226 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.226 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.incoming.packets volume: 117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.227 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:27:44.225998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.227 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.228 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.228 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.229 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.229 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.230 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:27:44.228698) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:27:44.230089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.230 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.231 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.231 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.231 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.232 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.232 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.232 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.232 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/cpu volume: 33470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.233 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/cpu volume: 35720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.233 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/cpu volume: 39380000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.233 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 107910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.234 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/cpu volume: 43650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
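The cpu meter is cumulative guest CPU time in nanoseconds, so the 33470000000 reported for f7465889 is roughly 33.5 s of CPU time since the instance started; turning it into a utilisation percentage takes two successive polls. A back-of-the-envelope helper, where the 300 s interval and the earlier reading are invented for illustration:

def cpu_util_percent(prev_ns, curr_ns, wall_seconds, vcpus=1):
    delta_seconds = (curr_ns - prev_ns) / 1e9
    return 100.0 * delta_seconds / (wall_seconds * vcpus)

# e.g. if f7465889 had reported 33_170_000_000 ns one 300 s interval earlier:
print(cpu_util_percent(33_170_000_000, 33_470_000_000, 300))   # 0.1 (%)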
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.234 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.234 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.234 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.235 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:27:44.232676) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:27:44.234931) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.235 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.235 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.236 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.236 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.236 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.236 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.236 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.237 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.237 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.237 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.outgoing.bytes volume: 900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.237 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.outgoing.bytes volume: 1326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:27:44.237520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.238 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.outgoing.bytes volume: 15952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.238 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.238 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes volume: 1550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.239 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.239 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.239 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.240 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.240 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.240 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.240 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.241 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.241 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.241 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
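Every .delta volume above is 0 because the underlying cumulative counter has not moved since the previous poll. A sketch of the cache-and-subtract logic, assuming the delta is clamped at zero on counter resets (e.g. after a guest reboot); the cache shape is illustrative:

    # Remember the last cumulative reading per (instance, meter) and emit
    # the difference, clamped at zero. Illustrative, not ceilometer code.
    _last = {}

    def delta(instance_id, meter, current):
        key = (instance_id, meter)
        previous = _last.get(key)
        _last[key] = current
        if previous is None:
            return 0                 # first poll: nothing to diff against
        return max(0, current - previous)

    uuid = "f7465889-4aed-4799-835b-1c604f730144"
    print(delta(uuid, "network.outgoing.bytes.delta", 900))   # 0 (first poll)
    print(delta(uuid, "network.outgoing.bytes.delta", 900))   # 0 (unchanged)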
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.242 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.242 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.242 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/memory.usage volume: 40.390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.242 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/memory.usage volume: 42.54296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.243 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/memory.usage volume: 42.85546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.243 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:27:44.239999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:27:44.242389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.243 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/memory.usage volume: 42.85546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.244 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
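The fractional memory.usage volumes (40.390625, 42.54296875, ...) are MiB values produced by dividing a KiB figure from the hypervisor by 1024, assuming the inspector reports memory in KiB the way libvirt's memory stats do:

    # Sketch: converting a libvirt-style memory stat (KiB) into the MiB
    # volumes logged above. 41360 KiB / 1024 = 40.390625 MiB.
    def memory_usage_mib(usage_kib: int) -> float:
        return usage_kib / 1024.0

    assert memory_usage_mib(41360) == 40.390625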
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.244 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.244 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.244 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.244 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.244 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.245 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.245 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1342038803>, <NovaLikeServer: tempest-TestServerBasicOps-server-1706208204>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-447198342>, <NovaLikeServer: te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1342038803>, <NovaLikeServer: tempest-TestServerBasicOps-server-1706208204>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-447198342>, <NovaLikeServer: te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz>]
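When the inspector can never supply data for a meter, the pollster raises PollsterPermanentError carrying the affected resources, and the manager stops polling them for that source, which is what the ERROR line above records. A schematic reimplementation; the class name comes from the log, the helper and cache are assumptions:

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blacklist = set()

    def record_permanent_failure(pollster, source, err):
        # Corresponds to: ERROR "Prevent pollster <name> from polling
        # <resources> on source <source> anymore!"
        for resource in err.resources:
            blacklist.add((pollster, source, resource))

    try:
        # "LibvirtInspector does not provide data for OutgoingBytesRatePollster"
        raise PollsterPermanentError(
            ["tempest-ServerActionsTestJSON-server-1342038803",
             "tempest-TestServerBasicOps-server-1706208204"])
    except PollsterPermanentError as exc:
        record_permanent_failure("network.outgoing.bytes.rate", "pollsters", exc)

    print(sorted(blacklist))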
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.245 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.245 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.245 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.245 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.245 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-03T11:27:44.244707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.245 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.incoming.bytes volume: 1731 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:27:44.245757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.246 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.incoming.bytes volume: 2096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.246 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.incoming.bytes volume: 20542 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.246 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.246 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes volume: 1652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.247 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.247 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.247 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.247 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.248 14 DEBUG ceilometer.compute.pollsters [-] f7465889-4aed-4799-835b-1c604f730144/network.outgoing.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.248 14 DEBUG ceilometer.compute.pollsters [-] 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf/network.outgoing.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.248 14 DEBUG ceilometer.compute.pollsters [-] fd405fd5-7402-43b4-8ab3-a52c18493a6e/network.outgoing.packets volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.249 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:27:44.247908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.249 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.249 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.249 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.250 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:27:44 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:27:44.251 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
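The run of "Finished processing pollster [...]" lines is the tail of one polling task walking its configured meters; pollsters whose discovery returned nothing still get a completion line. An assumed shape of that outer loop, for orientation only:

    # Assumed shape of the polling-task loop: each meter gets discovery,
    # an optional poll, and a completion log, in sequence.
    def execute_polling_task(pollsters, discover, get_samples):
        for name in pollsters:
            resources = discover(name)
            if resources:
                get_samples(name, resources)
            print(f"Finished processing pollster [{name}].")

    execute_polling_task(
        ["network.outgoing.packets.drop", "disk.device.capacity", "cpu"],
        discover=lambda name: [],          # nothing matched this cycle
        get_samples=lambda name, resources: None)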
Oct  3 11:27:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:27:44 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1613342919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.415 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
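The resource tracker sizes Ceph-backed storage by shelling out to ceph df in JSON mode, exactly as logged above. A stand-alone sketch of the same probe; it assumes the ceph CLI and the client.openstack keyring are available, and the key names follow current Ceph releases:

    import json
    import subprocess

    # Same query nova just issued, reduced to the cluster-wide totals.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])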
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.424 2 DEBUG nova.compute.provider_tree [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.451 2 DEBUG nova.scheduler.client.report [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
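"Inventory has not changed" means the freshly computed per-resource-class dict equals the copy placement already holds for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, so no update call is needed. Since the inventory is plain nested dicts, the check reduces to equality, sketched here with the exact values from the log line:

    # The inventory nova reports to placement, copied from the log line;
    # "has not changed" is in effect an equality check against the cache.
    cached = {
        "VCPU": {"total": 8, "reserved": 0, "min_unit": 1, "max_unit": 8,
                 "step_size": 1, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "min_unit": 1,
                      "max_unit": 7679, "step_size": 1, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "min_unit": 1, "max_unit": 59,
                    "step_size": 1, "allocation_ratio": 0.9},
    }
    fresh = {rc: dict(fields) for rc, fields in cached.items()}  # recomputed
    if fresh == cached:
        print("Inventory has not changed; skip the placement update")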
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.475 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.476 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.530 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.530 2 DEBUG nova.network.neutron [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.547 2 INFO nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.571 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.683 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.685 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.686 2 INFO nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Creating image(s)
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.724 2 DEBUG nova.storage.rbd_utils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.774 2 DEBUG nova.storage.rbd_utils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.828 2 DEBUG nova.storage.rbd_utils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.844 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.934 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
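Before importing the cached base image, nova probes it with qemu-img info wrapped in oslo.concurrency's prlimit helper, so a corrupt or hostile image cannot drive qemu-img past 1 GiB of address space (--as) or 30 s of CPU (--cpu). A sketch of the same probe; the path and limits are copied from the log, and it assumes oslo.concurrency and qemu-img are installed:

    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8"
    # The prlimit wrapper caps address space (bytes) and CPU time
    # (seconds) before exec'ing qemu-img, as in the logged command line.
    out = subprocess.check_output(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", path, "--force-share", "--output=json"])
    info = json.loads(out)
    print(info["format"], info["virtual-size"])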
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.936 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.936 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.937 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "b22c1ef3bc301c8ccf7962419a5752d07e6a82a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
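The acquire/release pair around fetch_func_sync is the download-once image cache: serialize on the image hash, re-check the cache under the lock, and fetch only on a miss; "held 0.000s" here means the cache was already warm. A sketch of the pattern with a plain threading.Lock (nova uses oslo.concurrency locks; the names below are illustrative):

    import os
    import threading

    _locks = {}

    # One lock per image hash; only a cache miss triggers the fetch.
    def cache_image(image_hash, target, fetch):
        lock = _locks.setdefault(image_hash, threading.Lock())
        with lock:                      # "Lock ... acquired ... waited 0.001s"
            if not os.path.exists(target):
                fetch(target)           # only the first caller downloads
        # "released ... held 0.000s" == the cache was already populated

    cache_image("b22c1ef3bc301c8ccf7962419a5752d07e6a82a8",
                "/tmp/base.img", lambda t: open(t, "w").close())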
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.980 2 DEBUG nova.storage.rbd_utils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  3 11:27:44 compute-0 nova_compute[351685]: 2025-10-03 11:27:44.988 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.174 2 DEBUG nova.policy [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a62337822a774597b9068cf3aed6a92f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ea98f29bce64ae8ba81224645237ac7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.409 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.522 2 DEBUG nova.storage.rbd_utils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] resizing rbd image 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
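For Ceph-backed ephemeral disks the sequence above is: import the flat base file into the vms pool as <uuid>_disk, then grow the RBD image to the flavor's root size (1073741824 bytes = 1 GiB). A CLI-level sketch of those two steps; nova itself performs the resize through the rbd Python binding, and the flags are the ones visible in the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/b22c1ef3bc301c8ccf7962419a5752d07e6a82a8"
    name = "1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk"
    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Step 1: import the base image into the vms pool (as logged).
    subprocess.check_call(["rbd", "import", "--pool", "vms",
                           base, name, "--image-format=2", *common])
    # Step 2: grow it to the flavor root disk; rbd takes MiB, so 1 GiB == 1024.
    subprocess.check_call(["rbd", "resize", "--pool", "vms",
                           name, "--size", "1024", *common])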
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.700 2 DEBUG nova.objects.instance [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lazy-loading 'migration_context' on Instance uuid 1cd61d6b-0ef5-458f-88f0-44a4951ea368 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.714 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.714 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Ensure instance console log exists: /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.714 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.715 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:45 compute-0 nova_compute[351685]: 2025-10-03 11:27:45.715 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:27:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 456 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.2 MiB/s wr, 167 op/s
Oct  3 11:27:46 compute-0 nova_compute[351685]: 2025-10-03 11:27:46.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:27:46
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.meta']
Oct  3 11:27:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:27:47 compute-0 nova_compute[351685]: 2025-10-03 11:27:47.147 2 DEBUG nova.network.neutron [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Successfully created port: b3d448d1-073b-4561-93d3-d26eb20839fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
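"Successfully created port" is nova asking neutron for a minimal port on the instance's network; binding and wiring happen afterwards. An equivalent call through openstacksdk ("mycloud" is a placeholder clouds.yaml entry; nova itself goes through its internal neutron client, and the network id is taken from the network info logged further down):

    import openstack

    conn = openstack.connect(cloud="mycloud")   # placeholder cloud entry
    port = conn.network.create_port(
        network_id="0cae90f5-24f0-45af-a3e3-a77dbb0a12af",
        device_id="1cd61d6b-0ef5-458f-88f0-44a4951ea368",
        device_owner="compute:nova")
    print(port.id)   # nova logs "Successfully created port: <id>"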
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:27:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 456 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 948 KiB/s rd, 2.3 MiB/s wr, 126 op/s
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.123 2 DEBUG nova.network.neutron [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Successfully updated port: b3d448d1-073b-4561-93d3-d26eb20839fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.138 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.138 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquired lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.139 2 DEBUG nova.network.neutron [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.216 2 DEBUG nova.compute.manager [req-f5a7a4df-5e75-43f7-8b66-0b81e6a48499 req-70733abd-1b0e-45cb-936a-028d1319f0fe 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received event network-changed-b3d448d1-073b-4561-93d3-d26eb20839fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.216 2 DEBUG nova.compute.manager [req-f5a7a4df-5e75-43f7-8b66-0b81e6a48499 req-70733abd-1b0e-45cb-936a-028d1319f0fe 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Refreshing instance network info cache due to event network-changed-b3d448d1-073b-4561-93d3-d26eb20839fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.216 2 DEBUG oslo_concurrency.lockutils [req-f5a7a4df-5e75-43f7-8b66-0b81e6a48499 req-70733abd-1b0e-45cb-936a-028d1319f0fe 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.257 2 DEBUG nova.network.neutron [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  3 11:27:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:48 compute-0 nova_compute[351685]: 2025-10-03 11:27:48.964 2 DEBUG nova.network.neutron [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Updating instance_info_cache with network_info: [{"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.092 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Releasing lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.092 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Instance network_info: |[{"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
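The network_info blob above is plain JSON, and everything the libvirt driver needs to wire the guest (MAC, tap device, bridge, MTU) is a couple of key lookups. A sketch over a trimmed copy of the logged entry:

    import json

    # Trimmed copy of the logged network_info entry.
    vif = json.loads("""{"id": "b3d448d1-073b-4561-93d3-d26eb20839fe",
      "address": "fa:16:3e:68:fd:58", "devname": "tapb3d448d1-07",
      "details": {"bridge_name": "br-int"},
      "network": {"meta": {"mtu": 1442}}}""")
    print(vif["address"], vif["devname"],
          vif["details"]["bridge_name"], vif["network"]["meta"]["mtu"])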
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.093 2 DEBUG oslo_concurrency.lockutils [req-f5a7a4df-5e75-43f7-8b66-0b81e6a48499 req-70733abd-1b0e-45cb-936a-028d1319f0fe 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.093 2 DEBUG nova.network.neutron [req-f5a7a4df-5e75-43f7-8b66-0b81e6a48499 req-70733abd-1b0e-45cb-936a-028d1319f0fe 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Refreshing network info cache for port b3d448d1-073b-4561-93d3-d26eb20839fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.096 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Start _get_guest_xml network_info=[{"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '6a34ed8d-90df-4a16-a968-c59b7cafa2f1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.103 2 WARNING nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.110 2 DEBUG nova.virt.libvirt.host [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.111 2 DEBUG nova.virt.libvirt.host [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.118 2 DEBUG nova.virt.libvirt.host [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.120 2 DEBUG nova.virt.libvirt.host [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
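The V1-then-V2 controller search above comes down to filesystem probes; on a unified (v2) hierarchy the enabled controllers are listed in a single file, so the check is whether "cpu" appears there. A sketch, assuming the usual /sys/fs/cgroup mount point:

    # cgroups v2 probe: a unified hierarchy lists enabled controllers in
    # one file; "cpu" present means the controller the driver wants exists.
    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        try:
            with open(f"{root}/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except OSError:
            return False    # not a cgroup v2 host

    print(has_cgroupsv2_cpu_controller())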
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.120 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.121 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:24:00Z,direct_url=<?>,disk_format='qcow2',id=6a34ed8d-90df-4a16-a968-c59b7cafa2f1,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ee75a4dc6ade43baab6ee923c9cf4cdf',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:24:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.122 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.123 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.123 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.124 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.124 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.124 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.125 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.125 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.126 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.127 2 DEBUG nova.virt.hardware [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
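[annotation] The hardware.py lines above show nova narrowing the guest CPU topology: flavor and image limits/preferences are all 0:0:0, so the maximums fall back to 65536 each, and the only (sockets, cores, threads) triple whose product equals the single vCPU is 1:1:1. An illustrative re-derivation of that enumeration (simplified; not nova's actual _get_possible_cpu_topologies, which also applies preference ordering):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Enumerate (sockets, cores, threads) triples whose product equals vcpus."""
        found = []
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"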
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.132 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:27:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:27:49 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4054061492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.670 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
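[annotation] Before touching RBD-backed disks, nova shells out to the ceph CLI to discover the monitor endpoints; the two processutils lines above record exactly that. A rough Python equivalent of the logged command (same --id and --conf as above; the exact JSON key names can vary by Ceph release):

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    mon_dump = json.loads(out)
    # Monitor addresses end up as <host name=.../> elements in the guest XML below.
    print([m.get("addr") for m in mon_dump.get("mons", [])])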
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.701 2 DEBUG nova.storage.rbd_utils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.708 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:27:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 475 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 957 KiB/s rd, 2.8 MiB/s wr, 142 op/s
Oct  3 11:27:49 compute-0 nova_compute[351685]: 2025-10-03 11:27:49.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:50 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:27:50 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3038026442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.213 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.215 2 DEBUG nova.virt.libvirt.vif [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629685759',display_name='tempest-TestNetworkBasicOps-server-629685759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629685759',id=15,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1SBN1fMS2ySXPXG88Q+Bof2aHg32/EVVpyEUfGXr3FecKVqnRmKLwEF1cg7BJuFNMKxBLDa8CU5ZPyFgNKz8S2mm0PZmn4oUL9aK9wp84MY7q1xyNFjx92ssuIV4ADhw==',key_name='tempest-TestNetworkBasicOps-1542639153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ea98f29bce64ae8ba81224645237ac7',ramdisk_id='',reservation_id='r-x0kukrr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-975938423',owner_user_name='tempest-TestNetworkBasicOps-975938423-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:27:44Z,user_data=None,user_id='a62337822a774597b9068cf3aed6a92f',uuid=1cd61d6b-0ef5-458f-88f0-44a4951ea368,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} 
virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.215 2 DEBUG nova.network.os_vif_util [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converting VIF {"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.217 2 DEBUG nova.network.os_vif_util [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:fd:58,bridge_name='br-int',has_traffic_filtering=True,id=b3d448d1-073b-4561-93d3-d26eb20839fe,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3d448d1-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.218 2 DEBUG nova.objects.instance [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1cd61d6b-0ef5-458f-88f0-44a4951ea368 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.240 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <uuid>1cd61d6b-0ef5-458f-88f0-44a4951ea368</uuid>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <name>instance-0000000f</name>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <nova:name>tempest-TestNetworkBasicOps-server-629685759</nova:name>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:27:49</nova:creationTime>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <nova:user uuid="a62337822a774597b9068cf3aed6a92f">tempest-TestNetworkBasicOps-975938423-project-member</nova:user>
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <nova:project uuid="5ea98f29bce64ae8ba81224645237ac7">tempest-TestNetworkBasicOps-975938423</nova:project>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="6a34ed8d-90df-4a16-a968-c59b7cafa2f1"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <nova:port uuid="b3d448d1-073b-4561-93d3-d26eb20839fe">
Oct  3 11:27:50 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <system>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <entry name="serial">1cd61d6b-0ef5-458f-88f0-44a4951ea368</entry>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <entry name="uuid">1cd61d6b-0ef5-458f-88f0-44a4951ea368</entry>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </system>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <os>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  </os>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <features>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  </features>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk">
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      </source>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk.config">
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      </source>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:27:50 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:68:fd:58"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <target dev="tapb3d448d1-07"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368/console.log" append="off"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <video>
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </video>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:27:50 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:27:50 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:27:50 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:27:50 compute-0 nova_compute[351685]: </domain>
Oct  3 11:27:50 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
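[annotation] Two details of the domain XML above are easy to misread: <memory>131072</memory> is in KiB (libvirt's default unit), i.e. the 128 MiB of the m1.nano flavor, and both the vda disk and the sda config-drive cdrom are RBD sources pointing at the monitor discovered earlier. A self-contained stdlib sketch that pulls those fields back out of a trimmed copy of the logged XML:

    import xml.etree.ElementTree as ET

    # Trimmed copy of the <domain> logged by _get_guest_xml above.
    domain_xml = """
    <domain type="kvm">
      <memory>131072</memory>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>
    """
    root = ET.fromstring(domain_xml)
    print(int(root.findtext("memory")) // 1024, "MiB")  # libvirt memory is KiB -> 128 MiB
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        host = src.find("host")
        print(tgt.get("dev"), src.get("name"), host.get("name") + ":" + host.get("port"))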
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.241 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Preparing to wait for external event network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.241 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.241 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.242 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
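[annotation] The three lockutils lines record nova registering a waiter for network-vif-plugged before it plugs the VIF and defines the domain: Neutron sends that event once OVN has actually wired the port, and spawn blocks on it so the guest does not boot on a dead NIC. A minimal sketch of the same prepare-then-wait pattern (illustrative only, not nova's implementation):

    import threading

    class InstanceEvents:
        """Register interest in an external event, then block until it arrives."""

        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, name):
            with self._lock:              # mirrors the Acquiring/released lines above
                return self._events.setdefault(name, threading.Event())

        def pop(self, name):
            with self._lock:
                evt = self._events.pop(name, None)
            if evt:
                evt.set()                 # wakes the spawning thread

    events = InstanceEvents()
    waiter = events.prepare("network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe")
    # ... plug the VIF, define and launch the domain; the event arrives asynchronously ...
    events.pop("network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe")
    waiter.wait(timeout=300)              # nova's vif_plugging_timeout defaults to 300s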
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.243 2 DEBUG nova.virt.libvirt.vif [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629685759',display_name='tempest-TestNetworkBasicOps-server-629685759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629685759',id=15,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1SBN1fMS2ySXPXG88Q+Bof2aHg32/EVVpyEUfGXr3FecKVqnRmKLwEF1cg7BJuFNMKxBLDa8CU5ZPyFgNKz8S2mm0PZmn4oUL9aK9wp84MY7q1xyNFjx92ssuIV4ADhw==',key_name='tempest-TestNetworkBasicOps-1542639153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ea98f29bce64ae8ba81224645237ac7',ramdisk_id='',reservation_id='r-x0kukrr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-975938423',owner_user_name='tempest-TestNetworkBasicOps-975938423-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:27:44Z,user_data=None,user_id='a62337822a774597b9068cf3aed6a92f',uuid=1cd61d6b-0ef5-458f-88f0-44a4951ea368,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.243 2 DEBUG nova.network.os_vif_util [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converting VIF {"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.244 2 DEBUG nova.network.os_vif_util [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:fd:58,bridge_name='br-int',has_traffic_filtering=True,id=b3d448d1-073b-4561-93d3-d26eb20839fe,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3d448d1-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.245 2 DEBUG os_vif [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:fd:58,bridge_name='br-int',has_traffic_filtering=True,id=b3d448d1-073b-4561-93d3-d26eb20839fe,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3d448d1-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.246 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.250 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3d448d1-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.251 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3d448d1-07, col_values=(('external_ids', {'iface-id': 'b3d448d1-073b-4561-93d3-d26eb20839fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:fd:58', 'vm-uuid': '1cd61d6b-0ef5-458f-88f0-44a4951ea368'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:50 compute-0 NetworkManager[45015]: <info>  [1759490870.2555] manager: (tapb3d448d1-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.264 2 INFO os_vif [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:fd:58,bridge_name='br-int',has_traffic_filtering=True,id=b3d448d1-073b-4561-93d3-d26eb20839fe,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3d448d1-07')#033[00m
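[annotation] The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the os-vif ovs plugin wiring the tap device into br-int and stamping the Interface row with the Neutron port id; ovn-controller matches on that iface-id to claim the port a moment later. os-vif talks to ovsdb-server directly over the IDL, but the equivalent effect expressed through ovs-vsctl looks like this (a sketch, not what os-vif executes):

    import subprocess

    port, bridge = "tapb3d448d1-07", "br-int"
    external_ids = {
        "iface-id": "b3d448d1-073b-4561-93d3-d26eb20839fe",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:68:fd:58",
        "vm-uuid": "1cd61d6b-0ef5-458f-88f0-44a4951ea368",
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
           "set", "Interface", port]
    cmd += ["external_ids:%s=%s" % (k, v) for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)  # idempotent, like may_exist=True in the log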
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.322 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.322 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.322 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] No VIF found with MAC fa:16:3e:68:fd:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.323 2 INFO nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Using config drive#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.353 2 DEBUG nova.storage.rbd_utils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.727 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.992 2 INFO nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Creating config drive at /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368/disk.config#033[00m
Oct  3 11:27:50 compute-0 nova_compute[351685]: 2025-10-03 11:27:50.998 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpinm5_mhj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.137 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpinm5_mhj" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.179 2 DEBUG nova.storage.rbd_utils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] rbd image 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.185 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368/disk.config 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.462 2 DEBUG nova.network.neutron [req-f5a7a4df-5e75-43f7-8b66-0b81e6a48499 req-70733abd-1b0e-45cb-936a-028d1319f0fe 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Updated VIF entry in instance network info cache for port b3d448d1-073b-4561-93d3-d26eb20839fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.463 2 DEBUG nova.network.neutron [req-f5a7a4df-5e75-43f7-8b66-0b81e6a48499 req-70733abd-1b0e-45cb-936a-028d1319f0fe 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Updating instance_info_cache with network_info: [{"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.507 2 DEBUG oslo_concurrency.lockutils [req-f5a7a4df-5e75-43f7-8b66-0b81e6a48499 req-70733abd-1b0e-45cb-936a-028d1319f0fe 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.623 2 DEBUG oslo_concurrency.processutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368/disk.config 1cd61d6b-0ef5-458f-88f0-44a4951ea368_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.624 2 INFO nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Deleting local config drive /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368/disk.config because it was imported into RBD.#033[00m
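[annotation] The lines since "Using config drive" show the config-drive round trip: nova renders the metadata into a temp dir, packs it into an ISO 9660 image labelled config-2 (the volume label cloud-init searches for), imports the file into the vms pool as <uuid>_disk.config (the cdrom source in the XML above), and deletes the local copy once RBD is authoritative. A compressed sketch of those two subprocess calls, using the same paths the log shows:

    import os
    import subprocess

    uuid = "1cd61d6b-0ef5-458f-88f0-44a4951ea368"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    staging = "/tmp/tmpinm5_mhj"  # temp dir nova filled with the metadata files

    # 1. Build the ISO; cloud-init finds it by the config-2 volume label.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
                    "-quiet", "-J", "-r", "-V", "config-2", staging], check=True)

    # 2. Import into Ceph, then drop the local file.
    subprocess.run(["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.unlink(iso)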
Oct  3 11:27:51 compute-0 kernel: tapb3d448d1-07: entered promiscuous mode
Oct  3 11:27:51 compute-0 NetworkManager[45015]: <info>  [1759490871.6791] manager: (tapb3d448d1-07): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:51 compute-0 ovn_controller[88471]: 2025-10-03T11:27:51Z|00162|binding|INFO|Claiming lport b3d448d1-073b-4561-93d3-d26eb20839fe for this chassis.
Oct  3 11:27:51 compute-0 ovn_controller[88471]: 2025-10-03T11:27:51Z|00163|binding|INFO|b3d448d1-073b-4561-93d3-d26eb20839fe: Claiming fa:16:3e:68:fd:58 10.100.0.7
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.693 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:fd:58 10.100.0.7'], port_security=['fa:16:3e:68:fd:58 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1cd61d6b-0ef5-458f-88f0-44a4951ea368', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ea98f29bce64ae8ba81224645237ac7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e5fb1caf-ff45-4eab-be84-810c07ecb149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eb2c44-9631-42b8-a4d9-ab8785ccd098, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=b3d448d1-073b-4561-93d3-d26eb20839fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.695 284328 INFO neutron.agent.ovn.metadata.agent [-] Port b3d448d1-073b-4561-93d3-d26eb20839fe in datapath 0cae90f5-24f0-45af-a3e3-a77dbb0a12af bound to our chassis#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.699 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cae90f5-24f0-45af-a3e3-a77dbb0a12af#033[00m
Oct  3 11:27:51 compute-0 ovn_controller[88471]: 2025-10-03T11:27:51Z|00164|binding|INFO|Setting lport b3d448d1-073b-4561-93d3-d26eb20839fe ovn-installed in OVS
Oct  3 11:27:51 compute-0 ovn_controller[88471]: 2025-10-03T11:27:51Z|00165|binding|INFO|Setting lport b3d448d1-073b-4561-93d3-d26eb20839fe up in Southbound
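[annotation] The ovn_controller lines are the OVN side of the handshake: once the local chassis sees the matching iface-id on br-int it claims logical port b3d448d1-..., marks the OVS interface ovn-installed, and flips the Southbound Port_Binding row up, which is what ultimately lets Neutron emit network-vif-plugged. One way to observe that state from the chassis (ovn-sbctl sketch; assumes the local Southbound connection is configured and flag spellings may differ by OVN version):

    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up", "find",
         "Port_Binding", "logical_port=b3d448d1-073b-4561-93d3-d26eb20839fe"],
        capture_output=True, check=True, text=True).stdout
    print(out)  # expect up : [true] and chassis set to compute-0 after the claim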
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.720 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[fc5d69ab-08f1-475b-aa64-a5ad9758e0cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:51 compute-0 systemd-udevd[533978]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:27:51 compute-0 systemd-machined[137653]: New machine qemu-16-instance-0000000f.
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.755 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[c3309a14-16f9-46ed-a8ae-e6f049558985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.759 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[1893fda7-132b-4c71-b809-5b0c1486bf38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:51 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
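[annotation] systemd-machined registers the qemu process as machine qemu-16-instance-0000000f, so from here the guest is visible both to libvirt and to systemd tooling. Both checks below are standard commands (output shape varies by version; virsh may need to run inside the libvirt container on this kind of deployment):

    import subprocess

    subprocess.run(["machinectl", "status", "qemu-16-instance-0000000f"], check=False)
    subprocess.run(["virsh", "domstate", "instance-0000000f"], check=False)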
Oct  3 11:27:51 compute-0 NetworkManager[45015]: <info>  [1759490871.7642] device (tapb3d448d1-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:27:51 compute-0 NetworkManager[45015]: <info>  [1759490871.7685] device (tapb3d448d1-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
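[annotation] NetworkManager walks the tap through unmanaged -> unavailable -> disconnected with managed-type 'external': it records the device but leaves it alone, since os-vif/OVS own it. To confirm NM is only observing (interface name taken from the log):

    import subprocess

    # GENERAL.STATE should show the device tracked but not activated by NM.
    subprocess.run(["nmcli", "-f", "GENERAL.STATE", "device", "show",
                    "tapb3d448d1-07"], check=False)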
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.788 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[a7dbee9d-538e-475d-a539-2b2ee3a77583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.803 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[21819b80-829b-4bef-a7e3-ca9905bc4a5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cae90f5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:a3:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 998611, 'reachable_time': 40400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 533984, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.824 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[2377f8b2-2b37-4e4d-b1b9-a408aa43bfe8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0cae90f5-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 998626, 'tstamp': 998626}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 533986, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0cae90f5-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 998631, 'tstamp': 998631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 533986, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.826 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cae90f5-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 776 KiB/s rd, 2.5 MiB/s wr, 112 op/s
Oct  3 11:27:51 compute-0 nova_compute[351685]: 2025-10-03 11:27:51.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.829 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cae90f5-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.830 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.830 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cae90f5-20, col_values=(('external_ids', {'iface-id': 'e51b3658-d946-4608-953e-6b26039ed1fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:27:51 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:27:51.830 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
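[annotation] The privsep replies above are the metadata agent provisioning network 0cae90f5-...: inside the ovnmeta-0cae90f5-... namespace a tap0cae90f5-21 veth carries both 169.254.169.254/32 (the metadata service address) and 10.100.0.2/28 on the tenant subnet, exactly the two RTM_NEWADDR records logged. A quick host-side verification, with the namespace name taken from the log:

    import subprocess

    ns = "ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af"
    # Expect tap0cae90f5-21 with 169.254.169.254/32 and 10.100.0.2/28 attached.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"], check=True)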
Oct  3 11:27:52 compute-0 nova_compute[351685]: 2025-10-03 11:27:52.237 2 DEBUG nova.compute.manager [req-8f39c0da-1e4d-414c-87c0-86568d3d48e1 req-0979e5d0-9a50-43cc-bbad-beecad8c7b2f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received event network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:27:52 compute-0 nova_compute[351685]: 2025-10-03 11:27:52.237 2 DEBUG oslo_concurrency.lockutils [req-8f39c0da-1e4d-414c-87c0-86568d3d48e1 req-0979e5d0-9a50-43cc-bbad-beecad8c7b2f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:52 compute-0 nova_compute[351685]: 2025-10-03 11:27:52.238 2 DEBUG oslo_concurrency.lockutils [req-8f39c0da-1e4d-414c-87c0-86568d3d48e1 req-0979e5d0-9a50-43cc-bbad-beecad8c7b2f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:52 compute-0 nova_compute[351685]: 2025-10-03 11:27:52.238 2 DEBUG oslo_concurrency.lockutils [req-8f39c0da-1e4d-414c-87c0-86568d3d48e1 req-0979e5d0-9a50-43cc-bbad-beecad8c7b2f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:27:52 compute-0 nova_compute[351685]: 2025-10-03 11:27:52.238 2 DEBUG nova.compute.manager [req-8f39c0da-1e4d-414c-87c0-86568d3d48e1 req-0979e5d0-9a50-43cc-bbad-beecad8c7b2f 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Processing event network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
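[Editor's note] The Acquiring/acquired/released triplet above is oslo.concurrency's standard instrumentation around a named internal lock; nova serializes event handling per instance under a "<uuid>-events" lock. A minimal sketch of the same pattern (the body is illustrative, not nova's code):

    from oslo_concurrency import lockutils

    # Same lock name as in the log lines above; lockutils.lock() emits the
    # acquire/release DEBUG lines when oslo debug logging is enabled.
    with lockutils.lock("1cd61d6b-0ef5-458f-88f0-44a4951ea368-events"):
        pass  # pop_instance_event() work runs under the lock in nova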
Oct  3 11:27:52 compute-0 nova_compute[351685]: 2025-10-03 11:27:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:27:52 compute-0 podman[534036]: 2025-10-03 11:27:52.87551059 +0000 UTC m=+0.116554406 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:27:52 compute-0 podman[534045]: 2025-10-03 11:27:52.911566806 +0000 UTC m=+0.136530737 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3)
Oct  3 11:27:52 compute-0 podman[534034]: 2025-10-03 11:27:52.917780335 +0000 UTC m=+0.168188281 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:27:52 compute-0 podman[534035]: 2025-10-03 11:27:52.935653198 +0000 UTC m=+0.176559899 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git)
Oct  3 11:27:52 compute-0 podman[534037]: 2025-10-03 11:27:52.939605784 +0000 UTC m=+0.182739238 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  3 11:27:52 compute-0 podman[534042]: 2025-10-03 11:27:52.993241334 +0000 UTC m=+0.219318951 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:27:53 compute-0 podman[534038]: 2025-10-03 11:27:53.003637186 +0000 UTC m=+0.237759321 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Oct  3 11:27:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.390 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490873.3895886, 1cd61d6b-0ef5-458f-88f0-44a4951ea368 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.391 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] VM Started (Lifecycle Event)
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.394 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.399 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.406 2 INFO nova.virt.libvirt.driver [-] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Instance spawned successfully.
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.406 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.447 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.454 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.488 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.489 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490873.3897555, 1cd61d6b-0ef5-458f-88f0-44a4951ea368 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.489 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] VM Paused (Lifecycle Event)
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.498 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.499 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.499 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.500 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.500 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.501 2 DEBUG nova.virt.libvirt.driver [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.509 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.514 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759490873.3975108, 1cd61d6b-0ef5-458f-88f0-44a4951ea368 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.514 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] VM Resumed (Lifecycle Event)
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.534 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.538 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.563 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.574 2 INFO nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Took 8.89 seconds to spawn the instance on the hypervisor.
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.575 2 DEBUG nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.637 2 INFO nova.compute.manager [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Took 10.09 seconds to build instance.
Oct  3 11:27:53 compute-0 nova_compute[351685]: 2025-10-03 11:27:53.695 2 DEBUG oslo_concurrency.lockutils [None req-5310eaa0-3954-49d7-9065-b29b9f606824 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:27:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3622: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 631 KiB/s rd, 1.9 MiB/s wr, 95 op/s
Oct  3 11:27:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:27:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1869293694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:27:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:27:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1869293694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
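[Editor's note] The handle_command/dispatch pairs above are the monitor-side view of JSON mon command dispatch; any librados client can issue the same calls. A hedged sketch using the rados Python binding, with conffile/name mirroring the client.openstack identity recorded in the audit lines:

    import json
    import rados

    # Sketch: issue the same "df" mon command the audit log records above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")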
Oct  3 11:27:54 compute-0 nova_compute[351685]: 2025-10-03 11:27:54.340 2 DEBUG nova.compute.manager [req-f46e8d2a-c3c9-46cd-9745-840803dbb37b req-21311c33-82b2-4c6c-9e27-13a4b3ab7098 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received event network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:27:54 compute-0 nova_compute[351685]: 2025-10-03 11:27:54.341 2 DEBUG oslo_concurrency.lockutils [req-f46e8d2a-c3c9-46cd-9745-840803dbb37b req-21311c33-82b2-4c6c-9e27-13a4b3ab7098 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:54 compute-0 nova_compute[351685]: 2025-10-03 11:27:54.342 2 DEBUG oslo_concurrency.lockutils [req-f46e8d2a-c3c9-46cd-9745-840803dbb37b req-21311c33-82b2-4c6c-9e27-13a4b3ab7098 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:54 compute-0 nova_compute[351685]: 2025-10-03 11:27:54.343 2 DEBUG oslo_concurrency.lockutils [req-f46e8d2a-c3c9-46cd-9745-840803dbb37b req-21311c33-82b2-4c6c-9e27-13a4b3ab7098 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:27:54 compute-0 nova_compute[351685]: 2025-10-03 11:27:54.343 2 DEBUG nova.compute.manager [req-f46e8d2a-c3c9-46cd-9745-840803dbb37b req-21311c33-82b2-4c6c-9e27-13a4b3ab7098 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] No waiting events found dispatching network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  3 11:27:54 compute-0 nova_compute[351685]: 2025-10-03 11:27:54.344 2 WARNING nova.compute.manager [req-f46e8d2a-c3c9-46cd-9745-840803dbb37b req-21311c33-82b2-4c6c-9e27-13a4b3ab7098 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received unexpected event network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe for instance with vm_state active and task_state None.
Oct  3 11:27:54 compute-0 nova_compute[351685]: 2025-10-03 11:27:54.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:55 compute-0 nova_compute[351685]: 2025-10-03 11:27:55.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:27:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3623: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 552 KiB/s rd, 1.9 MiB/s wr, 81 op/s
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003937536628247705 of space, bias 1.0, pg target 1.1812609884743115 quantized to 32 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.37435919643729243 quantized to 32 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:27:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
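[Editor's note] The autoscaler numbers above are internally consistent: per pool, the pg target is roughly the pool's share of raw space times its bias times a cluster-wide PG budget (about 300 here, e.g. 3 OSDs at mon_target_pg_per_osd=100), then quantized against the pool's pg_num bounds. A rough check under that assumption; the real Ceph formula carries more terms, so treat this as an approximation:

    # Pool 'vms', values taken from the log lines above.
    usage, bias = 0.003937536628247705, 1.0
    pg_target = usage * bias * 300
    print(pg_target)  # -> 1.1812609884743115, matching the logged pg target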
Oct  3 11:27:57 compute-0 nova_compute[351685]: 2025-10-03 11:27:57.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:27:57 compute-0 nova_compute[351685]: 2025-10-03 11:27:57.784 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:57 compute-0 nova_compute[351685]: 2025-10-03 11:27:57.785 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:57 compute-0 nova_compute[351685]: 2025-10-03 11:27:57.785 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:27:57 compute-0 nova_compute[351685]: 2025-10-03 11:27:57.785 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:27:57 compute-0 nova_compute[351685]: 2025-10-03 11:27:57.786 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:27:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3624: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Oct  3 11:27:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:27:58 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080690874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.251 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
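[Editor's note] Nova's resource audit shells out to the ceph CLI, as the Running cmd/CMD returned pair above shows (a 0.466s round trip). A minimal sketch of the same call and of reading the cluster totals from its JSON output:

    import json
    import subprocess

    # Same command line as logged above; keys follow "ceph df" JSON output.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    free_gib = stats["total_avail_bytes"] / (1 << 30)  # ~59 GiB per the pgmap lines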
Oct  3 11:27:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.716 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.717 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.722 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.723 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.728 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.728 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.734 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.734 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.740 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.741 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.741 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.747 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:58 compute-0 nova_compute[351685]: 2025-10-03 11:27:58.747 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.334 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.337 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2690MB free_disk=59.7520751953125GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.338 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.338 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.498 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.498 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance f7465889-4aed-4799-835b-1c604f730144 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.499 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance fd405fd5-7402-43b4-8ab3-a52c18493a6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.499 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.500 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.500 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 1cd61d6b-0ef5-458f-88f0-44a4951ea368 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.501 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.501 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1664MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.656 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:27:59 compute-0 podman[157165]: time="2025-10-03T11:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:27:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 51199 "" "Go-http-client/1.1"
Oct  3 11:27:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10993 "" "Go-http-client/1.1"
Oct  3 11:27:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3625: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 704 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct  3 11:27:59 compute-0 nova_compute[351685]: 2025-10-03 11:27:59.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:28:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1382254322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.155 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.163 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.206 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
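[Editor's note] The inventory data above also fixes the schedulable capacity: placement treats (total - reserved) * allocation_ratio as the limit per resource class. Worked out for the values logged:

    # Values from the inventory data above.
    vcpu = (8 - 0) * 4.0          # 32 schedulable VCPUs
    ram_mb = (7679 - 512) * 1.0   # 7167 MB schedulable memory
    disk_gb = (59 - 1) * 0.9      # 52.2 GB schedulable disk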
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.294 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.294 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.318 2 DEBUG nova.compute.manager [req-7b6fdba5-599f-4461-b01f-b3f080e53708 req-1d1f1e91-b2e6-43f8-8132-2b02aa55856c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received event network-changed-b3d448d1-073b-4561-93d3-d26eb20839fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.319 2 DEBUG nova.compute.manager [req-7b6fdba5-599f-4461-b01f-b3f080e53708 req-1d1f1e91-b2e6-43f8-8132-2b02aa55856c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Refreshing instance network info cache due to event network-changed-b3d448d1-073b-4561-93d3-d26eb20839fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.319 2 DEBUG oslo_concurrency.lockutils [req-7b6fdba5-599f-4461-b01f-b3f080e53708 req-1d1f1e91-b2e6-43f8-8132-2b02aa55856c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.319 2 DEBUG oslo_concurrency.lockutils [req-7b6fdba5-599f-4461-b01f-b3f080e53708 req-1d1f1e91-b2e6-43f8-8132-2b02aa55856c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:28:00 compute-0 nova_compute[351685]: 2025-10-03 11:28:00.320 2 DEBUG nova.network.neutron [req-7b6fdba5-599f-4461-b01f-b3f080e53708 req-1d1f1e91-b2e6-43f8-8132-2b02aa55856c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Refreshing network info cache for port b3d448d1-073b-4561-93d3-d26eb20839fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  3 11:28:01 compute-0 openstack_network_exporter[367524]: ERROR   11:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:28:01 compute-0 openstack_network_exporter[367524]: ERROR   11:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:28:01 compute-0 openstack_network_exporter[367524]: ERROR   11:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:28:01 compute-0 openstack_network_exporter[367524]: ERROR   11:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:28:01 compute-0 openstack_network_exporter[367524]: ERROR   11:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:28:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3626: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 88 op/s
Oct  3 11:28:02 compute-0 nova_compute[351685]: 2025-10-03 11:28:02.215 2 DEBUG nova.network.neutron [req-7b6fdba5-599f-4461-b01f-b3f080e53708 req-1d1f1e91-b2e6-43f8-8132-2b02aa55856c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Updated VIF entry in instance network info cache for port b3d448d1-073b-4561-93d3-d26eb20839fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  3 11:28:02 compute-0 nova_compute[351685]: 2025-10-03 11:28:02.216 2 DEBUG nova.network.neutron [req-7b6fdba5-599f-4461-b01f-b3f080e53708 req-1d1f1e91-b2e6-43f8-8132-2b02aa55856c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Updating instance_info_cache with network_info: [{"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:28:02 compute-0 nova_compute[351685]: 2025-10-03 11:28:02.275 2 DEBUG oslo_concurrency.lockutils [req-7b6fdba5-599f-4461-b01f-b3f080e53708 req-1d1f1e91-b2e6-43f8-8132-2b02aa55856c 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-1cd61d6b-0ef5-458f-88f0-44a4951ea368" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:28:03 compute-0 nova_compute[351685]: 2025-10-03 11:28:03.295 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:28:03 compute-0 nova_compute[351685]: 2025-10-03 11:28:03.295 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:28:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:03 compute-0 nova_compute[351685]: 2025-10-03 11:28:03.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:28:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3627: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Oct  3 11:28:04 compute-0 nova_compute[351685]: 2025-10-03 11:28:04.733 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:28:04 compute-0 nova_compute[351685]: 2025-10-03 11:28:04.734 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:28:04 compute-0 podman[534209]: 2025-10-03 11:28:04.833758835 +0000 UTC m=+0.093721615 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:28:04 compute-0 podman[534210]: 2025-10-03 11:28:04.84735109 +0000 UTC m=+0.102954551 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, release-0.7.12=)
Oct  3 11:28:04 compute-0 podman[534211]: 2025-10-03 11:28:04.864783788 +0000 UTC m=+0.117393323 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:28:04 compute-0 nova_compute[351685]: 2025-10-03 11:28:04.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:05 compute-0 nova_compute[351685]: 2025-10-03 11:28:05.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:05 compute-0 nova_compute[351685]: 2025-10-03 11:28:05.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:28:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3628: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 72 op/s
Oct  3 11:28:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3629: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.2 KiB/s wr, 58 op/s
Oct  3 11:28:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:09.523 284434 DEBUG eventlet.wsgi.server [-] (284434) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:09.526 284434 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: Accept: */*#015
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: Connection: close#015
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: Content-Type: text/plain#015
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: Host: 169.254.169.254#015
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: User-Agent: curl/7.84.0#015
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: X-Forwarded-For: 10.100.0.10#015
Oct  3 11:28:09 compute-0 ovn_metadata_agent[284320]: X-Ovn-Network-Id: cbf38614-3700-41ae-a5fa-3eef08992fc4 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  3 11:28:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3630: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.5 KiB/s wr, 58 op/s
Oct  3 11:28:09 compute-0 nova_compute[351685]: 2025-10-03 11:28:09.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:10 compute-0 nova_compute[351685]: 2025-10-03 11:28:10.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:10.617 284434 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:10.617 284434 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.0918372#033[00m
Oct  3 11:28:10 compute-0 haproxy-metadata-proxy-cbf38614-3700-41ae-a5fa-3eef08992fc4[532248]: 10.100.0.10:58238 [03/Oct/2025:11:28:09.522] listener listener/metadata 0/0/0/1095/1095 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:10.752 284434 DEBUG eventlet.wsgi.server [-] (284434) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:10.754 284434 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: Accept: */*#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: Connection: close#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: Content-Length: 100#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: Content-Type: application/x-www-form-urlencoded#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: Host: 169.254.169.254#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: User-Agent: curl/7.84.0#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: X-Forwarded-For: 10.100.0.10#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: X-Ovn-Network-Id: cbf38614-3700-41ae-a5fa-3eef08992fc4#015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: #015
Oct  3 11:28:10 compute-0 ovn_metadata_agent[284320]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  3 11:28:11 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:11.028 284434 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  3 11:28:11 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:11.029 284434 INFO eventlet.wsgi.server [-] 10.100.0.10,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2747366#033[00m
Oct  3 11:28:11 compute-0 haproxy-metadata-proxy-cbf38614-3700-41ae-a5fa-3eef08992fc4[532248]: 10.100.0.10:58254 [03/Oct/2025:11:28:10.751] listener listener/metadata 0/0/0/277/277 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct  3 11:28:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3631: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.5 KiB/s wr, 41 op/s
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.805 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.807 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.810 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.811 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.812 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.815 2 INFO nova.compute.manager [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Terminating instance#033[00m
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.817 2 DEBUG nova.compute.manager [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 11:28:12 compute-0 kernel: tapd444b4b5-52 (unregistering): left promiscuous mode
Oct  3 11:28:12 compute-0 NetworkManager[45015]: <info>  [1759490892.9148] device (tapd444b4b5-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:12 compute-0 ovn_controller[88471]: 2025-10-03T11:28:12Z|00166|binding|INFO|Releasing lport d444b4b5-5243-48c2-80dd-3074b56d4277 from this chassis (sb_readonly=0)
Oct  3 11:28:12 compute-0 ovn_controller[88471]: 2025-10-03T11:28:12Z|00167|binding|INFO|Setting lport d444b4b5-5243-48c2-80dd-3074b56d4277 down in Southbound
Oct  3 11:28:12 compute-0 ovn_controller[88471]: 2025-10-03T11:28:12Z|00168|binding|INFO|Removing iface tapd444b4b5-52 ovn-installed in OVS
Oct  3 11:28:12 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:12.951 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:8a:bc 10.100.0.7'], port_security=['fa:16:3e:5d:8a:bc 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f7465889-4aed-4799-835b-1c604f730144', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8ac8b91115c2483686f9dc31c58b49fc', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c15d67bc-31ac-4909-a6df-d8296b99758d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2b7eff5-cbee-4a08-96a7-16ae54234c96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d444b4b5-5243-48c2-80dd-3074b56d4277) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:28:12 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:12.952 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d444b4b5-5243-48c2-80dd-3074b56d4277 in datapath 527efcd5-9efe-47de-97ae-4c1c2ca2b999 unbound from our chassis#033[00m
Oct  3 11:28:12 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:12.954 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527efcd5-9efe-47de-97ae-4c1c2ca2b999, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:28:12 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:12.956 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7c51a11e-2a42-4206-ad84-d264b5cdb945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:12 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:12.957 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 namespace which is not needed anymore#033[00m
Oct  3 11:28:12 compute-0 nova_compute[351685]: 2025-10-03 11:28:12.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:12 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  3 11:28:12 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000a.scope: Consumed 42.416s CPU time.
Oct  3 11:28:12 compute-0 systemd-machined[137653]: Machine qemu-15-instance-0000000a terminated.
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.059 2 INFO nova.virt.libvirt.driver [-] [instance: f7465889-4aed-4799-835b-1c604f730144] Instance destroyed successfully.#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.060 2 DEBUG nova.objects.instance [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lazy-loading 'resources' on Instance uuid f7465889-4aed-4799-835b-1c604f730144 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.081 2 DEBUG nova.virt.libvirt.vif [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:25:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1342038803',display_name='tempest-ServerActionsTestJSON-server-1342038803',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1342038803',id=10,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAI0U91Y0bWTV/Ghqu9u2/YJCcMsSuIqJJqou9MoVKU4IwCcE850tanFrdmeQ7ELHWboekr6vOg1XXLEEvVERh2sZ+QqKxSzY5UgETm25CB7b1mAR5wQF+48QlfVG8JFgw==',key_name='tempest-keypair-881855979',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:25:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8ac8b91115c2483686f9dc31c58b49fc',ramdisk_id='',reservation_id='r-5st1uthq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-136578470',owner_user_name='tempest-ServerActionsTestJSON-136578470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:27:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a98b98aa35184e41a4ae6e74ba3a32e6',uuid=f7465889-4aed-4799-835b-1c604f730144,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.082 2 DEBUG nova.network.os_vif_util [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converting VIF {"id": "d444b4b5-5243-48c2-80dd-3074b56d4277", "address": "fa:16:3e:5d:8a:bc", "network": {"id": "527efcd5-9efe-47de-97ae-4c1c2ca2b999", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2109595368-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8ac8b91115c2483686f9dc31c58b49fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd444b4b5-52", "ovs_interfaceid": "d444b4b5-5243-48c2-80dd-3074b56d4277", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.083 2 DEBUG nova.network.os_vif_util [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.084 2 DEBUG os_vif [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd444b4b5-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.097 2 INFO os_vif [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:bc,bridge_name='br-int',has_traffic_filtering=True,id=d444b4b5-5243-48c2-80dd-3074b56d4277,network=Network(527efcd5-9efe-47de-97ae-4c1c2ca2b999),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd444b4b5-52')#033[00m
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[532585]: [NOTICE]   (532589) : haproxy version is 2.8.14-c23fe91
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[532585]: [NOTICE]   (532589) : path to executable is /usr/sbin/haproxy
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[532585]: [WARNING]  (532589) : Exiting Master process...
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[532585]: [ALERT]    (532589) : Current worker (532591) exited with code 143 (Terminated)
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999[532585]: [WARNING]  (532589) : All workers exited. Exiting... (0)
Oct  3 11:28:13 compute-0 systemd[1]: libpod-56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607.scope: Deactivated successfully.
Oct  3 11:28:13 compute-0 podman[534299]: 2025-10-03 11:28:13.181179634 +0000 UTC m=+0.067360450 container died 56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.208 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.209 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.209 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.210 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.210 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.211 2 INFO nova.compute.manager [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Terminating instance#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.212 2 DEBUG nova.compute.manager [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 11:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607-userdata-shm.mount: Deactivated successfully.
Oct  3 11:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c85e5eab452bcdb4fdfcea344b6bd7ee04c9b4cdf375378d98574f1c7b8d180-merged.mount: Deactivated successfully.
Oct  3 11:28:13 compute-0 podman[534299]: 2025-10-03 11:28:13.266764786 +0000 UTC m=+0.152945592 container cleanup 56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:28:13 compute-0 systemd[1]: libpod-conmon-56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607.scope: Deactivated successfully.
Oct  3 11:28:13 compute-0 kernel: tapff068d12-ba (unregistering): left promiscuous mode
Oct  3 11:28:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:13 compute-0 NetworkManager[45015]: <info>  [1759490893.3100] device (tapff068d12-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 ovn_controller[88471]: 2025-10-03T11:28:13Z|00169|binding|INFO|Releasing lport ff068d12-ba56-4465-a024-881b428d0ad9 from this chassis (sb_readonly=0)
Oct  3 11:28:13 compute-0 ovn_controller[88471]: 2025-10-03T11:28:13Z|00170|binding|INFO|Setting lport ff068d12-ba56-4465-a024-881b428d0ad9 down in Southbound
Oct  3 11:28:13 compute-0 ovn_controller[88471]: 2025-10-03T11:28:13Z|00171|binding|INFO|Removing iface tapff068d12-ba ovn-installed in OVS
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.328 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:b6:fb 10.100.0.10'], port_security=['fa:16:3e:f8:b6:fb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '218fdfd8-b66b-4ba2-90b0-5eb27dcacddf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cbf38614-3700-41ae-a5fa-3eef08992fc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76485b7490844f9181c1821d135ade02', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a82c35dd-6ba0-4f5d-9ad8-15c7e30c4b0b feca681c-a11c-4324-8e50-0b4af75046f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24941024-5cd7-42f4-b8b5-41479ac2ff8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=ff068d12-ba56-4465-a024-881b428d0ad9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct  3 11:28:13 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 43.804s CPU time.
Oct  3 11:28:13 compute-0 systemd-machined[137653]: Machine qemu-14-instance-0000000e terminated.
Oct  3 11:28:13 compute-0 podman[534342]: 2025-10-03 11:28:13.409133369 +0000 UTC m=+0.113226619 container remove 56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.414 2 DEBUG nova.compute.manager [req-a1d09407-29bf-404c-91bb-2c55b4a90400 req-c8ccda40-f6ff-4180-a34c-27cdabd044b7 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-unplugged-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.415 2 DEBUG oslo_concurrency.lockutils [req-a1d09407-29bf-404c-91bb-2c55b4a90400 req-c8ccda40-f6ff-4180-a34c-27cdabd044b7 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.415 2 DEBUG oslo_concurrency.lockutils [req-a1d09407-29bf-404c-91bb-2c55b4a90400 req-c8ccda40-f6ff-4180-a34c-27cdabd044b7 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.416 2 DEBUG oslo_concurrency.lockutils [req-a1d09407-29bf-404c-91bb-2c55b4a90400 req-c8ccda40-f6ff-4180-a34c-27cdabd044b7 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.416 2 DEBUG nova.compute.manager [req-a1d09407-29bf-404c-91bb-2c55b4a90400 req-c8ccda40-f6ff-4180-a34c-27cdabd044b7 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] No waiting events found dispatching network-vif-unplugged-d444b4b5-5243-48c2-80dd-3074b56d4277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.416 2 DEBUG nova.compute.manager [req-a1d09407-29bf-404c-91bb-2c55b4a90400 req-c8ccda40-f6ff-4180-a34c-27cdabd044b7 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-unplugged-d444b4b5-5243-48c2-80dd-3074b56d4277 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.422 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[a17c7adc-e687-4abb-99fd-496aa83d85d8]: (4, ('Fri Oct  3 11:28:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 (56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607)\n56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607\nFri Oct  3 11:28:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 (56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607)\n56a24dca06973ce7428d6d7ee9c129828b19ceaf9796a5cb0e922859121f3607\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.428 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1a4faa-f2e4-462c-8097-132f1a646fc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.430 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527efcd5-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 kernel: tap527efcd5-90: left promiscuous mode
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.458 2 INFO nova.virt.libvirt.driver [-] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Instance destroyed successfully.#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.459 2 DEBUG nova.objects.instance [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lazy-loading 'resources' on Instance uuid 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.468 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[0c47d160-9fcd-4305-b38d-3851ab03fa38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.478 2 DEBUG nova.virt.libvirt.vif [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:26:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1706208204',display_name='tempest-TestServerBasicOps-server-1706208204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1706208204',id=14,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM4IdgRom8xciift3CqBxkOwzbGRh74MT9xo6gBBaoPMGhzW4Bc2FU4s1cpGhIUHp6nZ3hiaNmmCb8/mcUU5OJ7lzr0gs5Z8XEvCqTH1rwJMNTBbNYbyTpSWsIk/mk2Mng==',key_name='tempest-TestServerBasicOps-1309063488',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:27:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76485b7490844f9181c1821d135ade02',ramdisk_id='',reservation_id='r-x1lujgxi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-533300983',owner_user_name='tempest-TestServerBasicOps-533300983-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:28:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e95897c85bf04672a829b11af6ed10c1',uuid=218fdfd8-b66b-4ba2-90b0-5eb27dcacddf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.479 2 DEBUG nova.network.os_vif_util [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Converting VIF {"id": "ff068d12-ba56-4465-a024-881b428d0ad9", "address": "fa:16:3e:f8:b6:fb", "network": {"id": "cbf38614-3700-41ae-a5fa-3eef08992fc4", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1070704057-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76485b7490844f9181c1821d135ade02", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff068d12-ba", "ovs_interfaceid": "ff068d12-ba56-4465-a024-881b428d0ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.480 2 DEBUG nova.network.os_vif_util [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b6:fb,bridge_name='br-int',has_traffic_filtering=True,id=ff068d12-ba56-4465-a024-881b428d0ad9,network=Network(cbf38614-3700-41ae-a5fa-3eef08992fc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff068d12-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.480 2 DEBUG os_vif [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b6:fb,bridge_name='br-int',has_traffic_filtering=True,id=ff068d12-ba56-4465-a024-881b428d0ad9,network=Network(cbf38614-3700-41ae-a5fa-3eef08992fc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff068d12-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.486 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff068d12-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.500 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[f681255b-e037-4ac1-91ac-c00a6c124507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.501 2 INFO os_vif [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:b6:fb,bridge_name='br-int',has_traffic_filtering=True,id=ff068d12-ba56-4465-a024-881b428d0ad9,network=Network(cbf38614-3700-41ae-a5fa-3eef08992fc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff068d12-ba')#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.501 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[10e3c42c-2e34-4fa5-8ff4-7edd3775a43c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.523 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[aa47a4eb-ab97-4cf1-843c-67211513da31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1000896, 'reachable_time': 40003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 534372, 'error': None, 'target': 'ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d527efcd5\x2d9efe\x2d47de\x2d97ae\x2d4c1c2ca2b999.mount: Deactivated successfully.
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.528 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527efcd5-9efe-47de-97ae-4c1c2ca2b999 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.528 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[34f87d3f-53e8-4801-9655-bd2cfc49445d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.529 284328 INFO neutron.agent.ovn.metadata.agent [-] Port ff068d12-ba56-4465-a024-881b428d0ad9 in datapath cbf38614-3700-41ae-a5fa-3eef08992fc4 unbound from our chassis#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.532 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cbf38614-3700-41ae-a5fa-3eef08992fc4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.534 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7af66a-5a3b-423e-ad47-56bed4fc31e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.534 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4 namespace which is not needed anymore#033[00m
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4[532242]: [NOTICE]   (532246) : haproxy version is 2.8.14-c23fe91
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4[532242]: [NOTICE]   (532246) : path to executable is /usr/sbin/haproxy
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4[532242]: [WARNING]  (532246) : Exiting Master process...
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4[532242]: [WARNING]  (532246) : Exiting Master process...
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4[532242]: [ALERT]    (532246) : Current worker (532248) exited with code 143 (Terminated)
Oct  3 11:28:13 compute-0 neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4[532242]: [WARNING]  (532246) : All workers exited. Exiting... (0)
Oct  3 11:28:13 compute-0 systemd[1]: libpod-4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665.scope: Deactivated successfully.
Oct  3 11:28:13 compute-0 podman[534404]: 2025-10-03 11:28:13.773510457 +0000 UTC m=+0.103167407 container died 4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.827 2 DEBUG nova.compute.manager [req-aa101612-d12e-487d-97e9-c1208ff4fbdf req-61f62004-5501-4544-8ad3-2d124e3024e9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received event network-vif-unplugged-ff068d12-ba56-4465-a024-881b428d0ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.827 2 DEBUG oslo_concurrency.lockutils [req-aa101612-d12e-487d-97e9-c1208ff4fbdf req-61f62004-5501-4544-8ad3-2d124e3024e9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665-userdata-shm.mount: Deactivated successfully.
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.827 2 DEBUG oslo_concurrency.lockutils [req-aa101612-d12e-487d-97e9-c1208ff4fbdf req-61f62004-5501-4544-8ad3-2d124e3024e9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.828 2 DEBUG oslo_concurrency.lockutils [req-aa101612-d12e-487d-97e9-c1208ff4fbdf req-61f62004-5501-4544-8ad3-2d124e3024e9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.828 2 DEBUG nova.compute.manager [req-aa101612-d12e-487d-97e9-c1208ff4fbdf req-61f62004-5501-4544-8ad3-2d124e3024e9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] No waiting events found dispatching network-vif-unplugged-ff068d12-ba56-4465-a024-881b428d0ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.828 2 DEBUG nova.compute.manager [req-aa101612-d12e-487d-97e9-c1208ff4fbdf req-61f62004-5501-4544-8ad3-2d124e3024e9 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received event network-vif-unplugged-ff068d12-ba56-4465-a024-881b428d0ad9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  3 11:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b42a0c634f03ce93d659d50d301d351476d4bdcfe08da6f33503a849101b78a-merged.mount: Deactivated successfully.
Oct  3 11:28:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3632: 321 pgs: 321 active+clean; 505 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 4.5 KiB/s wr, 2 op/s
Oct  3 11:28:13 compute-0 podman[534404]: 2025-10-03 11:28:13.851071744 +0000 UTC m=+0.180728694 container cleanup 4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:28:13 compute-0 systemd[1]: libpod-conmon-4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665.scope: Deactivated successfully.
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.874 2 INFO nova.virt.libvirt.driver [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Deleting instance files /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144_del#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.874 2 INFO nova.virt.libvirt.driver [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Deletion of /var/lib/nova/instances/f7465889-4aed-4799-835b-1c604f730144_del complete#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.932 2 INFO nova.compute.manager [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.932 2 DEBUG oslo.service.loopingcall [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.932 2 DEBUG nova.compute.manager [-] [instance: f7465889-4aed-4799-835b-1c604f730144] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.932 2 DEBUG nova.network.neutron [-] [instance: f7465889-4aed-4799-835b-1c604f730144] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:28:13 compute-0 podman[534433]: 2025-10-03 11:28:13.951315126 +0000 UTC m=+0.063169356 container remove 4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.959 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[d78c310f-0969-4874-877a-62697d39bb0b]: (4, ('Fri Oct  3 11:28:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4 (4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665)\n4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665\nFri Oct  3 11:28:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4 (4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665)\n4b8495b67c00ea1cfd74af3842af7751750907913697b44dc6aaea80cc5a4665\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.961 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[41b32fe5-aeed-4abf-b512-3022d5af52b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.963 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf38614-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 kernel: tapcbf38614-30: left promiscuous mode
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 nova_compute[351685]: 2025-10-03 11:28:13.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:13 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:13.988 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[9eaabccd-fbd0-4886-86a4-11eae2cc1e1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:14 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:14.008 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[84089a14-0d24-4328-9a6f-1ed4c91dc94b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:14 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:14.009 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddc0623-b11e-4a52-a1ea-5cb2abc34ff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:14 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:14.024 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[336e7ee6-24a9-43a7-b78a-ac77abcb0649]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1000303, 'reachable_time': 17971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 534446, 'error': None, 'target': 'ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:14 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:14.031 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cbf38614-3700-41ae-a5fa-3eef08992fc4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:28:14 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:14.031 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[46a418dc-6104-4d1f-b648-9b288ebe7a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:14 compute-0 systemd[1]: run-netns-ovnmeta\x2dcbf38614\x2d3700\x2d41ae\x2da5fa\x2d3eef08992fc4.mount: Deactivated successfully.
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.272 2 INFO nova.virt.libvirt.driver [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Deleting instance files /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_del#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.273 2 INFO nova.virt.libvirt.driver [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Deletion of /var/lib/nova/instances/218fdfd8-b66b-4ba2-90b0-5eb27dcacddf_del complete#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.371 2 INFO nova.compute.manager [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.372 2 DEBUG oslo.service.loopingcall [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.372 2 DEBUG nova.compute.manager [-] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.373 2 DEBUG nova.network.neutron [-] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.883 2 DEBUG nova.network.neutron [-] [instance: f7465889-4aed-4799-835b-1c604f730144] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.905 2 INFO nova.compute.manager [-] [instance: f7465889-4aed-4799-835b-1c604f730144] Took 0.97 seconds to deallocate network for instance.#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.956 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.957 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:14 compute-0 nova_compute[351685]: 2025-10-03 11:28:14.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.125 2 DEBUG oslo_concurrency.processutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.519 2 DEBUG nova.compute.manager [req-65a3a656-6bd6-485f-8876-d2ccaaa26715 req-3d107197-ec43-4df5-accd-e48d7f685045 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.520 2 DEBUG oslo_concurrency.lockutils [req-65a3a656-6bd6-485f-8876-d2ccaaa26715 req-3d107197-ec43-4df5-accd-e48d7f685045 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "f7465889-4aed-4799-835b-1c604f730144-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.521 2 DEBUG oslo_concurrency.lockutils [req-65a3a656-6bd6-485f-8876-d2ccaaa26715 req-3d107197-ec43-4df5-accd-e48d7f685045 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.521 2 DEBUG oslo_concurrency.lockutils [req-65a3a656-6bd6-485f-8876-d2ccaaa26715 req-3d107197-ec43-4df5-accd-e48d7f685045 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.521 2 DEBUG nova.compute.manager [req-65a3a656-6bd6-485f-8876-d2ccaaa26715 req-3d107197-ec43-4df5-accd-e48d7f685045 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] No waiting events found dispatching network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.521 2 WARNING nova.compute.manager [req-65a3a656-6bd6-485f-8876-d2ccaaa26715 req-3d107197-ec43-4df5-accd-e48d7f685045 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received unexpected event network-vif-plugged-d444b4b5-5243-48c2-80dd-3074b56d4277 for instance with vm_state deleted and task_state None.#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.522 2 DEBUG nova.compute.manager [req-65a3a656-6bd6-485f-8876-d2ccaaa26715 req-3d107197-ec43-4df5-accd-e48d7f685045 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: f7465889-4aed-4799-835b-1c604f730144] Received event network-vif-deleted-d444b4b5-5243-48c2-80dd-3074b56d4277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:28:15 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147681653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.610 2 DEBUG oslo_concurrency.processutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.618 2 DEBUG nova.compute.provider_tree [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.631 2 DEBUG nova.scheduler.client.report [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.663 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.690 2 INFO nova.scheduler.client.report [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Deleted allocations for instance f7465889-4aed-4799-835b-1c604f730144#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.762 2 DEBUG oslo_concurrency.lockutils [None req-6e74df36-0440-4a4b-bec4-38a19b119fc0 a98b98aa35184e41a4ae6e74ba3a32e6 8ac8b91115c2483686f9dc31c58b49fc - - default default] Lock "f7465889-4aed-4799-835b-1c604f730144" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3633: 321 pgs: 321 active+clean; 412 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 40 op/s
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.849 2 DEBUG nova.network.neutron [-] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.865 2 INFO nova.compute.manager [-] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Took 1.49 seconds to deallocate network for instance.#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.921 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.922 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.943 2 DEBUG nova.compute.manager [req-ad9421df-22c8-438e-b616-d9ec1adb31e2 req-3f80951d-b887-455f-9ae3-63e6f7584f15 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received event network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.944 2 DEBUG oslo_concurrency.lockutils [req-ad9421df-22c8-438e-b616-d9ec1adb31e2 req-3f80951d-b887-455f-9ae3-63e6f7584f15 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.944 2 DEBUG oslo_concurrency.lockutils [req-ad9421df-22c8-438e-b616-d9ec1adb31e2 req-3f80951d-b887-455f-9ae3-63e6f7584f15 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.944 2 DEBUG oslo_concurrency.lockutils [req-ad9421df-22c8-438e-b616-d9ec1adb31e2 req-3f80951d-b887-455f-9ae3-63e6f7584f15 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.944 2 DEBUG nova.compute.manager [req-ad9421df-22c8-438e-b616-d9ec1adb31e2 req-3f80951d-b887-455f-9ae3-63e6f7584f15 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] No waiting events found dispatching network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.945 2 WARNING nova.compute.manager [req-ad9421df-22c8-438e-b616-d9ec1adb31e2 req-3f80951d-b887-455f-9ae3-63e6f7584f15 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received unexpected event network-vif-plugged-ff068d12-ba56-4465-a024-881b428d0ad9 for instance with vm_state deleted and task_state None.#033[00m
Oct  3 11:28:15 compute-0 nova_compute[351685]: 2025-10-03 11:28:15.945 2 DEBUG nova.compute.manager [req-ad9421df-22c8-438e-b616-d9ec1adb31e2 req-3f80951d-b887-455f-9ae3-63e6f7584f15 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Received event network-vif-deleted-ff068d12-ba56-4465-a024-881b428d0ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:16 compute-0 nova_compute[351685]: 2025-10-03 11:28:16.097 2 DEBUG oslo_concurrency.processutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:28:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:28:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:28:16 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2597288989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:28:16 compute-0 nova_compute[351685]: 2025-10-03 11:28:16.554 2 DEBUG oslo_concurrency.processutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:28:16 compute-0 nova_compute[351685]: 2025-10-03 11:28:16.562 2 DEBUG nova.compute.provider_tree [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:28:16 compute-0 nova_compute[351685]: 2025-10-03 11:28:16.584 2 DEBUG nova.scheduler.client.report [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:28:16 compute-0 nova_compute[351685]: 2025-10-03 11:28:16.616 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:16 compute-0 nova_compute[351685]: 2025-10-03 11:28:16.656 2 INFO nova.scheduler.client.report [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Deleted allocations for instance 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf#033[00m
Oct  3 11:28:16 compute-0 nova_compute[351685]: 2025-10-03 11:28:16.739 2 DEBUG oslo_concurrency.lockutils [None req-f54adb85-9191-4dca-83ff-fce6f7fc3b23 e95897c85bf04672a829b11af6ed10c1 76485b7490844f9181c1821d135ade02 - - default default] Lock "218fdfd8-b66b-4ba2-90b0-5eb27dcacddf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3634: 321 pgs: 321 active+clean; 412 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 20 KiB/s wr, 40 op/s
Oct  3 11:28:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:18 compute-0 nova_compute[351685]: 2025-10-03 11:28:18.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3635: 321 pgs: 321 active+clean; 344 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 21 KiB/s wr, 62 op/s
Oct  3 11:28:19 compute-0 nova_compute[351685]: 2025-10-03 11:28:19.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3636: 321 pgs: 321 active+clean; 344 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 21 KiB/s wr, 61 op/s
Oct  3 11:28:22 compute-0 ovn_controller[88471]: 2025-10-03T11:28:22Z|00172|binding|INFO|Releasing lport e51b3658-d946-4608-953e-6b26039ed1fd from this chassis (sb_readonly=0)
Oct  3 11:28:22 compute-0 ovn_controller[88471]: 2025-10-03T11:28:22Z|00173|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:28:22 compute-0 ovn_controller[88471]: 2025-10-03T11:28:22Z|00174|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:28:22 compute-0 nova_compute[351685]: 2025-10-03 11:28:22.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:23 compute-0 nova_compute[351685]: 2025-10-03 11:28:23.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3637: 321 pgs: 321 active+clean; 344 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 20 KiB/s wr, 61 op/s
Oct  3 11:28:23 compute-0 podman[534494]: 2025-10-03 11:28:23.881502983 +0000 UTC m=+0.119751159 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 11:28:23 compute-0 podman[534499]: 2025-10-03 11:28:23.89046669 +0000 UTC m=+0.128175850 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:28:23 compute-0 podman[534492]: 2025-10-03 11:28:23.894109946 +0000 UTC m=+0.151565539 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:28:23 compute-0 podman[534495]: 2025-10-03 11:28:23.913388734 +0000 UTC m=+0.160416822 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:28:23 compute-0 podman[534509]: 2025-10-03 11:28:23.918477798 +0000 UTC m=+0.134349847 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:28:23 compute-0 podman[534493]: 2025-10-03 11:28:23.927537777 +0000 UTC m=+0.161942611 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Oct  3 11:28:23 compute-0 podman[534502]: 2025-10-03 11:28:23.931465714 +0000 UTC m=+0.163250494 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:28:24 compute-0 nova_compute[351685]: 2025-10-03 11:28:24.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3638: 321 pgs: 321 active+clean; 344 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 18 KiB/s wr, 60 op/s
Oct  3 11:28:26 compute-0 ovn_controller[88471]: 2025-10-03T11:28:26Z|00175|binding|INFO|Releasing lport e51b3658-d946-4608-953e-6b26039ed1fd from this chassis (sb_readonly=0)
Oct  3 11:28:26 compute-0 ovn_controller[88471]: 2025-10-03T11:28:26Z|00176|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:28:26 compute-0 ovn_controller[88471]: 2025-10-03T11:28:26Z|00177|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:28:26 compute-0 nova_compute[351685]: 2025-10-03 11:28:26.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3639: 321 pgs: 321 active+clean; 344 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1023 B/s wr, 21 op/s
Oct  3 11:28:28 compute-0 nova_compute[351685]: 2025-10-03 11:28:28.056 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759490893.0556495, f7465889-4aed-4799-835b-1c604f730144 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:28:28 compute-0 nova_compute[351685]: 2025-10-03 11:28:28.057 2 INFO nova.compute.manager [-] [instance: f7465889-4aed-4799-835b-1c604f730144] VM Stopped (Lifecycle Event)
Oct  3 11:28:28 compute-0 nova_compute[351685]: 2025-10-03 11:28:28.082 2 DEBUG nova.compute.manager [None req-49ac1b09-9054-428b-ad65-e170eac2d80a - - - - - -] [instance: f7465889-4aed-4799-835b-1c604f730144] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:28:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:28 compute-0 nova_compute[351685]: 2025-10-03 11:28:28.450 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759490893.4489272, 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:28:28 compute-0 nova_compute[351685]: 2025-10-03 11:28:28.450 2 INFO nova.compute.manager [-] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] VM Stopped (Lifecycle Event)
Oct  3 11:28:28 compute-0 nova_compute[351685]: 2025-10-03 11:28:28.473 2 DEBUG nova.compute.manager [None req-2bbf7df4-8074-4c70-9222-bb70eb87c117 - - - - - -] [instance: 218fdfd8-b66b-4ba2-90b0-5eb27dcacddf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:28:28 compute-0 nova_compute[351685]: 2025-10-03 11:28:28.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:29 compute-0 podman[157165]: time="2025-10-03T11:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:28:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 48733 "" "Go-http-client/1.1"
Oct  3 11:28:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10072 "" "Go-http-client/1.1"
Oct  3 11:28:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3640: 321 pgs: 321 active+clean; 344 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1023 B/s wr, 21 op/s
Oct  3 11:28:29 compute-0 nova_compute[351685]: 2025-10-03 11:28:29.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:28:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:28:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:28:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:28:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:28:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:28:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3455d654-93a1-4ea0-8f54-f0def980ec1e does not exist
Oct  3 11:28:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 77378f18-e8c3-4a24-b801-35b8d665e487 does not exist
Oct  3 11:28:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cd58f7d9-df81-4936-9033-bc7ad02d87e8 does not exist
Oct  3 11:28:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:28:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:28:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:28:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:28:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:28:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:28:30 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:28:30 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:28:30 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:28:31 compute-0 openstack_network_exporter[367524]: ERROR   11:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:28:31 compute-0 openstack_network_exporter[367524]: ERROR   11:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:28:31 compute-0 openstack_network_exporter[367524]: ERROR   11:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:28:31 compute-0 openstack_network_exporter[367524]: ERROR   11:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:28:31 compute-0 openstack_network_exporter[367524]: ERROR   11:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:28:31 compute-0 podman[534894]: 2025-10-03 11:28:31.454912876 +0000 UTC m=+0.075064628 container create edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_solomon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:28:31 compute-0 ovn_controller[88471]: 2025-10-03T11:28:31Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:fd:58 10.100.0.7
Oct  3 11:28:31 compute-0 ovn_controller[88471]: 2025-10-03T11:28:31Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:fd:58 10.100.0.7
Oct  3 11:28:31 compute-0 systemd[1]: Started libpod-conmon-edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291.scope.
Oct  3 11:28:31 compute-0 podman[534894]: 2025-10-03 11:28:31.422694782 +0000 UTC m=+0.042846554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:28:31 compute-0 ovn_controller[88471]: 2025-10-03T11:28:31Z|00178|binding|INFO|Releasing lport e51b3658-d946-4608-953e-6b26039ed1fd from this chassis (sb_readonly=0)
Oct  3 11:28:31 compute-0 ovn_controller[88471]: 2025-10-03T11:28:31Z|00179|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:28:31 compute-0 ovn_controller[88471]: 2025-10-03T11:28:31Z|00180|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:28:31 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:28:31 compute-0 podman[534894]: 2025-10-03 11:28:31.568793815 +0000 UTC m=+0.188945587 container init edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_solomon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  3 11:28:31 compute-0 podman[534894]: 2025-10-03 11:28:31.585004844 +0000 UTC m=+0.205156596 container start edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_solomon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 11:28:31 compute-0 nova_compute[351685]: 2025-10-03 11:28:31.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:31 compute-0 podman[534894]: 2025-10-03 11:28:31.590710287 +0000 UTC m=+0.210862039 container attach edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:28:31 compute-0 zealous_solomon[534909]: 167 167
Oct  3 11:28:31 compute-0 systemd[1]: libpod-edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291.scope: Deactivated successfully.
Oct  3 11:28:31 compute-0 podman[534894]: 2025-10-03 11:28:31.597889377 +0000 UTC m=+0.218041129 container died edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_solomon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-562de1fbb51264a3e012fc025004238c0091d3f2f28754b7665ef631e2832bf6-merged.mount: Deactivated successfully.
Oct  3 11:28:31 compute-0 podman[534894]: 2025-10-03 11:28:31.659160021 +0000 UTC m=+0.279311763 container remove edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_solomon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 11:28:31 compute-0 systemd[1]: libpod-conmon-edf65d65a1da2aeea4f04ff3e7d945db194fe20295aa5d815ddee37e41b21291.scope: Deactivated successfully.
Oct  3 11:28:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3641: 321 pgs: 321 active+clean; 351 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 780 KiB/s wr, 13 op/s
Oct  3 11:28:31 compute-0 podman[534931]: 2025-10-03 11:28:31.925145406 +0000 UTC m=+0.066152802 container create 41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:28:31 compute-0 podman[534931]: 2025-10-03 11:28:31.899087051 +0000 UTC m=+0.040094397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:28:32 compute-0 systemd[1]: Started libpod-conmon-41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb.scope.
Oct  3 11:28:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9aba28d6a558d2a6731ad08b7308e3eb53ad6b3c5ab5247b512dc6f8bf0c5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9aba28d6a558d2a6731ad08b7308e3eb53ad6b3c5ab5247b512dc6f8bf0c5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9aba28d6a558d2a6731ad08b7308e3eb53ad6b3c5ab5247b512dc6f8bf0c5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9aba28d6a558d2a6731ad08b7308e3eb53ad6b3c5ab5247b512dc6f8bf0c5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b9aba28d6a558d2a6731ad08b7308e3eb53ad6b3c5ab5247b512dc6f8bf0c5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:32 compute-0 podman[534931]: 2025-10-03 11:28:32.12091774 +0000 UTC m=+0.261925136 container init 41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:28:32 compute-0 podman[534931]: 2025-10-03 11:28:32.146634774 +0000 UTC m=+0.287642120 container start 41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:28:32 compute-0 podman[534931]: 2025-10-03 11:28:32.152711449 +0000 UTC m=+0.293718815 container attach 41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:28:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:33 compute-0 nova_compute[351685]: 2025-10-03 11:28:33.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3642: 321 pgs: 321 active+clean; 361 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 132 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Oct  3 11:28:33 compute-0 nice_goldberg[534947]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:28:33 compute-0 nice_goldberg[534947]: --> relative data size: 1.0
Oct  3 11:28:33 compute-0 nice_goldberg[534947]: --> All data devices are unavailable
Oct  3 11:28:33 compute-0 systemd[1]: libpod-41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb.scope: Deactivated successfully.
Oct  3 11:28:33 compute-0 podman[534931]: 2025-10-03 11:28:33.988798844 +0000 UTC m=+2.129806200 container died 41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 11:28:33 compute-0 systemd[1]: libpod-41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb.scope: Consumed 1.183s CPU time.
Oct  3 11:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b9aba28d6a558d2a6731ad08b7308e3eb53ad6b3c5ab5247b512dc6f8bf0c5c-merged.mount: Deactivated successfully.
Oct  3 11:28:34 compute-0 podman[534931]: 2025-10-03 11:28:34.723912095 +0000 UTC m=+2.864919441 container remove 41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:28:34 compute-0 nova_compute[351685]: 2025-10-03 11:28:34.726 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:28:34 compute-0 systemd[1]: libpod-conmon-41c8841b465a4d034c2e3591c59cfcdf0cd3fcf95156e7f945dd551768be84eb.scope: Deactivated successfully.
Oct  3 11:28:34 compute-0 nova_compute[351685]: 2025-10-03 11:28:34.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:35 compute-0 podman[535040]: 2025-10-03 11:28:35.062217827 +0000 UTC m=+0.101731872 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Oct  3 11:28:35 compute-0 podman[535041]: 2025-10-03 11:28:35.082037192 +0000 UTC m=+0.113265841 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 11:28:35 compute-0 podman[535039]: 2025-10-03 11:28:35.098462098 +0000 UTC m=+0.137980313 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:28:35 compute-0 podman[535186]: 2025-10-03 11:28:35.596891693 +0000 UTC m=+0.033229816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:28:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3643: 321 pgs: 321 active+clean; 376 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  3 11:28:35 compute-0 podman[535186]: 2025-10-03 11:28:35.878210139 +0000 UTC m=+0.314548222 container create 639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_khayyam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:28:36 compute-0 systemd[1]: Started libpod-conmon-639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057.scope.
Oct  3 11:28:36 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:28:36 compute-0 podman[535186]: 2025-10-03 11:28:36.190781197 +0000 UTC m=+0.627119330 container init 639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 11:28:36 compute-0 podman[535186]: 2025-10-03 11:28:36.202101399 +0000 UTC m=+0.638439492 container start 639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 11:28:36 compute-0 gifted_khayyam[535202]: 167 167
Oct  3 11:28:36 compute-0 systemd[1]: libpod-639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057.scope: Deactivated successfully.
Oct  3 11:28:36 compute-0 conmon[535202]: conmon 639d31e7c2bbf7104d80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057.scope/container/memory.events
Oct  3 11:28:36 compute-0 podman[535186]: 2025-10-03 11:28:36.230049915 +0000 UTC m=+0.666388018 container attach 639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:28:36 compute-0 podman[535186]: 2025-10-03 11:28:36.231038147 +0000 UTC m=+0.667376240 container died 639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 11:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-420a2806cf01689920e609f164bab213268e1d5816deeb17beebf0ad569c452d-merged.mount: Deactivated successfully.
Oct  3 11:28:36 compute-0 podman[535186]: 2025-10-03 11:28:36.982622665 +0000 UTC m=+1.418960748 container remove 639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:28:37 compute-0 systemd[1]: libpod-conmon-639d31e7c2bbf7104d8030ec92cf43c0bbf04ec84e7d10a52e3b419119eaf057.scope: Deactivated successfully.
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.200 2 INFO nova.compute.manager [None req-e3ee5939-9fc7-4fc4-9fc7-cb64aa8b4c66 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Get console output
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.219 4814 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct  3 11:28:37 compute-0 podman[535227]: 2025-10-03 11:28:37.254345693 +0000 UTC m=+0.070392997 container create 9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:28:37 compute-0 systemd[1]: Started libpod-conmon-9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb.scope.
Oct  3 11:28:37 compute-0 podman[535227]: 2025-10-03 11:28:37.231027596 +0000 UTC m=+0.047074910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:28:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e25a9df38eac34680930f358eebb7f5d9f345fea7e60cd0fc21924865446cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e25a9df38eac34680930f358eebb7f5d9f345fea7e60cd0fc21924865446cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e25a9df38eac34680930f358eebb7f5d9f345fea7e60cd0fc21924865446cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3e25a9df38eac34680930f358eebb7f5d9f345fea7e60cd0fc21924865446cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:37 compute-0 podman[535227]: 2025-10-03 11:28:37.373562924 +0000 UTC m=+0.189610248 container init 9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:28:37 compute-0 podman[535227]: 2025-10-03 11:28:37.39341918 +0000 UTC m=+0.209466484 container start 9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:28:37 compute-0 podman[535227]: 2025-10-03 11:28:37.398696669 +0000 UTC m=+0.214743983 container attach 9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.622 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.625 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.628 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.629 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.631 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.632 2 INFO nova.compute.manager [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Terminating instance
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.634 2 DEBUG nova.compute.manager [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  3 11:28:37 compute-0 kernel: tapb3d448d1-07 (unregistering): left promiscuous mode
Oct  3 11:28:37 compute-0 NetworkManager[45015]: <info>  [1759490917.7326] device (tapb3d448d1-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:37 compute-0 ovn_controller[88471]: 2025-10-03T11:28:37Z|00181|binding|INFO|Releasing lport b3d448d1-073b-4561-93d3-d26eb20839fe from this chassis (sb_readonly=0)
Oct  3 11:28:37 compute-0 ovn_controller[88471]: 2025-10-03T11:28:37Z|00182|binding|INFO|Setting lport b3d448d1-073b-4561-93d3-d26eb20839fe down in Southbound
Oct  3 11:28:37 compute-0 ovn_controller[88471]: 2025-10-03T11:28:37Z|00183|binding|INFO|Removing iface tapb3d448d1-07 ovn-installed in OVS
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.768 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:fd:58 10.100.0.7'], port_security=['fa:16:3e:68:fd:58 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1cd61d6b-0ef5-458f-88f0-44a4951ea368', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ea98f29bce64ae8ba81224645237ac7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e5fb1caf-ff45-4eab-be84-810c07ecb149', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eb2c44-9631-42b8-a4d9-ab8785ccd098, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=b3d448d1-073b-4561-93d3-d26eb20839fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.769 284328 INFO neutron.agent.ovn.metadata.agent [-] Port b3d448d1-073b-4561-93d3-d26eb20839fe in datapath 0cae90f5-24f0-45af-a3e3-a77dbb0a12af unbound from our chassis
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.771 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cae90f5-24f0-45af-a3e3-a77dbb0a12af
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.789 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[3869b14f-fa26-4946-a33c-37bbbae16a31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:37 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct  3 11:28:37 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 41.336s CPU time.
Oct  3 11:28:37 compute-0 systemd-machined[137653]: Machine qemu-16-instance-0000000f terminated.
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.830 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cbc43c-550c-4996-bfc5-7ef9edfc7261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.834 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[a45077d9-61dd-4c4b-9d60-a8e9ebe0edfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 11:28:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3644: 321 pgs: 321 active+clean; 376 MiB data, 473 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.862 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[c9149dc1-fdb2-4a20-9793-3abb19454e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.885 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[34b6e9fd-63e8-43c6-9f54-aaeb6f2103bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cae90f5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:a3:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 998611, 'reachable_time': 40400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 535263, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.888 2 INFO nova.virt.libvirt.driver [-] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Instance destroyed successfully.#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.888 2 DEBUG nova.objects.instance [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lazy-loading 'resources' on Instance uuid 1cd61d6b-0ef5-458f-88f0-44a4951ea368 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.904 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[37c02f0e-bc81-457e-9a1f-292b4be9c87f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0cae90f5-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 998626, 'tstamp': 998626}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 535269, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0cae90f5-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 998631, 'tstamp': 998631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 535269, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
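
The two privsep replies above are pyroute2-serialized netlink messages: an RTM_NEWLINK dump of tap0cae90f5-21 (the IFLA_* pairs are link attributes) followed by an RTM_NEWADDR dump carrying the 169.254.169.254 metadata address and 10.100.0.2/28. A minimal sketch that reads the same attributes directly, assuming pyroute2 is available and there is enough privilege to enter the ovnmeta- namespace named in the message headers:

from pyroute2 import NetNS

ns = NetNS('ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af')
try:
    for link in ns.get_links():
        # Same fields as the RTM_NEWLINK attrs logged above.
        print(link.get_attr('IFLA_IFNAME'),     # 'tap0cae90f5-21'
              link.get_attr('IFLA_ADDRESS'),    # 'fa:16:3e:b9:a3:ec'
              link.get_attr('IFLA_OPERSTATE'))  # 'UP'
    for addr in ns.get_addr(index=2):           # 2 == the tap device ifindex
        # Same fields as the RTM_NEWADDR attrs: 169.254.169.254/32, 10.100.0.2/28.
        print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
finally:
    ns.close()
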
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.906 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cae90f5-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.909 2 DEBUG nova.virt.libvirt.vif [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:27:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-629685759',display_name='tempest-TestNetworkBasicOps-server-629685759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-629685759',id=15,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1SBN1fMS2ySXPXG88Q+Bof2aHg32/EVVpyEUfGXr3FecKVqnRmKLwEF1cg7BJuFNMKxBLDa8CU5ZPyFgNKz8S2mm0PZmn4oUL9aK9wp84MY7q1xyNFjx92ssuIV4ADhw==',key_name='tempest-TestNetworkBasicOps-1542639153',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:27:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ea98f29bce64ae8ba81224645237ac7',ramdisk_id='',reservation_id='r-x0kukrr5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-975938423',owner_user_name='tempest-TestNetworkBasicOps-975938423-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:27:53Z,user_data=None,user_id='a62337822a774597b9068cf3aed6a92f',uuid=1cd61d6b-0ef5-458f-88f0-44a4951ea368,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.910 2 DEBUG nova.network.os_vif_util [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converting VIF {"id": "b3d448d1-073b-4561-93d3-d26eb20839fe", "address": "fa:16:3e:68:fd:58", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3d448d1-07", "ovs_interfaceid": "b3d448d1-073b-4561-93d3-d26eb20839fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.911 2 DEBUG nova.network.os_vif_util [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:fd:58,bridge_name='br-int',has_traffic_filtering=True,id=b3d448d1-073b-4561-93d3-d26eb20839fe,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3d448d1-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.911 2 DEBUG os_vif [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:fd:58,bridge_name='br-int',has_traffic_filtering=True,id=b3d448d1-073b-4561-93d3-d26eb20839fe,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3d448d1-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.915 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cae90f5-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.916 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3d448d1-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.917 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cae90f5-20, col_values=(('external_ids', {'iface-id': 'e51b3658-d946-4608-953e-6b26039ed1fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:37 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:37.917 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
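
The DelPortCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp commands committed against the local Open vSwitch database. A minimal sketch of the same three operations, assuming the default ovsdb socket path (the port, bridge, and iface-id values are taken from the log):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local switch database; the socket path is an assumption.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# One transaction carrying the same three commands logged above.
with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tap0cae90f5-20', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap0cae90f5-20', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap0cae90f5-20',
        ('external_ids', {'iface-id': 'e51b3658-d946-4608-953e-6b26039ed1fd'})))
# Replaying the same transaction against unchanged state is a no-op,
# which ovsdbapp reports as "Transaction caused no change".
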
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:37 compute-0 nova_compute[351685]: 2025-10-03 11:28:37.922 2 INFO os_vif [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:fd:58,bridge_name='br-int',has_traffic_filtering=True,id=b3d448d1-073b-4561-93d3-d26eb20839fe,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3d448d1-07')#033[00m
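
The Converting/Converted/Unplugging sequence above is nova's nova_to_osvif_vif() translation followed by os_vif.unplug(). A minimal sketch of that call, assuming the os-vif library with its bundled 'ovs' plugin; only the fields visible in the log are filled in:

import os_vif
from os_vif.objects import instance_info, vif

os_vif.initialize()  # loads the 'ovs' plugin named in bound_drivers above

# Only the fields shown in the VIFOpenVSwitch repr above are set here.
port = vif.VIFOpenVSwitch(
    id='b3d448d1-073b-4561-93d3-d26eb20839fe',
    address='fa:16:3e:68:fd:58',
    bridge_name='br-int',
    vif_name='tapb3d448d1-07')
instance = instance_info.InstanceInfo(
    uuid='1cd61d6b-0ef5-458f-88f0-44a4951ea368',
    name='tempest-TestNetworkBasicOps-server-629685759')

os_vif.unplug(port, instance)  # emits the "Successfully unplugged vif" INFO line
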
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]: {
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:    "0": [
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:        {
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "devices": [
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "/dev/loop3"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            ],
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_name": "ceph_lv0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_size": "21470642176",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "name": "ceph_lv0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "tags": {
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cluster_name": "ceph",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.crush_device_class": "",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.encrypted": "0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osd_id": "0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.type": "block",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.vdo": "0"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            },
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "type": "block",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "vg_name": "ceph_vg0"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:        }
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:    ],
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:    "1": [
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:        {
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "devices": [
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "/dev/loop4"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            ],
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_name": "ceph_lv1",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_size": "21470642176",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "name": "ceph_lv1",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "tags": {
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cluster_name": "ceph",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.crush_device_class": "",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.encrypted": "0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osd_id": "1",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.type": "block",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.vdo": "0"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            },
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "type": "block",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "vg_name": "ceph_vg1"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:        }
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:    ],
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:    "2": [
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:        {
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "devices": [
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "/dev/loop5"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            ],
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_name": "ceph_lv2",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_size": "21470642176",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "name": "ceph_lv2",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "tags": {
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.cluster_name": "ceph",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.crush_device_class": "",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.encrypted": "0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osd_id": "2",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.type": "block",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:                "ceph.vdo": "0"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            },
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "type": "block",
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:            "vg_name": "ceph_vg2"
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:        }
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]:    ]
Oct  3 11:28:38 compute-0 hopeful_bartik[535242]: }
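
The JSON block printed by the hopeful_bartik container has the shape of ceph-volume lvm list --format json output: a map from OSD id to the logical volumes backing it, with the ceph.* LVM tags expanded. A minimal sketch reducing it to an OSD-to-device table (trimmed to the fields used; values copied from the log above):

# Trimmed to the fields used here; the full records are in the log above.
report = {
    "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "devices": ["/dev/loop3"],
           "tags": {"ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0"}}],
    "1": [{"lv_path": "/dev/ceph_vg1/ceph_lv1", "devices": ["/dev/loop4"],
           "tags": {"ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190"}}],
    "2": [{"lv_path": "/dev/ceph_vg2/ceph_lv2", "devices": ["/dev/loop5"],
           "tags": {"ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0"}}],
}
for osd_id, lvs in sorted(report.items()):
    for lv in lvs:
        print(f"osd.{osd_id}", lv["lv_path"], "on", lv["devices"][0],
              "fsid", lv["tags"]["ceph.osd_fsid"])
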
Oct  3 11:28:38 compute-0 systemd[1]: libpod-9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb.scope: Deactivated successfully.
Oct  3 11:28:38 compute-0 podman[535227]: 2025-10-03 11:28:38.230974063 +0000 UTC m=+1.047021387 container died 9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3e25a9df38eac34680930f358eebb7f5d9f345fea7e60cd0fc21924865446cb-merged.mount: Deactivated successfully.
Oct  3 11:28:38 compute-0 podman[535227]: 2025-10-03 11:28:38.305652526 +0000 UTC m=+1.121699820 container remove 9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 11:28:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:38 compute-0 systemd[1]: libpod-conmon-9d578b6dc2c8eaf4880faa88a2a30ad1ff9cc777b0b4b2657a54c292c53481fb.scope: Deactivated successfully.
Oct  3 11:28:38 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:38.500 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:38 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:38.510 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
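
The matched SbGlobalUpdateEvent above is an ovsdbapp row event: the IDL diffs the new SB_Global row (nb_cfg=14) against the old one (nb_cfg=13) and dispatches every registered event whose (event, table) tuple matches. A minimal sketch of how such an event is declared; the run() body here is a stand-in, not neutron's actual handler:

from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        # Match any update to the (single-row) SB_Global table, no conditions.
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
        self.event_name = 'SbGlobalUpdateEvent'

    def run(self, event, row, old):
        # Stand-in body; neutron's handler schedules the delayed
        # chassis-table refresh logged right after the match.
        print('nb_cfg:', getattr(old, 'nb_cfg', '?'), '->', row.nb_cfg)
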
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.528 2 DEBUG nova.compute.manager [req-e0d2a2cb-2352-414c-9b70-772071d4a919 req-2130364a-0d0f-494c-80b4-9ba1ab7f6fdf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received event network-vif-unplugged-b3d448d1-073b-4561-93d3-d26eb20839fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.528 2 DEBUG oslo_concurrency.lockutils [req-e0d2a2cb-2352-414c-9b70-772071d4a919 req-2130364a-0d0f-494c-80b4-9ba1ab7f6fdf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.529 2 DEBUG oslo_concurrency.lockutils [req-e0d2a2cb-2352-414c-9b70-772071d4a919 req-2130364a-0d0f-494c-80b4-9ba1ab7f6fdf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.529 2 DEBUG oslo_concurrency.lockutils [req-e0d2a2cb-2352-414c-9b70-772071d4a919 req-2130364a-0d0f-494c-80b4-9ba1ab7f6fdf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.529 2 DEBUG nova.compute.manager [req-e0d2a2cb-2352-414c-9b70-772071d4a919 req-2130364a-0d0f-494c-80b4-9ba1ab7f6fdf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] No waiting events found dispatching network-vif-unplugged-b3d448d1-073b-4561-93d3-d26eb20839fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.529 2 DEBUG nova.compute.manager [req-e0d2a2cb-2352-414c-9b70-772071d4a919 req-2130364a-0d0f-494c-80b4-9ba1ab7f6fdf 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received event network-vif-unplugged-b3d448d1-073b-4561-93d3-d26eb20839fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
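
The Acquiring/acquired/released triplets around the "...-events" lock are emitted by oslo.concurrency itself while nova serializes access to the per-instance event queue. The same pattern as a minimal sketch:

from oslo_concurrency import lockutils

instance_uuid = '1cd61d6b-0ef5-458f-88f0-44a4951ea368'

# lockutils.lock() logs the Acquiring/acquired/released DEBUG lines above;
# the per-instance event queue is only touched while the lock is held.
with lockutils.lock(f'{instance_uuid}-events'):
    pass  # pop_instance_event() / _pop_event() body runs here
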
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.624 2 INFO nova.virt.libvirt.driver [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Deleting instance files /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368_del#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.625 2 INFO nova.virt.libvirt.driver [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Deletion of /var/lib/nova/instances/1cd61d6b-0ef5-458f-88f0-44a4951ea368_del complete#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.686 2 INFO nova.compute.manager [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.687 2 DEBUG oslo.service.loopingcall [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.687 2 DEBUG nova.compute.manager [-] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:28:38 compute-0 nova_compute[351685]: 2025-10-03 11:28:38.687 2 DEBUG nova.network.neutron [-] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:28:39 compute-0 podman[535444]: 2025-10-03 11:28:39.231787849 +0000 UTC m=+0.075075438 container create 4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:28:39 compute-0 podman[535444]: 2025-10-03 11:28:39.198944456 +0000 UTC m=+0.042232045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:28:39 compute-0 systemd[1]: Started libpod-conmon-4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20.scope.
Oct  3 11:28:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:28:39 compute-0 podman[535444]: 2025-10-03 11:28:39.361199346 +0000 UTC m=+0.204486905 container init 4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  3 11:28:39 compute-0 podman[535444]: 2025-10-03 11:28:39.371936081 +0000 UTC m=+0.215223640 container start 4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:28:39 compute-0 podman[535444]: 2025-10-03 11:28:39.377902411 +0000 UTC m=+0.221189990 container attach 4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 11:28:39 compute-0 hopeful_roentgen[535458]: 167 167
Oct  3 11:28:39 compute-0 systemd[1]: libpod-4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20.scope: Deactivated successfully.
Oct  3 11:28:39 compute-0 podman[535444]: 2025-10-03 11:28:39.383540402 +0000 UTC m=+0.226827951 container died 4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 11:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6063d2bfe012954cc331dd7e5737b9a8e3cb2b5a61f99d8d4c144799615dab3-merged.mount: Deactivated successfully.
Oct  3 11:28:39 compute-0 podman[535444]: 2025-10-03 11:28:39.435609212 +0000 UTC m=+0.278896761 container remove 4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:28:39 compute-0 systemd[1]: libpod-conmon-4f3bf698b992bc07fb0afd6f5b0ee1877c05ec1b5de434d69c9f24e701ad0c20.scope: Deactivated successfully.
Oct  3 11:28:39 compute-0 podman[535482]: 2025-10-03 11:28:39.660927202 +0000 UTC m=+0.061909505 container create 104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:28:39 compute-0 systemd[1]: Started libpod-conmon-104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc.scope.
Oct  3 11:28:39 compute-0 podman[535482]: 2025-10-03 11:28:39.635420515 +0000 UTC m=+0.036402808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:28:39 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37acbe389b959064161569d5772aad6088fd66eb5bed07554e888f4f820119f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37acbe389b959064161569d5772aad6088fd66eb5bed07554e888f4f820119f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37acbe389b959064161569d5772aad6088fd66eb5bed07554e888f4f820119f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37acbe389b959064161569d5772aad6088fd66eb5bed07554e888f4f820119f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:28:39 compute-0 podman[535482]: 2025-10-03 11:28:39.790441903 +0000 UTC m=+0.191424216 container init 104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:28:39 compute-0 podman[535482]: 2025-10-03 11:28:39.807678706 +0000 UTC m=+0.208660979 container start 104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:28:39 compute-0 podman[535482]: 2025-10-03 11:28:39.811931732 +0000 UTC m=+0.212914045 container attach 104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct  3 11:28:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3645: 321 pgs: 321 active+clean; 358 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Oct  3 11:28:39 compute-0 nova_compute[351685]: 2025-10-03 11:28:39.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.081 2 DEBUG nova.network.neutron [-] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.110 2 INFO nova.compute.manager [-] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Took 1.42 seconds to deallocate network for instance.#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.169 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.171 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.317 2 DEBUG oslo_concurrency.processutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.604 2 DEBUG nova.compute.manager [req-82395a04-1621-4674-9754-d63f43efd0c6 req-a299de32-40f4-4699-9309-e5bc0707001d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received event network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.605 2 DEBUG oslo_concurrency.lockutils [req-82395a04-1621-4674-9754-d63f43efd0c6 req-a299de32-40f4-4699-9309-e5bc0707001d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.605 2 DEBUG oslo_concurrency.lockutils [req-82395a04-1621-4674-9754-d63f43efd0c6 req-a299de32-40f4-4699-9309-e5bc0707001d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.609 2 DEBUG oslo_concurrency.lockutils [req-82395a04-1621-4674-9754-d63f43efd0c6 req-a299de32-40f4-4699-9309-e5bc0707001d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.609 2 DEBUG nova.compute.manager [req-82395a04-1621-4674-9754-d63f43efd0c6 req-a299de32-40f4-4699-9309-e5bc0707001d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] No waiting events found dispatching network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.609 2 WARNING nova.compute.manager [req-82395a04-1621-4674-9754-d63f43efd0c6 req-a299de32-40f4-4699-9309-e5bc0707001d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received unexpected event network-vif-plugged-b3d448d1-073b-4561-93d3-d26eb20839fe for instance with vm_state deleted and task_state None.#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.610 2 DEBUG nova.compute.manager [req-82395a04-1621-4674-9754-d63f43efd0c6 req-a299de32-40f4-4699-9309-e5bc0707001d 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Received event network-vif-deleted-b3d448d1-073b-4561-93d3-d26eb20839fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:40 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:28:40 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285390176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.839 2 DEBUG oslo_concurrency.processutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
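
The 0.521 s subprocess above is how the libvirt driver samples RBD pool capacity. A minimal sketch of the same oslo.concurrency call, parsing the cluster totals out of the JSON (the command line is copied from the log):

import json
from oslo_concurrency import processutils

# execute() returns (stdout, stderr) and raises ProcessExecutionError on a
# non-zero exit code; exit 0 is what the "returned: 0" line above reports.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
totals = json.loads(out)['stats']
print(totals['total_bytes'], totals['total_avail_bytes'])
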
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.848 2 DEBUG nova.compute.provider_tree [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.872 2 DEBUG nova.scheduler.client.report [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
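
Placement derives usable capacity from such an inventory as (total - reserved) * allocation_ratio per resource class; applied to the values logged above, as a minimal sketch:

# Usable capacity per resource class: (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
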
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.906 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]: {
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "osd_id": 1,
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "type": "bluestore"
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:    },
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "osd_id": 2,
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "type": "bluestore"
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:    },
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "osd_id": 0,
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:        "type": "bluestore"
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]:    }
Oct  3 11:28:40 compute-0 exciting_archimedes[535498]: }
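
This second report, from exciting_archimedes, has the shape of ceph-volume raw list --format json output: keyed by OSD uuid, with /dev/mapper paths that are the device-mapper names of the LVs in the earlier listing. A minimal sketch joining the two views by OSD id (values copied from the log):

# Values copied from the two reports; the /dev/mapper names are the
# device-mapper paths of the LVs listed by the earlier container.
raw = {
    '16cef594-0067-4499-9298-5d83edf70190': ('/dev/mapper/ceph_vg1-ceph_lv1', 1),
    '19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0': ('/dev/mapper/ceph_vg2-ceph_lv2', 2),
    '25b10821-47d4-4e0b-9b6d-d16a0463c4d0': ('/dev/mapper/ceph_vg0-ceph_lv0', 0),
}
lvm = {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1',
       2: '/dev/ceph_vg2/ceph_lv2'}
for osd_fsid, (device, osd_id) in sorted(raw.items(), key=lambda kv: kv[1][1]):
    print(f"osd.{osd_id}: {device} == dm path of {lvm[osd_id]} ({osd_fsid})")
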
Oct  3 11:28:40 compute-0 nova_compute[351685]: 2025-10-03 11:28:40.937 2 INFO nova.scheduler.client.report [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Deleted allocations for instance 1cd61d6b-0ef5-458f-88f0-44a4951ea368#033[00m
Oct  3 11:28:40 compute-0 systemd[1]: libpod-104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc.scope: Deactivated successfully.
Oct  3 11:28:40 compute-0 systemd[1]: libpod-104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc.scope: Consumed 1.122s CPU time.
Oct  3 11:28:40 compute-0 podman[535482]: 2025-10-03 11:28:40.953452227 +0000 UTC m=+1.354434480 container died 104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-37acbe389b959064161569d5772aad6088fd66eb5bed07554e888f4f820119f2-merged.mount: Deactivated successfully.
Oct  3 11:28:41 compute-0 nova_compute[351685]: 2025-10-03 11:28:41.018 2 DEBUG oslo_concurrency.lockutils [None req-be1684bb-9893-4db3-a7af-47f084863be5 a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "1cd61d6b-0ef5-458f-88f0-44a4951ea368" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:41 compute-0 podman[535482]: 2025-10-03 11:28:41.026988734 +0000 UTC m=+1.427970997 container remove 104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:28:41 compute-0 systemd[1]: libpod-conmon-104b5fa8a10cc4822ccff5ee300a90241bda931b8bf25816e9be0522ba2b04fc.scope: Deactivated successfully.
Oct  3 11:28:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:28:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:28:41 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:28:41 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
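The two handle_command entries show cephadm persisting its host inventory as plain `config-key set` monitor commands. A sketch of issuing the same command through the python3-rados binding; since the audit line elides the value, a placeholder input buffer stands in, and the client needs appropriate mon caps:

    # Sketch, not cephadm's actual code path: send "config-key set" to the
    # monitors via librados; the input buffer below is a placeholder value.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({"prefix": "config-key set",
                      "key": "mgr/cephadm/host.compute-0.devices.0"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'<inventory-json>')
    print(ret, outs)
    cluster.shutdown()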
Oct  3 11:28:41 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7920a2bc-13f7-4fed-8d92-d2cf20964bc2 does not exist
Oct  3 11:28:41 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cb80260a-25b7-4a0c-8a5a-50a68d8af553 does not exist
Oct  3 11:28:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:41.706 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:41.706 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:41.707 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
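These acquire/release triplets are the standard oslo.concurrency pattern; the "waited 0.001s" / "held 0.001s" timings are emitted by the lockutils wrapper itself. A minimal sketch of the same pattern:

    # Minimal sketch of the lockutils pattern behind the triplets above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Runs with the named in-process lock held; lockutils logs the
        # acquire/wait/hold timings at DEBUG, as seen in this journal.
        pass

    check_child_processes()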
Oct  3 11:28:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3646: 321 pgs: 321 active+clean; 318 MiB data, 439 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.059 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.060 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.060 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.061 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.061 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.062 2 INFO nova.compute.manager [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Terminating instance#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.063 2 DEBUG nova.compute.manager [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 11:28:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:28:42 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:28:42 compute-0 kernel: tapd84c98dc-84 (unregistering): left promiscuous mode
Oct  3 11:28:42 compute-0 NetworkManager[45015]: <info>  [1759490922.1634] device (tapd84c98dc-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:42 compute-0 ovn_controller[88471]: 2025-10-03T11:28:42Z|00184|binding|INFO|Releasing lport d84c98dc-8422-4b51-aaf4-2f9403a4649c from this chassis (sb_readonly=0)
Oct  3 11:28:42 compute-0 ovn_controller[88471]: 2025-10-03T11:28:42Z|00185|binding|INFO|Setting lport d84c98dc-8422-4b51-aaf4-2f9403a4649c down in Southbound
Oct  3 11:28:42 compute-0 ovn_controller[88471]: 2025-10-03T11:28:42Z|00186|binding|INFO|Removing iface tapd84c98dc-84 ovn-installed in OVS
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:42.206 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:87:f3 10.100.0.5'], port_security=['fa:16:3e:f9:87:f3 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fd405fd5-7402-43b4-8ab3-a52c18493a6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ea98f29bce64ae8ba81224645237ac7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea5a8a77-844c-46ab-af9d-4b5c2ea8e4c1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eb2c44-9631-42b8-a4d9-ab8785ccd098, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d84c98dc-8422-4b51-aaf4-2f9403a4649c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:28:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:42.208 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d84c98dc-8422-4b51-aaf4-2f9403a4649c in datapath 0cae90f5-24f0-45af-a3e3-a77dbb0a12af unbound from our chassis#033[00m
Oct  3 11:28:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:42.210 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cae90f5-24f0-45af-a3e3-a77dbb0a12af, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:42.213 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[e830b429-44a8-45fe-8c94-df64128c61c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:42.216 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af namespace which is not needed anymore#033[00m
Oct  3 11:28:42 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct  3 11:28:42 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 51.573s CPU time.
Oct  3 11:28:42 compute-0 systemd-machined[137653]: Machine qemu-12-instance-0000000c terminated.
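The "Consumed 51.573s CPU time" figure comes from systemd's per-unit CPU accounting; while a machine scope still exists, the same number can be read from its CPUUsageNSec property. A hedged sketch:

    # Sketch: read a scope's accumulated CPU time (nanoseconds) the way
    # systemd accounts it; unit name taken from the log lines above.
    import subprocess

    scope = "machine-qemu\\x2d12\\x2dinstance\\x2d0000000c.scope"
    out = subprocess.run(["systemctl", "show", "-p", "CPUUsageNSec", scope],
                         capture_output=True, text=True).stdout.strip()
    print(out)  # e.g. "CPUUsageNSec=51573000000" ~= 51.573 s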
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.313 2 INFO nova.virt.libvirt.driver [-] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Instance destroyed successfully.#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.314 2 DEBUG nova.objects.instance [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lazy-loading 'resources' on Instance uuid fd405fd5-7402-43b4-8ab3-a52c18493a6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.334 2 DEBUG nova.virt.libvirt.vif [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:26:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-447198342',display_name='tempest-TestNetworkBasicOps-server-447198342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-447198342',id=12,image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPPFGiS/9H/NKZhedR671AmR5bc0vWONukTWlO3x050R+mQUBddzuqLnrqAfqEtxXZBullE/O5LHjMA86f5AxVTlzZ9Lb5pyHDZ0PNtZgTOmhM6mUFx3RhorO28GwrSoFw==',key_name='tempest-TestNetworkBasicOps-176805461',keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:26:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ea98f29bce64ae8ba81224645237ac7',ramdisk_id='',reservation_id='r-u5uhlazy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a34ed8d-90df-4a16-a968-c59b7cafa2f1',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-975938423',owner_user_name='tempest-TestNetworkBasicOps-975938423-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:26:45Z,user_data=None,user_id='a62337822a774597b9068cf3aed6a92f',uuid=fd405fd5-7402-43b4-8ab3-a52c18493a6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.335 2 DEBUG nova.network.os_vif_util [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converting VIF {"id": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "address": "fa:16:3e:f9:87:f3", "network": {"id": "0cae90f5-24f0-45af-a3e3-a77dbb0a12af", "bridge": "br-int", "label": "tempest-network-smoke--1012052952", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ea98f29bce64ae8ba81224645237ac7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd84c98dc-84", "ovs_interfaceid": "d84c98dc-8422-4b51-aaf4-2f9403a4649c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.336 2 DEBUG nova.network.os_vif_util [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=d84c98dc-8422-4b51-aaf4-2f9403a4649c,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd84c98dc-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.337 2 DEBUG os_vif [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=d84c98dc-8422-4b51-aaf4-2f9403a4649c,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd84c98dc-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd84c98dc-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.348 2 INFO os_vif [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=d84c98dc-8422-4b51-aaf4-2f9403a4649c,network=Network(0cae90f5-24f0-45af-a3e3-a77dbb0a12af),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd84c98dc-84')#033[00m
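The DelPortCommand above is os-vif driving ovsdbapp; a rough standalone equivalent looks like this (the OVSDB socket path is an assumption for this sketch):

    # Rough equivalent of the logged DelPortCommand, via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port('tapd84c98dc-84', bridge='br-int',
                 if_exists=True).execute(check_error=True)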
Oct  3 11:28:42 compute-0 neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af[531137]: [NOTICE]   (531186) : haproxy version is 2.8.14-c23fe91
Oct  3 11:28:42 compute-0 neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af[531137]: [NOTICE]   (531186) : path to executable is /usr/sbin/haproxy
Oct  3 11:28:42 compute-0 neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af[531137]: [WARNING]  (531186) : Exiting Master process...
Oct  3 11:28:42 compute-0 neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af[531137]: [WARNING]  (531186) : Exiting Master process...
Oct  3 11:28:42 compute-0 neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af[531137]: [ALERT]    (531186) : Current worker (531204) exited with code 143 (Terminated)
Oct  3 11:28:42 compute-0 neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af[531137]: [WARNING]  (531186) : All workers exited. Exiting... (0)
Oct  3 11:28:42 compute-0 systemd[1]: libpod-f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada.scope: Deactivated successfully.
Oct  3 11:28:42 compute-0 podman[535657]: 2025-10-03 11:28:42.500329013 +0000 UTC m=+0.111454572 container died f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.836 2 DEBUG nova.compute.manager [req-d08c048b-e646-4ea9-a25e-f652d4d398b5 req-e55e4f60-c42f-4cb5-816b-e42c66760562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-vif-unplugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.837 2 DEBUG oslo_concurrency.lockutils [req-d08c048b-e646-4ea9-a25e-f652d4d398b5 req-e55e4f60-c42f-4cb5-816b-e42c66760562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.838 2 DEBUG oslo_concurrency.lockutils [req-d08c048b-e646-4ea9-a25e-f652d4d398b5 req-e55e4f60-c42f-4cb5-816b-e42c66760562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.839 2 DEBUG oslo_concurrency.lockutils [req-d08c048b-e646-4ea9-a25e-f652d4d398b5 req-e55e4f60-c42f-4cb5-816b-e42c66760562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.839 2 DEBUG nova.compute.manager [req-d08c048b-e646-4ea9-a25e-f652d4d398b5 req-e55e4f60-c42f-4cb5-816b-e42c66760562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] No waiting events found dispatching network-vif-unplugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.840 2 DEBUG nova.compute.manager [req-d08c048b-e646-4ea9-a25e-f652d4d398b5 req-e55e4f60-c42f-4cb5-816b-e42c66760562 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-vif-unplugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
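The req-d08c048b event arrives because Neutron, on seeing the port go down, notifies Nova through the os-server-external-events API. A hedged sketch of that call, with placeholder endpoint and token, not Neutron's actual client code:

    # Sketch of the notification that produced the event above.
    import requests

    body = {"events": [{
        "name": "network-vif-unplugged",
        "server_uuid": "fd405fd5-7402-43b4-8ab3-a52c18493a6e",
        "tag": "d84c98dc-8422-4b51-aaf4-2f9403a4649c",
    }]}
    resp = requests.post(
        "http://nova-api.example:8774/v2.1/os-server-external-events",
        json=body, headers={"X-Auth-Token": "<keystone-token>"})
    print(resp.status_code)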
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada-userdata-shm.mount: Deactivated successfully.
Oct  3 11:28:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e913b6595b5ff98fbc8d5b5132e43c4498aecbcaa5829b9968753d6c93f43c32-merged.mount: Deactivated successfully.
Oct  3 11:28:42 compute-0 nova_compute[351685]: 2025-10-03 11:28:42.875 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Oct  3 11:28:42 compute-0 podman[535657]: 2025-10-03 11:28:42.939758227 +0000 UTC m=+0.550883796 container cleanup f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  3 11:28:42 compute-0 systemd[1]: libpod-conmon-f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada.scope: Deactivated successfully.
Oct  3 11:28:43 compute-0 nova_compute[351685]: 2025-10-03 11:28:43.183 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:28:43 compute-0 nova_compute[351685]: 2025-10-03 11:28:43.185 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:28:43 compute-0 nova_compute[351685]: 2025-10-03 11:28:43.185 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:28:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:43 compute-0 podman[535691]: 2025-10-03 11:28:43.543723793 +0000 UTC m=+0.561627530 container remove f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.556 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[ec256bc6-9c4d-4001-93f9-60b803cf7018]: (4, ('Fri Oct  3 11:28:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af (f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada)\nf54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada\nFri Oct  3 11:28:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af (f54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada)\nf54cccda760ad0ed4821575473bb208b39c6a6d5b3b7206112e3cf4a7a38cada\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.560 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[31740fca-1d71-4e82-8ee5-1d309adeead1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.562 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cae90f5-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:28:43 compute-0 kernel: tap0cae90f5-20: left promiscuous mode
Oct  3 11:28:43 compute-0 nova_compute[351685]: 2025-10-03 11:28:43.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:43 compute-0 nova_compute[351685]: 2025-10-03 11:28:43.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.586 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[471906d9-0d85-4f65-a16e-7de457913cc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.625 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[502a6436-3cd7-411d-a2d3-858256bb50cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.628 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4eeee1-d651-46ec-8092-d8f4ca4f5844]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.647 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8ca4f5-3850-4442-ae92-4754f5a78384]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 998602, 'reachable_time': 43812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 535705, 'error': None, 'target': 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:28:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d0cae90f5\x2d24f0\x2d45af\x2da3e3\x2da77dbb0a12af.mount: Deactivated successfully.
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.650 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:28:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:43.650 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[76874fb4-b368-4f71-a79e-7d4bde08b88d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
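The privsep replies above correspond to a link dump inside the ovnmeta namespace followed by its removal; in terms of pyroute2, the library neutron's privileged ip_lib wraps, this is approximately:

    # Approximate pyroute2 equivalent of the namespace teardown above.
    # Destructive if run for real; names taken from the log.
    from pyroute2 import NetNS, netns

    name = 'ovnmeta-0cae90f5-24f0-45af-a3e3-a77dbb0a12af'
    with NetNS(name) as ns:
        for link in ns.get_links():   # RTM_NEWLINK dumps like the one logged
            print(link.get_attr('IFLA_IFNAME'), link['state'])
    netns.remove(name)                # "Namespace ... deleted"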
Oct  3 11:28:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3647: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 1.4 MiB/s wr, 80 op/s
Oct  3 11:28:44 compute-0 nova_compute[351685]: 2025-10-03 11:28:44.185 2 INFO nova.virt.libvirt.driver [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Deleting instance files /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e_del#033[00m
Oct  3 11:28:44 compute-0 nova_compute[351685]: 2025-10-03 11:28:44.187 2 INFO nova.virt.libvirt.driver [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Deletion of /var/lib/nova/instances/fd405fd5-7402-43b4-8ab3-a52c18493a6e_del complete#033[00m
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.224440) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490924224502, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 2072, "num_deletes": 251, "total_data_size": 3351628, "memory_usage": 3412928, "flush_reason": "Manual Compaction"}
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490924249357, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 3284945, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71659, "largest_seqno": 73730, "table_properties": {"data_size": 3275539, "index_size": 5899, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19358, "raw_average_key_size": 20, "raw_value_size": 3256716, "raw_average_value_size": 3410, "num_data_blocks": 261, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490707, "oldest_key_time": 1759490707, "file_creation_time": 1759490924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 25917 microseconds, and 14996 cpu microseconds.
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:28:44 compute-0 nova_compute[351685]: 2025-10-03 11:28:44.251 2 INFO nova.compute.manager [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Took 2.19 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:28:44 compute-0 nova_compute[351685]: 2025-10-03 11:28:44.253 2 DEBUG oslo.service.loopingcall [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:28:44 compute-0 nova_compute[351685]: 2025-10-03 11:28:44.253 2 DEBUG nova.compute.manager [-] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:28:44 compute-0 nova_compute[351685]: 2025-10-03 11:28:44.254 2 DEBUG nova.network.neutron [-] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.250359) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 3284945 bytes OK
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.250924) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.258041) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.258071) EVENT_LOG_v1 {"time_micros": 1759490924258063, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.258294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 3342921, prev total WAL file size 3342921, number of live WAL files 2.
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.261266) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(3207KB)], [176(8913KB)]
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490924261347, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 12412309, "oldest_snapshot_seqno": -1}
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 8280 keys, 10652190 bytes, temperature: kUnknown
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490924318990, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 10652190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10600900, "index_size": 29481, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20741, "raw_key_size": 217972, "raw_average_key_size": 26, "raw_value_size": 10454914, "raw_average_value_size": 1262, "num_data_blocks": 1157, "num_entries": 8280, "num_filter_entries": 8280, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.319309) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10652190 bytes
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.321780) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.0 rd, 184.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.7 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 8799, records dropped: 519 output_compression: NoCompression
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.321801) EVENT_LOG_v1 {"time_micros": 1759490924321792, "job": 110, "event": "compaction_finished", "compaction_time_micros": 57734, "compaction_time_cpu_micros": 30432, "output_level": 6, "num_output_files": 1, "total_output_size": 10652190, "num_input_records": 8799, "num_output_records": 8280, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490924322578, "job": 110, "event": "table_file_deletion", "file_number": 178}
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490924324463, "job": 110, "event": "table_file_deletion", "file_number": 176}
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.261137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.324647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.324651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.324653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.324655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:28:44 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:28:44.324657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
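The EVENT_LOG_v1 records the mon's RocksDB emits are machine-readable JSON, so the flush/compaction figures (e.g. the 184.5 MB/s write rate of job 110) can be recomputed straight from the journal. A small self-contained parser, fed one abbreviated sample taken from the lines above:

    # Parse RocksDB EVENT_LOG_v1 records out of journal text and derive
    # the compaction write rate reported in the summary line above.
    import json
    import re

    pattern = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    def rocksdb_events(lines):
        for line in lines:
            m = pattern.search(line)
            if m:
                yield json.loads(m.group(1))

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1759490924321792, '
              '"job": 110, "event": "compaction_finished", '
              '"compaction_time_micros": 57734, "total_output_size": 10652190}')
    for ev in rocksdb_events([sample]):
        if ev["event"] == "compaction_finished":
            # bytes per microsecond == MB/s
            rate = ev["total_output_size"] / ev["compaction_time_micros"]
            print(f'job {ev["job"]}: {rate:.1f} MB/s written')  # 184.5 MB/s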
Oct  3 11:28:44 compute-0 nova_compute[351685]: 2025-10-03 11:28:44.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.218 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updating instance_info_cache with network_info: [{"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.245 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.246 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.293 2 DEBUG nova.compute.manager [req-c6e6fa70-065a-4193-b4ef-ccc29d9433ea req-587f9ec0-0742-4bcd-86ee-bef40c8120ea 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.294 2 DEBUG oslo_concurrency.lockutils [req-c6e6fa70-065a-4193-b4ef-ccc29d9433ea req-587f9ec0-0742-4bcd-86ee-bef40c8120ea 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.294 2 DEBUG oslo_concurrency.lockutils [req-c6e6fa70-065a-4193-b4ef-ccc29d9433ea req-587f9ec0-0742-4bcd-86ee-bef40c8120ea 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.295 2 DEBUG oslo_concurrency.lockutils [req-c6e6fa70-065a-4193-b4ef-ccc29d9433ea req-587f9ec0-0742-4bcd-86ee-bef40c8120ea 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.296 2 DEBUG nova.compute.manager [req-c6e6fa70-065a-4193-b4ef-ccc29d9433ea req-587f9ec0-0742-4bcd-86ee-bef40c8120ea 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] No waiting events found dispatching network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.296 2 WARNING nova.compute.manager [req-c6e6fa70-065a-4193-b4ef-ccc29d9433ea req-587f9ec0-0742-4bcd-86ee-bef40c8120ea 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received unexpected event network-vif-plugged-d84c98dc-8422-4b51-aaf4-2f9403a4649c for instance with vm_state active and task_state deleting.#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.392 2 DEBUG nova.network.neutron [-] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.414 2 INFO nova.compute.manager [-] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Took 1.16 seconds to deallocate network for instance.#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.468 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.469 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:28:45 compute-0 nova_compute[351685]: 2025-10-03 11:28:45.553 2 DEBUG oslo_concurrency.processutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:28:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3648: 321 pgs: 321 active+clean; 244 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 265 KiB/s rd, 810 KiB/s wr, 83 op/s
Oct  3 11:28:46 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:28:46 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324233198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:28:46 compute-0 nova_compute[351685]: 2025-10-03 11:28:46.035 2 DEBUG oslo_concurrency.processutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
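Here nova shells out to `ceph df --format=json` to probe pool capacity, evidently feeding the DISK_GB inventory reported just below. A sketch of the same probe (cluster-totals field names as in recent Ceph releases):

    # Sketch of the capacity probe nova just ran; requires the same
    # client.openstack keyring referenced on the command line above.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print("total %d GiB, avail %d GiB" %
          (stats["total_bytes"] // gib, stats["total_avail_bytes"] // gib))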
Oct  3 11:28:46 compute-0 nova_compute[351685]: 2025-10-03 11:28:46.047 2 DEBUG nova.compute.provider_tree [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:28:46 compute-0 nova_compute[351685]: 2025-10-03 11:28:46.064 2 DEBUG nova.scheduler.client.report [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:28:46 compute-0 nova_compute[351685]: 2025-10-03 11:28:46.090 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:28:46 compute-0 nova_compute[351685]: 2025-10-03 11:28:46.127 2 INFO nova.scheduler.client.report [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Deleted allocations for instance fd405fd5-7402-43b4-8ab3-a52c18493a6e#033[00m
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:28:46 compute-0 nova_compute[351685]: 2025-10-03 11:28:46.215 2 DEBUG oslo_concurrency.lockutils [None req-736db6f6-dc87-421d-b7a4-0c0ded92beaf a62337822a774597b9068cf3aed6a92f 5ea98f29bce64ae8ba81224645237ac7 - - default default] Lock "fd405fd5-7402-43b4-8ab3-a52c18493a6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:28:46
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'images', '.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.log']
Oct  3 11:28:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:28:47 compute-0 nova_compute[351685]: 2025-10-03 11:28:47.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:47 compute-0 nova_compute[351685]: 2025-10-03 11:28:47.562 2 DEBUG nova.compute.manager [req-2455dd52-9f37-42a1-855c-5fd7d54ce7a2 req-0f1e8685-ee7a-4e6b-8b65-fc4c3c146f98 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Received event network-vif-deleted-d84c98dc-8422-4b51-aaf4-2f9403a4649c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  3 11:28:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3649: 321 pgs: 321 active+clean; 244 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 7.7 KiB/s wr, 46 op/s
Oct  3 11:28:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:48 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:28:48.513 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  3 11:28:48 compute-0 nova_compute[351685]: 2025-10-03 11:28:48.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:49 compute-0 ovn_controller[88471]: 2025-10-03T11:28:49Z|00187|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:28:49 compute-0 ovn_controller[88471]: 2025-10-03T11:28:49Z|00188|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:28:49 compute-0 nova_compute[351685]: 2025-10-03 11:28:49.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3650: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 8.0 KiB/s wr, 58 op/s
Oct  3 11:28:49 compute-0 nova_compute[351685]: 2025-10-03 11:28:49.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3651: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.5 KiB/s wr, 42 op/s
Oct  3 11:28:52 compute-0 nova_compute[351685]: 2025-10-03 11:28:52.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:52 compute-0 nova_compute[351685]: 2025-10-03 11:28:52.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:28:52 compute-0 nova_compute[351685]: 2025-10-03 11:28:52.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:28:52 compute-0 nova_compute[351685]: 2025-10-03 11:28:52.880 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759490917.877062, 1cd61d6b-0ef5-458f-88f0-44a4951ea368 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:28:52 compute-0 nova_compute[351685]: 2025-10-03 11:28:52.881 2 INFO nova.compute.manager [-] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] VM Stopped (Lifecycle Event)
Oct  3 11:28:52 compute-0 nova_compute[351685]: 2025-10-03 11:28:52.932 2 DEBUG nova.compute.manager [None req-52ad4513-d00c-44bc-9d98-33dbd0f352a0 - - - - - -] [instance: 1cd61d6b-0ef5-458f-88f0-44a4951ea368] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:28:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3652: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Oct  3 11:28:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:28:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2260440920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:28:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:28:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2260440920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:28:54 compute-0 nova_compute[351685]: 2025-10-03 11:28:54.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:54 compute-0 podman[535734]: 2025-10-03 11:28:54.851834801 +0000 UTC m=+0.098736625 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:28:54 compute-0 podman[535732]: 2025-10-03 11:28:54.867471863 +0000 UTC m=+0.125057120 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  3 11:28:54 compute-0 podman[535731]: 2025-10-03 11:28:54.86769977 +0000 UTC m=+0.126299109 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:28:54 compute-0 podman[535733]: 2025-10-03 11:28:54.871660657 +0000 UTC m=+0.116119862 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 11:28:54 compute-0 podman[535744]: 2025-10-03 11:28:54.877548506 +0000 UTC m=+0.116806975 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 11:28:54 compute-0 podman[535755]: 2025-10-03 11:28:54.897214776 +0000 UTC m=+0.124159271 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:28:54 compute-0 podman[535747]: 2025-10-03 11:28:54.905848333 +0000 UTC m=+0.139035817 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  3 11:28:54 compute-0 nova_compute[351685]: 2025-10-03 11:28:54.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3653: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013093962647326952 of space, bias 1.0, pg target 0.39281887941980853 quantized to 32 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:28:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:28:57 compute-0 nova_compute[351685]: 2025-10-03 11:28:57.310 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759490922.3084896, fd405fd5-7402-43b4-8ab3-a52c18493a6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  3 11:28:57 compute-0 nova_compute[351685]: 2025-10-03 11:28:57.311 2 INFO nova.compute.manager [-] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] VM Stopped (Lifecycle Event)
Oct  3 11:28:57 compute-0 nova_compute[351685]: 2025-10-03 11:28:57.337 2 DEBUG nova.compute.manager [None req-4d6820fe-45d1-4dff-ac77-6f7448d6339f - - - - - -] [instance: fd405fd5-7402-43b4-8ab3-a52c18493a6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  3 11:28:57 compute-0 nova_compute[351685]: 2025-10-03 11:28:57.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:28:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3654: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 341 B/s wr, 12 op/s
Oct  3 11:28:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:28:59 compute-0 nova_compute[351685]: 2025-10-03 11:28:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:28:59 compute-0 podman[157165]: time="2025-10-03T11:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:28:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:28:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9592 "" "Go-http-client/1.1"
Oct  3 11:28:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3655: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 341 B/s wr, 12 op/s
Oct  3 11:28:59 compute-0 nova_compute[351685]: 2025-10-03 11:28:59.885 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:28:59 compute-0 nova_compute[351685]: 2025-10-03 11:28:59.886 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:28:59 compute-0 nova_compute[351685]: 2025-10-03 11:28:59.886 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:28:59 compute-0 nova_compute[351685]: 2025-10-03 11:28:59.887 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:28:59 compute-0 nova_compute[351685]: 2025-10-03 11:28:59.887 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:28:59 compute-0 nova_compute[351685]: 2025-10-03 11:28:59.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:29:00 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44881454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.375 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.592 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.593 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.594 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.601 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.602 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.992 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.993 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3405MB free_disk=59.909732818603516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.994 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:29:00 compute-0 nova_compute[351685]: 2025-10-03 11:29:00.994 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.128 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.130 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.131 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.132 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.149 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.168 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.170 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.192 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.248 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.323 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:01 compute-0 openstack_network_exporter[367524]: ERROR   11:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:29:01 compute-0 openstack_network_exporter[367524]: ERROR   11:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:29:01 compute-0 openstack_network_exporter[367524]: ERROR   11:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:29:01 compute-0 openstack_network_exporter[367524]: ERROR   11:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:29:01 compute-0 openstack_network_exporter[367524]: ERROR   11:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:29:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:29:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1793259485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.814 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.822 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:29:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3656: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:01 compute-0 nova_compute[351685]: 2025-10-03 11:29:01.925 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:29:02 compute-0 nova_compute[351685]: 2025-10-03 11:29:02.167 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:29:02 compute-0 nova_compute[351685]: 2025-10-03 11:29:02.169 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:29:02 compute-0 nova_compute[351685]: 2025-10-03 11:29:02.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:02 compute-0 nova_compute[351685]: 2025-10-03 11:29:02.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3657: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:05 compute-0 nova_compute[351685]: 2025-10-03 11:29:05.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:05 compute-0 nova_compute[351685]: 2025-10-03 11:29:05.170 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:29:05 compute-0 nova_compute[351685]: 2025-10-03 11:29:05.171 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:29:05 compute-0 nova_compute[351685]: 2025-10-03 11:29:05.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:29:05 compute-0 nova_compute[351685]: 2025-10-03 11:29:05.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:29:05 compute-0 podman[535914]: 2025-10-03 11:29:05.809186647 +0000 UTC m=+0.072397401 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:29:05 compute-0 podman[535916]: 2025-10-03 11:29:05.814219728 +0000 UTC m=+0.065749178 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 11:29:05 compute-0 podman[535915]: 2025-10-03 11:29:05.838751755 +0000 UTC m=+0.086892766 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-container, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9)
Oct  3 11:29:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3658: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:06 compute-0 nova_compute[351685]: 2025-10-03 11:29:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:29:06 compute-0 nova_compute[351685]: 2025-10-03 11:29:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:29:06 compute-0 nova_compute[351685]: 2025-10-03 11:29:06.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:07 compute-0 nova_compute[351685]: 2025-10-03 11:29:07.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3659: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.545992) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490948546039, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 447, "num_deletes": 250, "total_data_size": 355459, "memory_usage": 363632, "flush_reason": "Manual Compaction"}
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490948597596, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 272729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73731, "largest_seqno": 74177, "table_properties": {"data_size": 270321, "index_size": 507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6576, "raw_average_key_size": 20, "raw_value_size": 265467, "raw_average_value_size": 819, "num_data_blocks": 23, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490925, "oldest_key_time": 1759490925, "file_creation_time": 1759490948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 51672 microseconds, and 1651 cpu microseconds.
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.597661) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 272729 bytes OK
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.597680) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.629906) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.629949) EVENT_LOG_v1 {"time_micros": 1759490948629939, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.629973) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 352752, prev total WAL file size 352752, number of live WAL files 2.
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.630905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323534' seq:72057594037927935, type:22 .. '6D6772737461740033353035' seq:0, type:0; will stop at (end)
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(266KB)], [179(10MB)]
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490948630947, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10924919, "oldest_snapshot_seqno": -1}
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 8103 keys, 7715657 bytes, temperature: kUnknown
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490948861859, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 7715657, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7670168, "index_size": 24128, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 214483, "raw_average_key_size": 26, "raw_value_size": 7531931, "raw_average_value_size": 929, "num_data_blocks": 934, "num_entries": 8103, "num_filter_entries": 8103, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759490948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.862106) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 7715657 bytes
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.924312) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.3 rd, 33.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(68.3) write-amplify(28.3) OK, records in: 8604, records dropped: 501 output_compression: NoCompression
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.924354) EVENT_LOG_v1 {"time_micros": 1759490948924339, "job": 112, "event": "compaction_finished", "compaction_time_micros": 230994, "compaction_time_cpu_micros": 32796, "output_level": 6, "num_output_files": 1, "total_output_size": 7715657, "num_input_records": 8604, "num_output_records": 8103, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490948924596, "job": 112, "event": "table_file_deletion", "file_number": 181}
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759490948926479, "job": 112, "event": "table_file_deletion", "file_number": 179}
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.630816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.926659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.926663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.926665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.926667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:29:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:29:08.926669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
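The rocksdb EVENT_LOG_v1 records above are machine-parseable JSON embedded after a fixed prefix, so the compaction summary at 11:29:08.924312 can be re-derived from them. A small sketch that extracts those payloads and reproduces the throughput and amplification figures for JOB 112 (bytes per microsecond equals decimal MB/s):

# Minimal sketch: pull RocksDB EVENT_LOG_v1 payloads out of journal lines
# and re-derive the figures reported in the compaction summary above.
import json
import re

EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def events(lines):
    """Yield parsed EVENT_LOG_v1 dicts, e.g. events(open('messages'))."""
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

# Figures from JOB 112: input_data_size, total_output_size,
# compaction_time_micros, plus the flushed L0 table #181 size.
read_b, out_b, t_us = 10924919, 7715657, 230994
l0_in = 272729
print(f"rd {read_b / t_us:.1f} MB/s, wr {out_b / t_us:.1f} MB/s")   # 47.3 rd, 33.4 wr
print(f"write-amplify {out_b / l0_in:.1f}")                         # 28.3
print(f"read-write-amplify {(read_b + out_b) / l0_in:.1f}")         # 68.3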
Oct  3 11:29:09 compute-0 ovn_controller[88471]: 2025-10-03T11:29:09Z|00189|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:29:09 compute-0 ovn_controller[88471]: 2025-10-03T11:29:09Z|00190|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:29:09 compute-0 nova_compute[351685]: 2025-10-03 11:29:09.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3660: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:10 compute-0 nova_compute[351685]: 2025-10-03 11:29:10.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3661: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:12 compute-0 nova_compute[351685]: 2025-10-03 11:29:12.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:13 compute-0 ovn_controller[88471]: 2025-10-03T11:29:13Z|00191|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:29:13 compute-0 ovn_controller[88471]: 2025-10-03T11:29:13Z|00192|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:29:13 compute-0 nova_compute[351685]: 2025-10-03 11:29:13.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3662: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:15 compute-0 nova_compute[351685]: 2025-10-03 11:29:15.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3663: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:29:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:29:17 compute-0 nova_compute[351685]: 2025-10-03 11:29:17.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3664: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:18 compute-0 ovn_controller[88471]: 2025-10-03T11:29:18Z|00193|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:29:18 compute-0 ovn_controller[88471]: 2025-10-03T11:29:18Z|00194|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
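Note that the same two lports (71787e87-... and e79720f4-...) are released every few seconds (entries 00189 through 00194), which points at port-binding flapping rather than six distinct ports going away. A quick triage sketch that counts release events per lport from a saved journal dump; the /var/log/messages path is an assumption for illustration:

# Sketch: count "Releasing lport" events per port to spot flapping.
import re
from collections import Counter

RELEASE_RE = re.compile(r"Releasing lport ([0-9a-f-]{36})")

with open("/var/log/messages") as fh:   # assumed dump location
    counts = Counter(m.group(1) for m in map(RELEASE_RE.search, fh) if m)

for lport, n in counts.most_common(5):
    print(n, lport)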
Oct  3 11:29:18 compute-0 nova_compute[351685]: 2025-10-03 11:29:18.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3665: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:20 compute-0 nova_compute[351685]: 2025-10-03 11:29:20.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3666: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:22 compute-0 nova_compute[351685]: 2025-10-03 11:29:22.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3667: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:25 compute-0 nova_compute[351685]: 2025-10-03 11:29:25.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:25 compute-0 podman[535987]: 2025-10-03 11:29:25.855712096 +0000 UTC m=+0.102859839 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:29:25 compute-0 podman[535980]: 2025-10-03 11:29:25.861656646 +0000 UTC m=+0.110575785 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:29:25 compute-0 podman[535979]: 2025-10-03 11:29:25.865788028 +0000 UTC m=+0.125045468 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Oct  3 11:29:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3668: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:25 compute-0 podman[535978]: 2025-10-03 11:29:25.895318235 +0000 UTC m=+0.157700985 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:29:25 compute-0 podman[535992]: 2025-10-03 11:29:25.897591708 +0000 UTC m=+0.130969288 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 11:29:25 compute-0 podman[536003]: 2025-10-03 11:29:25.91201588 +0000 UTC m=+0.139264325 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:29:25 compute-0 podman[536005]: 2025-10-03 11:29:25.920604876 +0000 UTC m=+0.146308351 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible)
Oct  3 11:29:27 compute-0 nova_compute[351685]: 2025-10-03 11:29:27.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3669: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:28 compute-0 nova_compute[351685]: 2025-10-03 11:29:28.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:29 compute-0 podman[157165]: time="2025-10-03T11:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:29:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:29:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9594 "" "Go-http-client/1.1"
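The two GET requests above are the libpod REST API being polled over the podman socket (the podman_exporter record further down mounts /run/podman/podman.sock for exactly this). A minimal sketch issuing the same containers/json call over the Unix socket with only the standard library; the socket path is taken from that exporter config and may differ elsewhere:

# Sketch: query the libpod API over podman's Unix socket.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a Unix socket, enough for the libpod API."""
    def __init__(self, sock_path):
        super().__init__("localhost")
        self.sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for c in json.loads(conn.getresponse().read()):
    print(c["Names"], c["State"])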
Oct  3 11:29:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3670: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:30 compute-0 nova_compute[351685]: 2025-10-03 11:29:30.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:31 compute-0 openstack_network_exporter[367524]: ERROR   11:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:29:31 compute-0 openstack_network_exporter[367524]: ERROR   11:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:29:31 compute-0 openstack_network_exporter[367524]: ERROR   11:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:29:31 compute-0 openstack_network_exporter[367524]: ERROR   11:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:29:31 compute-0 openstack_network_exporter[367524]: ERROR   11:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
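The appctl errors above come from the exporter looking for OVS/OVN daemon control sockets (rundir files conventionally named <daemon>.<pid>.ctl). On a compute node ovn-northd does not run, so its socket is legitimately absent and these errors are expected noise rather than a fault. A quick check of what is actually present, assuming the conventional rundirs that the exporter's own volume mounts map into its container:

# Sketch: list the control sockets appctl would look for.
from pathlib import Path

for rundir in (Path("/run/openvswitch"), Path("/run/ovn")):
    ctls = sorted(rundir.glob("*.ctl")) if rundir.is_dir() else []
    print(rundir, "->", [p.name for p in ctls] or "no control sockets")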
Oct  3 11:29:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3671: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:32 compute-0 nova_compute[351685]: 2025-10-03 11:29:32.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:33 compute-0 nova_compute[351685]: 2025-10-03 11:29:33.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3672: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:35 compute-0 nova_compute[351685]: 2025-10-03 11:29:35.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3673: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:36 compute-0 podman[536115]: 2025-10-03 11:29:36.803914528 +0000 UTC m=+0.066514193 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:29:36 compute-0 podman[536117]: 2025-10-03 11:29:36.819937881 +0000 UTC m=+0.076948127 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:29:36 compute-0 podman[536116]: 2025-10-03 11:29:36.826643606 +0000 UTC m=+0.083781486 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, name=ubi9)
Oct  3 11:29:37 compute-0 nova_compute[351685]: 2025-10-03 11:29:37.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3674: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:38 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:29:38.550 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  3 11:29:38 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:29:38.550 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
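The Matched UPDATE record shows the agent's SbGlobalUpdateEvent firing on an SB_Global nb_cfg bump (14 to 15); the agent then deliberately waits a few seconds before writing to the Chassis table so that many nodes reacting to the same bump do not update at once. A minimal sketch of such an event class against ovsdbapp's RowEvent, mirroring the constructor fields printed in the log; the run() body is illustrative, not neutron's implementation:

# Sketch of an ovsdbapp row event matching SB_Global updates.
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    """Fires when SB_Global changes, e.g. on an nb_cfg bump."""
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
        self.event_name = 'SbGlobalUpdateEvent'

    def run(self, event, row, old):
        # Illustrative: the real agent schedules a delayed chassis
        # table update here ("Delaying updating chassis table ...").
        print('SB_Global nb_cfg is now', row.nb_cfg)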
Oct  3 11:29:38 compute-0 nova_compute[351685]: 2025-10-03 11:29:38.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3675: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:40 compute-0 nova_compute[351685]: 2025-10-03 11:29:40.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.907 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.908 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.915 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.920 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'name': 'te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
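[editor's note] discover_libvirt_polling (discovery.py:315) emits one dict per running libvirt domain; the two records above cover both instances on this host. A hypothetical builder for such a record, with the field names copied from the log and everything else assumed:

    def instance_record(dom):
        # dom is assumed to carry the Nova metadata attached to the
        # libvirt domain; the keys below are exactly those logged.
        return {
            'id': dom['uuid'],
            'name': dom['name'],
            'flavor': dom['flavor'],            # id, name, vcpus, ram, disk, ...
            'image': {'id': dom['image_id']},
            'os_type': 'hvm',
            'architecture': 'x86_64',
            'OS-EXT-STS:vm_state': 'running',
            'status': 'active',
            'metadata': dom.get('metadata', {}),
        }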
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.920 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.921 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.921 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:29:40.921378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.926 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.930 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
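[editor's note] The lines above trace one complete cycle for network.outgoing.packets.drop: discovery, coordination check, heartbeat, per-instance samples, completion. A condensed sketch of that control flow; the method names and manager.py line numbers are taken from the log, the bodies are assumptions:

    def _internal_pollster_run(agent, reg):
        # manager.py:294 -- run discovery for this pollster's source
        resources = agent.discover(reg['pollster'].obj.default_discovery)
        # manager.py:333/355 -- no coordination group is configured here,
        # so no hashring filtering takes place
        if reg.get('coordination_group') is not None:
            resources = agent.filter_via_hashring(resources)
        # manager.py:636 -- record liveness before sampling
        agent.heartbeat(reg['pollster'].name)
        for sample in reg['pollster'].obj.get_samples(
                agent, reg['cache'], resources):
            print(sample.resource_id, sample.name, sample.volume)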
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.930 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.931 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.931 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.931 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.932 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:29:40.931058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.932 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.933 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:29:40.932628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
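[editor's note] Note the two interleaving process ids: 14 posts "Pollster heartbeat update" while 12 logs "Updated heartbeat" from _update_status (manager.py:502). That reads like a handoff from the polling worker to a status updater; a queue-based sketch of such a handoff, which is an assumption beyond the two log messages themselves:

    import datetime
    import multiprocessing

    heartbeats = multiprocessing.Queue()

    def heartbeat(name):
        # polling worker side (pid 14 in this log)
        heartbeats.put((name, datetime.datetime.now()))

    def _update_status():
        # status updater side (pid 12 in this log)
        name, ts = heartbeats.get()
        print(f"Updated heartbeat for {name} ({ts.isoformat()})")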
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.954 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.955 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.955 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.969 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.970 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.970 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
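[editor's note] The capacity volumes above line up with the flavors in the discovery records: 1073741824 bytes is exactly 1 GiB, one sample per 1 GB flavor disk (root plus ephemeral for the m1.small instance, root only for m1.nano), plus one small extra device per instance that looks like a config drive. A quick check of that reading, which is an interpretation rather than anything the log states:

    GiB = 1024 ** 3
    assert 1073741824 == 1 * GiB        # each 1 GB flavor disk reports 1 GiB
    print(485376 / 1024, "KiB")         # 474.0 KiB -- plausibly a config drive
    print(509952 / 1024, "KiB")         # 498.0 KiB -- likewise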
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.971 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.971 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.971 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.971 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.972 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:40.972 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:29:40.972012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.019 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.020 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.041 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 30612480 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.042 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.042 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.044 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 2539266810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.044 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 146824610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:29:41.043162) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.045 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.045 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.045 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.045 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.046 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.046 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:29:41.045282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.047 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.048 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.049 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:29:41.047520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.049 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:29:41.050024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.051 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.051 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.052 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.052 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.053 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.054 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.054 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:29:41.053285) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:29:41.055645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.075 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.095 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.096 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.096 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.096 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.097 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.097 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.097 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 10666610362 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.097 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.098 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.099 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.099 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.099 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.099 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.100 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:29:41.096738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:29:41.098983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:29:41.101488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.102 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
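[editor's note] The .delta meter reports 0 above because each cycle's reading is differenced against the value stored in the pollster history from the previous cycle. A sketch of that bookkeeping, assumed from the delta-meter semantics rather than stated anywhere in the log:

    def delta_sample(history, resource_id, meter, current_value):
        key = (resource_id, meter)
        previous = history.get(key)
        history[key] = current_value
        # First cycle has no baseline; afterwards report the difference.
        return None if previous is None else current_value - previous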
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
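[editor's note] Unlike the cumulative meters, the .rate pollster is skipped outright here (manager.py:321) because discovery produced no resources that had not already been handled this cycle. A sketch of that guard, assumed from the message text:

    import logging
    LOG = logging.getLogger(__name__)

    def maybe_skip(pollster_name, new_resources):
        # manager.py:321 -- rate pollsters reuse the cycle's discovery
        # results; with nothing new to sample, the run is skipped.
        if not new_resources:
            LOG.debug("Skip pollster %s, no new resources found this cycle",
                      pollster_name)
            return True
        return False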
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.102 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.103 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:29:41.103164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.104 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.104 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.104 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.105 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.105 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.106 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:29:41.104584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:29:41.105857) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.107 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.107 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.107 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.108 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.108 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.108 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.108 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 109380000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.108 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/cpu volume: 159050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.109 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
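The cpu meter's volume is cumulative guest CPU time in nanoseconds (109380000000 ns, roughly 109.4 s, for the first instance above). Utilization over an interval comes from differencing two consecutive samples; a small worked example, where the second sample, the interval, and the vCPU count are assumptions for illustration only:

    # Deriving CPU utilization from two cumulative cpu samples (nanoseconds).
    prev_ns = 109_380_000_000      # cumulative cpu time at this poll (log)
    curr_ns = 109_680_000_000      # hypothetical value one interval later
    interval_s = 300
    vcpus = 1

    used_s = (curr_ns - prev_ns) / 1e9
    util_pct = 100.0 * used_s / (interval_s * vcpus)
    print("%.1fs of CPU over %ds -> %.2f%% of %d vCPU"
          % (used_s, interval_s, util_pct, vcpus))   # 0.3s / 300s -> 0.10%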
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.109 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:29:41.107026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:29:41.108372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.110 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.110 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.110 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.111 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.111 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.111 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.111 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.112 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.112 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.113 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.113 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.113 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.114 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.114 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.114 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.114 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.114 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/memory.usage volume: 43.296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
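memory.usage is reported in MiB (48.81640625 and 43.296875 above). Ceilometer's libvirt inspector derives it from the domain's memory statistics; the exact formula varies by version, but a common approximation is available minus unused. A hedged sketch against the libvirt Python binding, with a hypothetical domain name:

    # Hedged sketch: a memory.usage-style figure straight from libvirt.
    # Assumes a reachable local libvirt; the domain name is hypothetical, and
    # ceilometer's exact formula may differ by release.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")   # hypothetical domain name
    stats = dom.memoryStats()                      # values are reported in KiB
    if "available" in stats and "unused" in stats:
        used_mib = (stats["available"] - stats["unused"]) / 1024.0
    else:
        used_mib = stats.get("rss", 0) / 1024.0    # fallback approximation
    print("memory.usage ~ %.8f MiB" % used_mib)    # e.g. 48.81640625 in the log
    conn.close()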
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.115 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes volume: 1652 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.116 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.116 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.116 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.117 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.117 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:29:41.110028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:29:41.111358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:29:41.112874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:29:41.114159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:29:41.115653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:29:41.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:29:41.116911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:29:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:29:41.552 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
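The ovn_metadata_agent line above is an ovsdbapp IDL transaction writing the agent's sync marker into Chassis_Private.external_ids. A minimal sketch of issuing the same kind of db_set through ovsdbapp follows; the southbound endpoint is a placeholder, the record UUID is taken from the log, and it assumes an ovsdbapp recent enough that db_set forwards if_exists, as the logged DbSetCommand implies:

    # Hedged sketch of an ovsdbapp db_set like the one in the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB_ENDPOINT = "tcp:127.0.0.1:6642"   # placeholder southbound address
    idl = connection.OvsdbIdl.from_server(SB_ENDPOINT, "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    # Same shape as the logged DbSetCommand(..., if_exists=True).
    api.db_set("Chassis_Private",
               "41fabae1-2dc7-46e2-b697-d9133d158399",
               ("external_ids", {"neutron:ovn-metadata-sb-cfg": "15"}),
               if_exists=True).execute(check_error=True)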
Oct  3 11:29:41 compute-0 ovn_controller[88471]: 2025-10-03T11:29:41Z|00195|binding|INFO|Releasing lport 71787e87-58e9-457f-840d-4d7e879d0280 from this chassis (sb_readonly=0)
Oct  3 11:29:41 compute-0 ovn_controller[88471]: 2025-10-03T11:29:41Z|00196|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:29:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:29:41.706 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:29:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:29:41.707 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:29:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:29:41.708 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
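The acquire/release pair around _check_child_processes is oslo.concurrency's lockutils at work: neutron guards the process monitor with a named lock so only one thread checks child processes at a time. The decorator form, with a placeholder body, looks like:

    # Sketch of the oslo.concurrency named-lock pattern seen in the log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Placeholder body for ProcessMonitor._check_child_processes: respawn
        # any monitored external process (haproxy, dnsmasq, ...) that died.
        pass

    check_child_processes()  # acquire/release is logged when debug is enabled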
Oct  3 11:29:41 compute-0 nova_compute[351685]: 2025-10-03 11:29:41.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3676: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:29:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:29:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:42 compute-0 nova_compute[351685]: 2025-10-03 11:29:42.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:42 compute-0 nova_compute[351685]: 2025-10-03 11:29:42.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:29:42 compute-0 nova_compute[351685]: 2025-10-03 11:29:42.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:29:42 compute-0 nova_compute[351685]: 2025-10-03 11:29:42.752 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
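nova-compute's _heal_instance_info_cache is driven by oslo.service's periodic task machinery; each tick it refreshes at most one instance whose network info cache is stale, and here it found none. The registration pattern, reduced to a runnable toy (the 60 s spacing is an assumption, not nova's configured value):

    # Sketch of the oslo.service periodic-task pattern behind the log lines.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def heal_instance_info_cache(self, context):
            # nova refreshes at most one stale network-info cache entry per
            # tick; "Didn't find any instances" means none needed healing.
            print("nothing to heal")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)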
Oct  3 11:29:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:29:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:29:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:29:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:29:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:29:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 518db160-2c93-45c1-9905-ecd2c2200473 does not exist
Oct  3 11:29:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c4e8d16d-2e88-4dfc-b68a-e102d7a8240e does not exist
Oct  3 11:29:43 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 279089e1-4c48-4dba-979e-99bf6f2d2f88 does not exist
Oct  3 11:29:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:29:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:29:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:29:43 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:29:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:29:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:29:43 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
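Each handle_command line above is a mon_command arriving from the mgr (here the cephadm module). The same JSON-framed commands can be issued from Python with librados; this sketch assumes a readable ceph.conf and admin keyring on the host:

    # Hedged sketch: sending the same mon commands the mgr issues in the log.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    for cmd in ({"prefix": "config generate-minimal-conf"},
                {"prefix": "auth get", "entity": "client.admin"}):
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, (outs or outbuf)[:60])
    cluster.shutdown()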
Oct  3 11:29:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3677: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:44 compute-0 podman[536560]: 2025-10-03 11:29:44.072784901 +0000 UTC m=+0.073704114 container create 6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_haslett, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:29:44 compute-0 podman[536560]: 2025-10-03 11:29:44.04718419 +0000 UTC m=+0.048103483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:29:44 compute-0 systemd[1]: Started libpod-conmon-6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3.scope.
Oct  3 11:29:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:29:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:44 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:29:44 compute-0 podman[536560]: 2025-10-03 11:29:44.208035905 +0000 UTC m=+0.208955198 container init 6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:29:44 compute-0 podman[536560]: 2025-10-03 11:29:44.224484972 +0000 UTC m=+0.225404215 container start 6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:29:44 compute-0 podman[536560]: 2025-10-03 11:29:44.231442635 +0000 UTC m=+0.232361878 container attach 6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:29:44 compute-0 hungry_haslett[536576]: 167 167
Oct  3 11:29:44 compute-0 systemd[1]: libpod-6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3.scope: Deactivated successfully.
Oct  3 11:29:44 compute-0 podman[536560]: 2025-10-03 11:29:44.238607275 +0000 UTC m=+0.239526528 container died 6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 11:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f98fc746484cf7a0ca2541ab31fe1f16cc668f213945f81db13bbae145c3511-merged.mount: Deactivated successfully.
Oct  3 11:29:44 compute-0 podman[536560]: 2025-10-03 11:29:44.294086824 +0000 UTC m=+0.295006037 container remove 6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 11:29:44 compute-0 systemd[1]: libpod-conmon-6a0629285d137946dbabd61c2b3d91ad01692816f2886b4cfab0090ade10d5d3.scope: Deactivated successfully.
Oct  3 11:29:44 compute-0 podman[536598]: 2025-10-03 11:29:44.55418448 +0000 UTC m=+0.065966736 container create 7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:29:44 compute-0 systemd[1]: Started libpod-conmon-7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d.scope.
Oct  3 11:29:44 compute-0 podman[536598]: 2025-10-03 11:29:44.533974842 +0000 UTC m=+0.045757138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:29:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b0234776d13685d797b6fd39daaca0616bad2a7373afc6b39a42f02e80a0ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b0234776d13685d797b6fd39daaca0616bad2a7373afc6b39a42f02e80a0ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b0234776d13685d797b6fd39daaca0616bad2a7373afc6b39a42f02e80a0ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b0234776d13685d797b6fd39daaca0616bad2a7373afc6b39a42f02e80a0ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75b0234776d13685d797b6fd39daaca0616bad2a7373afc6b39a42f02e80a0ae/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:44 compute-0 podman[536598]: 2025-10-03 11:29:44.686137859 +0000 UTC m=+0.197920145 container init 7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:29:44 compute-0 podman[536598]: 2025-10-03 11:29:44.702995589 +0000 UTC m=+0.214777865 container start 7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mclaren, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:29:44 compute-0 podman[536598]: 2025-10-03 11:29:44.708434123 +0000 UTC m=+0.220216409 container attach 7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:29:45 compute-0 nova_compute[351685]: 2025-10-03 11:29:45.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:29:45 compute-0 jovial_mclaren[536614]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:29:45 compute-0 jovial_mclaren[536614]: --> relative data size: 1.0
Oct  3 11:29:45 compute-0 jovial_mclaren[536614]: --> All data devices are unavailable
Oct  3 11:29:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3678: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:45 compute-0 systemd[1]: libpod-7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d.scope: Deactivated successfully.
Oct  3 11:29:45 compute-0 systemd[1]: libpod-7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d.scope: Consumed 1.152s CPU time.
Oct  3 11:29:45 compute-0 podman[536598]: 2025-10-03 11:29:45.914813577 +0000 UTC m=+1.426595843 container died 7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 11:29:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-75b0234776d13685d797b6fd39daaca0616bad2a7373afc6b39a42f02e80a0ae-merged.mount: Deactivated successfully.
Oct  3 11:29:45 compute-0 podman[536598]: 2025-10-03 11:29:45.983340913 +0000 UTC m=+1.495123179 container remove 7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 11:29:46 compute-0 systemd[1]: libpod-conmon-7fd86712e145305f17f0d60ab07c1c51392cf7f94589cae35e84dd08526a6d9d.scope: Deactivated successfully.
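The short-lived hungry_haslett and jovial_mclaren containers are cephadm device probes: it starts a throwaway ceph container, runs ceph-volume inside it, records the output (the "passed data devices ... All data devices are unavailable" lines above), and removes it. Roughly equivalent by hand, assuming the same image digest (a shell-out sketch, not cephadm's actual code path):

    # Hedged sketch of the probe pattern behind the podman
    # create/start/died/remove sequence in the log.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    probe = subprocess.run(
        ["podman", "run", "--rm", "--privileged",
         "--entrypoint", "ceph-volume", IMAGE,
         "inventory", "--format", "json"],
        capture_output=True, text=True)
    print(probe.stdout or probe.stderr)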
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:29:46
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'vms']
Oct  3 11:29:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
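The balancer module just ran an upmap optimization pass and prepared 0 of a maximum 10 changes, meaning the PGs are already balanced. Its state can be inspected with the ceph CLI; a thin wrapper:

    # Checking the balancer state that produced "prepared 0/10 changes".
    import subprocess

    # mode should read "upmap" and no optimization should be pending,
    # matching the log above.
    status = subprocess.run(["ceph", "balancer", "status"],
                            capture_output=True, text=True)
    print(status.stdout.strip() or status.stderr.strip())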
Oct  3 11:29:46 compute-0 podman[536792]: 2025-10-03 11:29:46.818157428 +0000 UTC m=+0.062335039 container create b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:29:46 compute-0 systemd[1]: Started libpod-conmon-b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7.scope.
Oct  3 11:29:46 compute-0 podman[536792]: 2025-10-03 11:29:46.795171061 +0000 UTC m=+0.039348712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:29:46 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:29:46 compute-0 podman[536792]: 2025-10-03 11:29:46.924691582 +0000 UTC m=+0.168869213 container init b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 11:29:46 compute-0 podman[536792]: 2025-10-03 11:29:46.93616243 +0000 UTC m=+0.180340041 container start b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 11:29:46 compute-0 festive_robinson[536808]: 167 167
Oct  3 11:29:46 compute-0 systemd[1]: libpod-b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7.scope: Deactivated successfully.
Oct  3 11:29:46 compute-0 podman[536792]: 2025-10-03 11:29:46.941171921 +0000 UTC m=+0.185349532 container attach b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:29:46 compute-0 podman[536792]: 2025-10-03 11:29:46.949536529 +0000 UTC m=+0.193714140 container died b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:29:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a22c8e9d3e358979e75338ce42c88e4bcd6e20ae4f1cfcdcd6e3dced9caedf4-merged.mount: Deactivated successfully.
Oct  3 11:29:47 compute-0 podman[536792]: 2025-10-03 11:29:47.021895987 +0000 UTC m=+0.266073608 container remove b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_robinson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:29:47 compute-0 systemd[1]: libpod-conmon-b6bd9588e6214483b0e4be0ecb6e356f4f8716225dcb29bc34128f8c800347f7.scope: Deactivated successfully.
Oct  3 11:29:47 compute-0 podman[536833]: 2025-10-03 11:29:47.276063223 +0000 UTC m=+0.064707074 container create 974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:29:47 compute-0 systemd[1]: Started libpod-conmon-974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc.scope.
Oct  3 11:29:47 compute-0 podman[536833]: 2025-10-03 11:29:47.251757505 +0000 UTC m=+0.040401366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:29:47 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22011a41799c3c121c889a01acc06baaa95608348ad8702aa08f2bc628dae50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22011a41799c3c121c889a01acc06baaa95608348ad8702aa08f2bc628dae50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22011a41799c3c121c889a01acc06baaa95608348ad8702aa08f2bc628dae50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22011a41799c3c121c889a01acc06baaa95608348ad8702aa08f2bc628dae50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:47 compute-0 nova_compute[351685]: 2025-10-03 11:29:47.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:29:47 compute-0 podman[536833]: 2025-10-03 11:29:47.410814642 +0000 UTC m=+0.199458483 container init 974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:29:47 compute-0 podman[536833]: 2025-10-03 11:29:47.422853478 +0000 UTC m=+0.211497319 container start 974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:29:47 compute-0 podman[536833]: 2025-10-03 11:29:47.426953 +0000 UTC m=+0.215596861 container attach 974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  3 11:29:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3679: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:48 compute-0 keen_swirles[536849]: {
Oct  3 11:29:48 compute-0 keen_swirles[536849]:    "0": [
Oct  3 11:29:48 compute-0 keen_swirles[536849]:        {
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "devices": [
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "/dev/loop3"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            ],
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_name": "ceph_lv0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_size": "21470642176",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "name": "ceph_lv0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "tags": {
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cluster_name": "ceph",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.crush_device_class": "",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.encrypted": "0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osd_id": "0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.type": "block",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.vdo": "0"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            },
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "type": "block",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "vg_name": "ceph_vg0"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:        }
Oct  3 11:29:48 compute-0 keen_swirles[536849]:    ],
Oct  3 11:29:48 compute-0 keen_swirles[536849]:    "1": [
Oct  3 11:29:48 compute-0 keen_swirles[536849]:        {
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "devices": [
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "/dev/loop4"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            ],
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_name": "ceph_lv1",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_size": "21470642176",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "name": "ceph_lv1",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "tags": {
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cluster_name": "ceph",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.crush_device_class": "",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.encrypted": "0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osd_id": "1",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.type": "block",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.vdo": "0"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            },
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "type": "block",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "vg_name": "ceph_vg1"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:        }
Oct  3 11:29:48 compute-0 keen_swirles[536849]:    ],
Oct  3 11:29:48 compute-0 keen_swirles[536849]:    "2": [
Oct  3 11:29:48 compute-0 keen_swirles[536849]:        {
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "devices": [
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "/dev/loop5"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            ],
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_name": "ceph_lv2",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_size": "21470642176",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "name": "ceph_lv2",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "tags": {
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.cluster_name": "ceph",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.crush_device_class": "",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.encrypted": "0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osd_id": "2",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.type": "block",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:                "ceph.vdo": "0"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            },
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "type": "block",
Oct  3 11:29:48 compute-0 keen_swirles[536849]:            "vg_name": "ceph_vg2"
Oct  3 11:29:48 compute-0 keen_swirles[536849]:        }
Oct  3 11:29:48 compute-0 keen_swirles[536849]:    ]
Oct  3 11:29:48 compute-0 keen_swirles[536849]: }
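The JSON block above has the shape of ceph-volume lvm list --format json output, apparently collected by a cephadm device scan (hence the short-lived podman create/start/died/remove cycle around it): top-level keys are OSD ids ("0", "1", "2"), each mapping to the logical-volume records backing that OSD, and each record carries the same tag set twice, once flattened into the lv_tags string and once as the structured tags object. A minimal sketch of consuming it, assuming only that the container's stdout were saved to a hypothetical file ceph_volume_lvm_list.json:

    import json

    # Parse the per-OSD listing printed above (shape of
    # `ceph-volume lvm list --format json`): top-level keys are OSD ids,
    # values are lists of LV records.
    with open("ceph_volume_lvm_list.json") as f:  # hypothetical capture
        lvm_list = json.load(f)

    def parse_lv_tags(lv_tags: str) -> dict:
        # Rebuild the tag mapping from the flattened "k=v,k=v" lv_tags string.
        # Naive split: assumes no tag value itself contains a comma,
        # which holds for the records shown in this log.
        return dict(pair.split("=", 1) for pair in lv_tags.split(",") if pair)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = parse_lv_tags(lv["lv_tags"])
            assert tags == lv["tags"]  # flattened and structured tags agree
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}, "
                  f"osd_fsid {tags['ceph.osd_fsid']}")

Run against the data above this prints one line per OSD, e.g. osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.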
Oct  3 11:29:48 compute-0 systemd[1]: libpod-974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc.scope: Deactivated successfully.
Oct  3 11:29:48 compute-0 podman[536859]: 2025-10-03 11:29:48.39921437 +0000 UTC m=+0.056617686 container died 974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f22011a41799c3c121c889a01acc06baaa95608348ad8702aa08f2bc628dae50-merged.mount: Deactivated successfully.
Oct  3 11:29:48 compute-0 podman[536859]: 2025-10-03 11:29:48.511063254 +0000 UTC m=+0.168466550 container remove 974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:29:48 compute-0 systemd[1]: libpod-conmon-974db6fe319f5164ebcc8e4a2300dd384e7366d623584003afa1ab4452d209fc.scope: Deactivated successfully.
Oct  3 11:29:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:49 compute-0 podman[537012]: 2025-10-03 11:29:49.365452047 +0000 UTC m=+0.050507860 container create 017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_fermat, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:29:49 compute-0 systemd[1]: Started libpod-conmon-017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b.scope.
Oct  3 11:29:49 compute-0 podman[537012]: 2025-10-03 11:29:49.342935295 +0000 UTC m=+0.027991128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:29:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:29:49 compute-0 podman[537012]: 2025-10-03 11:29:49.467152617 +0000 UTC m=+0.152208450 container init 017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_fermat, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 11:29:49 compute-0 podman[537012]: 2025-10-03 11:29:49.476078233 +0000 UTC m=+0.161134046 container start 017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:29:49 compute-0 podman[537012]: 2025-10-03 11:29:49.480854796 +0000 UTC m=+0.165910629 container attach 017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_fermat, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:29:49 compute-0 festive_fermat[537028]: 167 167
Oct  3 11:29:49 compute-0 systemd[1]: libpod-017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b.scope: Deactivated successfully.
Oct  3 11:29:49 compute-0 podman[537012]: 2025-10-03 11:29:49.485864516 +0000 UTC m=+0.170920349 container died 017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:29:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a73827300a11a02b4c7f47915f823e7cdaaf7cf3f4ed35169ed05677746e020-merged.mount: Deactivated successfully.
Oct  3 11:29:49 compute-0 podman[537012]: 2025-10-03 11:29:49.541458318 +0000 UTC m=+0.226514131 container remove 017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_fermat, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  3 11:29:49 compute-0 systemd[1]: libpod-conmon-017fa15fa4c778eff7fd44b8718f54d232172bb91d007889852f79f7f604880b.scope: Deactivated successfully.
Oct  3 11:29:49 compute-0 podman[537053]: 2025-10-03 11:29:49.775952083 +0000 UTC m=+0.077766893 container create 57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:29:49 compute-0 systemd[1]: Started libpod-conmon-57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a.scope.
Oct  3 11:29:49 compute-0 podman[537053]: 2025-10-03 11:29:49.74682015 +0000 UTC m=+0.048634990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:29:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc32113523dd15cfc2e8b5bcfe4ab62b94f82cd56cdd9b337a50b257830fdf33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc32113523dd15cfc2e8b5bcfe4ab62b94f82cd56cdd9b337a50b257830fdf33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc32113523dd15cfc2e8b5bcfe4ab62b94f82cd56cdd9b337a50b257830fdf33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc32113523dd15cfc2e8b5bcfe4ab62b94f82cd56cdd9b337a50b257830fdf33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:29:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3680: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:49 compute-0 podman[537053]: 2025-10-03 11:29:49.903437129 +0000 UTC m=+0.205251969 container init 57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:29:49 compute-0 podman[537053]: 2025-10-03 11:29:49.925596559 +0000 UTC m=+0.227411389 container start 57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:29:49 compute-0 podman[537053]: 2025-10-03 11:29:49.932706927 +0000 UTC m=+0.234521777 container attach 57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:29:50 compute-0 nova_compute[351685]: 2025-10-03 11:29:50.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]: {
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "osd_id": 1,
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "type": "bluestore"
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:    },
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "osd_id": 2,
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "type": "bluestore"
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:    },
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "osd_id": 0,
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:        "type": "bluestore"
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]:    }
Oct  3 11:29:50 compute-0 relaxed_lehmann[537071]: }
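This second listing is keyed by OSD fsid rather than OSD id and matches the shape of ceph-volume raw list output, reporting the same three bluestore OSDs through their /dev/mapper paths. A small consistency check over it (illustrative only; the capture file name ceph_volume_raw_list.json is again hypothetical):

    import json

    # The listing above is keyed by OSD fsid (shape of `ceph-volume raw list`).
    with open("ceph_volume_raw_list.json") as f:  # hypothetical capture
        raw_list = json.load(f)

    # All entries should report the same cluster fsid
    # (9b4e8c9a-5555-5510-a631-4742a1182561 in this log).
    assert len({e["ceph_fsid"] for e in raw_list.values()}) == 1

    for osd_uuid, entry in raw_list.items():
        assert osd_uuid == entry["osd_uuid"]  # top-level key repeats osd_uuid
        print(f"osd.{entry['osd_id']} ({entry['type']}) -> {entry['device']}")

The config-key set commands logged just below (mgr/cephadm/host.compute-0.devices.0) are consistent with the manager persisting the results of this scan.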
Oct  3 11:29:50 compute-0 systemd[1]: libpod-57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a.scope: Deactivated successfully.
Oct  3 11:29:50 compute-0 systemd[1]: libpod-57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a.scope: Consumed 1.058s CPU time.
Oct  3 11:29:50 compute-0 conmon[537071]: conmon 57fce4004e5375bcc8f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a.scope/container/memory.events
Oct  3 11:29:50 compute-0 podman[537053]: 2025-10-03 11:29:50.996471881 +0000 UTC m=+1.298286711 container died 57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:29:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc32113523dd15cfc2e8b5bcfe4ab62b94f82cd56cdd9b337a50b257830fdf33-merged.mount: Deactivated successfully.
Oct  3 11:29:51 compute-0 podman[537053]: 2025-10-03 11:29:51.07196389 +0000 UTC m=+1.373778710 container remove 57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:29:51 compute-0 systemd[1]: libpod-conmon-57fce4004e5375bcc8f8e0a93a50d8750680ae1d6992fb72c8ec5448dd81554a.scope: Deactivated successfully.
Oct  3 11:29:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:29:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:51 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:29:51 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:51 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f3b4a267-e2c0-4108-8022-cc42394067cd does not exist
Oct  3 11:29:51 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev de95740d-c3bd-421b-8219-1a45f04fda84 does not exist
Oct  3 11:29:51 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  3 11:29:51 compute-0 nova_compute[351685]: 2025-10-03 11:29:51.748 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:29:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3681: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:52 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:29:52 compute-0 nova_compute[351685]: 2025-10-03 11:29:52.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:29:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:29:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2330088188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:29:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:29:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2330088188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:29:54 compute-0 nova_compute[351685]: 2025-10-03 11:29:54.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:29:55 compute-0 nova_compute[351685]: 2025-10-03 11:29:55.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:29:55 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  3 11:29:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013093962647326952 of space, bias 1.0, pg target 0.39281887941980853 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:29:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:29:56 compute-0 podman[537167]: 2025-10-03 11:29:56.885052085 +0000 UTC m=+0.122605980 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:29:56 compute-0 podman[537171]: 2025-10-03 11:29:56.89705164 +0000 UTC m=+0.118036094 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true)
Oct  3 11:29:56 compute-0 podman[537169]: 2025-10-03 11:29:56.900369377 +0000 UTC m=+0.134707999 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 11:29:56 compute-0 podman[537170]: 2025-10-03 11:29:56.902867727 +0000 UTC m=+0.143181890 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:29:56 compute-0 podman[537173]: 2025-10-03 11:29:56.903320131 +0000 UTC m=+0.139583605 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  3 11:29:56 compute-0 podman[537168]: 2025-10-03 11:29:56.909791938 +0000 UTC m=+0.136085012 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct  3 11:29:56 compute-0 podman[537172]: 2025-10-03 11:29:56.914120437 +0000 UTC m=+0.149789142 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:29:57 compute-0 nova_compute[351685]: 2025-10-03 11:29:57.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:29:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:29:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:29:59 compute-0 podman[157165]: time="2025-10-03T11:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:29:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:29:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9600 "" "Go-http-client/1.1"
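
The two GET lines above are the podman system service's access log on its Unix socket; the prometheus-podman-exporter seen later in this log polls it via CONTAINER_HOST=unix:///run/podman/podman.sock. A minimal sketch of the same query using only the Python standard library (the socket path is taken from that container config and is an assumption for any other host):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a Unix domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._socket_path)  # e.g. /run/podman/podman.sock
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(resp.status, len(containers), "containers")
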
Oct  3 11:29:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:00 compute-0 nova_compute[351685]: 2025-10-03 11:30:00.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:01 compute-0 openstack_network_exporter[367524]: ERROR   11:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:30:01 compute-0 openstack_network_exporter[367524]: ERROR   11:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:30:01 compute-0 openstack_network_exporter[367524]: ERROR   11:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:30:01 compute-0 openstack_network_exporter[367524]: ERROR   11:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:30:01 compute-0 openstack_network_exporter[367524]: ERROR   11:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
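
These exporter errors recur on a 30-second cycle (they appear again at 11:30:31 below). The lookup that fails is the ovs-appctl-style control-socket discovery: an OVS/OVN daemon creates <daemon>.<pid>.ctl in its run directory, and callers glob for that file to find the JSON-RPC target. A rough sketch of the probe, with run directories taken from the volume mounts in the exporter's config_data above (assumptions for any other layout):

    import glob
    import os

    # Probe for <daemon>.<pid>.ctl control sockets the way appctl-style
    # clients do; an empty glob is what produces the errors logged above.
    for rundir, daemon in (("/var/run/openvswitch", "ovsdb-server"),
                           ("/var/lib/openvswitch/ovn", "ovn-northd")):
        pattern = os.path.join(rundir, f"{daemon}.*.ctl")
        found = glob.glob(pattern)
        print(f"{daemon}: {found or 'no control socket files found'}")

ovn-northd normally runs on the control plane rather than on a compute node, so the failed ovn-northd lookup here is expected; the dpif-netdev errors likewise just indicate that no userspace (DPDK) datapath exists on this host.
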
Oct  3 11:30:01 compute-0 nova_compute[351685]: 2025-10-03 11:30:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:01 compute-0 nova_compute[351685]: 2025-10-03 11:30:01.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:01 compute-0 nova_compute[351685]: 2025-10-03 11:30:01.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:01 compute-0 nova_compute[351685]: 2025-10-03 11:30:01.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
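
The Acquiring/acquired/released triple above is oslo.concurrency's lock tracing around the resource tracker's "compute_resources" semaphore, including the waited/held timings. A minimal sketch of the same pattern (the function body is hypothetical):

    from oslo_concurrency import lockutils

    # Wrapping a function like this yields DEBUG lines of the form seen
    # above: "Acquiring lock ... by ...", "Lock ... acquired ... waited Xs",
    # "Lock ... released ... held Xs".
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # prune cached compute-node records (hypothetical body)

    clean_compute_node_cache()
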
Oct  3 11:30:01 compute-0 nova_compute[351685]: 2025-10-03 11:30:01.761 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:30:01 compute-0 nova_compute[351685]: 2025-10-03 11:30:01.761 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:02 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:30:02 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1922789035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.233 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
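
With RBD-backed instance storage the tracker sizes the disk pool by shelling out to ceph df rather than statting a local filesystem; the ~0.5 s round trips above are that call, and the matching client.openstack audit entries show up on the ceph-mon side. An equivalent standalone sketch (JSON keys as in recent Ceph releases; treat them as assumptions):

    import json
    import subprocess

    # Same command as the log line above; --id/--conf select the
    # client.openstack keyring seen in the ceph-mon audit entries.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("bytes total/avail:",
          stats["total_bytes"], stats["total_avail_bytes"])
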
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.314 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.317 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.318 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.325 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.326 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.700 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.702 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3392MB free_disk=59.909732818603516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.702 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.703 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.873 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.874 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.874 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:30:02 compute-0 nova_compute[351685]: 2025-10-03 11:30:02.875 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:30:03 compute-0 nova_compute[351685]: 2025-10-03 11:30:03.042 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:30:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4222770667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:30:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:03 compute-0 nova_compute[351685]: 2025-10-03 11:30:03.566 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:30:03 compute-0 nova_compute[351685]: 2025-10-03 11:30:03.577 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:30:03 compute-0 nova_compute[351685]: 2025-10-03 11:30:03.593 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
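
Placement effectively derives usable capacity from that inventory as (total - reserved) * allocation_ratio per resource class. Checking the numbers from the line above:

    # (total - reserved) * allocation_ratio, using the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, f"{usable:g}")  # VCPU 32, MEMORY_MB 7167, DISK_GB 52.2
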
Oct  3 11:30:03 compute-0 nova_compute[351685]: 2025-10-03 11:30:03.595 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:30:03 compute-0 nova_compute[351685]: 2025-10-03 11:30:03.596 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:30:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:04 compute-0 nova_compute[351685]: 2025-10-03 11:30:04.598 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:04 compute-0 nova_compute[351685]: 2025-10-03 11:30:04.598 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:30:05 compute-0 nova_compute[351685]: 2025-10-03 11:30:05.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:06 compute-0 nova_compute[351685]: 2025-10-03 11:30:06.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:07 compute-0 nova_compute[351685]: 2025-10-03 11:30:07.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:07 compute-0 nova_compute[351685]: 2025-10-03 11:30:07.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:07 compute-0 nova_compute[351685]: 2025-10-03 11:30:07.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:07 compute-0 podman[537344]: 2025-10-03 11:30:07.811392857 +0000 UTC m=+0.075806851 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:30:07 compute-0 podman[537346]: 2025-10-03 11:30:07.825320933 +0000 UTC m=+0.079344983 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 11:30:07 compute-0 podman[537345]: 2025-10-03 11:30:07.853761615 +0000 UTC m=+0.107709963 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 11:30:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:08 compute-0 nova_compute[351685]: 2025-10-03 11:30:08.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:10 compute-0 nova_compute[351685]: 2025-10-03 11:30:10.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:12 compute-0 nova_compute[351685]: 2025-10-03 11:30:12.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:13 compute-0 ovn_controller[88471]: 2025-10-03T11:30:13Z|00197|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct  3 11:30:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:15 compute-0 nova_compute[351685]: 2025-10-03 11:30:15.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:30:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:30:17 compute-0 nova_compute[351685]: 2025-10-03 11:30:17.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:20 compute-0 nova_compute[351685]: 2025-10-03 11:30:20.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:22 compute-0 nova_compute[351685]: 2025-10-03 11:30:22.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:25 compute-0 nova_compute[351685]: 2025-10-03 11:30:25.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:27 compute-0 nova_compute[351685]: 2025-10-03 11:30:27.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:27 compute-0 podman[537406]: 2025-10-03 11:30:27.893531227 +0000 UTC m=+0.114588504 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:30:27 compute-0 podman[537420]: 2025-10-03 11:30:27.900119698 +0000 UTC m=+0.107573559 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:30:27 compute-0 podman[537412]: 2025-10-03 11:30:27.901569084 +0000 UTC m=+0.110454600 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 11:30:27 compute-0 podman[537404]: 2025-10-03 11:30:27.902457213 +0000 UTC m=+0.131981781 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:30:27 compute-0 podman[537441]: 2025-10-03 11:30:27.906400209 +0000 UTC m=+0.089980844 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:30:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:27 compute-0 podman[537405]: 2025-10-03 11:30:27.918347482 +0000 UTC m=+0.157679035 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, vcs-type=git, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container)
Oct  3 11:30:27 compute-0 podman[537426]: 2025-10-03 11:30:27.933278941 +0000 UTC m=+0.119320236 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:30:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:29 compute-0 podman[157165]: time="2025-10-03T11:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:30:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:30:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9599 "" "Go-http-client/1.1"
Oct  3 11:30:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:30 compute-0 nova_compute[351685]: 2025-10-03 11:30:30.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:31 compute-0 openstack_network_exporter[367524]: ERROR   11:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:30:31 compute-0 openstack_network_exporter[367524]: ERROR   11:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:30:31 compute-0 openstack_network_exporter[367524]: ERROR   11:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:30:31 compute-0 openstack_network_exporter[367524]: ERROR   11:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:30:31 compute-0 openstack_network_exporter[367524]: ERROR   11:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:30:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.560 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.561 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.596 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.679 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.679 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.689 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.689 2 INFO nova.compute.claims [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  3 11:30:32 compute-0 nova_compute[351685]: 2025-10-03 11:30:32.828 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:30:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4148708346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.314 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.327 2 DEBUG nova.compute.provider_tree [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.346 2 DEBUG nova.scheduler.client.report [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.371 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.373 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.441 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.442 2 DEBUG nova.network.neutron [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.461 2 INFO nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.479 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  3 11:30:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.607 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.609 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.610 2 INFO nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Creating image(s)#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.657 2 DEBUG nova.storage.rbd_utils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.703 2 DEBUG nova.storage.rbd_utils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.739 2 DEBUG nova.storage.rbd_utils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.745 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.775 2 DEBUG nova.policy [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8990c210ba8740dc9714739f27702391', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ebbd19d68501417398b05a6a4b7c22de', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.827 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
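
The prlimit-wrapped call that just returned is nova's standard pattern for interrogating images: the child process is capped at 1 GiB of address space and 30 CPU seconds so a malformed qcow2 cannot wedge the compute service. A stand-alone reproduction of the same probe, with the command and path exactly as logged:

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464'
    cmd = ['python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824',      # RLIMIT_AS: 1 GiB
           '--cpu=30',             # RLIMIT_CPU: 30 seconds
           '--', 'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', base, '--force-share', '--output=json']
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    info = json.loads(out.stdout)
    print(info['format'], info['virtual-size'])
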
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.828 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "54e4a806ae3db3ffd1941099a5274840605d8464" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.829 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "54e4a806ae3db3ffd1941099a5274840605d8464" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.829 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "54e4a806ae3db3ffd1941099a5274840605d8464" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
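
The acquire/release pair above is oslo.concurrency's named lock guarding the image cache: every request for the same base image, keyed by its hashed image id, funnels through one fetch, and since the file already exists the lock is held for only a millisecond. The equivalent pattern as a sketch (nova additionally uses external file locks under lock_path for cross-process safety):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('54e4a806ae3db3ffd1941099a5274840605d8464')
    def fetch_func_sync():
        # The first caller downloads/converts the base image; later
        # callers find it cached and return immediately, as in the log.
        pass

    fetch_func_sync()
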
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.879 2 DEBUG nova.storage.rbd_utils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:30:33 compute-0 nova_compute[351685]: 2025-10-03 11:30:33.889 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 218 MiB data, 384 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:30:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:34.288 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:34 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:34.290 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
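
SbGlobalUpdateEvent above is an ovsdbapp row event: the metadata agent watches the SB_Global table and reacts when northd bumps nb_cfg (15 -> 16 here), deliberately delaying its own chassis write by a few seconds to avoid a thundering herd of agents updating at once. A minimal event class in the same shape (registration with the IDL's notify handler omitted):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='SB_Global', conditions=None,
            # exactly as the matched event prints itself in the log.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            print('nb_cfg is now', row.nb_cfg)  # 'old' carries nb_cfg=15
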
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.292 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
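
With nova's RBD image backend ([libvirt] images_type = rbd) the root disk never lives as a local file: the flattened base image is imported straight into the Ceph 'vms' pool under the instance-UUID naming scheme. The command is reproducible verbatim from the log:

    import subprocess

    base = '/var/lib/nova/instances/_base/54e4a806ae3db3ffd1941099a5274840605d8464'
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', base,
         '443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk',
         '--image-format=2',   # format 2: layering and live resize
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
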
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.406 2 DEBUG nova.network.neutron [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Successfully created port: d6a8cc09-5401-43eb-a552-9e7406a4b201 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.424 2 DEBUG nova.storage.rbd_utils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] resizing rbd image 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
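
The resize target comes from the flavor: m1.nano carries root_gb=1 (see the flavor dump at 11:30:37.476 below), and 1 GiB = 1 * 1024**3 = 1073741824 bytes, the exact figure logged. The same operation through the rbd binding, sketched under the assumption that the client.openstack keyring is readable:

    import rados
    import rbd

    new_size = 1 * 1024 ** 3  # root_gb=1 -> 1073741824 bytes

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk') as image:
                image.resize(new_size)  # grow the freshly imported disk
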
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.639 2 DEBUG nova.objects.instance [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lazy-loading 'migration_context' on Instance uuid 443e486d-1bf2-4550-a4ae-32f0f8f4af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.722 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.724 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Ensure instance console log exists: /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.725 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.726 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:34 compute-0 nova_compute[351685]: 2025-10-03 11:30:34.727 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:30:35 compute-0 nova_compute[351685]: 2025-10-03 11:30:35.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:35 compute-0 nova_compute[351685]: 2025-10-03 11:30:35.872 2 DEBUG nova.network.neutron [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Successfully updated port: d6a8cc09-5401-43eb-a552-9e7406a4b201 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  3 11:30:35 compute-0 nova_compute[351685]: 2025-10-03 11:30:35.885 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:30:35 compute-0 nova_compute[351685]: 2025-10-03 11:30:35.886 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquired lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:30:35 compute-0 nova_compute[351685]: 2025-10-03 11:30:35.886 2 DEBUG nova.network.neutron [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  3 11:30:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Oct  3 11:30:35 compute-0 nova_compute[351685]: 2025-10-03 11:30:35.980 2 DEBUG nova.compute.manager [req-19d0cfa4-73be-4256-a1a8-b7bde13b9824 req-1e7c410c-7c5f-4937-824b-babd2f270abb 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received event network-changed-d6a8cc09-5401-43eb-a552-9e7406a4b201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:30:35 compute-0 nova_compute[351685]: 2025-10-03 11:30:35.981 2 DEBUG nova.compute.manager [req-19d0cfa4-73be-4256-a1a8-b7bde13b9824 req-1e7c410c-7c5f-4937-824b-babd2f270abb 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Refreshing instance network info cache due to event network-changed-d6a8cc09-5401-43eb-a552-9e7406a4b201. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  3 11:30:35 compute-0 nova_compute[351685]: 2025-10-03 11:30:35.981 2 DEBUG oslo_concurrency.lockutils [req-19d0cfa4-73be-4256-a1a8-b7bde13b9824 req-1e7c410c-7c5f-4937-824b-babd2f270abb 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:30:36 compute-0 nova_compute[351685]: 2025-10-03 11:30:36.020 2 DEBUG nova.network.neutron [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.394 2 DEBUG nova.network.neutron [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updating instance_info_cache with network_info: [{"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.429 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Releasing lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.430 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Instance network_info: |[{"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
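
The network_info blob logged twice above is the instance's cached view of its Neutron port, and the rest of the spawn consumes only a handful of its fields. A trimmed parse, using only values that appear in the log:

    import json

    vif = json.loads('''
    {"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201",
     "address": "fa:16:3e:5e:f1:a3",
     "devname": "tapd6a8cc09-54",
     "network": {"subnets": [{"cidr": "10.100.0.0/16",
                              "ips": [{"address": "10.100.0.169",
                                       "type": "fixed"}]}],
                 "meta": {"mtu": 1442}}}
    ''')

    fixed = [ip['address']
             for subnet in vif['network']['subnets']
             for ip in subnet['ips'] if ip['type'] == 'fixed']
    print(vif['devname'], vif['address'], fixed,
          vif['network']['meta']['mtu'])
    # -> tapd6a8cc09-54 fa:16:3e:5e:f1:a3 ['10.100.0.169'] 1442
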
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.432 2 DEBUG oslo_concurrency.lockutils [req-19d0cfa4-73be-4256-a1a8-b7bde13b9824 req-1e7c410c-7c5f-4937-824b-babd2f270abb 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquired lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.433 2 DEBUG nova.network.neutron [req-19d0cfa4-73be-4256-a1a8-b7bde13b9824 req-1e7c410c-7c5f-4937-824b-babd2f270abb 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Refreshing network info cache for port d6a8cc09-5401-43eb-a552-9e7406a4b201 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.439 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Start _get_guest_xml network_info=[{"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:26:31Z,direct_url=<?>,disk_format='qcow2',id=b9c8e0cc-ecf1-4fa8-92ee-328b108123cd,min_disk=0,min_ram=0,name='tempest-scenario-img--982789236',owner='ebbd19d68501417398b05a6a4b7c22de',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:26:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'device_type': 'disk', 'size': 0, 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.460 2 WARNING nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.467 2 DEBUG nova.virt.libvirt.host [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.468 2 DEBUG nova.virt.libvirt.host [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.474 2 DEBUG nova.virt.libvirt.host [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.475 2 DEBUG nova.virt.libvirt.host [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
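
The two probes above show this host is cgroups-v2 only: the v1 CPU controller is missing, the v2 one is present, which is what determines whether CPU shares/quota tuning can be applied to the guest. On a unified hierarchy the check reduces to reading one file; a sketch of that idea (nova's own probe walks additional libvirt-specific paths):

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On cgroups v2 the unified root lists every enabled controller
        # in a single space-separated file.
        try:
            with open(f'{root}/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified (v2) hierarchy mounted here

    print(has_cgroupsv2_cpu_controller())
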
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.475 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.476 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-03T11:23:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b93eb926-1d95-406e-aec3-a907be067084',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-03T11:26:31Z,direct_url=<?>,disk_format='qcow2',id=b9c8e0cc-ecf1-4fa8-92ee-328b108123cd,min_disk=0,min_ram=0,name='tempest-scenario-img--982789236',owner='ebbd19d68501417398b05a6a4b7c22de',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-03T11:26:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.477 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.477 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.478 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.478 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.479 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.479 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.480 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.480 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.481 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.481 2 DEBUG nova.virt.hardware [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
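
The topology walk above is fully determined by its inputs: neither flavor nor image expresses a preference (0:0:0) or a limit (so the 65536 defaults apply), and for one vCPU the only factorization is sockets=1, cores=1, threads=1. A toy re-implementation of the enumeration step, assuming the simple product rule the log implies (nova's real code also handles NUMA constraints and preference sorting):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) whose product is exactly vcpus;
        # each factor can never exceed vcpus, so the search stays small.
        for s, c, t in itertools.product(
                range(1, min(max_sockets, vcpus) + 1),
                range(1, min(max_cores, vcpus) + 1),
                range(1, min(max_threads, vcpus) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the chosen topology
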
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.485 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.4 MiB/s wr, 13 op/s
Oct  3 11:30:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:30:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2137181762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:30:37 compute-0 nova_compute[351685]: 2025-10-03 11:30:37.969 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
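
Both 'ceph mon dump' round-trips exist to resolve monitor endpoints for the guest definition: the addresses land in the <host> elements of the RBD disks in the domain XML below. The same query, parsed:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True)
    for mon in json.loads(out.stdout)['mons']:
        # Field names follow ceph's mon dump JSON schema; the address
        # format varies by release, hence the defensive .get().
        print(mon['name'], mon.get('public_addr'))
    # here the lone mon resolves to 192.168.122.100:6789
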
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.007 2 DEBUG nova.storage.rbd_utils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.015 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  3 11:30:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554169673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  3 11:30:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.563 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.566 2 DEBUG nova.virt.libvirt.vif [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:30:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg',id=16,image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ebbd19d68501417398b05a6a4b7c22de',ramdisk_id='',reservation_id='r-oaxcr3xw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-298349364',owner_user_name='tempest-PrometheusGabbiTest-298349364-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:30:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8990c210ba8740dc9714739f27702391',uuid=443e486d-1bf2-4550-a4ae-32f0f8f4af19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.567 2 DEBUG nova.network.os_vif_util [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converting VIF {"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.568 2 DEBUG nova.network.os_vif_util [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:f1:a3,bridge_name='br-int',has_traffic_filtering=True,id=d6a8cc09-5401-43eb-a552-9e7406a4b201,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6a8cc09-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.570 2 DEBUG nova.objects.instance [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lazy-loading 'pci_devices' on Instance uuid 443e486d-1bf2-4550-a4ae-32f0f8f4af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.584 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] End _get_guest_xml xml=<domain type="kvm">
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <uuid>443e486d-1bf2-4550-a4ae-32f0f8f4af19</uuid>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <name>instance-00000010</name>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <memory>131072</memory>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <vcpu>1</vcpu>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <metadata>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <nova:name>te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg</nova:name>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <nova:creationTime>2025-10-03 11:30:37</nova:creationTime>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <nova:flavor name="m1.nano">
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <nova:memory>128</nova:memory>
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <nova:disk>1</nova:disk>
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <nova:swap>0</nova:swap>
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <nova:ephemeral>0</nova:ephemeral>
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <nova:vcpus>1</nova:vcpus>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      </nova:flavor>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <nova:owner>
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <nova:user uuid="8990c210ba8740dc9714739f27702391">tempest-PrometheusGabbiTest-298349364-project-member</nova:user>
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <nova:project uuid="ebbd19d68501417398b05a6a4b7c22de">tempest-PrometheusGabbiTest-298349364</nova:project>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      </nova:owner>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <nova:root type="image" uuid="b9c8e0cc-ecf1-4fa8-92ee-328b108123cd"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <nova:ports>
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <nova:port uuid="d6a8cc09-5401-43eb-a552-9e7406a4b201">
Oct  3 11:30:38 compute-0 nova_compute[351685]:          <nova:ip type="fixed" address="10.100.0.169" ipVersion="4"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:        </nova:port>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      </nova:ports>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </nova:instance>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  </metadata>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <sysinfo type="smbios">
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <system>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <entry name="manufacturer">RDO</entry>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <entry name="product">OpenStack Compute</entry>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <entry name="serial">443e486d-1bf2-4550-a4ae-32f0f8f4af19</entry>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <entry name="uuid">443e486d-1bf2-4550-a4ae-32f0f8f4af19</entry>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <entry name="family">Virtual Machine</entry>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </system>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  </sysinfo>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <os>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <boot dev="hd"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <smbios mode="sysinfo"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  </os>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <features>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <acpi/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <apic/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <vmcoreinfo/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  </features>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <clock offset="utc">
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <timer name="pit" tickpolicy="delay"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <timer name="hpet" present="no"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  </clock>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <cpu mode="host-model" match="exact">
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <topology sockets="1" cores="1" threads="1"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  </cpu>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  <devices>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <disk type="network" device="disk">
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk">
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      </source>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <target dev="vda" bus="virtio"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <disk type="network" device="cdrom">
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <driver type="raw" cache="none"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <source protocol="rbd" name="vms/443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config">
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <host name="192.168.122.100" port="6789"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      </source>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <auth username="openstack">
Oct  3 11:30:38 compute-0 nova_compute[351685]:        <secret type="ceph" uuid="9b4e8c9a-5555-5510-a631-4742a1182561"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      </auth>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <target dev="sda" bus="sata"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </disk>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <interface type="ethernet">
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <mac address="fa:16:3e:5e:f1:a3"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <driver name="vhost" rx_queue_size="512"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <mtu size="1442"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <target dev="tapd6a8cc09-54"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </interface>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <serial type="pty">
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <log file="/var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/console.log" append="off"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </serial>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <video>
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <model type="virtio"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </video>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <input type="tablet" bus="usb"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <rng model="virtio">
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <backend model="random">/dev/urandom</backend>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </rng>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="pci" model="pcie-root-port"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <controller type="usb" index="0"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    <memballoon model="virtio">
Oct  3 11:30:38 compute-0 nova_compute[351685]:      <stats period="10"/>
Oct  3 11:30:38 compute-0 nova_compute[351685]:    </memballoon>
Oct  3 11:30:38 compute-0 nova_compute[351685]:  </devices>
Oct  3 11:30:38 compute-0 nova_compute[351685]: </domain>
Oct  3 11:30:38 compute-0 nova_compute[351685]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
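
The domain XML above is the complete contract handed to libvirt: an RBD-backed root disk and config-drive cdrom authenticated via the ceph secret, a tap interface pre-named to match the OVS port, a serial console logged to console.log, and a q35 machine with a generous stack of pcie-root-port controllers for hotplug headroom. Pulling the storage and network essentials back out of it, assuming the XML is saved to a local file named instance-00000010.xml (hypothetical):

    import xml.etree.ElementTree as ET

    dom = ET.parse('instance-00000010.xml').getroot()

    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            host = src.find('host')
            print(disk.get('device'), src.get('name'),
                  '->', host.get('name'), host.get('port'))
    # disk  vms/443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk -> 192.168.122.100 6789
    # cdrom vms/443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config -> 192.168.122.100 6789

    iface = dom.find('./devices/interface')
    print(iface.find('target').get('dev'), iface.find('mac').get('address'))
    # tapd6a8cc09-54 fa:16:3e:5e:f1:a3
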
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.586 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Preparing to wait for external event network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.586 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.587 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.587 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
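
The "-events" lock above registers a waiter before the VIF is plugged: the spawn will later block until Neutron posts network-vif-plugged back through nova's external-events API, so the guest never starts with a dead port. Stripped of nova's eventlet machinery, the synchronization is just a keyed event; a sketch with threading primitives (the timeout knob is [DEFAULT] vif_plugging_timeout, 300 seconds by default):

    import threading

    waiters = {}
    key = ('443e486d-1bf2-4550-a4ae-32f0f8f4af19',
           'network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201')
    waiters[key] = threading.Event()  # prepare_for_instance_event

    def external_instance_event(instance_uuid, event_name):
        # Called when Neutron's callback lands, cf. the req-19d0cfa4 lines.
        ev = waiters.get((instance_uuid, event_name))
        if ev is not None:
            ev.set()  # releases the spawning thread

    # ...the spawn path blocks here until the plug is confirmed:
    # waiters[key].wait(timeout=300)
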
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.588 2 DEBUG nova.virt.libvirt.vif [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-03T11:30:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg',id=16,image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ebbd19d68501417398b05a6a4b7c22de',ramdisk_id='',reservation_id='r-oaxcr3xw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-298349364',owner_user_name='tempest-PrometheusGabbiTest-298349364-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-03T11:30:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8990c210ba8740dc9714739f27702391',uuid=443e486d-1bf2-4550-a4ae-32f0f8f4af19,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.589 2 DEBUG nova.network.os_vif_util [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converting VIF {"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.590 2 DEBUG nova.network.os_vif_util [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:f1:a3,bridge_name='br-int',has_traffic_filtering=True,id=d6a8cc09-5401-43eb-a552-9e7406a4b201,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6a8cc09-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.591 2 DEBUG os_vif [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:f1:a3,bridge_name='br-int',has_traffic_filtering=True,id=d6a8cc09-5401-43eb-a552-9e7406a4b201,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6a8cc09-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.593 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6a8cc09-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6a8cc09-54, col_values=(('external_ids', {'iface-id': 'd6a8cc09-5401-43eb-a552-9e7406a4b201', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:f1:a3', 'vm-uuid': '443e486d-1bf2-4550-a4ae-32f0f8f4af19'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:30:38 compute-0 NetworkManager[45015]: <info>  [1759491038.6016] manager: (tapd6a8cc09-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.616 2 INFO os_vif [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:f1:a3,bridge_name='br-int',has_traffic_filtering=True,id=d6a8cc09-5401-43eb-a552-9e7406a4b201,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6a8cc09-54')#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.675 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.676 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.676 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] No VIF found with MAC fa:16:3e:5e:f1:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.677 2 INFO nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Using config drive#033[00m
Oct  3 11:30:38 compute-0 nova_compute[351685]: 2025-10-03 11:30:38.714 2 DEBUG nova.storage.rbd_utils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
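
The "rbd image ... does not exist" line is Nova's rbd_utils probing Ceph before it decides to build and import a fresh config drive. A roughly equivalent probe with the python-rbd bindings, assuming the client.openstack keyring is resolvable via /etc/ceph (pool, image name, conf path and client name are taken from the log):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx,
                               '443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config',
                               read_only=True):
                    print('image exists')
            except rbd.ImageNotFound:
                print('image does not exist')
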
Oct  3 11:30:38 compute-0 podman[537810]: 2025-10-03 11:30:38.825455438 +0000 UTC m=+0.081282916 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:30:38 compute-0 podman[537806]: 2025-10-03 11:30:38.836175571 +0000 UTC m=+0.100612605 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:30:38 compute-0 podman[537809]: 2025-10-03 11:30:38.839620432 +0000 UTC m=+0.101792354 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, version=9.4)
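
The health_status=healthy records interleaved here are podman's periodic healthcheck timers firing for the edpm-managed containers; each one runs the 'test' command from the container's config_data inside the container. The same probe can be triggered by hand with the standard podman CLI (container name taken from the log):

    import subprocess

    # one-off run of the configured healthcheck; exit code 0 == healthy
    subprocess.run(['podman', 'healthcheck', 'run', 'ceilometer_agent_ipmi'],
                   check=False)
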
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.402 2 INFO nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Creating config drive at /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.config#033[00m
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.416 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvm003n34 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.445 2 DEBUG nova.network.neutron [req-19d0cfa4-73be-4256-a1a8-b7bde13b9824 req-1e7c410c-7c5f-4937-824b-babd2f270abb 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updated VIF entry in instance network info cache for port d6a8cc09-5401-43eb-a552-9e7406a4b201. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.447 2 DEBUG nova.network.neutron [req-19d0cfa4-73be-4256-a1a8-b7bde13b9824 req-1e7c410c-7c5f-4937-824b-babd2f270abb 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updating instance_info_cache with network_info: [{"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.464 2 DEBUG oslo_concurrency.lockutils [req-19d0cfa4-73be-4256-a1a8-b7bde13b9824 req-1e7c410c-7c5f-4937-824b-babd2f270abb 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Releasing lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.555 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvm003n34" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
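
The config drive is built by shelling out to mkisofs with volume label config-2, which is the label cloud-init searches for. In oslo.concurrency terms the CMD lines above correspond to roughly the following (argv reproduced from the log; /tmp/tmpvm003n34 is the temporary metadata tree Nova staged and has since been removed):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpvm003n34')
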
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.613 2 DEBUG nova.storage.rbd_utils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] rbd image 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.625 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.config 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 264 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.919 2 DEBUG oslo_concurrency.processutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.config 443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:30:39 compute-0 nova_compute[351685]: 2025-10-03 11:30:39.920 2 INFO nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Deleting local config drive /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.config because it was imported into RBD.#033[00m
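
Because this deployment backs ephemeral disks with RBD, the freshly built ISO is pushed into the vms pool with the rbd CLI ("rbd import ... --image-format=2") and the local copy deleted. python-rbd has no single import call; a hedged equivalent creates the image and streams the file in:

    import os
    import rados
    import rbd

    SRC = '/var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.config'
    NAME = '443e486d-1bf2-4550-a4ae-32f0f8f4af19_disk.config'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            rbd.RBD().create(ioctx, NAME, os.path.getsize(SRC),
                             old_format=False)  # format 2, as in the CLI
            with rbd.Image(ioctx, NAME) as image, open(SRC, 'rb') as f:
                offset = 0
                while chunk := f.read(4 * 1024 * 1024):
                    image.write(chunk, offset)
                    offset += len(chunk)
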
Oct  3 11:30:39 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct  3 11:30:40 compute-0 systemd[1]: Started libvirt secret daemon.
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:40 compute-0 kernel: tapd6a8cc09-54: entered promiscuous mode
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:40 compute-0 NetworkManager[45015]: <info>  [1759491040.0808] manager: (tapd6a8cc09-54): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Oct  3 11:30:40 compute-0 ovn_controller[88471]: 2025-10-03T11:30:40Z|00198|binding|INFO|Claiming lport d6a8cc09-5401-43eb-a552-9e7406a4b201 for this chassis.
Oct  3 11:30:40 compute-0 ovn_controller[88471]: 2025-10-03T11:30:40Z|00199|binding|INFO|d6a8cc09-5401-43eb-a552-9e7406a4b201: Claiming fa:16:3e:5e:f1:a3 10.100.0.169
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.089 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:f1:a3 10.100.0.169'], port_security=['fa:16:3e:5e:f1:a3 10.100.0.169'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.169/16', 'neutron:device_id': '443e486d-1bf2-4550-a4ae-32f0f8f4af19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9844dad0-501d-443c-9110-8dd633c460de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebbd19d68501417398b05a6a4b7c22de', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6c689562-b70d-4f38-a6f1-f8491b7c8598', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=557eeff1-fb6f-4cc0-9427-7ac15dbf385b, chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d6a8cc09-5401-43eb-a552-9e7406a4b201) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.092 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d6a8cc09-5401-43eb-a552-9e7406a4b201 in datapath 9844dad0-501d-443c-9110-8dd633c460de bound to our chassis#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.095 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9844dad0-501d-443c-9110-8dd633c460de#033[00m
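
The metadata agent reacts to the chassis column of the southbound Port_Binding row flipping from empty to this chassis, which is exactly what the Matched UPDATE line shows (old=Port_Binding(chassis=[])). The general shape of such an ovsdbapp row event, reduced to a sketch (class body and match logic are illustrative, not Neutron's actual code; priority handling is omitted):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # watch UPDATEs on the southbound Port_Binding table
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # invoked for matched rows; 'old' carries only the changed
            # columns, here the previously-empty chassis seen in the log
            print('Port %s bound to our chassis' % row.logical_port)
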
Oct  3 11:30:40 compute-0 ovn_controller[88471]: 2025-10-03T11:30:40Z|00200|binding|INFO|Setting lport d6a8cc09-5401-43eb-a552-9e7406a4b201 ovn-installed in OVS
Oct  3 11:30:40 compute-0 ovn_controller[88471]: 2025-10-03T11:30:40Z|00201|binding|INFO|Setting lport d6a8cc09-5401-43eb-a552-9e7406a4b201 up in Southbound
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.125 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[31c58abc-a067-4434-bce1-3a88f8c680b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:30:40 compute-0 systemd-machined[137653]: New machine qemu-17-instance-00000010.
Oct  3 11:30:40 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000010.
Oct  3 11:30:40 compute-0 systemd-udevd[537941]: Network interface NamePolicy= disabled on kernel command line.
Oct  3 11:30:40 compute-0 NetworkManager[45015]: <info>  [1759491040.1593] device (tapd6a8cc09-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  3 11:30:40 compute-0 NetworkManager[45015]: <info>  [1759491040.1600] device (tapd6a8cc09-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.161 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[03035bba-3b87-4c32-b0cd-63a28a5e6d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.165 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[8dee6da2-4ba4-4329-b112-d463be286aa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.194 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[e2361d5f-511b-4da8-94a8-935e77541c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.221 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[09d54f24-9f79-4361-a298-a0607b5dc0e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9844dad0-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:82:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 999798, 'reachable_time': 19541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 537946, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.251 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[7c700f3f-0f8e-4cb4-b710-efe8de25d886]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap9844dad0-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 999815, 'tstamp': 999815}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 537951, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9844dad0-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 999820, 'tstamp': 999820}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 537951, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.252 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9844dad0-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.256 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9844dad0-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.256 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.257 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9844dad0-50, col_values=(('external_ids', {'iface-id': '71787e87-58e9-457f-840d-4d7e879d0280'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.257 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:30:40 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:40.292 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.380 2 DEBUG nova.compute.manager [req-2da846ef-cd96-4c2c-9c2a-46ad6f43a335 req-9a76f1e4-2964-4576-9fdd-7953b303afef 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received event network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.380 2 DEBUG oslo_concurrency.lockutils [req-2da846ef-cd96-4c2c-9c2a-46ad6f43a335 req-9a76f1e4-2964-4576-9fdd-7953b303afef 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.381 2 DEBUG oslo_concurrency.lockutils [req-2da846ef-cd96-4c2c-9c2a-46ad6f43a335 req-9a76f1e4-2964-4576-9fdd-7953b303afef 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.381 2 DEBUG oslo_concurrency.lockutils [req-2da846ef-cd96-4c2c-9c2a-46ad6f43a335 req-9a76f1e4-2964-4576-9fdd-7953b303afef 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:30:40 compute-0 nova_compute[351685]: 2025-10-03 11:30:40.381 2 DEBUG nova.compute.manager [req-2da846ef-cd96-4c2c-9c2a-46ad6f43a335 req-9a76f1e4-2964-4576-9fdd-7953b303afef 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Processing event network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.432 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759491041.4311998, 443e486d-1bf2-4550-a4ae-32f0f8f4af19 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.433 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] VM Started (Lifecycle Event)#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.438 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.443 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.448 2 INFO nova.virt.libvirt.driver [-] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Instance spawned successfully.#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.449 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.456 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.461 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.483 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
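
For reading the sync lines above: vm_state and task_state come from Nova's database, while the power_state integers are hypervisor-side codes from nova.compute.power_state. The mapping behind "DB power_state: 0, VM power_state: 1":

    # constants from nova.compute.power_state
    NOSTATE = 0     # DB side: instance not yet reported as running
    RUNNING = 1     # VM side: libvirt says the guest is up
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7

Since the task_state is still "spawning", the sync is deliberately skipped rather than reconciling the mismatch mid-build.
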
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.483 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759491041.4314563, 443e486d-1bf2-4550-a4ae-32f0f8f4af19 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.484 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] VM Paused (Lifecycle Event)#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.489 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.489 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.490 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.490 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.491 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.491 2 DEBUG nova.virt.libvirt.driver [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.513 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.520 2 DEBUG nova.virt.driver [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] Emitting event <LifecycleEvent: 1759491041.4445117, 443e486d-1bf2-4550-a4ae-32f0f8f4af19 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.520 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] VM Resumed (Lifecycle Event)#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.541 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.548 2 INFO nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Took 7.94 seconds to spawn the instance on the hypervisor.#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.548 2 DEBUG nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.555 2 DEBUG nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.578 2 INFO nova.compute.manager [None req-04b27053-9ca2-4eb6-abb0-100760dd9f92 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.623 2 INFO nova.compute.manager [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Took 8.97 seconds to build instance.#033[00m
Oct  3 11:30:41 compute-0 nova_compute[351685]: 2025-10-03 11:30:41.640 2 DEBUG oslo_concurrency.lockutils [None req-e2ed9530-e80b-4dc7-8814-a70d6d438dd7 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:30:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:41.707 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:41.708 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:30:41.708 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:30:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 264 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  3 11:30:42 compute-0 nova_compute[351685]: 2025-10-03 11:30:42.834 2 DEBUG nova.compute.manager [req-1b0a8d2f-caec-4856-b95d-58dab3b16f5d req-31a88b72-205f-4551-8ee7-5d4e4b4085db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received event network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:30:42 compute-0 nova_compute[351685]: 2025-10-03 11:30:42.834 2 DEBUG oslo_concurrency.lockutils [req-1b0a8d2f-caec-4856-b95d-58dab3b16f5d req-31a88b72-205f-4551-8ee7-5d4e4b4085db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:30:42 compute-0 nova_compute[351685]: 2025-10-03 11:30:42.835 2 DEBUG oslo_concurrency.lockutils [req-1b0a8d2f-caec-4856-b95d-58dab3b16f5d req-31a88b72-205f-4551-8ee7-5d4e4b4085db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:30:42 compute-0 nova_compute[351685]: 2025-10-03 11:30:42.835 2 DEBUG oslo_concurrency.lockutils [req-1b0a8d2f-caec-4856-b95d-58dab3b16f5d req-31a88b72-205f-4551-8ee7-5d4e4b4085db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:30:42 compute-0 nova_compute[351685]: 2025-10-03 11:30:42.835 2 DEBUG nova.compute.manager [req-1b0a8d2f-caec-4856-b95d-58dab3b16f5d req-31a88b72-205f-4551-8ee7-5d4e4b4085db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] No waiting events found dispatching network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:30:42 compute-0 nova_compute[351685]: 2025-10-03 11:30:42.836 2 WARNING nova.compute.manager [req-1b0a8d2f-caec-4856-b95d-58dab3b16f5d req-31a88b72-205f-4551-8ee7-5d4e4b4085db 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received unexpected event network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 for instance with vm_state active and task_state None.#033[00m
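
The WARNING is benign here: OVN reported the port up a second time, and this later network-vif-plugged arrived after the instance had already gone active, with no waiter left to pop (hence "No waiting events found"). These notifications reach Nova through the os-server-external-events API; a hedged sketch of the call Neutron makes (endpoint and token are placeholders, the body fields match the event in the log):

    import requests

    requests.post(
        'http://nova-api.internal:8774/v2.1/os-server-external-events',
        headers={'X-Auth-Token': '<service token>'},
        json={'events': [{
            'name': 'network-vif-plugged',
            'server_uuid': '443e486d-1bf2-4550-a4ae-32f0f8f4af19',
            'tag': 'd6a8cc09-5401-43eb-a552-9e7406a4b201',
            'status': 'completed'}]})
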
Oct  3 11:30:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct  3 11:30:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct  3 11:30:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:43 compute-0 nova_compute[351685]: 2025-10-03 11:30:43.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:43 compute-0 nova_compute[351685]: 2025-10-03 11:30:43.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:43 compute-0 nova_compute[351685]: 2025-10-03 11:30:43.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:30:43 compute-0 nova_compute[351685]: 2025-10-03 11:30:43.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:30:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 264 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  3 11:30:44 compute-0 nova_compute[351685]: 2025-10-03 11:30:44.239 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:30:44 compute-0 nova_compute[351685]: 2025-10-03 11:30:44.240 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:30:44 compute-0 nova_compute[351685]: 2025-10-03 11:30:44.240 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:30:44 compute-0 nova_compute[351685]: 2025-10-03 11:30:44.241 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:30:45 compute-0 nova_compute[351685]: 2025-10-03 11:30:45.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:45 compute-0 nova_compute[351685]: 2025-10-03 11:30:45.736 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:30:45 compute-0 nova_compute[351685]: 2025-10-03 11:30:45.749 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:30:45 compute-0 nova_compute[351685]: 2025-10-03 11:30:45.749 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 11:30:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3708: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:30:46
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data']
Oct  3 11:30:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:30:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 352 KiB/s wr, 64 op/s
Oct  3 11:30:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:48 compute-0 nova_compute[351685]: 2025-10-03 11:30:48.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3710: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 352 KiB/s wr, 87 op/s
Oct  3 11:30:50 compute-0 nova_compute[351685]: 2025-10-03 11:30:50.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct  3 11:30:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 11:30:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 11:30:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:30:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:30:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:30:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:30:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:30:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:30:52 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3ca2eeb9-a8ce-42dc-aa7c-d755d4844c5c does not exist
Oct  3 11:30:52 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6d60d25b-3e78-4551-a584-2c507b4c940a does not exist
Oct  3 11:30:52 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cc26f3f5-3860-4b3a-a5c9-32c70f4c4a5f does not exist
Oct  3 11:30:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:30:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:30:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:30:52 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:30:52 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:30:52 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:30:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 11:30:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:30:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:30:53 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:30:53 compute-0 podman[538283]: 2025-10-03 11:30:53.283361855 +0000 UTC m=+0.072933599 container create 4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:30:53 compute-0 podman[538283]: 2025-10-03 11:30:53.255836973 +0000 UTC m=+0.045408727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:30:53 compute-0 systemd[1]: Started libpod-conmon-4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723.scope.
Oct  3 11:30:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:30:53 compute-0 podman[538283]: 2025-10-03 11:30:53.4301961 +0000 UTC m=+0.219767914 container init 4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  3 11:30:53 compute-0 podman[538283]: 2025-10-03 11:30:53.439286552 +0000 UTC m=+0.228858276 container start 4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:30:53 compute-0 podman[538283]: 2025-10-03 11:30:53.444187608 +0000 UTC m=+0.233759352 container attach 4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 11:30:53 compute-0 amazing_golick[538299]: 167 167
Oct  3 11:30:53 compute-0 systemd[1]: libpod-4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723.scope: Deactivated successfully.
Oct  3 11:30:53 compute-0 podman[538283]: 2025-10-03 11:30:53.451446682 +0000 UTC m=+0.241018406 container died 4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:30:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-98716238271a7f59d7fdc4d60b8a548ff206bfd4467841f17d5201b44855f5ae-merged.mount: Deactivated successfully.
Oct  3 11:30:53 compute-0 podman[538283]: 2025-10-03 11:30:53.513951434 +0000 UTC m=+0.303523138 container remove 4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:30:53 compute-0 systemd[1]: libpod-conmon-4cecb0362765a192936995ac69289dbaa2f9f181bc663a0876397c930fbcf723.scope: Deactivated successfully.
Oct  3 11:30:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:53 compute-0 nova_compute[351685]: 2025-10-03 11:30:53.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:53 compute-0 podman[538322]: 2025-10-03 11:30:53.757613864 +0000 UTC m=+0.086961029 container create 8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:30:53 compute-0 systemd[1]: Started libpod-conmon-8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669.scope.
Oct  3 11:30:53 compute-0 podman[538322]: 2025-10-03 11:30:53.72191479 +0000 UTC m=+0.051261975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:30:53 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d8fd74c11b23928653cd4b1cd40f23e49d21b63a05b4c69b82632cdbe6250c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d8fd74c11b23928653cd4b1cd40f23e49d21b63a05b4c69b82632cdbe6250c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d8fd74c11b23928653cd4b1cd40f23e49d21b63a05b4c69b82632cdbe6250c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d8fd74c11b23928653cd4b1cd40f23e49d21b63a05b4c69b82632cdbe6250c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d8fd74c11b23928653cd4b1cd40f23e49d21b63a05b4c69b82632cdbe6250c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:53 compute-0 podman[538322]: 2025-10-03 11:30:53.915880296 +0000 UTC m=+0.245227481 container init 8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  3 11:30:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Oct  3 11:30:53 compute-0 podman[538322]: 2025-10-03 11:30:53.933933334 +0000 UTC m=+0.263280499 container start 8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:30:53 compute-0 podman[538322]: 2025-10-03 11:30:53.939032969 +0000 UTC m=+0.268380194 container attach 8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:30:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:30:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2505760603' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:30:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:30:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2505760603' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:30:55 compute-0 nova_compute[351685]: 2025-10-03 11:30:55.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:55 compute-0 suspicious_williams[538338]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:30:55 compute-0 suspicious_williams[538338]: --> relative data size: 1.0
Oct  3 11:30:55 compute-0 suspicious_williams[538338]: --> All data devices are unavailable
Oct  3 11:30:55 compute-0 systemd[1]: libpod-8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669.scope: Deactivated successfully.
Oct  3 11:30:55 compute-0 systemd[1]: libpod-8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669.scope: Consumed 1.143s CPU time.
Oct  3 11:30:55 compute-0 podman[538322]: 2025-10-03 11:30:55.15785214 +0000 UTC m=+1.487199305 container died 8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williams, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:30:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d8fd74c11b23928653cd4b1cd40f23e49d21b63a05b4c69b82632cdbe6250c-merged.mount: Deactivated successfully.
Oct  3 11:30:55 compute-0 podman[538322]: 2025-10-03 11:30:55.239623261 +0000 UTC m=+1.568970436 container remove 8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  3 11:30:55 compute-0 systemd[1]: libpod-conmon-8cc169058f4dd79487e127a892a8b0cec6b2b3d27ae0e69e8380916cd7f5c669.scope: Deactivated successfully.
Oct  3 11:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:30:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 16K writes, 74K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.01 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1360 writes, 6143 keys, 1360 commit groups, 1.0 writes per commit group, ingest: 8.69 MB, 0.01 MB/s#012Interval WAL: 1360 writes, 1360 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     29.6      3.15              0.39        56    0.056       0      0       0.0       0.0#012  L6      1/0    7.36 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8     91.9     77.1      5.77              1.71        55    0.105    352K    29K       0.0       0.0#012 Sum      1/0    7.36 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8     59.5     60.3      8.92              2.10       111    0.080    352K    29K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   7.2     80.8     79.1      0.64              0.21        10    0.064     42K   2572       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     91.9     77.1      5.77              1.71        55    0.105    352K    29K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     29.7      3.14              0.39        55    0.057       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.091, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.53 GB write, 0.07 MB/s write, 0.52 GB read, 0.07 MB/s read, 8.9 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 64.49 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000394 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4061,62.08 MB,20.4202%) FilterBlock(112,985.36 KB,0.316535%) IndexBlock(112,1.45 MB,0.475768%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  3 11:30:55 compute-0 nova_compute[351685]: 2025-10-03 11:30:55.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:55 compute-0 nova_compute[351685]: 2025-10-03 11:30:55.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:30:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 67 op/s
Oct  3 11:30:56 compute-0 podman[538516]: 2025-10-03 11:30:56.307652262 +0000 UTC m=+0.045753438 container create a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 11:30:56 compute-0 systemd[1]: Started libpod-conmon-a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492.scope.
Oct  3 11:30:56 compute-0 podman[538516]: 2025-10-03 11:30:56.290515332 +0000 UTC m=+0.028616528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:30:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:30:56 compute-0 podman[538516]: 2025-10-03 11:30:56.417670767 +0000 UTC m=+0.155772023 container init a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 11:30:56 compute-0 podman[538516]: 2025-10-03 11:30:56.426157759 +0000 UTC m=+0.164258935 container start a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:30:56 compute-0 podman[538516]: 2025-10-03 11:30:56.430123226 +0000 UTC m=+0.168224442 container attach a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:30:56 compute-0 systemd[1]: libpod-a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492.scope: Deactivated successfully.
Oct  3 11:30:56 compute-0 hopeful_ganguly[538532]: 167 167
Oct  3 11:30:56 compute-0 conmon[538532]: conmon a62c533ecb5bf7bb61a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492.scope/container/memory.events
Oct  3 11:30:56 compute-0 podman[538537]: 2025-10-03 11:30:56.48267135 +0000 UTC m=+0.037675809 container died a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 11:30:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f8ff8e937631996a3c95cb1fcd8fe2f0869b43ba747d33fddfc4db4d5b05e98-merged.mount: Deactivated successfully.
Oct  3 11:30:56 compute-0 podman[538537]: 2025-10-03 11:30:56.529600634 +0000 UTC m=+0.084605073 container remove a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:30:56 compute-0 systemd[1]: libpod-conmon-a62c533ecb5bf7bb61a22d7f67fd1c92b1799dd9f5d4a3d2aec7c9685219d492.scope: Deactivated successfully.
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016581277064205248 of space, bias 1.0, pg target 0.49743831192615745 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:30:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:30:56 compute-0 podman[538560]: 2025-10-03 11:30:56.825206479 +0000 UTC m=+0.090486932 container create c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:30:56 compute-0 podman[538560]: 2025-10-03 11:30:56.787755268 +0000 UTC m=+0.053035801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:30:56 compute-0 systemd[1]: Started libpod-conmon-c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b.scope.
Oct  3 11:30:56 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed0de2573e4193c98f6297f066bdc47980a4d29f3e85e0f059a5e70362837b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed0de2573e4193c98f6297f066bdc47980a4d29f3e85e0f059a5e70362837b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed0de2573e4193c98f6297f066bdc47980a4d29f3e85e0f059a5e70362837b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed0de2573e4193c98f6297f066bdc47980a4d29f3e85e0f059a5e70362837b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:56 compute-0 podman[538560]: 2025-10-03 11:30:56.975887818 +0000 UTC m=+0.241168261 container init c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:30:56 compute-0 podman[538560]: 2025-10-03 11:30:56.990771145 +0000 UTC m=+0.256051588 container start c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_saha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 11:30:56 compute-0 podman[538560]: 2025-10-03 11:30:56.995768475 +0000 UTC m=+0.261048918 container attach c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 11:30:57 compute-0 gifted_saha[538574]: {
Oct  3 11:30:57 compute-0 gifted_saha[538574]:    "0": [
Oct  3 11:30:57 compute-0 gifted_saha[538574]:        {
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "devices": [
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "/dev/loop3"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            ],
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_name": "ceph_lv0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_size": "21470642176",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "name": "ceph_lv0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "tags": {
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cluster_name": "ceph",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.crush_device_class": "",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.encrypted": "0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osd_id": "0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.type": "block",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.vdo": "0"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            },
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "type": "block",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "vg_name": "ceph_vg0"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:        }
Oct  3 11:30:57 compute-0 gifted_saha[538574]:    ],
Oct  3 11:30:57 compute-0 gifted_saha[538574]:    "1": [
Oct  3 11:30:57 compute-0 gifted_saha[538574]:        {
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "devices": [
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "/dev/loop4"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            ],
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_name": "ceph_lv1",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_size": "21470642176",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "name": "ceph_lv1",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "tags": {
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cluster_name": "ceph",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.crush_device_class": "",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.encrypted": "0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osd_id": "1",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.type": "block",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.vdo": "0"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            },
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "type": "block",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "vg_name": "ceph_vg1"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:        }
Oct  3 11:30:57 compute-0 gifted_saha[538574]:    ],
Oct  3 11:30:57 compute-0 gifted_saha[538574]:    "2": [
Oct  3 11:30:57 compute-0 gifted_saha[538574]:        {
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "devices": [
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "/dev/loop5"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            ],
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_name": "ceph_lv2",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_size": "21470642176",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "name": "ceph_lv2",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "tags": {
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.cluster_name": "ceph",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.crush_device_class": "",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.encrypted": "0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osd_id": "2",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.type": "block",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:                "ceph.vdo": "0"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            },
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "type": "block",
Oct  3 11:30:57 compute-0 gifted_saha[538574]:            "vg_name": "ceph_vg2"
Oct  3 11:30:57 compute-0 gifted_saha[538574]:        }
Oct  3 11:30:57 compute-0 gifted_saha[538574]:    ]
Oct  3 11:30:57 compute-0 gifted_saha[538574]: }
Oct  3 11:30:57 compute-0 systemd[1]: libpod-c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b.scope: Deactivated successfully.
Oct  3 11:30:57 compute-0 podman[538583]: 2025-10-03 11:30:57.8552146 +0000 UTC m=+0.058984292 container died c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_saha, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:30:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed0de2573e4193c98f6297f066bdc47980a4d29f3e85e0f059a5e70362837b5-merged.mount: Deactivated successfully.
Oct  3 11:30:57 compute-0 podman[538583]: 2025-10-03 11:30:57.92448403 +0000 UTC m=+0.128253692 container remove c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_saha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:30:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3714: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 707 KiB/s rd, 22 op/s
Oct  3 11:30:57 compute-0 systemd[1]: libpod-conmon-c83551cba95f42582491eed336bae02d21fb3c5cbd187e040dfc4b31c829db2b.scope: Deactivated successfully.
Oct  3 11:30:58 compute-0 podman[538597]: 2025-10-03 11:30:58.113888689 +0000 UTC m=+0.139358267 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:30:58 compute-0 podman[538622]: 2025-10-03 11:30:58.146117083 +0000 UTC m=+0.092339940 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 11:30:58 compute-0 podman[538610]: 2025-10-03 11:30:58.152617231 +0000 UTC m=+0.141711333 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:30:58 compute-0 podman[538615]: 2025-10-03 11:30:58.168411697 +0000 UTC m=+0.140542765 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Oct  3 11:30:58 compute-0 podman[538601]: 2025-10-03 11:30:58.18661252 +0000 UTC m=+0.170588008 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:30:58 compute-0 podman[538598]: 2025-10-03 11:30:58.19345566 +0000 UTC m=+0.217077379 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350)
Oct  3 11:30:58 compute-0 podman[538634]: 2025-10-03 11:30:58.206522068 +0000 UTC m=+0.152668303 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
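[editor's note] The health_status entries above are emitted by podman's per-container healthcheck timers; each config_data blob carries the healthcheck 'test' command and the healthchecks mount it runs against. A minimal sketch of reading the same state back, assuming the podman CLI is on PATH and using the multipathd container named in the log:

    import json
    import subprocess

    # Ask podman for the container's State object and pull out the health
    # fields that mirror health_status / health_failing_streak above.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State}}", "multipathd"],
        capture_output=True, text=True, check=True,
    )
    state = json.loads(out.stdout)
    # podman exposes healthcheck state under "Health" (older releases used
    # "Healthcheck"); read whichever key is present.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), "failing streak:", health.get("FailingStreak"))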
Oct  3 11:30:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:30:58 compute-0 nova_compute[351685]: 2025-10-03 11:30:58.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:30:58 compute-0 podman[538865]: 2025-10-03 11:30:58.822966645 +0000 UTC m=+0.053752494 container create d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:30:58 compute-0 systemd[1]: Started libpod-conmon-d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1.scope.
Oct  3 11:30:58 compute-0 podman[538865]: 2025-10-03 11:30:58.801357193 +0000 UTC m=+0.032143052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:30:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:30:58 compute-0 podman[538865]: 2025-10-03 11:30:58.918213478 +0000 UTC m=+0.148999327 container init d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 11:30:58 compute-0 podman[538865]: 2025-10-03 11:30:58.926924627 +0000 UTC m=+0.157710456 container start d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 11:30:58 compute-0 podman[538865]: 2025-10-03 11:30:58.930530033 +0000 UTC m=+0.161315892 container attach d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:30:58 compute-0 compassionate_hermann[538880]: 167 167
Oct  3 11:30:58 compute-0 podman[538865]: 2025-10-03 11:30:58.934110807 +0000 UTC m=+0.164896626 container died d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:30:58 compute-0 systemd[1]: libpod-d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1.scope: Deactivated successfully.
Oct  3 11:30:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-37013c94635d73e2b06b2e22a87162a984fa7dadff356aeb5de2aeaa6df21511-merged.mount: Deactivated successfully.
Oct  3 11:30:58 compute-0 podman[538865]: 2025-10-03 11:30:58.984384198 +0000 UTC m=+0.215170027 container remove d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:30:59 compute-0 systemd[1]: libpod-conmon-d4be679f27990f8f1f551779b1e24c2712a474a20df880fedb1fe8c8014a7ed1.scope: Deactivated successfully.
Oct  3 11:30:59 compute-0 podman[538903]: 2025-10-03 11:30:59.204529334 +0000 UTC m=+0.064731025 container create 011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:30:59 compute-0 systemd[1]: Started libpod-conmon-011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69.scope.
Oct  3 11:30:59 compute-0 podman[538903]: 2025-10-03 11:30:59.180649108 +0000 UTC m=+0.040850839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:30:59 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e65bd7ae5d3c1f681e693d6d9bb34c947a1ae4fd46006eb3b14337e9d7bbc98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e65bd7ae5d3c1f681e693d6d9bb34c947a1ae4fd46006eb3b14337e9d7bbc98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e65bd7ae5d3c1f681e693d6d9bb34c947a1ae4fd46006eb3b14337e9d7bbc98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e65bd7ae5d3c1f681e693d6d9bb34c947a1ae4fd46006eb3b14337e9d7bbc98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:30:59 compute-0 podman[538903]: 2025-10-03 11:30:59.318197577 +0000 UTC m=+0.178399308 container init 011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  3 11:30:59 compute-0 podman[538903]: 2025-10-03 11:30:59.334337804 +0000 UTC m=+0.194539495 container start 011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:30:59 compute-0 podman[538903]: 2025-10-03 11:30:59.341925578 +0000 UTC m=+0.202127299 container attach 011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:30:59 compute-0 podman[157165]: time="2025-10-03T11:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:30:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 49068 "" "Go-http-client/1.1"
Oct  3 11:30:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10013 "" "Go-http-client/1.1"
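[editor's note] The two GET lines above show a client hitting the libpod REST API through the podman service socket. A self-contained sketch of the same call from Python, assuming the /run/podman/podman.sock path mounted elsewhere in these logs:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")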
Oct  3 11:30:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 707 KiB/s rd, 22 op/s
Oct  3 11:31:00 compute-0 nova_compute[351685]: 2025-10-03 11:31:00.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:00 compute-0 pedantic_cori[538919]: {
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "osd_id": 1,
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "type": "bluestore"
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:    },
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "osd_id": 2,
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "type": "bluestore"
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:    },
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "osd_id": 0,
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:        "type": "bluestore"
Oct  3 11:31:00 compute-0 pedantic_cori[538919]:    }
Oct  3 11:31:00 compute-0 pedantic_cori[538919]: }
Oct  3 11:31:00 compute-0 systemd[1]: libpod-011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69.scope: Deactivated successfully.
Oct  3 11:31:00 compute-0 podman[538903]: 2025-10-03 11:31:00.475325422 +0000 UTC m=+1.335527133 container died 011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:31:00 compute-0 systemd[1]: libpod-011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69.scope: Consumed 1.132s CPU time.
Oct  3 11:31:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e65bd7ae5d3c1f681e693d6d9bb34c947a1ae4fd46006eb3b14337e9d7bbc98-merged.mount: Deactivated successfully.
Oct  3 11:31:00 compute-0 podman[538903]: 2025-10-03 11:31:00.571160374 +0000 UTC m=+1.431362065 container remove 011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:31:00 compute-0 systemd[1]: libpod-conmon-011f6e7b3b9a3da6a074c9a0c43f487dc034653cff5d2d347a08cb279fcaca69.scope: Deactivated successfully.
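[editor's note] The JSON printed by pedantic_cori above maps OSD UUIDs to their backing LVM devices on this host; the shape matches what ceph-volume emits with --format json (the exact subcommand is an assumption). A short sketch of turning that blob into an osd_id -> device table:

    import json

    # One entry copied from the pedantic_cori output above, for brevity.
    raw = """
    {
      "16cef594-0067-4499-9298-5d83edf70190": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
        "type": "bluestore"
      }
    }
    """
    osds = json.loads(raw)
    for info in sorted(osds.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{info['osd_id']}: {info['device']} ({info['type']})")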
Oct  3 11:31:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:31:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:31:00 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:31:00 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:31:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 76bfda2b-d667-4a3a-a012-0fee4b1490a5 does not exist
Oct  3 11:31:00 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cc61e914-94d5-4a71-8a7d-a4272216ab98 does not exist
Oct  3 11:31:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:31:01 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:31:01 compute-0 openstack_network_exporter[367524]: ERROR   11:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:31:01 compute-0 openstack_network_exporter[367524]: ERROR   11:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:31:01 compute-0 openstack_network_exporter[367524]: ERROR   11:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:31:01 compute-0 openstack_network_exporter[367524]: ERROR   11:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:31:01 compute-0 openstack_network_exporter[367524]: ERROR   11:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
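[editor's note] The appctl errors above come from the exporter trying to reach OVS/OVN daemons through their control sockets; ovn-northd is a central-node daemon and is not expected on a compute node, so no matching socket exists under the run directories mounted in the exporter's config_data. A minimal check of the same precondition, assuming the usual ovs-appctl socket naming convention <rundir>/<daemon>.<pid>.ctl:

    import glob

    # Probe for the control sockets appctl-style tools look for.
    for rundir, daemon in [
        ("/run/ovn", "ovn-northd"),
        ("/run/openvswitch", "ovsdb-server"),
        ("/run/openvswitch", "ovs-vswitchd"),
    ]:
        hits = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", hits or "no control socket found")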
Oct  3 11:31:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:31:02 compute-0 nova_compute[351685]: 2025-10-03 11:31:02.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:31:02 compute-0 nova_compute[351685]: 2025-10-03 11:31:02.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:31:02 compute-0 nova_compute[351685]: 2025-10-03 11:31:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:31:02 compute-0 nova_compute[351685]: 2025-10-03 11:31:02.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:31:02 compute-0 nova_compute[351685]: 2025-10-03 11:31:02.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:31:02 compute-0 nova_compute[351685]: 2025-10-03 11:31:02.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:31:02 compute-0 nova_compute[351685]: 2025-10-03 11:31:02.756 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:31:02 compute-0 nova_compute[351685]: 2025-10-03 11:31:02.757 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:31:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:31:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486676044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.219 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
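[editor's note] nova-compute's resource audit shells out to the exact command logged above to size the RBD-backed disk pool. A sketch of the same probe, assuming the keyring/conf paths from the log line and the stats field names of current ceph releases:

    import json
    import subprocess

    # Same command nova logs above; requires the client.openstack keyring.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(out.stdout)["stats"]
    gib = 1024 ** 3
    print(f"total={stats['total_bytes'] / gib:.0f} GiB "
          f"avail={stats['total_avail_bytes'] / gib:.0f} GiB")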
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.319 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.320 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.325 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.326 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.326 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.330 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.331 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:31:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.714 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.715 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3221MB free_disk=59.88881301879883GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
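[editor's note] The hypervisor resource view above embeds the host's PCI device list as JSON inside a single log record. For eyeballing such a dump, a small summary grouping by vendor_id (list shortened to two entries copied from the line above):

    import json
    from collections import Counter

    pci_devices = json.loads("""[
      {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3",
       "product_id": "7113", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7113", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0",
       "product_id": "1002", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1002", "dev_type": "type-PCI"}
    ]""")
    print(Counter(d["vendor_id"] for d in pci_devices))
    # Full list in the log: 5x 8086 (Intel) and 6x 1af4 (virtio).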
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.716 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.716 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.798 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.799 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.799 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.799 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.799 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:31:03 compute-0 nova_compute[351685]: 2025-10-03 11:31:03.871 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:31:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:31:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:31:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1655730549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:31:04 compute-0 nova_compute[351685]: 2025-10-03 11:31:04.391 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:31:04 compute-0 nova_compute[351685]: 2025-10-03 11:31:04.399 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:31:04 compute-0 nova_compute[351685]: 2025-10-03 11:31:04.417 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:31:04 compute-0 nova_compute[351685]: 2025-10-03 11:31:04.440 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:31:04 compute-0 nova_compute[351685]: 2025-10-03 11:31:04.441 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
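[editor's note] The inventory dict logged a few lines up is what placement uses to size this node; schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values shown:

    # Placement capacity formula: (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2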
Oct  3 11:31:05 compute-0 nova_compute[351685]: 2025-10-03 11:31:05.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:31:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:31:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:08 compute-0 nova_compute[351685]: 2025-10-03 11:31:08.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:09 compute-0 podman[539061]: 2025-10-03 11:31:09.829293691 +0000 UTC m=+0.086277757 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:31:09 compute-0 podman[539059]: 2025-10-03 11:31:09.837586967 +0000 UTC m=+0.086807534 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:31:09 compute-0 podman[539060]: 2025-10-03 11:31:09.863435184 +0000 UTC m=+0.124241152 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler)
Oct  3 11:31:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:31:10 compute-0 nova_compute[351685]: 2025-10-03 11:31:10.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:10 compute-0 ovn_controller[88471]: 2025-10-03T11:31:10Z|00202|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct  3 11:31:10 compute-0 nova_compute[351685]: 2025-10-03 11:31:10.442 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:31:10 compute-0 nova_compute[351685]: 2025-10-03 11:31:10.442 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:31:10 compute-0 nova_compute[351685]: 2025-10-03 11:31:10.442 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:31:10 compute-0 nova_compute[351685]: 2025-10-03 11:31:10.442 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:31:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:31:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:13 compute-0 nova_compute[351685]: 2025-10-03 11:31:13.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:31:15 compute-0 nova_compute[351685]: 2025-10-03 11:31:15.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 op/s
Oct  3 11:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:31:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:31:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 265 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 op/s
Oct  3 11:31:18 compute-0 ovn_controller[88471]: 2025-10-03T11:31:18Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:f1:a3 10.100.0.169
Oct  3 11:31:18 compute-0 ovn_controller[88471]: 2025-10-03T11:31:18Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:f1:a3 10.100.0.169
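[editor's note] OVN's pinctrl thread answers DHCP natively for guest ports, which is why offer/ack pairs appear in ovn-controller's log rather than from a dnsmasq process. A small parser for lines of this shape, should lease activity need to be extracted from these logs:

    import re

    line = ("2025-10-03T11:31:18Z|00026|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPACK fa:16:3e:5e:f1:a3 10.100.0.169")
    m = re.search(
        r"(DHCPOFFER|DHCPACK)\s+([0-9a-f:]{17})\s+(\d+\.\d+\.\d+\.\d+)", line)
    if m:
        kind, mac, ip = m.groups()
        print(kind, mac, "->", ip)  # DHCPACK fa:16:3e:5e:f1:a3 -> 10.100.0.169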
Oct  3 11:31:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:18 compute-0 nova_compute[351685]: 2025-10-03 11:31:18.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 288 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 1.6 MiB/s wr, 45 op/s
Oct  3 11:31:20 compute-0 nova_compute[351685]: 2025-10-03 11:31:20.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct  3 11:31:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:23 compute-0 nova_compute[351685]: 2025-10-03 11:31:23.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct  3 11:31:25 compute-0 nova_compute[351685]: 2025-10-03 11:31:25.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 205 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct  3 11:31:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct  3 11:31:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:28 compute-0 nova_compute[351685]: 2025-10-03 11:31:28.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:31:28 compute-0 podman[539122]: 2025-10-03 11:31:28.870389715 +0000 UTC m=+0.105181531 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:31:28 compute-0 podman[539134]: 2025-10-03 11:31:28.883038191 +0000 UTC m=+0.096142722 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct  3 11:31:28 compute-0 podman[539121]: 2025-10-03 11:31:28.892542705 +0000 UTC m=+0.138652624 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Oct  3 11:31:28 compute-0 podman[539120]: 2025-10-03 11:31:28.898509227 +0000 UTC m=+0.138024325 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:31:28 compute-0 podman[539141]: 2025-10-03 11:31:28.916818284 +0000 UTC m=+0.117187258 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  3 11:31:28 compute-0 podman[539125]: 2025-10-03 11:31:28.918772796 +0000 UTC m=+0.140364600 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  3 11:31:28 compute-0 podman[539140]: 2025-10-03 11:31:28.939057427 +0000 UTC m=+0.153553933 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
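The burst of health_status events at 11:31:28 is podman running each container's periodic healthcheck. The config_data blobs above show the mechanism: a host directory such as /var/lib/openstack/healthchecks/<name> is bind-mounted read-only at /openstack, the healthcheck 'test' command executes /openstack/healthcheck inside the container, and a healthy exit keeps health_failing_streak at 0. A minimal sketch for reading the same state programmatically, assuming the podman-py client and the /run/podman/podman.sock socket that the podman_exporter container later in the log is also wired to:

    from podman import PodmanClient

    # Read a container's health state over the podman socket (sketch; field
    # layout follows `podman inspect`, where older podman versions report
    # the block as "Healthcheck" rather than "Health").
    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        ctr = client.containers.get("ovn_metadata_agent")
        state = ctr.attrs.get("State", {})
        print(state.get("Health") or state.get("Healthcheck"))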
Oct  3 11:31:29 compute-0 podman[157165]: time="2025-10-03T11:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:31:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:31:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9608 "" "Go-http-client/1.1"
Oct  3 11:31:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct  3 11:31:30 compute-0 nova_compute[351685]: 2025-10-03 11:31:30.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:31 compute-0 openstack_network_exporter[367524]: ERROR   11:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:31:31 compute-0 openstack_network_exporter[367524]: ERROR   11:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:31:31 compute-0 openstack_network_exporter[367524]: ERROR   11:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:31:31 compute-0 openstack_network_exporter[367524]: ERROR   11:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:31:31 compute-0 openstack_network_exporter[367524]: ERROR   11:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
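These exporter errors are expected on a compute node rather than a sign of breakage: openstack_network_exporter probes OVN/OVS daemons through their appctl control sockets, but ovn-northd only runs on controller nodes, the exporter cannot find an ovsdb-server control socket at the path it expects, and the dpif-netdev/pmd-* queries only apply to a DPDK (userspace) datapath, which this kernel-datapath host does not run. A quick hypothetical check for which control sockets actually exist, with glob patterns following the /run/ovn and /run/openvswitch mounts in the container configs above:

    import glob

    # Hypothetical sanity check: list the appctl control sockets present
    # on this host (daemons create <name>.<pid>.ctl files when running).
    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/ovn/ovn-controller.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "missing")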
Oct  3 11:31:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 587 KiB/s wr, 10 op/s
Oct  3 11:31:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:33 compute-0 nova_compute[351685]: 2025-10-03 11:31:33.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct  3 11:31:35 compute-0 nova_compute[351685]: 2025-10-03 11:31:35.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct  3 11:31:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Oct  3 11:31:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:38 compute-0 nova_compute[351685]: 2025-10-03 11:31:38.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Oct  3 11:31:40 compute-0 nova_compute[351685]: 2025-10-03 11:31:40.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:40 compute-0 podman[539251]: 2025-10-03 11:31:40.829700553 +0000 UTC m=+0.080769880 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  3 11:31:40 compute-0 podman[539250]: 2025-10-03 11:31:40.838701851 +0000 UTC m=+0.096299936 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, name=ubi9, release=1214.1726694543, container_name=kepler, vendor=Red Hat, Inc.)
Oct  3 11:31:40 compute-0 podman[539249]: 2025-10-03 11:31:40.867102202 +0000 UTC m=+0.123915993 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.908 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.908 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.908 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.909 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.910 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a964db8f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
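The registration storm above follows from the two manager messages at 11:31:40.908: the [pollsters] source defines more pollsters than available worker threads, so every pollster is registered against a single-thread executor and drained serially, which is why each "Polling pollster ... in the context of pollsters" cycle below runs to completion before the next begins. The execution model, sketched with the same stdlib primitive the log names:

    from concurrent.futures import ThreadPoolExecutor

    # More tasks than workers: a 1-thread executor runs them one at a time,
    # mirroring the serial pollster cycles in the log below.
    pollsters = ["network.outgoing.packets.drop",
                 "network.outgoing.packets.error",
                 "disk.device.capacity",
                 "disk.device.read.bytes"]

    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(print, "polling", name) for name in pollsters]
        for future in futures:
            future.result()  # completes in submission order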
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.914 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Oct  3 11:31:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:40.915 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/443e486d-1bf2-4550-a4ae-32f0f8f4af19 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}05c71207458c74aad35f0b171d3453ab31f2036fb50a6b94fe7b4d338da45aed" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Oct  3 11:31:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:31:41.708 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:31:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:31:41.709 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:31:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:31:41.710 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:31:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.283 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Fri, 03 Oct 2025 11:31:40 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-9bcbfed6-5cdf-46bd-80d0-645b12060bb7 x-openstack-request-id: req-9bcbfed6-5cdf-46bd-80d0-645b12060bb7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.283 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "443e486d-1bf2-4550-a4ae-32f0f8f4af19", "name": "te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg", "status": "ACTIVE", "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "user_id": "8990c210ba8740dc9714739f27702391", "metadata": {"metering.server_group": "0f5ccd31-0ab5-424c-9868-9c1f9b1ba831"}, "hostId": "68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364", "image": {"id": "b9c8e0cc-ecf1-4fa8-92ee-328b108123cd", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b9c8e0cc-ecf1-4fa8-92ee-328b108123cd"}]}, "flavor": {"id": "b93eb926-1d95-406e-aec3-a907be067084", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b93eb926-1d95-406e-aec3-a907be067084"}]}, "created": "2025-10-03T11:30:30Z", "updated": "2025-10-03T11:30:41Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.169", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5e:f1:a3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/443e486d-1bf2-4550-a4ae-32f0f8f4af19"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/443e486d-1bf2-4550-a4ae-32f0f8f4af19"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-10-03T11:30:41.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000010", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.283 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/443e486d-1bf2-4550-a4ae-32f0f8f4af19 used request id req-9bcbfed6-5cdf-46bd-80d0-645b12060bb7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
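The REQ/RESP pair above is keystoneauth's standard HTTP debug logging: the request is rendered as a replayable curl command with the token replaced by its SHA256 digest, and the request id (req-9bcbfed6-...) correlates the response with the final "GET call to compute" line. A sketch of the equivalent client call, assuming placeholder credentials and auth URL (only the server UUID comes from the log):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder endpoint/credentials; the UUID is the instance queried above.
    auth = v3.Password(auth_url="https://keystone-internal.openstack.svc:5000/v3",
                       username="ceilometer", password="secret",
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    nova = nova_client.Client("2.1", session=session.Session(auth=auth))
    server = nova.servers.get("443e486d-1bf2-4550-a4ae-32f0f8f4af19")
    print(server.name, server.status)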
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.285 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '443e486d-1bf2-4550-a4ae-32f0f8f4af19', 'name': 'te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.288 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.292 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'name': 'te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.292 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.293 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:31:42.293498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.299 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 443e486d-1bf2-4550-a4ae-32f0f8f4af19 / tapd6a8cc09-54 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.299 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.305 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.310 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.312 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:31:42.312892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.313 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.314 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.314 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.316 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:31:42.316807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.332 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.333 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.354 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.355 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.355 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.373 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.373 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
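The disk.device.capacity samples are emitted once per block device, in bytes. Each m1.nano instance reports its 1 GiB root disk (1073741824 B) plus a small ~498 KiB device, consistent with the config drive flagged in the RESP body earlier; test_0 (flavor m1.small, ephemeral 1) adds a second 1 GiB disk, hence its three samples. Unit check:

    # Byte values copied from the capacity samples above.
    GiB = 1024 ** 3
    print(1073741824 / GiB)  # 1.0 -> root (and ephemeral) disks
    print(509952 / 1024)     # 498.0 KiB -> config-drive-sized device
    print(485376 / 1024)     # 474.0 KiB -> test_0's small device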
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.374 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.375 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:31:42.375319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.403 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 29019136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.403 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.462 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.463 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.464 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.492 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 30612480 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.493 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
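
The block above is one complete pollster cycle, and the same shape repeats for every meter that follows: discovery via [local_instances], a coordination check against an empty hash ring, a heartbeat update (note the heartbeat confirmations are written by a different worker pid, 12, than the sampling pid, 14), then one _stats_to_sample record per instance disk device. A minimal sketch of that control flow, with hypothetical names (run_pollster, discover, heartbeat_queue) standing in for the real ceilometer internals:

    # Minimal sketch of the per-pollster cycle traced in the log above.
    # Names are illustrative, not the actual ceilometer API.
    import datetime

    def run_pollster(pollster, discover, heartbeat_queue):
        resources = discover()                 # "Executing discovery process ..."
        needs_coordination = pollster.coordination_group is not None
        # In this log the group is [None], so every pollster runs locally.
        heartbeat_queue.put((pollster.name,    # consumed by a separate process (pid 12)
                             datetime.datetime.now(datetime.timezone.utc)))
        samples = []
        for resource in resources:             # one "_stats_to_sample" line per device
            samples.extend(pollster.get_samples(resource))
        return samples                         # "Finished polling pollster ..."
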
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.494 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.494 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.494 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 2011591932 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.495 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 159834513 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.495 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.495 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.495 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:31:42.494756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.496 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 2539266810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.496 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 146824610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
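
The disk.device.read.latency volumes above (for example 2011591932) are cumulative nanoseconds of read time from libvirt's block statistics, not instantaneous latencies; a usable figure comes from differencing two polls and dividing by the read-request delta, which normally happens downstream of the agent. A rough sketch of that post-processing, with illustrative inputs:

    # Turn two cumulative (total_time_ns, request_count) readings into an
    # average per-request read latency over the interval. Illustrative only.
    def avg_read_latency_ms(prev, curr):
        dt_ns = curr["read_time_ns"] - prev["read_time_ns"]
        dreq = curr["read_requests"] - prev["read_requests"]
        if dreq <= 0:
            return 0.0
        return (dt_ns / dreq) / 1e6           # ns -> ms per request

    prev = {"read_time_ns": 2011591932, "read_requests": 1049}
    curr = {"read_time_ns": 2013591932, "read_requests": 1059}
    print(avg_read_latency_ms(prev, curr))    # 0.2 ms per read
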
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.496 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.497 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.497 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 1049 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.497 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.497 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.498 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.498 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.498 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:31:42.497193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.498 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.499 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.499 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.499 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.499 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.499 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.500 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:31:42.499571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.500 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.500 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.500 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.500 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.501 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.501 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.501 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.502 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.502 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.502 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.502 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:31:42.501901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.502 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.503 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.503 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.503 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
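
disk.device.capacity, disk.device.usage and disk.device.allocation all report the block-info triple libvirt keeps per device: the recurring 1073741824 is the 1 GiB virtual disk size, while the small values (509952, 485376) are the bytes the backing file actually occupies on the host. A sketch of reading that triple directly, assuming the libvirt-python binding and a reachable libvirt daemon (the device names are illustrative):

    # Read the (capacity, allocation, physical) triple libvirt reports per
    # block device -- the source of the values logged above. Requires the
    # libvirt-python package and a running libvirt daemon.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("83fc22ce-d2e4-468a-b166-04f2743fa68d")
    for dev in ("vda", "vdb"):                 # device names are illustrative
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)
    conn.close()
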
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.504 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.504 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.504 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 72790016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.504 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:31:42.504220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.504 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.505 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.505 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.505 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.505 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.506 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:31:42.506643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.547 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.585 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.606 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
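
power.state volume: 1 for all three instances corresponds to libvirt's virDomainState enumeration, in which 1 means running. A lookup table for interpreting these samples:

    # libvirt virDomainState values, as surfaced by the power.state meter.
    LIBVIRT_POWER_STATES = {
        0: "nostate",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutdown",
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }

    print(LIBVIRT_POWER_STATES[1])   # "running", as in the samples above
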
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.607 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.607 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 9548013768 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.607 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.608 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.608 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.608 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:31:42.607458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.608 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.609 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 10666610362 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.609 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.610 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.610 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.610 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.610 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.610 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.610 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 295 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.611 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:31:42.610622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.611 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.611 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.612 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.612 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 324 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.612 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.613 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.613 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:31:42.613602) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.614 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.614 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
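
Unlike the cumulative network.* meters, network.incoming.bytes.delta reports only the change since the previous poll, which is why two idle instances show 0 and the third shows 168 bytes. A sketch of deriving such a delta from cached cumulative counters (the cache shape is illustrative, not ceilometer's actual state):

    # Derive a per-interval delta from cumulative rx-byte counters.
    _prev_rx = {}   # (instance_id, iface) -> last cumulative reading

    def rx_bytes_delta(instance_id, iface, rx_bytes):
        key = (instance_id, iface)
        delta = rx_bytes - _prev_rx.get(key, rx_bytes)  # first poll -> 0
        _prev_rx[key] = rx_bytes
        return max(delta, 0)   # a counter reset (e.g. reboot) clamps to 0

    print(rx_bytes_delta("83fc22ce", "tap0", 1000))   # 0 on first sight
    print(rx_bytes_delta("83fc22ce", "tap0", 1168))   # 168, as logged above
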
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.615 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.615 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.615 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.615 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg>]
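
The ERROR above is the manager's blacklist mechanism rather than a crash: the libvirt inspector exposes no data for network.incoming.bytes.rate, so the pollster raises PollsterPermanentError and the manager stops polling that meter for the listed resources for the lifetime of the agent. A simplified rendering of that contract (the real classes live in ceilometer.polling.plugin_base and ceilometer.polling.manager; this is not the actual implementation):

    # Sketch of the permanent-error contract seen in the log above.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    blacklist = []   # per-pollster, kept for the lifetime of the agent

    def poll_once(pollster, resources):
        resources = [r for r in resources if r not in blacklist]
        try:
            return list(pollster.get_samples(resources))
        except PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)   # "Prevent pollster ... anymore!"
            return []
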
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.616 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-10-03T11:31:42.615427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:31:42.616706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.617 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.617 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.617 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.618 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.618 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:31:42.618093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.618 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.618 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.619 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:31:42.619802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.620 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
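
disk.root.size and disk.ephemeral.size finish without any _stats_to_sample lines because they are derived from the instance's Nova flavor rather than from libvirt device statistics. A sketch of that derivation (the flavor dict shape is illustrative):

    # Root/ephemeral disk sizes come from the Nova flavor, not libvirt.
    def disk_size_samples(instance):
        flavor = instance["flavor"]
        return {
            "disk.root.size": flavor.get("root_gb", 0),         # GB
            "disk.ephemeral.size": flavor.get("ephemeral_gb", 0),
        }

    print(disk_size_samples({"flavor": {"root_gb": 1, "ephemeral_gb": 0}}))
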
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.620 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.621 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:31:42.621099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.621 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.621 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.621 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.622 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:31:42.622882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.623 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/cpu volume: 58280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.623 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 111020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.623 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/cpu volume: 279450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
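
The cpu meter is cumulative guest CPU time in nanoseconds (279450000000 is roughly 279 s), so utilization must be derived by differencing two polls and normalizing by the wall-clock interval and vCPU count. A sketch of the arithmetic, with illustrative inputs:

    # Convert two cumulative cpu-time samples (ns) into a utilization %.
    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        used_s = (curr_ns - prev_ns) / 1e9
        return 100.0 * used_s / (interval_s * vcpus)

    # 3e9 ns of guest CPU over a 300 s interval on 1 vCPU -> 1.0 %
    print(cpu_util_percent(276450000000, 279450000000, 300, 1))
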
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.624 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.624 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.624 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.625 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:31:42.624629) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.625 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.625 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.626 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.626 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.626 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:31:42.626401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.626 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.626 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.627 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.627 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.628 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.628 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.628 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.628 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:31:42.628067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.629 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:31:42.629872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.630 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/memory.usage volume: 43.3515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.630 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.630 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/memory.usage volume: 43.296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.631 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.631 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-10-03T11:31:42.631437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.631 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg>]
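The ERROR above is ceilometer's permanent-blacklist path: the libvirt inspector cannot supply *.rate data (see the "does not provide data" DEBUG two lines up), so the pollster raises PollsterPermanentError and the manager stops polling those resources for this pollster/source pair. A simplified sketch; the class follows my reading of ceilometer.polling.plugin_base and the helper names are hypothetical:

```python
class PollsterPermanentError(Exception):
    """Carries the resources this pollster can never poll
    (simplified from ceilometer.polling.plugin_base)."""
    def __init__(self, resources):
        super().__init__(resources)
        self.fail_res_list = resources

blacklist = []  # the manager keeps one per (pollster, source) pair

def poll(resources, get_samples):
    todo = [r for r in resources if r not in blacklist]
    if not todo:
        return []
    try:
        return get_samples(todo)
    except PollsterPermanentError as err:
        # Emits the "Prevent pollster ... anymore!" ERROR seen above.
        blacklist.extend(err.fail_res_list)
        return []

def rate_samples(resources):
    # Stand-in: LibvirtInspector has no *.rate data, so every
    # resource is rejected permanently.
    raise PollsterPermanentError(resources)

poll(["server-1"], rate_samples)  # first cycle: blacklists server-1
poll(["server-1"], rate_samples)  # later cycles: nothing left to poll
```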
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.632 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes volume: 1646 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.632 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:31:42.632464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.633 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.634 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:31:42.633858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.634 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.634 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:42 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:31:42.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:31:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:43 compute-0 nova_compute[351685]: 2025-10-03 11:31:43.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 11:31:45 compute-0 nova_compute[351685]: 2025-10-03 11:31:45.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:45 compute-0 nova_compute[351685]: 2025-10-03 11:31:45.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:31:45 compute-0 nova_compute[351685]: 2025-10-03 11:31:45.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:31:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:31:46 compute-0 nova_compute[351685]: 2025-10-03 11:31:46.235 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:31:46 compute-0 nova_compute[351685]: 2025-10-03 11:31:46.236 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:31:46 compute-0 nova_compute[351685]: 2025-10-03 11:31:46.237 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:31:46
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'images', 'vms', 'volumes']
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:31:46 compute-0 ceph-mgr[192071]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3262515590
Oct  3 11:31:47 compute-0 nova_compute[351685]: 2025-10-03 11:31:47.356 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updating instance_info_cache with network_info: [{"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:31:47 compute-0 nova_compute[351685]: 2025-10-03 11:31:47.371 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:31:47 compute-0 nova_compute[351685]: 2025-10-03 11:31:47.372 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
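The heal cycle above rewrote the instance's network_info cache with the JSON blob logged at 11:31:47.356. A self-contained snippet pulling the device name, MAC, and fixed IPs out of a structure shaped like that blob (trimmed to the fields it actually uses):

```python
import json

# network_info entry shaped like the blob Nova logged above (trimmed).
network_info = json.loads('''[{
  "id": "226590bd-fa92-4e26-8879-8782d015ad61",
  "address": "fa:16:3e:c0:36:62",
  "network": {"subnets": [{"cidr": "10.100.0.0/16",
    "ips": [{"address": "10.100.1.141", "type": "fixed"}]}]},
  "devname": "tap226590bd-fa"}]''')

for vif in network_info:
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["devname"], vif["address"], fixed_ips)
    # -> tap226590bd-fa fa:16:3e:c0:36:62 ['10.100.1.141']
```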
Oct  3 11:31:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Oct  3 11:31:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:48 compute-0 nova_compute[351685]: 2025-10-03 11:31:48.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Oct  3 11:31:50 compute-0 nova_compute[351685]: 2025-10-03 11:31:50.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 767 B/s wr, 0 op/s
Oct  3 11:31:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:53 compute-0 nova_compute[351685]: 2025-10-03 11:31:53.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  3 11:31:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:31:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229656892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:31:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:31:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229656892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:31:55 compute-0 nova_compute[351685]: 2025-10-03 11:31:55.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:55 compute-0 nova_compute[351685]: 2025-10-03 11:31:55.366 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:31:55 compute-0 nova_compute[351685]: 2025-10-03 11:31:55.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:31:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002067079548414404 of space, bias 1.0, pg target 0.6201238645243212 quantized to 32 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:31:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
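Each pg_autoscaler line above is the product capacity_ratio × bias × N, where N works out to exactly 300 for this cluster (consistent with, e.g., the default mon_target_pg_per_osd=100 across 3 OSDs, which is an assumption here). The raw target is then quantized to a power of two, subject to pool minimums, and only applied when it differs enough from the current pg_num. A quick check that reproduces the logged targets:

```python
# Reproducing the pg targets logged above.
# Assumption: multiplier N = mon_target_pg_per_osd (100) * 3 OSDs = 300.
cases = {
    '.mgr':               (7.185749983720779e-06, 1.0),  # -> 0.0021557249951162337
    'vms':                (0.002067079548414404,  1.0),  # -> 0.6201238645243212
    'images':             (0.00125203744627857,   1.0),  # -> 0.375611233883571
    'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),  # -> 0.0006104707950771635
}
for pool, (ratio, bias) in cases.items():
    print(pool, ratio * bias * 300)  # matches the "pg target" values
```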
Oct  3 11:31:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  3 11:31:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:31:58 compute-0 nova_compute[351685]: 2025-10-03 11:31:58.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:31:59 compute-0 podman[157165]: time="2025-10-03T11:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:31:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:31:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9605 "" "Go-http-client/1.1"
Oct  3 11:31:59 compute-0 podman[539329]: 2025-10-03 11:31:59.880612112 +0000 UTC m=+0.089838399 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:31:59 compute-0 podman[539311]: 2025-10-03 11:31:59.904873581 +0000 UTC m=+0.129679198 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 11:31:59 compute-0 podman[539312]: 2025-10-03 11:31:59.918116605 +0000 UTC m=+0.135838874 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct  3 11:31:59 compute-0 podman[539309]: 2025-10-03 11:31:59.923096514 +0000 UTC m=+0.160446612 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:31:59 compute-0 podman[539310]: 2025-10-03 11:31:59.933144237 +0000 UTC m=+0.166812138 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:31:59 compute-0 podman[539308]: 2025-10-03 11:31:59.934144288 +0000 UTC m=+0.171436175 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:31:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  3 11:32:00 compute-0 podman[539327]: 2025-10-03 11:32:00.00564851 +0000 UTC m=+0.207674427 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
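Each health_status=healthy event above is podman executing the container's configured healthcheck (test '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/<name>). The same check can be driven by hand; a sketch shelling out to the CLI, with the container name taken from the first event:

```python
import subprocess

# `podman healthcheck run <name>` exits 0 when the check passes.
result = subprocess.run(
    ["podman", "healthcheck", "run", "iscsid"],
    capture_output=True, text=True)
print("healthy" if result.returncode == 0
      else (result.stdout or result.stderr).strip())
```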
Oct  3 11:32:00 compute-0 nova_compute[351685]: 2025-10-03 11:32:00.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:01 compute-0 openstack_network_exporter[367524]: ERROR   11:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:32:01 compute-0 openstack_network_exporter[367524]: ERROR   11:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:32:01 compute-0 openstack_network_exporter[367524]: ERROR   11:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:32:01 compute-0 openstack_network_exporter[367524]: ERROR   11:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:32:01 compute-0 openstack_network_exporter[367524]: ERROR   11:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:32:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:32:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:32:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:32:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:32:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:32:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:32:01 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3c689c44-2230-49f6-a6ce-b66ed9b5017c does not exist
Oct  3 11:32:01 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 60ee3770-e6c2-4b3f-a7d0-ef59c98e8c21 does not exist
Oct  3 11:32:01 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d26857fa-9704-40d0-9b98-45d3b4c0db12 does not exist
Oct  3 11:32:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:32:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:32:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:32:01 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:32:01 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:32:01 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:32:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
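The ceph-mon audit channel above records every mon_command dispatch together with the requesting entity, and the from=/entity=/cmd= layout is regular enough to tally. A small sketch that counts command prefixes per entity from a log file (the file path is an assumption):

    # Sketch: tally mon_command prefixes per entity from audit lines like
    # the ones above. The regex follows the entity='...' cmd=[{"prefix": ...
    # layout seen in this log.
    import re
    from collections import Counter

    AUDIT = re.compile(r"entity='([^']+)' cmd=\[\{\"prefix\": \"([^\"]+)\"")

    counts = Counter()
    with open("/var/log/messages") as fh:  # path is an assumption
        for line in fh:
            m = AUDIT.search(line)
            if m:
                counts[(m.group(1), m.group(2))] += 1

    for (entity, prefix), n in counts.most_common():
        print(f"{n:6d}  {entity}  {prefix}")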
Oct  3 11:32:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:32:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:32:02 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:32:02 compute-0 nova_compute[351685]: 2025-10-03 11:32:02.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:32:02 compute-0 nova_compute[351685]: 2025-10-03 11:32:02.760 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:32:02 compute-0 nova_compute[351685]: 2025-10-03 11:32:02.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:32:02 compute-0 nova_compute[351685]: 2025-10-03 11:32:02.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
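The acquire/release pairs above come from oslo.concurrency's lockutils: nova serializes all resource-tracker work under the "compute_resources" semaphore, and the log records how long each hold lasted. The underlying primitive is a plain context manager; a minimal sketch with the lock name taken from the log:

    # Sketch: the oslo.concurrency lock pattern behind the
    # "compute_resources" acquire/release messages above.
    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        # critical section: in nova this is where the resource tracker
        # mutates its shared view of host inventory
        pass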
Oct  3 11:32:02 compute-0 nova_compute[351685]: 2025-10-03 11:32:02.763 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:32:02 compute-0 nova_compute[351685]: 2025-10-03 11:32:02.763 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:32:02 compute-0 podman[539708]: 2025-10-03 11:32:02.817796547 +0000 UTC m=+0.099118027 container create a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct  3 11:32:02 compute-0 podman[539708]: 2025-10-03 11:32:02.768586781 +0000 UTC m=+0.049908361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:32:02 compute-0 systemd[1]: Started libpod-conmon-a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a.scope.
Oct  3 11:32:02 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:32:02 compute-0 podman[539708]: 2025-10-03 11:32:02.949753137 +0000 UTC m=+0.231074647 container init a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_grothendieck, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:32:02 compute-0 podman[539708]: 2025-10-03 11:32:02.961640208 +0000 UTC m=+0.242961698 container start a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:32:02 compute-0 podman[539708]: 2025-10-03 11:32:02.967063842 +0000 UTC m=+0.248385322 container attach a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:32:02 compute-0 unruffled_grothendieck[539735]: 167 167
Oct  3 11:32:02 compute-0 systemd[1]: libpod-a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a.scope: Deactivated successfully.
Oct  3 11:32:02 compute-0 podman[539708]: 2025-10-03 11:32:02.973160927 +0000 UTC m=+0.254482417 container died a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:32:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-33e424426debc681ca5b5c3db006634ed7b678c57299a818306ae3fb5ae4c1ad-merged.mount: Deactivated successfully.
Oct  3 11:32:03 compute-0 podman[539708]: 2025-10-03 11:32:03.026024101 +0000 UTC m=+0.307345581 container remove a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  3 11:32:03 compute-0 systemd[1]: libpod-conmon-a2c1b5afd38db17ed39bb3d9fa1a84a61bcf8599d2362982dbbf3ac6f27cac7a.scope: Deactivated successfully.
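The short-lived unruffled_grothendieck container printed "167 167", which looks like a uid/gid probe: 167 is the fixed id upstream Ceph packaging assigns to the ceph user and group, and cephadm runs throwaway containers like this against the image. A hedged way to reproduce the check (the probed path is an assumption; the image digest is the one from the log):

    # Sketch: check the ceph uid/gid baked into the image. 167:167 is the
    # upstream ceph user/group; the stat target path is an assumption.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: "167 167"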
Oct  3 11:32:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:32:03 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706463720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:32:03 compute-0 podman[539767]: 2025-10-03 11:32:03.260881178 +0000 UTC m=+0.059031673 container create 029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.272 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
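Nova's resource audit shells out to `ceph df --format=json` through oslo_concurrency.processutils, as the Running cmd / returned pair above shows (0.508s round trip). Stripped to the essentials, the same call looks like this, with the --id and --conf values copied from the log line:

    # Sketch: the `ceph df` call nova's audit issues above, reduced to
    # plain subprocess + json. --id/--conf values are from the log.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])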
Oct  3 11:32:03 compute-0 systemd[1]: Started libpod-conmon-029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a.scope.
Oct  3 11:32:03 compute-0 podman[539767]: 2025-10-03 11:32:03.238429519 +0000 UTC m=+0.036580034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:32:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8392830aeb41f3fbd27bbb52fdadc5c7238f89e63ec54b8d32172c092fbb527/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.372 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8392830aeb41f3fbd27bbb52fdadc5c7238f89e63ec54b8d32172c092fbb527/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8392830aeb41f3fbd27bbb52fdadc5c7238f89e63ec54b8d32172c092fbb527/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8392830aeb41f3fbd27bbb52fdadc5c7238f89e63ec54b8d32172c092fbb527/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8392830aeb41f3fbd27bbb52fdadc5c7238f89e63ec54b8d32172c092fbb527/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
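The xfs messages above record that these overlay mounts carry 32-bit inode timestamps, valid until 0x7fffffff seconds after the Unix epoch; the cutoff date is easy to verify:

    # Sketch: the xfs "supports timestamps until 2038 (0x7fffffff)" limit,
    # converted to a date: 0x7fffffff seconds past the Unix epoch.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00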
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.379 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.380 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.380 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:32:03 compute-0 podman[539767]: 2025-10-03 11:32:03.397108514 +0000 UTC m=+0.195259029 container init 029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:32:03 compute-0 podman[539767]: 2025-10-03 11:32:03.409783261 +0000 UTC m=+0.207933756 container start 029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 11:32:03 compute-0 podman[539767]: 2025-10-03 11:32:03.413650764 +0000 UTC m=+0.211801279 container attach 029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.433 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.435 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:32:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.843 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.846 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3161MB free_disk=59.864280700683594GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.847 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.848 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.935 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.936 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.937 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.938 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:32:03 compute-0 nova_compute[351685]: 2025-10-03 11:32:03.939 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
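The final view is internally consistent with the three placement allocations logged just above: 512 + 128 + 128 MB of instance memory plus the 512 MB host reservation gives used_ram=1280MB, three one-vCPU instances give used_vcpus=3 (hence the free_vcpus=5 reported earlier), and 2 + 1 + 1 GB gives used_disk=4GB. As arithmetic, with the values copied from those lines:

    # Sketch: reconcile the "Final resource view" with the three placement
    # allocations logged above (all numbers copied from those lines).
    allocs = [{"MEMORY_MB": 512, "VCPU": 1, "DISK_GB": 2},
              {"MEMORY_MB": 128, "VCPU": 1, "DISK_GB": 1},
              {"MEMORY_MB": 128, "VCPU": 1, "DISK_GB": 1}]
    reserved_host_ram_mb = 512  # MEMORY_MB "reserved" from the inventory line
    used_ram = reserved_host_ram_mb + sum(a["MEMORY_MB"] for a in allocs)
    used_vcpus = sum(a["VCPU"] for a in allocs)
    used_disk = sum(a["DISK_GB"] for a in allocs)
    print(used_ram, used_vcpus, used_disk)  # 1280 3 4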
Oct  3 11:32:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:32:04 compute-0 nova_compute[351685]: 2025-10-03 11:32:04.011 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:32:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:32:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3211707591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:32:04 compute-0 nova_compute[351685]: 2025-10-03 11:32:04.520 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:32:04 compute-0 nova_compute[351685]: 2025-10-03 11:32:04.529 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:32:04 compute-0 nova_compute[351685]: 2025-10-03 11:32:04.551 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:32:04 compute-0 nova_compute[351685]: 2025-10-03 11:32:04.553 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:32:04 compute-0 nova_compute[351685]: 2025-10-03 11:32:04.553 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
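The inventory line above also shows how placement derives schedulable capacity from raw totals: per resource class, capacity = (total - reserved) × allocation_ratio. With the logged numbers that yields 32 schedulable vCPUs, 7167 MB of RAM and 52.2 GB of disk:

    # Sketch: placement capacity math for the inventory logged above,
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2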
Oct  3 11:32:04 compute-0 ecstatic_neumann[539785]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:32:04 compute-0 ecstatic_neumann[539785]: --> relative data size: 1.0
Oct  3 11:32:04 compute-0 ecstatic_neumann[539785]: --> All data devices are unavailable
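The ecstatic_neumann output reads like the tail of a ceph-volume lvm batch report: three LVM data devices were passed in, all already consumed by existing OSDs, so "All data devices are unavailable" and no new OSDs get created. The availability verdict per device can be inspected with ceph-volume's inventory; a sketch assuming ceph-volume is runnable on the host (cephadm normally wraps it in a container exactly like the ones above):

    # Sketch: list devices with availability and rejection reasons, the
    # check behind "All data devices are unavailable".
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(raw):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))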
Oct  3 11:32:04 compute-0 systemd[1]: libpod-029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a.scope: Deactivated successfully.
Oct  3 11:32:04 compute-0 systemd[1]: libpod-029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a.scope: Consumed 1.096s CPU time.
Oct  3 11:32:04 compute-0 podman[539767]: 2025-10-03 11:32:04.605225013 +0000 UTC m=+1.403375508 container died 029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  3 11:32:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8392830aeb41f3fbd27bbb52fdadc5c7238f89e63ec54b8d32172c092fbb527-merged.mount: Deactivated successfully.
Oct  3 11:32:04 compute-0 podman[539767]: 2025-10-03 11:32:04.672994045 +0000 UTC m=+1.471144530 container remove 029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:32:04 compute-0 systemd[1]: libpod-conmon-029c6bec6bb5274a26cadacc915fc070d04e7f29ee5f3713e0198d3cd32f9b8a.scope: Deactivated successfully.
Oct  3 11:32:05 compute-0 nova_compute[351685]: 2025-10-03 11:32:05.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:32:05 compute-0 podman[539985]: 2025-10-03 11:32:05.546836131 +0000 UTC m=+0.053087453 container create d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heisenberg, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 11:32:05 compute-0 systemd[1]: Started libpod-conmon-d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be.scope.
Oct  3 11:32:05 compute-0 podman[539985]: 2025-10-03 11:32:05.527360666 +0000 UTC m=+0.033612009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:32:05 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:32:05 compute-0 podman[539985]: 2025-10-03 11:32:05.654852793 +0000 UTC m=+0.161104135 container init d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heisenberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 11:32:05 compute-0 podman[539985]: 2025-10-03 11:32:05.665016548 +0000 UTC m=+0.171267870 container start d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heisenberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:32:05 compute-0 podman[539985]: 2025-10-03 11:32:05.669341858 +0000 UTC m=+0.175593190 container attach d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:32:05 compute-0 mystifying_heisenberg[540001]: 167 167
Oct  3 11:32:05 compute-0 systemd[1]: libpod-d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be.scope: Deactivated successfully.
Oct  3 11:32:05 compute-0 podman[539985]: 2025-10-03 11:32:05.67472036 +0000 UTC m=+0.180971682 container died d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heisenberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:32:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c62b0208ce76c29e96d36100aacec3e3edc43b14978e17783bba6250d6ac6e8f-merged.mount: Deactivated successfully.
Oct  3 11:32:05 compute-0 podman[539985]: 2025-10-03 11:32:05.730416775 +0000 UTC m=+0.236668097 container remove d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:32:05 compute-0 systemd[1]: libpod-conmon-d8fd1a9f5fcfd79a4d8e1d4e866f4c117da4733e390c827f48856a98f41cd4be.scope: Deactivated successfully.
Oct  3 11:32:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:06 compute-0 podman[540024]: 2025-10-03 11:32:06.012834176 +0000 UTC m=+0.078685083 container create 4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:32:06 compute-0 podman[540024]: 2025-10-03 11:32:05.981976727 +0000 UTC m=+0.047827674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:32:06 compute-0 systemd[1]: Started libpod-conmon-4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21.scope.
Oct  3 11:32:06 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a43e36fb40d72c7ae414e85bb42f0859133e09a0990324753a448875dbe5965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a43e36fb40d72c7ae414e85bb42f0859133e09a0990324753a448875dbe5965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a43e36fb40d72c7ae414e85bb42f0859133e09a0990324753a448875dbe5965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a43e36fb40d72c7ae414e85bb42f0859133e09a0990324753a448875dbe5965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:06 compute-0 podman[540024]: 2025-10-03 11:32:06.136732497 +0000 UTC m=+0.202583404 container init 4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:32:06 compute-0 podman[540024]: 2025-10-03 11:32:06.152918326 +0000 UTC m=+0.218769223 container start 4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 11:32:06 compute-0 podman[540024]: 2025-10-03 11:32:06.15773226 +0000 UTC m=+0.223583157 container attach 4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 11:32:06 compute-0 nova_compute[351685]: 2025-10-03 11:32:06.554 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:32:06 compute-0 nova_compute[351685]: 2025-10-03 11:32:06.554 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]: {
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:    "0": [
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:        {
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "devices": [
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "/dev/loop3"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            ],
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_name": "ceph_lv0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_size": "21470642176",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "name": "ceph_lv0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "tags": {
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cluster_name": "ceph",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.crush_device_class": "",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.encrypted": "0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osd_id": "0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.type": "block",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.vdo": "0"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            },
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "type": "block",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "vg_name": "ceph_vg0"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:        }
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:    ],
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:    "1": [
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:        {
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "devices": [
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "/dev/loop4"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            ],
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_name": "ceph_lv1",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_size": "21470642176",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "name": "ceph_lv1",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "tags": {
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cluster_name": "ceph",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.crush_device_class": "",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.encrypted": "0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osd_id": "1",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.type": "block",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.vdo": "0"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            },
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "type": "block",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "vg_name": "ceph_vg1"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:        }
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:    ],
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:    "2": [
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:        {
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "devices": [
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "/dev/loop5"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            ],
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_name": "ceph_lv2",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_size": "21470642176",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "name": "ceph_lv2",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "tags": {
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.cluster_name": "ceph",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.crush_device_class": "",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.encrypted": "0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osd_id": "2",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.type": "block",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:                "ceph.vdo": "0"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            },
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "type": "block",
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:            "vg_name": "ceph_vg2"
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:        }
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]:    ]
Oct  3 11:32:06 compute-0 compassionate_perlman[540040]: }
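The compassionate_perlman block has the shape of `ceph-volume lvm list --format json` output (an inference from its structure): a mapping of OSD id to its logical volumes, with the ceph.* LV tags carrying the cluster fsid, osd fsid and backing device. Reducing it to an osd → device table, assuming the JSON above has been saved to a file (the file name is an assumption):

    # Sketch: condense the ceph-volume lvm list JSON above into an
    # osd_id -> (lv_path, physical devices) table.
    import json

    with open("lvm_list.json") as fh:  # paste the JSON block above into it
        listing = json.load(fh)

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3
    # 1 /dev/ceph_vg1/ceph_lv1 /dev/loop4
    # 2 /dev/ceph_vg2/ceph_lv2 /dev/loop5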
Oct  3 11:32:06 compute-0 systemd[1]: libpod-4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21.scope: Deactivated successfully.
Oct  3 11:32:06 compute-0 podman[540024]: 2025-10-03 11:32:06.937186171 +0000 UTC m=+1.003037078 container died 4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 11:32:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a43e36fb40d72c7ae414e85bb42f0859133e09a0990324753a448875dbe5965-merged.mount: Deactivated successfully.
Oct  3 11:32:07 compute-0 podman[540024]: 2025-10-03 11:32:07.020669947 +0000 UTC m=+1.086520854 container remove 4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:32:07 compute-0 systemd[1]: libpod-conmon-4825334f03f9a1586b6407b6b9f19923632fc34dbbf09c5218a273f7c8ed5c21.scope: Deactivated successfully.
Oct  3 11:32:07 compute-0 podman[540200]: 2025-10-03 11:32:07.909193083 +0000 UTC m=+0.059709495 container create aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:32:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:07 compute-0 systemd[1]: Started libpod-conmon-aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854.scope.
Oct  3 11:32:07 compute-0 podman[540200]: 2025-10-03 11:32:07.8853851 +0000 UTC m=+0.035901532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:32:07 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:32:08 compute-0 podman[540200]: 2025-10-03 11:32:08.024010463 +0000 UTC m=+0.174526955 container init aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 11:32:08 compute-0 podman[540200]: 2025-10-03 11:32:08.034614893 +0000 UTC m=+0.185131305 container start aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  3 11:32:08 compute-0 ecstatic_boyd[540216]: 167 167
Oct  3 11:32:08 compute-0 podman[540200]: 2025-10-03 11:32:08.040667517 +0000 UTC m=+0.191183949 container attach aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:32:08 compute-0 systemd[1]: libpod-aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854.scope: Deactivated successfully.
Oct  3 11:32:08 compute-0 conmon[540216]: conmon aeae3c3e929ee5fadba6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854.scope/container/memory.events
Oct  3 11:32:08 compute-0 podman[540200]: 2025-10-03 11:32:08.045180751 +0000 UTC m=+0.195697163 container died aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f95041bf3915913371570733f0f051ef66ec08fc099528e046611565d820a6d5-merged.mount: Deactivated successfully.
Oct  3 11:32:08 compute-0 podman[540200]: 2025-10-03 11:32:08.09317108 +0000 UTC m=+0.243687492 container remove aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_boyd, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  3 11:32:08 compute-0 systemd[1]: libpod-conmon-aeae3c3e929ee5fadba6285a01594c18ac63d5ca71dc5c1bccf4363600410854.scope: Deactivated successfully.
Oct  3 11:32:08 compute-0 podman[540241]: 2025-10-03 11:32:08.288357755 +0000 UTC m=+0.041380467 container create 4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:32:08 compute-0 systemd[1]: Started libpod-conmon-4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745.scope.
Oct  3 11:32:08 compute-0 podman[540241]: 2025-10-03 11:32:08.270380179 +0000 UTC m=+0.023402871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:32:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f534b6f2f65c4710b7780d988bb748d69401651333aed1e153524176ffda85ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f534b6f2f65c4710b7780d988bb748d69401651333aed1e153524176ffda85ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f534b6f2f65c4710b7780d988bb748d69401651333aed1e153524176ffda85ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f534b6f2f65c4710b7780d988bb748d69401651333aed1e153524176ffda85ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:32:08 compute-0 podman[540241]: 2025-10-03 11:32:08.442324779 +0000 UTC m=+0.195347501 container init 4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 11:32:08 compute-0 podman[540241]: 2025-10-03 11:32:08.46761869 +0000 UTC m=+0.220641372 container start 4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 11:32:08 compute-0 podman[540241]: 2025-10-03 11:32:08.472575099 +0000 UTC m=+0.225597781 container attach 4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  3 11:32:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:08 compute-0 nova_compute[351685]: 2025-10-03 11:32:08.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]: {
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "osd_id": 1,
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "type": "bluestore"
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:    },
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "osd_id": 2,
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "type": "bluestore"
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:    },
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "osd_id": 0,
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:        "type": "bluestore"
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]:    }
Oct  3 11:32:09 compute-0 hardcore_hypatia[540257]: }
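
The JSON block above, keyed by osd_uuid with ceph_fsid/device/osd_id/type fields, matches the shape of `ceph-volume raw list --format json` output; cephadm runs such scans inside the throwaway containers (ecstatic_boyd, hardcore_hypatia) seen being created and removed around it. A minimal parsing sketch, assuming the block were saved to a hypothetical raw_list.json:

    import json

    # raw_list.json: hypothetical capture of the JSON block logged above
    with open("raw_list.json") as f:
        osds = json.load(f)

    # One line per OSD, ordered by osd_id, echoing the logged fields.
    for osd_uuid, meta in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{meta['osd_id']}  {meta['device']}  "
              f"type={meta['type']}  ceph_fsid={meta['ceph_fsid']}")

Run against the block above, this prints osd.0, osd.1 and osd.2 on /dev/mapper/ceph_vg0-ceph_lv0, ceph_vg1-ceph_lv1 and ceph_vg2-ceph_lv2 respectively, all bluestore, all in fsid 9b4e8c9a-5555-5510-a631-4742a1182561.
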
Oct  3 11:32:09 compute-0 systemd[1]: libpod-4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745.scope: Deactivated successfully.
Oct  3 11:32:09 compute-0 systemd[1]: libpod-4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745.scope: Consumed 1.150s CPU time.
Oct  3 11:32:09 compute-0 podman[540241]: 2025-10-03 11:32:09.624516068 +0000 UTC m=+1.377538780 container died 4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 11:32:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f534b6f2f65c4710b7780d988bb748d69401651333aed1e153524176ffda85ab-merged.mount: Deactivated successfully.
Oct  3 11:32:09 compute-0 podman[540241]: 2025-10-03 11:32:09.716757774 +0000 UTC m=+1.469780446 container remove 4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 11:32:09 compute-0 nova_compute[351685]: 2025-10-03 11:32:09.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:09 compute-0 systemd[1]: libpod-conmon-4112b8126f3a7bd3b09b5c38bb9b9532a53c6b85447aa85bd07bfaaf67dd2745.scope: Deactivated successfully.
Oct  3 11:32:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:32:09 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:32:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:32:09 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:32:09 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9ceae54a-4269-4a7e-bb3b-5c998f799c7a does not exist
Oct  3 11:32:09 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cf39f1e2-8677-46aa-a918-b7103aafd29c does not exist
Oct  3 11:32:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:10 compute-0 nova_compute[351685]: 2025-10-03 11:32:10.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:10 compute-0 nova_compute[351685]: 2025-10-03 11:32:10.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:32:10 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:32:11 compute-0 nova_compute[351685]: 2025-10-03 11:32:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:11 compute-0 nova_compute[351685]: 2025-10-03 11:32:11.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:11 compute-0 podman[540353]: 2025-10-03 11:32:11.847235335 +0000 UTC m=+0.099881572 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:32:11 compute-0 podman[540355]: 2025-10-03 11:32:11.857103221 +0000 UTC m=+0.096504354 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251001, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  3 11:32:11 compute-0 podman[540354]: 2025-10-03 11:32:11.869932632 +0000 UTC m=+0.104629054 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30)
Oct  3 11:32:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.584552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491133584598, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1733, "num_deletes": 255, "total_data_size": 2803706, "memory_usage": 2849504, "flush_reason": "Manual Compaction"}
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491133604935, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2743597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74178, "largest_seqno": 75910, "table_properties": {"data_size": 2735656, "index_size": 4819, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16098, "raw_average_key_size": 19, "raw_value_size": 2719737, "raw_average_value_size": 3328, "num_data_blocks": 215, "num_entries": 817, "num_filter_entries": 817, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759490949, "oldest_key_time": 1759490949, "file_creation_time": 1759491133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 20435 microseconds, and 12215 cpu microseconds.
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.604990) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2743597 bytes OK
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.605013) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.608801) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.608825) EVENT_LOG_v1 {"time_micros": 1759491133608817, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.608848) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2796264, prev total WAL file size 2796264, number of live WAL files 2.
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.611734) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323635' seq:72057594037927935, type:22 .. '6C6F676D0033353137' seq:0, type:0; will stop at (end)
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2679KB)], [182(7534KB)]
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491133611797, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 10459254, "oldest_snapshot_seqno": -1}
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 8395 keys, 10358859 bytes, temperature: kUnknown
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491133662346, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10358859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10308120, "index_size": 28674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20997, "raw_key_size": 221408, "raw_average_key_size": 26, "raw_value_size": 10161471, "raw_average_value_size": 1210, "num_data_blocks": 1129, "num_entries": 8395, "num_filter_entries": 8395, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759491133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.662562) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10358859 bytes
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.664379) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.6 rd, 204.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(7.6) write-amplify(3.8) OK, records in: 8920, records dropped: 525 output_compression: NoCompression
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.664394) EVENT_LOG_v1 {"time_micros": 1759491133664386, "job": 114, "event": "compaction_finished", "compaction_time_micros": 50614, "compaction_time_cpu_micros": 31460, "output_level": 6, "num_output_files": 1, "total_output_size": 10358859, "num_input_records": 8920, "num_output_records": 8395, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491133664871, "job": 114, "event": "table_file_deletion", "file_number": 184}
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491133666285, "job": 114, "event": "table_file_deletion", "file_number": 182}
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.611462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.666449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.666453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.666454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.666456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:13 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:13.666457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
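
A consistency check on JOB 114's compaction summary above, using the byte counts from its own events (the amplification ratios are taken relative to the L0 input, which is how I read RocksDB's reporting here):

    # Byte counts copied from the JOB 113/114 events logged above
    l0_bytes = 2_743_597     # table #184: flushed L0 input
    in_total = 10_459_254    # "input_data_size": L0 + L6 inputs
    out_bytes = 10_358_859   # table #185: compacted L6 output

    write_amp = out_bytes / l0_bytes            # ~3.78 -> logged "write-amplify(3.8)"
    rw_amp = (in_total + out_bytes) / l0_bytes  # ~7.59 -> logged "read-write-amplify(7.6)"
    print(f"{write_amp:.1f} {rw_amp:.1f}")

Both rounded figures match the summary line, so the logged in(2.6, 7.4) out(9.9) MB values are just those byte counts restated.
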
Oct  3 11:32:13 compute-0 nova_compute[351685]: 2025-10-03 11:32:13.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:15 compute-0 nova_compute[351685]: 2025-10-03 11:32:15.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:32:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:32:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:18 compute-0 nova_compute[351685]: 2025-10-03 11:32:18.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:20 compute-0 nova_compute[351685]: 2025-10-03 11:32:20.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:22 compute-0 nova_compute[351685]: 2025-10-03 11:32:22.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:23 compute-0 nova_compute[351685]: 2025-10-03 11:32:23.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct  3 11:32:25 compute-0 nova_compute[351685]: 2025-10-03 11:32:25.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct  3 11:32:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct  3 11:32:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:28 compute-0 nova_compute[351685]: 2025-10-03 11:32:28.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.157426) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491149157471, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 368, "num_deletes": 251, "total_data_size": 235186, "memory_usage": 242136, "flush_reason": "Manual Compaction"}
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491149163817, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 233248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75911, "largest_seqno": 76278, "table_properties": {"data_size": 230982, "index_size": 429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5535, "raw_average_key_size": 18, "raw_value_size": 226555, "raw_average_value_size": 757, "num_data_blocks": 20, "num_entries": 299, "num_filter_entries": 299, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759491134, "oldest_key_time": 1759491134, "file_creation_time": 1759491149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 6442 microseconds, and 2490 cpu microseconds.
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.163865) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 233248 bytes OK
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.163891) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.167136) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.167166) EVENT_LOG_v1 {"time_micros": 1759491149167155, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.167191) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 232767, prev total WAL file size 232767, number of live WAL files 2.
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.167802) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(227KB)], [185(10116KB)]
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491149167828, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 10592107, "oldest_snapshot_seqno": -1}
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 8185 keys, 8862020 bytes, temperature: kUnknown
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491149210513, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 8862020, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8814167, "index_size": 26300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 217724, "raw_average_key_size": 26, "raw_value_size": 8672601, "raw_average_value_size": 1059, "num_data_blocks": 1020, "num_entries": 8185, "num_filter_entries": 8185, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759491149, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.210720) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 8862020 bytes
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.212435) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 247.8 rd, 207.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(83.4) write-amplify(38.0) OK, records in: 8694, records dropped: 509 output_compression: NoCompression
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.212451) EVENT_LOG_v1 {"time_micros": 1759491149212444, "job": 116, "event": "compaction_finished", "compaction_time_micros": 42753, "compaction_time_cpu_micros": 24971, "output_level": 6, "num_output_files": 1, "total_output_size": 8862020, "num_input_records": 8694, "num_output_records": 8185, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491149212605, "job": 116, "event": "table_file_deletion", "file_number": 187}
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491149214193, "job": 116, "event": "table_file_deletion", "file_number": 185}
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.167705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.214374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.214378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.214380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.214381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:29 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:32:29.214383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:32:29 compute-0 podman[157165]: time="2025-10-03T11:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:32:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:32:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9607 "" "Go-http-client/1.1"
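
The two GET lines above are the libpod REST API being polled over the podman socket (podman_exporter's config_data earlier in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of the same container-list query, assuming that socket path:

    import http.client
    import json
    import socket

    SOCK_PATH = "/run/podman/podman.sock"  # from podman_exporter's CONTAINER_HOST above

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection(SOCK_PATH)
    # Same endpoint as the first logged request (query parameters abbreviated):
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(resp.status, len(containers), "containers")
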
Oct  3 11:32:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct  3 11:32:30 compute-0 nova_compute[351685]: 2025-10-03 11:32:30.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:30 compute-0 podman[540413]: 2025-10-03 11:32:30.874985632 +0000 UTC m=+0.125358589 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:32:30 compute-0 podman[540417]: 2025-10-03 11:32:30.889335762 +0000 UTC m=+0.114007025 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:32:30 compute-0 podman[540416]: 2025-10-03 11:32:30.898592269 +0000 UTC m=+0.138663346 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:32:30 compute-0 podman[540425]: 2025-10-03 11:32:30.904656043 +0000 UTC m=+0.131191185 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  3 11:32:30 compute-0 podman[540415]: 2025-10-03 11:32:30.912879467 +0000 UTC m=+0.142438426 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  3 11:32:30 compute-0 podman[540414]: 2025-10-03 11:32:30.933752895 +0000 UTC m=+0.147815418 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Oct  3 11:32:30 compute-0 podman[540423]: 2025-10-03 11:32:30.946178314 +0000 UTC m=+0.162270162 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller)
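The six health_status events above come from podman's healthcheck timers: each container's config_data carries a 'healthcheck' entry whose 'test' command podman runs periodically, recording the result in the container state. A minimal sketch of reading that recorded state back, assuming podman 4.x and the container names from this log (the Status/FailingStreak field names follow the Docker-compatible inspect layout):

    #!/usr/bin/env python3
    # Minimal sketch: read back the health state podman records for the
    # EDPM containers above. Assumes podman 4.x; field names follow the
    # Docker-compatible inspect layout.
    import json
    import subprocess

    CONTAINERS = ["ceilometer_agent_compute", "multipathd", "iscsid",
                  "ovn_metadata_agent", "openstack_network_exporter",
                  "ovn_controller"]

    for name in CONTAINERS:
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True).stdout
        health = json.loads(out)
        # Mirrors health_status / health_failing_streak in the journal events.
        print(f"{name}: {health['Status']} (failing streak {health['FailingStreak']})")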
Oct  3 11:32:31 compute-0 openstack_network_exporter[367524]: ERROR   11:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:32:31 compute-0 openstack_network_exporter[367524]: ERROR   11:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:32:31 compute-0 openstack_network_exporter[367524]: ERROR   11:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:32:31 compute-0 openstack_network_exporter[367524]: ERROR   11:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:32:31 compute-0 openstack_network_exporter[367524]: ERROR   11:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
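These exporter errors are path lookups, not daemon failures: ovs-appctl-style clients find a daemon through its <name>.<pid>.ctl control socket in the run directory, a compute node runs no ovn-northd at all, and the pmd-* appctl calls only apply to a userspace (netdev) datapath. A sketch of the same lookup, assuming the conventional run directories that the exporter's config_data mounts:

    #!/usr/bin/env python3
    # Sketch of the lookup behind "no control socket files found":
    # appctl clients locate a daemon via <name>.<pid>.ctl in its run
    # directory. Paths are the conventional defaults and match the
    # exporter's volume mounts; adjust for other layouts.
    from glob import glob

    def control_sockets(rundir: str, daemon: str) -> list:
        return glob(f"{rundir}/{daemon}.*.ctl")

    print("ovsdb-server:", control_sockets("/run/openvswitch", "ovsdb-server"))
    # Empty on a compute node: ovn-northd only runs alongside the OVN DBs.
    print("ovn-northd:", control_sockets("/run/ovn", "ovn-northd"))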
Oct  3 11:32:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct  3 11:32:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:33 compute-0 nova_compute[351685]: 2025-10-03 11:32:33.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:33 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s wr, 0 op/s
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.075 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.099 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.101 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid 83fc22ce-d2e4-468a-b166-04f2743fa68d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.102 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Triggering sync for uuid 443e486d-1bf2-4550-a4ae-32f0f8f4af19 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.103 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.104 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.106 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.107 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.108 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.109 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.142 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "b43db93c-a4fe-46e9-8418-eedf4f5c135a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.148 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:32:34 compute-0 nova_compute[351685]: 2025-10-03 11:32:34.150 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
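The Acquiring/acquired/released triples above are oslo.concurrency's standard lock logging: _sync_power_states fans out one query_driver_power_state_and_sync call per instance, each serialized on the instance UUID. A minimal sketch of that pattern (the function body here is a hypothetical stand-in, not nova's code):

    #!/usr/bin/env python3
    # Sketch of the per-instance locking pattern logged above. The
    # lockutils.lock() context manager emits the "Acquiring"/"acquired"/
    # "released" lines; the UUID is one of the instances from the log.
    from oslo_concurrency import lockutils

    uuid = "b43db93c-a4fe-46e9-8418-eedf4f5c135a"

    def query_driver_power_state_and_sync():
        pass  # hypothetical stand-in for comparing driver vs. DB power state

    with lockutils.lock(uuid):
        # Held for roughly 0.04 s per instance in the log.
        query_driver_power_state_and_sync()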
Oct  3 11:32:35 compute-0 nova_compute[351685]: 2025-10-03 11:32:35.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:35 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:37 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:32:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:38 compute-0 nova_compute[351685]: 2025-10-03 11:32:38.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:39 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 85 B/s wr, 3 op/s
Oct  3 11:32:40 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  3 11:32:40 compute-0 nova_compute[351685]: 2025-10-03 11:32:40.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:32:41.710 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:32:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:32:41.711 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:32:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:32:41.711 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:32:41 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 170 B/s wr, 5 op/s
Oct  3 11:32:42 compute-0 nova_compute[351685]: 2025-10-03 11:32:42.758 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:42 compute-0 podman[540546]: 2025-10-03 11:32:42.835330412 +0000 UTC m=+0.091388199 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:32:42 compute-0 podman[540548]: 2025-10-03 11:32:42.852306156 +0000 UTC m=+0.091799793 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Oct  3 11:32:42 compute-0 podman[540547]: 2025-10-03 11:32:42.864806387 +0000 UTC m=+0.114055177 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
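podman_exporter (9882) and kepler (8888) above, like openstack_network_exporter (9105) and node_exporter (9100) elsewhere in this log, publish Prometheus metrics on host ports per their config_data. A quick scrape sketch; plain HTTP is an assumption here, since the web.config.file and tls volume mounts suggest some endpoints are TLS-only:

    #!/usr/bin/env python3
    # Sketch: scrape the exporter ports from the config_data above.
    # Assumes plain HTTP; if an exporter is TLS-only, switch to https
    # and trust the CA under /var/lib/openstack/certs/telemetry.
    import urllib.request

    PORTS = {"podman_exporter": 9882, "kepler": 8888,
             "openstack_network_exporter": 9105}

    for name, port in PORTS.items():
        with urllib.request.urlopen(f"http://localhost:{port}/metrics",
                                    timeout=5) as resp:
            first_line = resp.read(200).decode().splitlines()[0]
        print(f"{name} ({port}): {first_line}")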
Oct  3 11:32:43 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  3 11:32:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:43 compute-0 nova_compute[351685]: 2025-10-03 11:32:43.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:43 compute-0 nova_compute[351685]: 2025-10-03 11:32:43.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:43 compute-0 nova_compute[351685]: 2025-10-03 11:32:43.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 11:32:43 compute-0 nova_compute[351685]: 2025-10-03 11:32:43.747 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  3 11:32:43 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 298 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 170 B/s wr, 5 op/s
Oct  3 11:32:45 compute-0 nova_compute[351685]: 2025-10-03 11:32:45.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:45 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:32:46
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms', '.mgr']
Oct  3 11:32:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
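The balancer pass above ran in upmap mode, considered up to ten candidate changes across the listed pools, and prepared none because the PGs are already evenly mapped. The same state is visible from the CLI; a sketch (the JSON key names are as in recent Ceph releases, hence the defensive .get):

    #!/usr/bin/env python3
    # Sketch: inspect the balancer state behind "prepared 0/10 changes".
    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print("active:", status.get("active"), "mode:", status.get("mode"))
    print("last result:", status.get("optimize_result"))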
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
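The rbd_support module reloads its mirror-snapshot and trash-purge schedules per pool (vms, volumes, backups, images; each pool is logged once per handler level). The equivalent CLI view, as a sketch assuming a Ceph release recent enough to carry both schedule subcommands:

    #!/usr/bin/env python3
    # Sketch: list the schedules the rbd_support handlers are loading.
    import subprocess

    for sub in (["mirror", "snapshot"], ["trash", "purge"]):
        out = subprocess.run(["rbd", *sub, "schedule", "ls", "--recursive"],
                             capture_output=True, text=True).stdout
        print(" ".join(sub), "schedules:", out.strip() or "(none)")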
Oct  3 11:32:47 compute-0 nova_compute[351685]: 2025-10-03 11:32:47.746 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:47 compute-0 nova_compute[351685]: 2025-10-03 11:32:47.747 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:32:47 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3769: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct  3 11:32:48 compute-0 nova_compute[351685]: 2025-10-03 11:32:48.301 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:32:48 compute-0 nova_compute[351685]: 2025-10-03 11:32:48.302 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:32:48 compute-0 nova_compute[351685]: 2025-10-03 11:32:48.303 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:32:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:48 compute-0 nova_compute[351685]: 2025-10-03 11:32:48.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:49 compute-0 nova_compute[351685]: 2025-10-03 11:32:49.631 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updating instance_info_cache with network_info: [{"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:32:49 compute-0 nova_compute[351685]: 2025-10-03 11:32:49.652 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:32:49 compute-0 nova_compute[351685]: 2025-10-03 11:32:49.653 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
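The heal pass rebuilt the info cache for instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 from Neutron: the network_info blob above is the cached structure, one entry per VIF. A sketch of pulling the fixed IP back out of it, with the structure copied (abridged) from the log line:

    #!/usr/bin/env python3
    # Sketch: walk the network_info structure cached above (abridged
    # copy of the logged blob) the way consumers of the cache do.
    network_info = [{
        "id": "d6a8cc09-5401-43eb-a552-9e7406a4b201",
        "address": "fa:16:3e:5e:f1:a3",
        "network": {"subnets": [{"cidr": "10.100.0.0/16",
                                 "ips": [{"address": "10.100.0.169",
                                          "type": "fixed"}]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["type"], ip["address"])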
Oct  3 11:32:49 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct  3 11:32:50 compute-0 nova_compute[351685]: 2025-10-03 11:32:50.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:32:50 compute-0 ceph-osd[205584]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 7200.2 total, 600.0 interval
    Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
    Cumulative WAL: 11K writes, 3427 syncs, 3.37 writes per sync, written: 0.03 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2216 writes, 8469 keys, 2216 commit groups, 1.0 writes per commit group, ingest: 8.79 MB, 0.01 MB/s
    Interval WAL: 2216 writes, 883 syncs, 2.51 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:32:51 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 8.5 KiB/s wr, 2 op/s
Oct  3 11:32:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:53 compute-0 nova_compute[351685]: 2025-10-03 11:32:53.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:53 compute-0 nova_compute[351685]: 2025-10-03 11:32:53.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:53 compute-0 nova_compute[351685]: 2025-10-03 11:32:53.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 11:32:53 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Oct  3 11:32:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:32:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4159193846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:32:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:32:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4159193846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:32:54 compute-0 nova_compute[351685]: 2025-10-03 11:32:54.739 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:55 compute-0 nova_compute[351685]: 2025-10-03 11:32:55.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:55 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 8.4 KiB/s wr, 0 op/s
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020686693161099173 of space, bias 1.0, pg target 0.6206007948329751 quantized to 32 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:32:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
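Each pg_autoscaler line above is the same arithmetic: pg target = usage ratio x bias x a capacity factor, then quantized to a power of two and clamped by pool minimums and change thresholds. From the numbers in the log the factor is exactly 300, presumably 3 OSDs x mon_target_pg_per_osd=100 (three ceph-osd processes appear in this journal). A worked check, with quantization and clamping deliberately omitted:

    #!/usr/bin/env python3
    # Worked check of the pg_autoscaler arithmetic logged above. The 300
    # multiplier is inferred from the log itself (every pg target equals
    # ratio * bias * 300); quantization and pg_num_min clamping omitted.
    POOLS = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0020686693161099173, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in POOLS.items():
        print(f"{name}: pg target {ratio * bias * 300!r}")
        # .mgr -> 0.0021557249951162337, matching the log; the other pools
        # agree up to the last digit (float rounding order differs).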
Oct  3 11:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:32:57 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 7200.1 total, 600.0 interval
    Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
    Cumulative WAL: 12K writes, 3634 syncs, 3.41 writes per sync, written: 0.03 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2316 writes, 9075 keys, 2316 commit groups, 1.0 writes per commit group, ingest: 10.78 MB, 0.02 MB/s
    Interval WAL: 2316 writes, 906 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:32:57 compute-0 nova_compute[351685]: 2025-10-03 11:32:57.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:32:57 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 0 op/s
Oct  3 11:32:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:32:58 compute-0 nova_compute[351685]: 2025-10-03 11:32:58.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:32:59 compute-0 podman[157165]: time="2025-10-03T11:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:32:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:32:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9608 "" "Go-http-client/1.1"
Oct  3 11:32:59 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3775: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct  3 11:33:00 compute-0 nova_compute[351685]: 2025-10-03 11:33:00.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:01 compute-0 openstack_network_exporter[367524]: ERROR   11:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:33:01 compute-0 openstack_network_exporter[367524]: ERROR   11:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:33:01 compute-0 openstack_network_exporter[367524]: ERROR   11:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:33:01 compute-0 openstack_network_exporter[367524]: ERROR   11:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:33:01 compute-0 openstack_network_exporter[367524]: ERROR   11:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:33:01 compute-0 podman[540608]: 2025-10-03 11:33:01.875665013 +0000 UTC m=+0.111454083 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm)
Oct  3 11:33:01 compute-0 podman[540609]: 2025-10-03 11:33:01.88370365 +0000 UTC m=+0.118724965 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 11:33:01 compute-0 podman[540628]: 2025-10-03 11:33:01.892147221 +0000 UTC m=+0.105968087 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  3 11:33:01 compute-0 podman[540610]: 2025-10-03 11:33:01.90146897 +0000 UTC m=+0.124857073 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 11:33:01 compute-0 podman[540619]: 2025-10-03 11:33:01.912928237 +0000 UTC m=+0.130950428 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 11:33:01 compute-0 podman[540626]: 2025-10-03 11:33:01.912897086 +0000 UTC m=+0.129077207 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  3 11:33:01 compute-0 podman[540607]: 2025-10-03 11:33:01.916969726 +0000 UTC m=+0.157625112 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:33:01 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct  3 11:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:33:03 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.1 total, 600.0 interval
Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
Cumulative WAL: 10K writes, 2847 syncs, 3.57 writes per sync, written: 0.03 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1928 writes, 7500 keys, 1928 commit groups, 1.0 writes per commit group, ingest: 9.29 MB, 0.02 MB/s
Interval WAL: 1928 writes, 766 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:33:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:03 compute-0 nova_compute[351685]: 2025-10-03 11:33:03.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:03 compute-0 nova_compute[351685]: 2025-10-03 11:33:03.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:33:03 compute-0 nova_compute[351685]: 2025-10-03 11:33:03.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:33:03 compute-0 nova_compute[351685]: 2025-10-03 11:33:03.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:33:03 compute-0 nova_compute[351685]: 2025-10-03 11:33:03.763 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:33:03 compute-0 nova_compute[351685]: 2025-10-03 11:33:03.764 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:33:03 compute-0 nova_compute[351685]: 2025-10-03 11:33:03.765 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:33:03 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct  3 11:33:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:33:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2281014369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.292 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.411 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.412 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.422 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.423 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.423 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.432 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.433 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.921 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.922 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3204MB free_disk=59.86410140991211GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.923 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:33:04 compute-0 nova_compute[351685]: 2025-10-03 11:33:04.923 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.007 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.007 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.008 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.008 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.008 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.079 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:33:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3517533883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.579 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.589 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.611 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.613 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:33:05 compute-0 nova_compute[351685]: 2025-10-03 11:33:05.613 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:33:05 compute-0 ceph-mgr[192071]: [devicehealth INFO root] Check health
Oct  3 11:33:05 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.3 KiB/s wr, 0 op/s
Oct  3 11:33:07 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct  3 11:33:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:08 compute-0 nova_compute[351685]: 2025-10-03 11:33:08.614 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:33:08 compute-0 nova_compute[351685]: 2025-10-03 11:33:08.615 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:33:08 compute-0 nova_compute[351685]: 2025-10-03 11:33:08.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:09 compute-0 nova_compute[351685]: 2025-10-03 11:33:09.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:33:09 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Oct  3 11:33:10 compute-0 nova_compute[351685]: 2025-10-03 11:33:10.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:11 compute-0 podman[540957]: 2025-10-03 11:33:11.109528452 +0000 UTC m=+0.108011593 container exec 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:33:11 compute-0 podman[540957]: 2025-10-03 11:33:11.247037329 +0000 UTC m=+0.245520510 container exec_died 5224f5bf68a060567ff8ed551ee1df405aad5d9c9c8124c38a8d638adbfe640b (image=quay.io/ceph/ceph:v18, name=ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:33:11 compute-0 nova_compute[351685]: 2025-10-03 11:33:11.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:33:11 compute-0 nova_compute[351685]: 2025-10-03 11:33:11.732 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:33:11 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:33:12 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:12 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:33:12 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:12 compute-0 nova_compute[351685]: 2025-10-03 11:33:12.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:33:13 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:13 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:33:13 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:33:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:33:13 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:33:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:33:13 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:13 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b20ac84a-a95f-405e-a87f-1e98b20c4420 does not exist
Oct  3 11:33:13 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8259a216-2b14-4176-85b4-023a2b623d5f does not exist
Oct  3 11:33:13 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 4d9d74a1-c185-4c7f-b1c9-93308f1d44a6 does not exist
Oct  3 11:33:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:33:13 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:33:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:33:13 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:33:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:33:13 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:33:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:13 compute-0 podman[541269]: 2025-10-03 11:33:13.618533753 +0000 UTC m=+0.088566349 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:33:13 compute-0 podman[541270]: 2025-10-03 11:33:13.630993553 +0000 UTC m=+0.093845968 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Oct  3 11:33:13 compute-0 podman[541271]: 2025-10-03 11:33:13.639411403 +0000 UTC m=+0.110268346 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:33:13 compute-0 nova_compute[351685]: 2025-10-03 11:33:13.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:13 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:33:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:14 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:33:14 compute-0 podman[541443]: 2025-10-03 11:33:14.359973487 +0000 UTC m=+0.100405340 container create a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:33:14 compute-0 podman[541443]: 2025-10-03 11:33:14.307050501 +0000 UTC m=+0.047482424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:33:14 compute-0 systemd[1]: Started libpod-conmon-a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a.scope.
Oct  3 11:33:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:33:14 compute-0 podman[541443]: 2025-10-03 11:33:14.55917493 +0000 UTC m=+0.299606813 container init a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_grothendieck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:33:14 compute-0 podman[541443]: 2025-10-03 11:33:14.570737301 +0000 UTC m=+0.311169184 container start a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_grothendieck, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:33:14 compute-0 podman[541443]: 2025-10-03 11:33:14.577833759 +0000 UTC m=+0.318265692 container attach a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_grothendieck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:33:14 compute-0 awesome_grothendieck[541459]: 167 167
Oct  3 11:33:14 compute-0 systemd[1]: libpod-a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a.scope: Deactivated successfully.
Oct  3 11:33:14 compute-0 podman[541464]: 2025-10-03 11:33:14.654862747 +0000 UTC m=+0.049374303 container died a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_grothendieck, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1274bb7f273b4202b6d83981a878d465f78e156510d92e682692eacf2d64b784-merged.mount: Deactivated successfully.
Oct  3 11:33:14 compute-0 podman[541464]: 2025-10-03 11:33:14.727095242 +0000 UTC m=+0.121606778 container remove a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  3 11:33:14 compute-0 systemd[1]: libpod-conmon-a9ace50d920cd7726546a791a2aceea0d3e8ec979e305215f7556a9869764b4a.scope: Deactivated successfully.
Oct  3 11:33:14 compute-0 podman[541485]: 2025-10-03 11:33:14.993849071 +0000 UTC m=+0.072558526 container create bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:33:15 compute-0 podman[541485]: 2025-10-03 11:33:14.967676253 +0000 UTC m=+0.046385758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:33:15 compute-0 systemd[1]: Started libpod-conmon-bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368.scope.
Oct  3 11:33:15 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c41f78b41bcd474289bf86090b5abeaffc635ac339c3daf39ca12f41bef6b07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c41f78b41bcd474289bf86090b5abeaffc635ac339c3daf39ca12f41bef6b07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c41f78b41bcd474289bf86090b5abeaffc635ac339c3daf39ca12f41bef6b07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c41f78b41bcd474289bf86090b5abeaffc635ac339c3daf39ca12f41bef6b07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c41f78b41bcd474289bf86090b5abeaffc635ac339c3daf39ca12f41bef6b07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:15 compute-0 podman[541485]: 2025-10-03 11:33:15.128067973 +0000 UTC m=+0.206777488 container init bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  3 11:33:15 compute-0 podman[541485]: 2025-10-03 11:33:15.150383648 +0000 UTC m=+0.229093103 container start bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:33:15 compute-0 podman[541485]: 2025-10-03 11:33:15.156208925 +0000 UTC m=+0.234918430 container attach bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:33:15 compute-0 nova_compute[351685]: 2025-10-03 11:33:15.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:15 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:33:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:33:16 compute-0 goofy_shamir[541501]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:33:16 compute-0 goofy_shamir[541501]: --> relative data size: 1.0
Oct  3 11:33:16 compute-0 goofy_shamir[541501]: --> All data devices are unavailable
Oct  3 11:33:16 compute-0 systemd[1]: libpod-bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368.scope: Deactivated successfully.
Oct  3 11:33:16 compute-0 podman[541485]: 2025-10-03 11:33:16.482894214 +0000 UTC m=+1.561603669 container died bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:33:16 compute-0 systemd[1]: libpod-bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368.scope: Consumed 1.231s CPU time.
Oct  3 11:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c41f78b41bcd474289bf86090b5abeaffc635ac339c3daf39ca12f41bef6b07-merged.mount: Deactivated successfully.
Oct  3 11:33:16 compute-0 podman[541485]: 2025-10-03 11:33:16.551846995 +0000 UTC m=+1.630556450 container remove bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  3 11:33:16 compute-0 systemd[1]: libpod-conmon-bcf5c63cfe1f7d561c50199e5438195af0da191438606659a3e62289d9cc3368.scope: Deactivated successfully.
Oct  3 11:33:17 compute-0 podman[541675]: 2025-10-03 11:33:17.500845599 +0000 UTC m=+0.052982159 container create c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:33:17 compute-0 systemd[1]: Started libpod-conmon-c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958.scope.
Oct  3 11:33:17 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:33:17 compute-0 podman[541675]: 2025-10-03 11:33:17.482350396 +0000 UTC m=+0.034486976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:33:17 compute-0 podman[541675]: 2025-10-03 11:33:17.594883083 +0000 UTC m=+0.147019663 container init c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:33:17 compute-0 podman[541675]: 2025-10-03 11:33:17.607420635 +0000 UTC m=+0.159557235 container start c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:33:17 compute-0 cranky_dewdney[541692]: 167 167
Oct  3 11:33:17 compute-0 systemd[1]: libpod-c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958.scope: Deactivated successfully.
Oct  3 11:33:17 compute-0 conmon[541692]: conmon c58e9ddb4a549efa5bea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958.scope/container/memory.events
Oct  3 11:33:17 compute-0 podman[541675]: 2025-10-03 11:33:17.615802464 +0000 UTC m=+0.167939054 container attach c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:33:17 compute-0 podman[541675]: 2025-10-03 11:33:17.617172367 +0000 UTC m=+0.169308937 container died c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  3 11:33:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-87ba6e1650e83d4b2a26185b91db152931108ac952435e8d6b5949e9612af4f5-merged.mount: Deactivated successfully.
Oct  3 11:33:17 compute-0 podman[541675]: 2025-10-03 11:33:17.683603736 +0000 UTC m=+0.235740296 container remove c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:33:17 compute-0 systemd[1]: libpod-conmon-c58e9ddb4a549efa5beaf9622a5fac3a58b25c151cffb81fe870e9692ef6b958.scope: Deactivated successfully.
Oct  3 11:33:17 compute-0 podman[541717]: 2025-10-03 11:33:17.941704808 +0000 UTC m=+0.068038431 container create d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:33:17 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:18 compute-0 podman[541717]: 2025-10-03 11:33:17.917133831 +0000 UTC m=+0.043467424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:33:18 compute-0 systemd[1]: Started libpod-conmon-d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75.scope.
Oct  3 11:33:18 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cd7d9fa79b9f7717e149acbedc246632e4c60df4f1d4f7108e241c643ce792/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cd7d9fa79b9f7717e149acbedc246632e4c60df4f1d4f7108e241c643ce792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cd7d9fa79b9f7717e149acbedc246632e4c60df4f1d4f7108e241c643ce792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cd7d9fa79b9f7717e149acbedc246632e4c60df4f1d4f7108e241c643ce792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:18 compute-0 podman[541717]: 2025-10-03 11:33:18.107204382 +0000 UTC m=+0.233538015 container init d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:33:18 compute-0 podman[541717]: 2025-10-03 11:33:18.118909148 +0000 UTC m=+0.245242731 container start d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:33:18 compute-0 podman[541717]: 2025-10-03 11:33:18.124030032 +0000 UTC m=+0.250363675 container attach d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:33:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:18 compute-0 nova_compute[351685]: 2025-10-03 11:33:18.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]: {
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:    "0": [
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:        {
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "devices": [
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "/dev/loop3"
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            ],
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_name": "ceph_lv0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_size": "21470642176",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "name": "ceph_lv0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "tags": {
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.cluster_name": "ceph",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.crush_device_class": "",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.encrypted": "0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.osd_id": "0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.type": "block",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.vdo": "0"
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            },
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "type": "block",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "vg_name": "ceph_vg0"
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:        }
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:    ],
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:    "1": [
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:        {
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "devices": [
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "/dev/loop4"
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            ],
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_name": "ceph_lv1",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_size": "21470642176",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "name": "ceph_lv1",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:            "tags": {
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.cluster_name": "ceph",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.crush_device_class": "",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.encrypted": "0",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:33:18 compute-0 eager_varahamihira[541733]:                "ceph.osd_id": "1",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.type": "block",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.vdo": "0"
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            },
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "type": "block",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "vg_name": "ceph_vg1"
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:        }
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:    ],
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:    "2": [
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:        {
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "devices": [
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "/dev/loop5"
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            ],
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "lv_name": "ceph_lv2",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "lv_size": "21470642176",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "name": "ceph_lv2",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "tags": {
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.cluster_name": "ceph",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.crush_device_class": "",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.encrypted": "0",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.osd_id": "2",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.type": "block",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:                "ceph.vdo": "0"
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            },
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "type": "block",
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:            "vg_name": "ceph_vg2"
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:        }
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]:    ]
Oct  3 11:33:19 compute-0 eager_varahamihira[541733]: }
Oct  3 11:33:19 compute-0 systemd[1]: libpod-d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75.scope: Deactivated successfully.
Oct  3 11:33:19 compute-0 podman[541717]: 2025-10-03 11:33:19.054390589 +0000 UTC m=+1.180724222 container died d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  3 11:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-65cd7d9fa79b9f7717e149acbedc246632e4c60df4f1d4f7108e241c643ce792-merged.mount: Deactivated successfully.
Oct  3 11:33:19 compute-0 podman[541717]: 2025-10-03 11:33:19.143576588 +0000 UTC m=+1.269910171 container remove d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_varahamihira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:33:19 compute-0 systemd[1]: libpod-conmon-d3fb86aef7477c9c73a84bc6816ae7146ae875502389439ff4cab75ed0e22b75.scope: Deactivated successfully.
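The JSON that eager_varahamihira printed before exiting looks like ceph-volume "lvm list --format json" output collected by cephadm: a map of OSD id to LV records carrying the ceph.* tags. A minimal sketch of reducing that shape to osd_id -> block device (the structure is assumed from the blob above, not from any documented ceph-volume contract):

    import json

    def osd_block_devices(raw: str) -> dict[int, str]:
        """Map OSD id -> LV path from lvm-list style JSON as logged above."""
        devices = {}
        for osd_id, lvs in json.loads(raw).items():
            for lv in lvs:
                if lv.get("type") == "block":  # skip any db/wal records
                    devices[int(osd_id)] = lv["lv_path"]
        return devices

    # For the blob above this yields:
    # {0: '/dev/ceph_vg0/ceph_lv0', 1: '/dev/ceph_vg1/ceph_lv1',
    #  2: '/dev/ceph_vg2/ceph_lv2'}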
Oct  3 11:33:19 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 1 op/s
Oct  3 11:33:20 compute-0 podman[541891]: 2025-10-03 11:33:20.02230111 +0000 UTC m=+0.083565469 container create b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_turing, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:33:20 compute-0 podman[541891]: 2025-10-03 11:33:19.985844752 +0000 UTC m=+0.047109121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:33:20 compute-0 systemd[1]: Started libpod-conmon-b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e.scope.
Oct  3 11:33:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:33:20 compute-0 podman[541891]: 2025-10-03 11:33:20.122056507 +0000 UTC m=+0.183320846 container init b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:33:20 compute-0 podman[541891]: 2025-10-03 11:33:20.130470567 +0000 UTC m=+0.191734886 container start b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 11:33:20 compute-0 podman[541891]: 2025-10-03 11:33:20.134894689 +0000 UTC m=+0.196159028 container attach b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:33:20 compute-0 blissful_turing[541907]: 167 167
Oct  3 11:33:20 compute-0 systemd[1]: libpod-b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e.scope: Deactivated successfully.
Oct  3 11:33:20 compute-0 podman[541891]: 2025-10-03 11:33:20.137219683 +0000 UTC m=+0.198484012 container died b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_turing, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 11:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc0c19b9e5920506366817d28f3ef8bda9eeefdb21b9990b7963a868f5c62b54-merged.mount: Deactivated successfully.
Oct  3 11:33:20 compute-0 nova_compute[351685]: 2025-10-03 11:33:20.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:20 compute-0 podman[541891]: 2025-10-03 11:33:20.191576916 +0000 UTC m=+0.252841235 container remove b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_turing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct  3 11:33:20 compute-0 systemd[1]: libpod-conmon-b292933001b923d6fe8401d859fa09d2cb50ba587931a07a83afabd2c361c19e.scope: Deactivated successfully.
Oct  3 11:33:20 compute-0 podman[541929]: 2025-10-03 11:33:20.430225614 +0000 UTC m=+0.060396847 container create 25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_williams, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:33:20 compute-0 systemd[1]: Started libpod-conmon-25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212.scope.
Oct  3 11:33:20 compute-0 podman[541929]: 2025-10-03 11:33:20.412669142 +0000 UTC m=+0.042840405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:33:20 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b5427c77b679f9b4480290c91692e21d0e6aa4456db9b8ea44fa77e09da8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b5427c77b679f9b4480290c91692e21d0e6aa4456db9b8ea44fa77e09da8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b5427c77b679f9b4480290c91692e21d0e6aa4456db9b8ea44fa77e09da8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535b5427c77b679f9b4480290c91692e21d0e6aa4456db9b8ea44fa77e09da8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:33:20 compute-0 podman[541929]: 2025-10-03 11:33:20.548459693 +0000 UTC m=+0.178630966 container init 25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_williams, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:33:20 compute-0 podman[541929]: 2025-10-03 11:33:20.569438786 +0000 UTC m=+0.199610019 container start 25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_williams, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:33:20 compute-0 podman[541929]: 2025-10-03 11:33:20.573696542 +0000 UTC m=+0.203867795 container attach 25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:33:21 compute-0 kind_williams[541945]: {
Oct  3 11:33:21 compute-0 kind_williams[541945]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "osd_id": 1,
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "type": "bluestore"
Oct  3 11:33:21 compute-0 kind_williams[541945]:    },
Oct  3 11:33:21 compute-0 kind_williams[541945]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "osd_id": 2,
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "type": "bluestore"
Oct  3 11:33:21 compute-0 kind_williams[541945]:    },
Oct  3 11:33:21 compute-0 kind_williams[541945]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "osd_id": 0,
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:33:21 compute-0 kind_williams[541945]:        "type": "bluestore"
Oct  3 11:33:21 compute-0 kind_williams[541945]:    }
Oct  3 11:33:21 compute-0 kind_williams[541945]: }
Oct  3 11:33:21 compute-0 systemd[1]: libpod-25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212.scope: Deactivated successfully.
Oct  3 11:33:21 compute-0 systemd[1]: libpod-25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212.scope: Consumed 1.188s CPU time.
Oct  3 11:33:21 compute-0 podman[541929]: 2025-10-03 11:33:21.769096674 +0000 UTC m=+1.399267917 container died 25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_williams, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  3 11:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-535b5427c77b679f9b4480290c91692e21d0e6aa4456db9b8ea44fa77e09da8d-merged.mount: Deactivated successfully.
Oct  3 11:33:21 compute-0 podman[541929]: 2025-10-03 11:33:21.83480249 +0000 UTC m=+1.464973723 container remove 25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:33:21 compute-0 systemd[1]: libpod-conmon-25d41a2e3987fc90a3191e0071124921d38675f4bf8da4c15c33639d50642212.scope: Deactivated successfully.
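kind_williams emitted the complementary view, keyed by OSD UUID with the activated /dev/mapper device and the bluestore objectstore type, resembling "ceph-volume raw list" output. Joining it back to OSD ids is one dictionary pass (shape again assumed from the log):

    import json

    def by_osd_id(raw: str) -> dict[int, tuple[str, str]]:
        """osd_id -> (osd_uuid, device) from the uuid-keyed listing above."""
        return {e["osd_id"]: (uuid, e["device"])
                for uuid, e in json.loads(raw).items()}

    # e.g. {0: ('25b10821-...', '/dev/mapper/ceph_vg0-ceph_lv0'), ...}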
Oct  3 11:33:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:33:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:33:21 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
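The two audit entries show the cephadm mgr module persisting the device scan it just ran into the monitor config-key store. The stored blob can be read back with the stock CLI; a sketch assuming an admin keyring on this host and that cephadm stores JSON under the key named in the audit log:

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"  # key name from the audit log
    raw = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    inventory = json.loads(raw)  # assumption: the stored value is JSON
    print(len(raw), "bytes of device inventory")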
Oct  3 11:33:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0c4ba06e-6ff6-452c-a5c6-16c76e298827 does not exist
Oct  3 11:33:21 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6b8e8f30-38a7-4f39-b481-34f0a1145a43 does not exist
Oct  3 11:33:21 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:33:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:22 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:33:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:23 compute-0 nova_compute[351685]: 2025-10-03 11:33:23.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:23 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:33:25 compute-0 nova_compute[351685]: 2025-10-03 11:33:25.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:25 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:33:27 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:33:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:28 compute-0 nova_compute[351685]: 2025-10-03 11:33:28.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:29 compute-0 podman[157165]: time="2025-10-03T11:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:33:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:33:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9601 "" "Go-http-client/1.1"
Oct  3 11:33:29 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s wr, 1 op/s
Oct  3 11:33:30 compute-0 nova_compute[351685]: 2025-10-03 11:33:30.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:31 compute-0 openstack_network_exporter[367524]: ERROR   11:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:33:31 compute-0 openstack_network_exporter[367524]: ERROR   11:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:33:31 compute-0 openstack_network_exporter[367524]: ERROR   11:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:33:31 compute-0 openstack_network_exporter[367524]: ERROR   11:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:33:31 compute-0 openstack_network_exporter[367524]: ERROR   11:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
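These exporter errors are consistent with a compute node: no ovn-northd runs here, and the kernel (non-DPDK) datapath has no PMD threads for the dpif-netdev/* appctl calls to report on. The first error additionally means no ovsdb-server control socket was found where the exporter looked; a quick existence check, with the conventional socket directories as assumptions:

    import glob

    # ovs-vswitchd/ovsdb-server control sockets conventionally live under
    # /var/run/openvswitch, ovn-northd's under /var/run/ovn (paths assumed).
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")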
Oct  3 11:33:31 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct  3 11:33:32 compute-0 podman[542045]: 2025-10-03 11:33:32.857701746 +0000 UTC m=+0.099754987 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751)
Oct  3 11:33:32 compute-0 podman[542042]: 2025-10-03 11:33:32.871873691 +0000 UTC m=+0.126826735 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Oct  3 11:33:32 compute-0 podman[542041]: 2025-10-03 11:33:32.876327714 +0000 UTC m=+0.121426233 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:33:32 compute-0 podman[542043]: 2025-10-03 11:33:32.886337334 +0000 UTC m=+0.139812151 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 11:33:32 compute-0 podman[542064]: 2025-10-03 11:33:32.894701082 +0000 UTC m=+0.125504543 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 11:33:32 compute-0 podman[542044]: 2025-10-03 11:33:32.897589705 +0000 UTC m=+0.144998458 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  3 11:33:32 compute-0 podman[542049]: 2025-10-03 11:33:32.942335709 +0000 UTC m=+0.169767141 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
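The burst of health_status=healthy events is Podman's native healthcheck machinery firing for each edpm-managed container; Podman schedules these probes with transient systemd timers, and each one simply runs the container's configured test command. The same probe can be invoked by hand (container name taken from the events above):

    import subprocess

    # "podman healthcheck run" executes the container's configured check and
    # exits 0 for healthy, non-zero otherwise.
    r = subprocess.run(["podman", "healthcheck", "run",
                        "ceilometer_agent_compute"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")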
Oct  3 11:33:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:33 compute-0 nova_compute[351685]: 2025-10-03 11:33:33.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:33:35 compute-0 nova_compute[351685]: 2025-10-03 11:33:35.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:33:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:33:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:38 compute-0 nova_compute[351685]: 2025-10-03 11:33:38.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:33:40 compute-0 nova_compute[351685]: 2025-10-03 11:33:40.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.909 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.910 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.922 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.921 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '443e486d-1bf2-4550-a4ae-32f0f8f4af19', 'name': 'te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.923 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.927 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.928 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.928 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.929 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.931 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.932 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.934 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.936 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.937 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.939 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.940 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.943 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.945 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.946 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.948 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.950 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.951 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.952 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.951 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'name': 'te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
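[editor's note] The three discover_libvirt_polling dumps are ordinary Python dicts; the two m1.nano guests carry the same metering.server_group tag while test_0 has empty metadata. A small illustrative helper (not part of ceilometer) that groups discovered instances by that tag:

    from collections import defaultdict

    def group_by_server_group(instances):
        groups = defaultdict(list)
        for inst in instances:
            key = inst.get('metadata', {}).get('metering.server_group')
            groups[key].append(inst['id'])
        return dict(groups)

    # With the three dicts above this yields (IDs truncated):
    # {'0f5ccd31-0ab5-424c-9868-9c1f9b1ba831': ['443e486d-...', '83fc22ce-...'],
    #  None: ['b43db93c-...']}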
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.954 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:33:40.954906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.963 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.970 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.976 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
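[editor's note] Every meter in this section follows the same cycle visible above: run discovery, check whether the pollster needs coordination (the hashrings are [None] here, so it never does), post a heartbeat, emit one sample per instance and device, then log the "Finished polling" line. A schematic loop, with made-up names, capturing that shape:

    def run_pollster(pollster, discover, heartbeat, publish,
                     needs_coordination=lambda p: False):  # hashrings are [None]
        resources = discover('local_instances')            # discovery step
        if needs_coordination(pollster):
            return                                         # never taken here
        heartbeat(pollster.name)                           # 'heartbeat update' line
        for res in resources:
            for sample in pollster.get_samples(res):       # one 'volume: N' per device
                publish(sample)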
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.978 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.978 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.979 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.979 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.980 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.981 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.981 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:33:40.978812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:40.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:33:40.981715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
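[editor's note] The "14" and "12" fields after the timestamp are different workers: thread 14 posts the heartbeat, and the "Updated heartbeat ..." lines from _update_status arrive slightly later on worker 12, echoing the post time in parentheses. A minimal producer/consumer sketch of that hand-off (illustrative only; the actual mechanism inside manager.py is not shown by these logs):

    import datetime
    import queue
    import threading

    beats = queue.Queue()

    def heartbeat(name):                          # called from pollster thread 14
        beats.put((name, datetime.datetime.now(datetime.timezone.utc)))

    def status_updater():                         # drained by worker 12
        while True:
            name, ts = beats.get()
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    threading.Thread(target=status_updater, daemon=True).start()
    heartbeat("disk.device.capacity")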
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.007 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.008 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.045 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.070 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.071 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
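[editor's note] disk.device.capacity emits one sample per block device, which is why 443e486d... logs two values and b43db93c... three: a 1073741824-byte root disk matching the flavor's 'disk': 1 (GiB), plus a small secondary device of a few hundred kilobytes (plausibly a config drive). The round numbers check out:

    GiB = 1024 ** 3
    assert 1073741824 == 1 * GiB      # root disk == flavor 'disk': 1
    assert 509952 == 498 * 1024       # the small device is 498 KiB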
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:33:41.073500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.115 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 29019136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.116 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.166 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.167 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.168 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.222 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 31861248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.223 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.224 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
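[editor's note] As with capacity, the read.bytes sample counts track the flavors: the m1.small guest ('ephemeral': 1) reports three devices where the m1.nano guests report two. Assuming one root disk plus one config-drive device per guest (an inference from these logs, not a documented rule), the count is:

    def expected_devices(flavor):
        # root disk + ephemeral disks + config drive (assumed present)
        return 1 + flavor['ephemeral'] + 1

    assert expected_devices({'ephemeral': 1}) == 3   # b43db93c... (m1.small)
    assert expected_devices({'ephemeral': 0}) == 2   # the m1.nano guests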
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.224 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.224 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.225 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.225 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 2011591932 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.226 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 159834513 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.226 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.227 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.227 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:33:41.225516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.228 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 2658882306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.229 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 170448087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
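[editor's note] The read.latency volumes are cumulative counters; ceilometer reports this meter in nanoseconds (it is derived from libvirt's per-device total read time), so the values above amount to a few seconds of accumulated read time since boot:

    NS_PER_S = 1_000_000_000
    for ns in (2011591932, 1351272306, 2658882306):   # root-disk samples above
        print(f"{ns} ns = {ns / NS_PER_S:.2f} s cumulative read time")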
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.230 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.231 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 1049 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.231 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.231 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.232 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.232 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.233 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 1164 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.233 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.235 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:33:41.230888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.235 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.235 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.235 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.236 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.236 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.236 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.237 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.237 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.238 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.238 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.239 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.239 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.239 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.240 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.240 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.240 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.240 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.240 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.241 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.241 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.242 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.242 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:33:41.235803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.242 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:33:41.240321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.243 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.244 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
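[editor's note] disk.device.capacity, disk.device.usage, and disk.device.allocation report identical per-device values here, consistent with all three being read from libvirt's per-device block info (a capacity/allocation/physical triple). A read-only probe of the same data, assuming local access to the qemu:///system URI and using the domain name from the discovery dump; the exact meter-to-field mapping is an assumption, not confirmed by these logs:

    import libvirt  # python3-libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')     # test_0 / b43db93c-...
    capacity, allocation, physical = dom.blockInfo('vda')
    print(capacity, allocation, physical)            # cf. the 1073741824 samples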
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.244 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.244 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.244 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.244 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.245 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.245 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.245 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.246 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.246 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.247 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.247 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.248 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.249 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.249 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.249 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:33:41.245118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.250 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.250 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.250 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.250 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.251 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:33:41.250629) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.280 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.299 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.324 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
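[editor's note] power.state volume 1 for all three guests matches libvirt's running state (VIR_DOMAIN_RUNNING == 1) and the 'OS-EXT-STS:vm_state': 'running' seen in the discovery dumps. The same check made directly against libvirt (reusing the read-only connection sketched above):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    state, reason = dom.state()
    assert state == libvirt.VIR_DOMAIN_RUNNING       # == 1, the logged volume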
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.326 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.327 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 9738524142 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.327 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.328 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.328 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.328 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.329 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 11038555047 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.329 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.331 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.332 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 305 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:33:41.326766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.333 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.333 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.334 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.334 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.335 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 349 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.335 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
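Each meter polled above follows the same per-pollster sequence from ceilometer.polling.manager: discovery, a coordination check against the (empty) hashrings, a heartbeat update, one volume sample per instance, then a "Finished polling" marker. Below is a schematic, self-contained sketch of that flow; it mirrors only the logged steps, and every name in it is illustrative rather than ceilometer's actual code.

    # Schematic sketch of the per-pollster cycle traced in the lines above.
    # Mirrors the logged steps only; NOT ceilometer's implementation.
    from datetime import datetime, timezone

    heartbeats = {}

    def update_heartbeat(name):
        # "Pollster heartbeat update: <name>", confirmed later by
        # "Updated heartbeat for <name> (<timestamp>)" on another worker.
        heartbeats[name] = datetime.now(timezone.utc).isoformat()

    def run_pollster(name, discover, get_volume):
        resources = discover()  # "Executing discovery process for pollsters ..."
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        # The coordination check would filter resources through a hashring;
        # with no coordination group configured, the full list is polled.
        update_heartbeat(name)
        samples = [(r, name, get_volume(r)) for r in resources]
        print(f"Finished polling pollster {name}")
        return samples

    # Toy usage with shortened instance UUIDs and a constant volume:
    run_pollster("disk.device.write.requests",
                 lambda: ["443e486d", "b43db93c"],
                 lambda resource: 0)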
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.336 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.337 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.337 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.338 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.338 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.340 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.341 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.343 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.343 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:33:41.332474) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.344 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:33:41.337536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.344 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.344 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:33:41.341129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:33:41.343623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:33:41.346279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.347 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.348 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.348 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.348 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.349 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.349 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.350 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.350 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/cpu volume: 175730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.350 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 112690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.350 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/cpu volume: 334330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.351 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.351 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.352 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.352 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.352 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.353 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.354 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.354 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:33:41.348229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.354 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.355 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.355 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.356 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.356 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.356 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.357 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.357 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.358 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.358 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/memory.usage volume: 43.33984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.358 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.359 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/memory.usage volume: 42.3671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.360 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.360 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.360 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.360 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.361 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.361 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.362 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.362 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.363 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
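The per-instance volumes in the _stats_to_sample DEBUG lines are machine-parseable. A minimal parsing sketch, an addition of this edit rather than a ceilometer tool, that groups those volumes per meter:

    # Pull "<instance-uuid>/<meter> volume: <value>" out of the
    # _stats_to_sample DEBUG lines and group the values per meter.
    # Repeated uuid/meter pairs (per-device samples) overwrite earlier
    # values in this toy version.
    import re
    from collections import defaultdict

    SAMPLE_RE = re.compile(
        r"ceilometer\.compute\.pollsters \[-\] "
        r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>[\d.]+)"
    )

    def samples_by_meter(lines):
        out = defaultdict(dict)
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                out[m["meter"]][m["uuid"]] = float(m["volume"])
        return out

    # e.g. samples_by_meter(open("/var/log/messages"))["memory.usage"]
    # -> {"443e486d-1bf2-4550-a4ae-32f0f8f4af19": 43.33984375, ...}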
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:33:41.350060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:33:41.351960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:33:41.354185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:33:41.356152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:33:41.358097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:33:41.360499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:33:41.362340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:33:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:33:41.372 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
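Worker 12 interleaves "Updated heartbeat for <name> (<timestamp>)" confirmations with worker 14's polling. A hedged sketch, illustrative rather than an existing tool, that reads those confirmations and flags any pollster whose heartbeat has gone stale:

    # Flag pollsters whose last "Updated heartbeat for <name> (<ts>)" entry
    # is older than max_age at the given reference time.
    import re
    from datetime import datetime, timedelta

    HB_RE = re.compile(r"Updated heartbeat for (?P<name>[\w.]+) \((?P<ts>[^)]+)\)")

    def stale_pollsters(lines, now, max_age=timedelta(minutes=5)):
        last = {}
        for line in lines:
            m = HB_RE.search(line)
            if m:
                last[m["name"]] = datetime.fromisoformat(m["ts"])
        return sorted(n for n, ts in last.items() if now - ts > max_age)

    # stale_pollsters(open("/var/log/messages"),
    #                 now=datetime(2025, 10, 3, 11, 38)) -> [] while all fresh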
Oct  3 11:33:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:33:41.711 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:33:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:33:41.711 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:33:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:33:41.712 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
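The three ovn_metadata_agent lines above trace oslo_concurrency's named-lock pattern: the lockutils wrapper logs "Acquiring", "acquired :: waited Ns", and "released :: held Ns" around the decorated call. A minimal sketch of that pattern; the class and method body are illustrative, not neutron's actual implementation:

    # Named in-process lock via oslo.concurrency; the wrapper emits the
    # Acquiring/acquired/released DEBUG lines seen above.
    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            pass  # runs only while the named lock is held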
Oct  3 11:33:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:43 compute-0 nova_compute[351685]: 2025-10-03 11:33:43.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:43 compute-0 podman[542176]: 2025-10-03 11:33:43.86301212 +0000 UTC m=+0.107508436 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:33:43 compute-0 podman[542178]: 2025-10-03 11:33:43.868127063 +0000 UTC m=+0.084976994 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Oct  3 11:33:43 compute-0 podman[542177]: 2025-10-03 11:33:43.885908084 +0000 UTC m=+0.131103423 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Oct  3 11:33:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:45 compute-0 nova_compute[351685]: 2025-10-03 11:33:45.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:33:46
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta']
Oct  3 11:33:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:33:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:33:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
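ceph-mgr repeats a pgmap status line every couple of seconds. A hedged parsing sketch, not a ceph utility, that pulls the PG state count and capacity figures out of those lines:

    # Extract version, PG count, and capacity fields from a pgmap DBG line.
    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>[\d.]+ [KMGT]iB) data, (?P<used>[\d.]+ [KMGT]iB) used, "
        r"(?P<avail>[\d.]+ [KMGT]iB) / (?P<total>[\d.]+ [KMGT]iB) avail"
    )

    m = PGMAP_RE.search(
        "pgmap v3799: 321 pgs: 321 active+clean; 298 MiB data, "
        "431 MiB used, 60 GiB / 60 GiB avail"
    )
    print(m.groupdict())
    # {'ver': '3799', 'pgs': '321', 'data': '298 MiB', 'used': '431 MiB',
    #  'avail': '60 GiB', 'total': '60 GiB'}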
Oct  3 11:33:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:48 compute-0 nova_compute[351685]: 2025-10-03 11:33:48.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:33:49 compute-0 nova_compute[351685]: 2025-10-03 11:33:49.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:33:49 compute-0 nova_compute[351685]: 2025-10-03 11:33:49.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:33:49 compute-0 nova_compute[351685]: 2025-10-03 11:33:49.731 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:33:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:50 compute-0 nova_compute[351685]: 2025-10-03 11:33:50.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:50 compute-0 nova_compute[351685]: 2025-10-03 11:33:50.299 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:33:50 compute-0 nova_compute[351685]: 2025-10-03 11:33:50.300 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:33:50 compute-0 nova_compute[351685]: 2025-10-03 11:33:50.300 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:33:50 compute-0 nova_compute[351685]: 2025-10-03 11:33:50.300 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:33:51 compute-0 nova_compute[351685]: 2025-10-03 11:33:51.310 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:33:51 compute-0 nova_compute[351685]: 2025-10-03 11:33:51.323 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:33:51 compute-0 nova_compute[351685]: 2025-10-03 11:33:51.323 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
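
[annotation] The cache update at 11:33:51 logs the full network_info payload as a single JSON document. A minimal sketch, assuming a file network_info.json holding that payload (the filename and the dump itself are hypothetical), of walking it for the fixed and floating addresses:

    import json

    def addresses(network_info):
        # Walk vif -> network -> subnets -> ips, matching the payload shape
        # logged above.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield "fixed", ip["address"]
                    for fip in ip.get("floating_ips", []):
                        yield "floating", fip["address"]

    with open("network_info.json") as f:  # hypothetical dump of the payload
        for kind, addr in addresses(json.load(f)):
            print(kind, addr)  # fixed 192.168.0.158 / floating 192.168.122.250
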
Oct  3 11:33:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3801: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:53 compute-0 nova_compute[351685]: 2025-10-03 11:33:53.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3802: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:33:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4093148858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:33:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:33:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4093148858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
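
[annotation] The audit lines show client.openstack dispatching raw mon commands rather than shelling out to the CLI. A sketch of the same df dispatch via the python3-rados binding; Rados.mon_command() exists with this signature, and the conffile path and client name are taken from the log:

    import json
    import rados  # python3-rados binding shipped with Ceph

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        # Same dispatch as the audit line above: {"prefix":"df","format":"json"}
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        print(ret, out[:120])
    finally:
        cluster.shutdown()
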
Oct  3 11:33:55 compute-0 nova_compute[351685]: 2025-10-03 11:33:55.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3803: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00207013190238979 of space, bias 1.0, pg target 0.6210395707169369 quantized to 32 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:33:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
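
[annotation] Every pool line above follows the same arithmetic: the raw PG target is the pool's share of capacity times its bias times a cluster-wide multiplier, and the result is then quantized. The logged figures are consistent with a multiplier of 300, i.e. mon_target_pg_per_osd=100 on a 3-OSD cluster (an assumption), as this simplified sketch reproduces:

    import math

    def pg_target(usage_ratio, bias, target_pg_per_osd=100, osds=3):
        # Raw target; the multiplier 300 (= 100 * 3 OSDs) is an assumption
        # that matches every pool line above.
        return usage_ratio * bias * target_pg_per_osd * osds

    def quantize(raw, pg_min=1):
        # Simplified: round up to a power of two, floored at pg_min. The
        # real autoscaler also damps changes relative to the current pg_num.
        return max(pg_min, 2 ** max(0, math.ceil(math.log2(max(raw, 1e-9)))))

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557, the '.mgr' line
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.0006105, 'cephfs.cephfs.meta'
    print(quantize(pg_target(7.185749983720779e-06, 1.0)))  # 1, as logged
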
Oct  3 11:33:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3804: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:33:58 compute-0 nova_compute[351685]: 2025-10-03 11:33:58.317 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:33:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:33:58 compute-0 nova_compute[351685]: 2025-10-03 11:33:58.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:33:58 compute-0 nova_compute[351685]: 2025-10-03 11:33:58.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:33:59 compute-0 podman[157165]: time="2025-10-03T11:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:33:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:33:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9600 "" "Go-http-client/1.1"
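
[annotation] The two access-log lines are a scraper hitting podman's libpod REST API over the service socket. A sketch of issuing the same containers/json request from Python; the socket path /run/podman/podman.sock is an assumption (it matches the podman_exporter volume mount later in this log):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket: override connect() only.
        def __init__(self, path):
            super().__init__("localhost")
            self.socket_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read()[:200])
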
Oct  3 11:34:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3805: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:00 compute-0 nova_compute[351685]: 2025-10-03 11:34:00.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:01 compute-0 openstack_network_exporter[367524]: ERROR   11:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:34:01 compute-0 openstack_network_exporter[367524]: ERROR   11:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:34:01 compute-0 openstack_network_exporter[367524]: ERROR   11:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:34:01 compute-0 openstack_network_exporter[367524]: ERROR   11:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:34:01 compute-0 openstack_network_exporter[367524]: ERROR   11:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:34:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3806: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:03 compute-0 nova_compute[351685]: 2025-10-03 11:34:03.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:03 compute-0 nova_compute[351685]: 2025-10-03 11:34:03.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:03 compute-0 nova_compute[351685]: 2025-10-03 11:34:03.773 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:34:03 compute-0 nova_compute[351685]: 2025-10-03 11:34:03.773 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:34:03 compute-0 nova_compute[351685]: 2025-10-03 11:34:03.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:34:03 compute-0 nova_compute[351685]: 2025-10-03 11:34:03.774 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:34:03 compute-0 nova_compute[351685]: 2025-10-03 11:34:03.774 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:34:03 compute-0 podman[542237]: 2025-10-03 11:34:03.876914172 +0000 UTC m=+0.122822717 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  3 11:34:03 compute-0 podman[542235]: 2025-10-03 11:34:03.877824292 +0000 UTC m=+0.121883127 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:34:03 compute-0 podman[542236]: 2025-10-03 11:34:03.903178064 +0000 UTC m=+0.126997941 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Oct  3 11:34:03 compute-0 podman[542239]: 2025-10-03 11:34:03.916570803 +0000 UTC m=+0.143959074 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct  3 11:34:03 compute-0 podman[542241]: 2025-10-03 11:34:03.923924309 +0000 UTC m=+0.156102704 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct  3 11:34:03 compute-0 podman[542238]: 2025-10-03 11:34:03.933442724 +0000 UTC m=+0.165666390 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:34:03 compute-0 podman[542240]: 2025-10-03 11:34:03.952775044 +0000 UTC m=+0.177716596 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  3 11:34:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3807: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:34:04 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708298687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.254 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
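
[annotation] Nova's resource audit shells out to `ceph df --format=json` (started at 11:34:03.774, returned 0 here after 0.480s) to size the RBD backend. A sketch of the same call; the JSON keys under "stats" are an assumption based on recent Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB free")
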
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.350 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.352 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.357 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.357 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.357 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.362 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.362 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.752 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.754 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3169MB free_disk=59.864097595214844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.849 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.849 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.850 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.851 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.851 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.870 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.898 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.899 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.914 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  3 11:34:04 compute-0 nova_compute[351685]: 2025-10-03 11:34:04.939 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  3 11:34:05 compute-0 nova_compute[351685]: 2025-10-03 11:34:05.022 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:34:05 compute-0 nova_compute[351685]: 2025-10-03 11:34:05.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:34:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/144541116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:34:05 compute-0 nova_compute[351685]: 2025-10-03 11:34:05.519 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:34:05 compute-0 nova_compute[351685]: 2025-10-03 11:34:05.532 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:34:05 compute-0 nova_compute[351685]: 2025-10-03 11:34:05.553 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:34:05 compute-0 nova_compute[351685]: 2025-10-03 11:34:05.555 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:34:05 compute-0 nova_compute[351685]: 2025-10-03 11:34:05.555 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
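
[annotation] The inventory dicts logged during this update_available_resource cycle encode Placement's capacity rule: schedulable capacity per resource class is (total - reserved) * allocation_ratio. A small worked sketch using exactly the values above:

    # Values copied from the ProviderTree inventory logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
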
Oct  3 11:34:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3808: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3809: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:08 compute-0 nova_compute[351685]: 2025-10-03 11:34:08.555 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:08 compute-0 nova_compute[351685]: 2025-10-03 11:34:08.556 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:34:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:08 compute-0 nova_compute[351685]: 2025-10-03 11:34:08.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3810: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:10 compute-0 nova_compute[351685]: 2025-10-03 11:34:10.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:11 compute-0 nova_compute[351685]: 2025-10-03 11:34:11.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:11 compute-0 nova_compute[351685]: 2025-10-03 11:34:11.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3811: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:12 compute-0 nova_compute[351685]: 2025-10-03 11:34:12.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:13 compute-0 nova_compute[351685]: 2025-10-03 11:34:13.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:13 compute-0 nova_compute[351685]: 2025-10-03 11:34:13.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3812: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:14 compute-0 podman[542407]: 2025-10-03 11:34:14.781873459 +0000 UTC m=+0.088261589 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:34:14 compute-0 podman[542408]: 2025-10-03 11:34:14.78377922 +0000 UTC m=+0.088830568 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, release-0.7.12=, container_name=kepler, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543)
Oct  3 11:34:14 compute-0 podman[542409]: 2025-10-03 11:34:14.793828833 +0000 UTC m=+0.097390092 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Oct  3 11:34:15 compute-0 nova_compute[351685]: 2025-10-03 11:34:15.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3813: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:34:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:34:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3814: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:18 compute-0 nova_compute[351685]: 2025-10-03 11:34:18.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3815: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:20 compute-0 nova_compute[351685]: 2025-10-03 11:34:20.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3816: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:34:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:34:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:34:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:34:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c70ac045-7bdb-4408-abb7-0c2c17ec00dc does not exist
Oct  3 11:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:34:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 7596b855-3a58-4ead-a070-56d1cd7b8aa0 does not exist
Oct  3 11:34:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:34:23 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 94495e93-c4f4-4c82-97d5-3898500220e8 does not exist
Oct  3 11:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:34:23 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:34:23 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:34:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:23 compute-0 nova_compute[351685]: 2025-10-03 11:34:23.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:34:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3817: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:34:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:34:24 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:34:24 compute-0 podman[542731]: 2025-10-03 11:34:24.324958469 +0000 UTC m=+0.100195092 container create fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:34:24 compute-0 podman[542731]: 2025-10-03 11:34:24.265928017 +0000 UTC m=+0.041164730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:34:24 compute-0 systemd[1]: Started libpod-conmon-fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b.scope.
Oct  3 11:34:24 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:34:24 compute-0 podman[542731]: 2025-10-03 11:34:24.48096962 +0000 UTC m=+0.256206273 container init fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:34:24 compute-0 podman[542731]: 2025-10-03 11:34:24.493724998 +0000 UTC m=+0.268961621 container start fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:34:24 compute-0 podman[542731]: 2025-10-03 11:34:24.499737731 +0000 UTC m=+0.274974354 container attach fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct  3 11:34:24 compute-0 agitated_hugle[542747]: 167 167
Oct  3 11:34:24 compute-0 podman[542731]: 2025-10-03 11:34:24.505865417 +0000 UTC m=+0.281102030 container died fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:34:24 compute-0 systemd[1]: libpod-fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b.scope: Deactivated successfully.
Oct  3 11:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-da0c70422ed1e315ace1868b3bb7b4d0bbbc348c494240d1d08a825047067b09-merged.mount: Deactivated successfully.
Oct  3 11:34:24 compute-0 podman[542731]: 2025-10-03 11:34:24.559030261 +0000 UTC m=+0.334266874 container remove fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:34:24 compute-0 systemd[1]: libpod-conmon-fa0de7d7e1b2bb553c6e66afb99491f7e945aa860228fcab4c5714caec131d0b.scope: Deactivated successfully.
Oct  3 11:34:24 compute-0 podman[542772]: 2025-10-03 11:34:24.852807357 +0000 UTC m=+0.072429273 container create 1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:34:24 compute-0 podman[542772]: 2025-10-03 11:34:24.829713596 +0000 UTC m=+0.049335542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:34:24 compute-0 systemd[1]: Started libpod-conmon-1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43.scope.
Oct  3 11:34:24 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230ed810c09fa9194f66fed0e592af84e2429215dd9360a71dc585bff1e13859/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230ed810c09fa9194f66fed0e592af84e2429215dd9360a71dc585bff1e13859/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230ed810c09fa9194f66fed0e592af84e2429215dd9360a71dc585bff1e13859/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230ed810c09fa9194f66fed0e592af84e2429215dd9360a71dc585bff1e13859/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230ed810c09fa9194f66fed0e592af84e2429215dd9360a71dc585bff1e13859/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:24 compute-0 podman[542772]: 2025-10-03 11:34:24.982111921 +0000 UTC m=+0.201733837 container init 1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  3 11:34:24 compute-0 podman[542772]: 2025-10-03 11:34:24.999199498 +0000 UTC m=+0.218821394 container start 1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:34:25 compute-0 podman[542772]: 2025-10-03 11:34:25.0039192 +0000 UTC m=+0.223541096 container attach 1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:34:25 compute-0 nova_compute[351685]: 2025-10-03 11:34:25.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:34:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3818: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:26 compute-0 agitated_lichterman[542787]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:34:26 compute-0 agitated_lichterman[542787]: --> relative data size: 1.0
Oct  3 11:34:26 compute-0 agitated_lichterman[542787]: --> All data devices are unavailable
Oct  3 11:34:26 compute-0 systemd[1]: libpod-1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43.scope: Deactivated successfully.
Oct  3 11:34:26 compute-0 systemd[1]: libpod-1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43.scope: Consumed 1.125s CPU time.
Oct  3 11:34:26 compute-0 podman[542772]: 2025-10-03 11:34:26.186917883 +0000 UTC m=+1.406539779 container died 1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-230ed810c09fa9194f66fed0e592af84e2429215dd9360a71dc585bff1e13859-merged.mount: Deactivated successfully.
Oct  3 11:34:26 compute-0 podman[542772]: 2025-10-03 11:34:26.255780521 +0000 UTC m=+1.475402407 container remove 1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 11:34:26 compute-0 systemd[1]: libpod-conmon-1e7052364bb81d6218d9c3f12858f85822637fcc102c52eb15302e6b2b1bfc43.scope: Deactivated successfully.
Oct  3 11:34:27 compute-0 podman[542966]: 2025-10-03 11:34:27.147880322 +0000 UTC m=+0.072230266 container create f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 11:34:27 compute-0 systemd[1]: Started libpod-conmon-f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e.scope.
Oct  3 11:34:27 compute-0 podman[542966]: 2025-10-03 11:34:27.125148803 +0000 UTC m=+0.049498777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:34:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:34:27 compute-0 podman[542966]: 2025-10-03 11:34:27.261584776 +0000 UTC m=+0.185934730 container init f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shtern, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:34:27 compute-0 podman[542966]: 2025-10-03 11:34:27.270812802 +0000 UTC m=+0.195162726 container start f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shtern, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 11:34:27 compute-0 podman[542966]: 2025-10-03 11:34:27.276333689 +0000 UTC m=+0.200683633 container attach f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shtern, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  3 11:34:27 compute-0 great_shtern[542982]: 167 167
Oct  3 11:34:27 compute-0 podman[542966]: 2025-10-03 11:34:27.27856598 +0000 UTC m=+0.202915904 container died f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  3 11:34:27 compute-0 systemd[1]: libpod-f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e.scope: Deactivated successfully.
Oct  3 11:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d52572a213dddb4b986fab1463009eb7df9990e544b5921fee50f69f6952c14-merged.mount: Deactivated successfully.
Oct  3 11:34:27 compute-0 podman[542966]: 2025-10-03 11:34:27.325889827 +0000 UTC m=+0.250239741 container remove f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:34:27 compute-0 systemd[1]: libpod-conmon-f334affbc5630a3af4608f8fda38588675d9fbda83c3a7263104dbb181c3059e.scope: Deactivated successfully.
Oct  3 11:34:27 compute-0 podman[543006]: 2025-10-03 11:34:27.576881581 +0000 UTC m=+0.072420061 container create 86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:34:27 compute-0 podman[543006]: 2025-10-03 11:34:27.543072408 +0000 UTC m=+0.038610908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:34:27 compute-0 systemd[1]: Started libpod-conmon-86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071.scope.
Oct  3 11:34:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302596a3e7c066381335a0c7068859dc2749b9251eceea1d8663c4738cc852fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302596a3e7c066381335a0c7068859dc2749b9251eceea1d8663c4738cc852fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302596a3e7c066381335a0c7068859dc2749b9251eceea1d8663c4738cc852fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/302596a3e7c066381335a0c7068859dc2749b9251eceea1d8663c4738cc852fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:27 compute-0 podman[543006]: 2025-10-03 11:34:27.749170233 +0000 UTC m=+0.244708723 container init 86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:34:27 compute-0 podman[543006]: 2025-10-03 11:34:27.767385827 +0000 UTC m=+0.262924317 container start 86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jones, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:34:27 compute-0 podman[543006]: 2025-10-03 11:34:27.772551802 +0000 UTC m=+0.268090292 container attach 86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  3 11:34:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3819: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:28 compute-0 peaceful_jones[543022]: {
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:    "0": [
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:        {
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "devices": [
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "/dev/loop3"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            ],
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_name": "ceph_lv0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_size": "21470642176",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "name": "ceph_lv0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "tags": {
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cluster_name": "ceph",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.crush_device_class": "",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.encrypted": "0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osd_id": "0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.type": "block",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.vdo": "0"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            },
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "type": "block",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "vg_name": "ceph_vg0"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:        }
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:    ],
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:    "1": [
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:        {
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "devices": [
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "/dev/loop4"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            ],
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_name": "ceph_lv1",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_size": "21470642176",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "name": "ceph_lv1",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "tags": {
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cluster_name": "ceph",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.crush_device_class": "",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.encrypted": "0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osd_id": "1",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.type": "block",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.vdo": "0"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            },
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "type": "block",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "vg_name": "ceph_vg1"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:        }
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:    ],
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:    "2": [
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:        {
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "devices": [
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "/dev/loop5"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            ],
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_name": "ceph_lv2",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_size": "21470642176",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "name": "ceph_lv2",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "tags": {
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.cluster_name": "ceph",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.crush_device_class": "",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.encrypted": "0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osd_id": "2",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.type": "block",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:                "ceph.vdo": "0"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            },
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "type": "block",
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:            "vg_name": "ceph_vg2"
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:        }
Oct  3 11:34:28 compute-0 peaceful_jones[543022]:    ]
Oct  3 11:34:28 compute-0 peaceful_jones[543022]: }
Oct  3 11:34:28 compute-0 systemd[1]: libpod-86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071.scope: Deactivated successfully.
Oct  3 11:34:28 compute-0 podman[543031]: 2025-10-03 11:34:28.742781837 +0000 UTC m=+0.046421338 container died 86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jones, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:34:28 compute-0 nova_compute[351685]: 2025-10-03 11:34:28.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:34:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-302596a3e7c066381335a0c7068859dc2749b9251eceea1d8663c4738cc852fb-merged.mount: Deactivated successfully.
Oct  3 11:34:28 compute-0 podman[543031]: 2025-10-03 11:34:28.846498921 +0000 UTC m=+0.150138352 container remove 86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jones, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:34:28 compute-0 systemd[1]: libpod-conmon-86e5970cf32feca6397463f36d85592f998e8e1eff0ad6314200212a9246b071.scope: Deactivated successfully.
Oct  3 11:34:29 compute-0 podman[157165]: time="2025-10-03T11:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:34:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:34:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9605 "" "Go-http-client/1.1"
Oct  3 11:34:29 compute-0 podman[543177]: 2025-10-03 11:34:29.937087925 +0000 UTC m=+0.063974082 container create b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mccarthy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:34:30 compute-0 podman[543177]: 2025-10-03 11:34:29.907207726 +0000 UTC m=+0.034093933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:34:30 compute-0 systemd[1]: Started libpod-conmon-b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548.scope.
Oct  3 11:34:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3820: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:34:30 compute-0 podman[543177]: 2025-10-03 11:34:30.070926384 +0000 UTC m=+0.197812531 container init b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 11:34:30 compute-0 podman[543177]: 2025-10-03 11:34:30.080093928 +0000 UTC m=+0.206980075 container start b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mccarthy, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 11:34:30 compute-0 podman[543177]: 2025-10-03 11:34:30.083969042 +0000 UTC m=+0.210855189 container attach b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mccarthy, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:34:30 compute-0 cool_mccarthy[543193]: 167 167
Oct  3 11:34:30 compute-0 systemd[1]: libpod-b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548.scope: Deactivated successfully.
Oct  3 11:34:30 compute-0 conmon[543193]: conmon b2316bd9172565b32fb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548.scope/container/memory.events
Oct  3 11:34:30 compute-0 podman[543177]: 2025-10-03 11:34:30.0892275 +0000 UTC m=+0.216113647 container died b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mccarthy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:34:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c3946df12a7f20cf9ab78cf30f32419837ba31a4142e618d2ca115ebddeffd9-merged.mount: Deactivated successfully.
Oct  3 11:34:30 compute-0 podman[543177]: 2025-10-03 11:34:30.133652424 +0000 UTC m=+0.260538571 container remove b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mccarthy, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 11:34:30 compute-0 systemd[1]: libpod-conmon-b2316bd9172565b32fb8f28f367161ee7595ea5e5157b839513cd2b2b6b5e548.scope: Deactivated successfully.
Oct  3 11:34:30 compute-0 nova_compute[351685]: 2025-10-03 11:34:30.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:34:30 compute-0 podman[543216]: 2025-10-03 11:34:30.353950825 +0000 UTC m=+0.049449467 container create 355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:34:30 compute-0 systemd[1]: Started libpod-conmon-355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21.scope.
Oct  3 11:34:30 compute-0 podman[543216]: 2025-10-03 11:34:30.335609796 +0000 UTC m=+0.031108458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:34:30 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334415850b594dcc4eec43b4bf933929fb83765f68328f88f0ea43576ed2bed5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334415850b594dcc4eec43b4bf933929fb83765f68328f88f0ea43576ed2bed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334415850b594dcc4eec43b4bf933929fb83765f68328f88f0ea43576ed2bed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/334415850b594dcc4eec43b4bf933929fb83765f68328f88f0ea43576ed2bed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:34:30 compute-0 podman[543216]: 2025-10-03 11:34:30.497604918 +0000 UTC m=+0.193103600 container init 355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:34:30 compute-0 podman[543216]: 2025-10-03 11:34:30.516853916 +0000 UTC m=+0.212352568 container start 355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  3 11:34:30 compute-0 podman[543216]: 2025-10-03 11:34:30.521480164 +0000 UTC m=+0.216978816 container attach 355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:34:31 compute-0 openstack_network_exporter[367524]: ERROR   11:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:34:31 compute-0 openstack_network_exporter[367524]: ERROR   11:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:34:31 compute-0 openstack_network_exporter[367524]: ERROR   11:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:34:31 compute-0 openstack_network_exporter[367524]: ERROR   11:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:34:31 compute-0 openstack_network_exporter[367524]: ERROR   11:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]: {
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "osd_id": 1,
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "type": "bluestore"
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:    },
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "osd_id": 2,
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "type": "bluestore"
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:    },
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "osd_id": 0,
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:        "type": "bluestore"
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]:    }
Oct  3 11:34:31 compute-0 mystifying_faraday[543231]: }
Oct  3 11:34:31 compute-0 systemd[1]: libpod-355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21.scope: Deactivated successfully.
Oct  3 11:34:31 compute-0 systemd[1]: libpod-355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21.scope: Consumed 1.141s CPU time.
Oct  3 11:34:31 compute-0 podman[543216]: 2025-10-03 11:34:31.670356935 +0000 UTC m=+1.365855587 container died 355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  3 11:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-334415850b594dcc4eec43b4bf933929fb83765f68328f88f0ea43576ed2bed5-merged.mount: Deactivated successfully.
Oct  3 11:34:31 compute-0 podman[543216]: 2025-10-03 11:34:31.754692187 +0000 UTC m=+1.450190829 container remove 355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  3 11:34:31 compute-0 systemd[1]: libpod-conmon-355399d2dd3cbaef5c0667e50ac10ea5be5304a9dc296f902857d4e0125ebb21.scope: Deactivated successfully.
Oct  3 11:34:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:34:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:34:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:34:31 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:34:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 71b700db-4c92-403a-a169-3db380cec5cc does not exist
Oct  3 11:34:31 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 8ad9a50e-f23a-4c47-9e19-aaa778f92f61 does not exist
Oct  3 11:34:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3821: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:34:32 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:34:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:33 compute-0 nova_compute[351685]: 2025-10-03 11:34:33.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3822: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Oct  3 11:34:34 compute-0 podman[543331]: 2025-10-03 11:34:34.903731812 +0000 UTC m=+0.115618957 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm)
Oct  3 11:34:34 compute-0 podman[543330]: 2025-10-03 11:34:34.906666287 +0000 UTC m=+0.144060509 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct  3 11:34:34 compute-0 podman[543328]: 2025-10-03 11:34:34.914062223 +0000 UTC m=+0.149929456 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:34:34 compute-0 podman[543329]: 2025-10-03 11:34:34.91518539 +0000 UTC m=+0.144138721 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:34:34 compute-0 podman[543351]: 2025-10-03 11:34:34.920162129 +0000 UTC m=+0.132333402 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 11:34:34 compute-0 podman[543327]: 2025-10-03 11:34:34.922664589 +0000 UTC m=+0.167531310 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:34:34 compute-0 podman[543338]: 2025-10-03 11:34:34.960503681 +0000 UTC m=+0.182997805 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  3 11:34:35 compute-0 nova_compute[351685]: 2025-10-03 11:34:35.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3823: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct  3 11:34:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3824: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 14 op/s
Oct  3 11:34:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:38 compute-0 nova_compute[351685]: 2025-10-03 11:34:38.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3825: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Oct  3 11:34:40 compute-0 nova_compute[351685]: 2025-10-03 11:34:40.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:34:41.712 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:34:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:34:41.713 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:34:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:34:41.714 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:34:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3826: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Oct  3 11:34:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:43 compute-0 nova_compute[351685]: 2025-10-03 11:34:43.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3827: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Oct  3 11:34:45 compute-0 nova_compute[351685]: 2025-10-03 11:34:45.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:45 compute-0 nova_compute[351685]: 2025-10-03 11:34:45.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:45 compute-0 podman[543465]: 2025-10-03 11:34:45.815138926 +0000 UTC m=+0.077664730 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:34:45 compute-0 podman[543466]: 2025-10-03 11:34:45.824787455 +0000 UTC m=+0.086348539 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1214.1726694543)
Oct  3 11:34:45 compute-0 podman[543467]: 2025-10-03 11:34:45.827481802 +0000 UTC m=+0.083498438 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3828: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:34:46
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', 'default.rgw.log']
Oct  3 11:34:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:34:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:34:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3829: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Oct  3 11:34:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:48 compute-0 nova_compute[351685]: 2025-10-03 11:34:48.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3830: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Oct  3 11:34:50 compute-0 nova_compute[351685]: 2025-10-03 11:34:50.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:51 compute-0 nova_compute[351685]: 2025-10-03 11:34:51.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:51 compute-0 nova_compute[351685]: 2025-10-03 11:34:51.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:34:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3831: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Oct  3 11:34:52 compute-0 nova_compute[351685]: 2025-10-03 11:34:52.116 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:34:52 compute-0 nova_compute[351685]: 2025-10-03 11:34:52.116 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:34:52 compute-0 nova_compute[351685]: 2025-10-03 11:34:52.117 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:34:53 compute-0 nova_compute[351685]: 2025-10-03 11:34:53.348 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updating instance_info_cache with network_info: [{"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:34:53 compute-0 nova_compute[351685]: 2025-10-03 11:34:53.362 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:34:53 compute-0 nova_compute[351685]: 2025-10-03 11:34:53.363 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 11:34:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:53 compute-0 nova_compute[351685]: 2025-10-03 11:34:53.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3832: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Oct  3 11:34:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:34:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3405565812' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:34:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:34:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3405565812' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:34:55 compute-0 nova_compute[351685]: 2025-10-03 11:34:55.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3833: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00207013190238979 of space, bias 1.0, pg target 0.6210395707169369 quantized to 32 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:34:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:34:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3834: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:34:58 compute-0 nova_compute[351685]: 2025-10-03 11:34:58.359 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:34:58 compute-0 nova_compute[351685]: 2025-10-03 11:34:58.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:34:59 compute-0 nova_compute[351685]: 2025-10-03 11:34:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:34:59 compute-0 podman[157165]: time="2025-10-03T11:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:34:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:34:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9611 "" "Go-http-client/1.1"
Oct  3 11:35:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3835: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:00 compute-0 nova_compute[351685]: 2025-10-03 11:35:00.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:35:01 compute-0 openstack_network_exporter[367524]: ERROR   11:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:35:01 compute-0 openstack_network_exporter[367524]: ERROR   11:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:35:01 compute-0 openstack_network_exporter[367524]: ERROR   11:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:35:01 compute-0 openstack_network_exporter[367524]: ERROR   11:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:35:01 compute-0 openstack_network_exporter[367524]: ERROR   11:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:35:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3836: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:03 compute-0 nova_compute[351685]: 2025-10-03 11:35:03.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:35:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3837: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:04 compute-0 nova_compute[351685]: 2025-10-03 11:35:04.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:35:04 compute-0 nova_compute[351685]: 2025-10-03 11:35:04.793 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:35:04 compute-0 nova_compute[351685]: 2025-10-03 11:35:04.794 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:35:04 compute-0 nova_compute[351685]: 2025-10-03 11:35:04.795 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:35:04 compute-0 nova_compute[351685]: 2025-10-03 11:35:04.796 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:35:04 compute-0 nova_compute[351685]: 2025-10-03 11:35:04.796 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:35:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:35:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774180649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.264 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.361 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.361 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.366 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.367 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.367 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.806 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.807 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3164MB free_disk=59.864097595214844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.807 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.807 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:35:05 compute-0 podman[543552]: 2025-10-03 11:35:05.877185051 +0000 UTC m=+0.109781839 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 11:35:05 compute-0 podman[543549]: 2025-10-03 11:35:05.879660461 +0000 UTC m=+0.125759772 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:35:05 compute-0 podman[543550]: 2025-10-03 11:35:05.890822168 +0000 UTC m=+0.130703879 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm)
Oct  3 11:35:05 compute-0 podman[543547]: 2025-10-03 11:35:05.894504737 +0000 UTC m=+0.138368926 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct  3 11:35:05 compute-0 podman[543546]: 2025-10-03 11:35:05.906101268 +0000 UTC m=+0.151484606 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:35:05 compute-0 podman[543548]: 2025-10-03 11:35:05.9061627 +0000 UTC m=+0.149968648 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:35:05 compute-0 podman[543551]: 2025-10-03 11:35:05.93769092 +0000 UTC m=+0.166005741 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
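The seven `container health_status` events above are podman's periodic healthchecks firing for the EDPM-managed containers: each container's `config_data` names a `healthcheck.test` script that podman executes inside the container, and `health_failing_streak` counts consecutive failures. A minimal sketch of reading the same state from the CLI (container name `iscsid` taken from the events above; podman assumed on PATH):

```python
import json
import subprocess

# Ask podman for the health block it maintains for a container; Status and
# FailingStreak mirror health_status= and health_failing_streak= above.
out = subprocess.run(
    ["podman", "inspect", "--format", "{{json .State.Health}}", "iscsid"],
    capture_output=True, text=True, check=True,
).stdout
health = json.loads(out)
print(health["Status"], health["FailingStreak"])  # e.g. "healthy" 0
```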
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.996 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.996 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.996 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.997 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:35:05 compute-0 nova_compute[351685]: 2025-10-03 11:35:05.997 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
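The "final resource view" is just the sum of the per-instance placement allocations listed above, plus the host's reserved memory (512 MB, per the inventory logged a few lines below). A sketch of the arithmetic:

```python
# Per-instance allocations as logged by _remove_deleted_instances_allocations.
allocations = {
    "b43db93c-a4fe-46e9-8418-eedf4f5c135a": {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},
    "83fc22ce-d2e4-468a-b166-04f2743fa68d": {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},
    "443e486d-1bf2-4550-a4ae-32f0f8f4af19": {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},
}
used = {"DISK_GB": 0, "MEMORY_MB": 0, "VCPU": 0}
for alloc in allocations.values():
    for rc, amount in alloc.items():
        used[rc] += amount
# used_disk=4GB and used_vcpus=3 match the log directly; used_ram=1280MB is
# the 768 MB summed here plus the 512 MB reserved for the host itself.
print(used)  # {'DISK_GB': 4, 'MEMORY_MB': 768, 'VCPU': 3}
```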
Oct  3 11:35:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3838: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:06 compute-0 nova_compute[351685]: 2025-10-03 11:35:06.134 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:35:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:35:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4286620208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:35:06 compute-0 nova_compute[351685]: 2025-10-03 11:35:06.638 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
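Because the instances' disks live in Ceph, the driver shells out to `ceph df` to learn pool capacity rather than statting a local filesystem. A minimal reproduction of that call (same flags as the logged command; key names follow the `ceph df --format=json` schema):

```python
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
stats = json.loads(subprocess.check_output(cmd))
total = stats["stats"]["total_bytes"]
avail = stats["stats"]["total_avail_bytes"]
print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```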
Oct  3 11:35:06 compute-0 nova_compute[351685]: 2025-10-03 11:35:06.646 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:35:06 compute-0 nova_compute[351685]: 2025-10-03 11:35:06.660 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
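Placement turns that inventory into schedulable capacity as `(total - reserved) * allocation_ratio`, which is why 8 physical vCPUs can back 32 VCPU allocations here. A quick check against the logged numbers:

```python
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```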
Oct  3 11:35:06 compute-0 nova_compute[351685]: 2025-10-03 11:35:06.662 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:35:06 compute-0 nova_compute[351685]: 2025-10-03 11:35:06.662 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:35:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3839: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:08 compute-0 nova_compute[351685]: 2025-10-03 11:35:08.664 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:35:08 compute-0 nova_compute[351685]: 2025-10-03 11:35:08.665 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:35:08 compute-0 nova_compute[351685]: 2025-10-03 11:35:08.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
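`_reclaim_queued_deletes` purges soft-deleted instances, and it no-ops whenever the interval is unset; the guard behaves like the sketch below (the CONF option is shown as a plain local here):

```python
reclaim_instance_interval = 0  # nova default: soft delete disabled

def _reclaim_queued_deletes():
    # Mirrors the skip logged above: nothing is queued for reclaim
    # unless the operator sets a positive interval.
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # ...otherwise purge SOFT_DELETED instances older than the interval.

_reclaim_queued_deletes()
```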
Oct  3 11:35:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3840: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:10 compute-0 nova_compute[351685]: 2025-10-03 11:35:10.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
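The recurring `[POLLIN] on fd 25` lines are the ovsdbapp IDL's event loop waking because the OVSDB server wrote to its monitored socket. The mechanism is ordinary poll(2); a self-contained stand-in using a socketpair:

```python
import select
import socket

a, b = socket.socketpair()
poller = select.poll()
poller.register(a.fileno(), select.POLLIN)
b.send(b"update")                    # the "server" side writes
for fd, event in poller.poll(1000):  # wakes as soon as the fd is readable
    if event & select.POLLIN:
        print(f"[POLLIN] on fd {fd}:", a.recv(64))
```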
Oct  3 11:35:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3841: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:12 compute-0 nova_compute[351685]: 2025-10-03 11:35:12.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:35:12 compute-0 nova_compute[351685]: 2025-10-03 11:35:12.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:35:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:13 compute-0 nova_compute[351685]: 2025-10-03 11:35:13.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:35:13 compute-0 nova_compute[351685]: 2025-10-03 11:35:13.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3842: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:14 compute-0 nova_compute[351685]: 2025-10-03 11:35:14.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:35:15 compute-0 nova_compute[351685]: 2025-10-03 11:35:15.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3843: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:35:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:35:16 compute-0 podman[543703]: 2025-10-03 11:35:16.847273725 +0000 UTC m=+0.087594548 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, config_id=edpm, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9)
Oct  3 11:35:16 compute-0 podman[543704]: 2025-10-03 11:35:16.866347787 +0000 UTC m=+0.093501988 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:35:16 compute-0 podman[543702]: 2025-10-03 11:35:16.8848704 +0000 UTC m=+0.117717204 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
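kepler, ceilometer_agent_ipmi, and podman_exporter join node_exporter and openstack_network_exporter as host-network Prometheus endpoints (ports 8888, 9100, 9882, and 9105 per the `config_data` above). A scrape is a plain HTTP GET, though note these exporters mount TLS material and a `web.config.file`, so HTTPS with the telemetry CA may be required instead of the bare HTTP assumed in this sketch:

```python
from urllib.request import urlopen

# Pull each exporter's /metrics page and count the lines returned.
for port in (9100, 9882, 8888, 9105):
    with urlopen(f"http://localhost:{port}/metrics", timeout=5) as resp:
        body = resp.read().decode()
    print(port, body.count("\n"), "metric lines")
```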
Oct  3 11:35:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3844: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:18 compute-0 nova_compute[351685]: 2025-10-03 11:35:18.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3845: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:20 compute-0 nova_compute[351685]: 2025-10-03 11:35:20.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3846: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:23 compute-0 nova_compute[351685]: 2025-10-03 11:35:23.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3847: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:25 compute-0 nova_compute[351685]: 2025-10-03 11:35:25.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3848: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3849: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:28 compute-0 nova_compute[351685]: 2025-10-03 11:35:28.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:29 compute-0 podman[157165]: time="2025-10-03T11:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:35:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:35:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9610 "" "Go-http-client/1.1"
Oct  3 11:35:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3850: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:30 compute-0 nova_compute[351685]: 2025-10-03 11:35:30.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:31 compute-0 openstack_network_exporter[367524]: ERROR   11:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:35:31 compute-0 openstack_network_exporter[367524]: ERROR   11:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:35:31 compute-0 openstack_network_exporter[367524]: ERROR   11:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:35:31 compute-0 openstack_network_exporter[367524]: ERROR   11:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:35:31 compute-0 openstack_network_exporter[367524]: ERROR   11:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
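These exporter errors mean no control socket was found for the daemons it tried to poll. OVS/OVN daemons advertise themselves through `<name>.<pid>.ctl` files in their run directories, and on a compute node ovn-northd (a control-plane daemon) is simply not running, so the messages are likely benign here. A quick existence check that mirrors the lookup (default run directories assumed):

```python
import glob

for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                "/var/run/openvswitch/ovsdb-server.*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits if hits else "no control socket found")
```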
Oct  3 11:35:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3851: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev ed655c3d-3719-4fb5-9f5a-981fa14df2e6 does not exist
Oct  3 11:35:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cf14569b-1ac4-46ee-85a0-32f292cc6a4f does not exist
Oct  3 11:35:33 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 77f8f536-a437-41d4-87f0-d83893a8b98f does not exist
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
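This burst of `handle_command` traffic is the cephadm mgr module refreshing host state: storing per-host facts under `config-key`, regenerating a minimal conf, fetching keyrings, and listing destroyed OSDs. Each mon_command has a CLI equivalent; for the last one, assuming an admin keyring is available:

```python
import json
import subprocess

# CLI form of the {"prefix": "osd tree", "states": ["destroyed"]} mon_command.
out = subprocess.check_output(
    ["ceph", "osd", "tree", "destroyed", "--format", "json"])
tree = json.loads(out)
print([n["name"] for n in tree.get("nodes", [])])  # destroyed OSDs, if any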
Oct  3 11:35:33 compute-0 nova_compute[351685]: 2025-10-03 11:35:33.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:35:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:35:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:35:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3852: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:34 compute-0 podman[544151]: 2025-10-03 11:35:34.690157178 +0000 UTC m=+0.060010704 container create cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 11:35:34 compute-0 podman[544151]: 2025-10-03 11:35:34.664360681 +0000 UTC m=+0.034214227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:35:34 compute-0 systemd[1]: Started libpod-conmon-cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2.scope.
Oct  3 11:35:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:35:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:34 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:35:34 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:35:34 compute-0 podman[544151]: 2025-10-03 11:35:34.841790648 +0000 UTC m=+0.211644264 container init cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 11:35:34 compute-0 podman[544151]: 2025-10-03 11:35:34.856414527 +0000 UTC m=+0.226268083 container start cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  3 11:35:34 compute-0 podman[544151]: 2025-10-03 11:35:34.86276254 +0000 UTC m=+0.232616166 container attach cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  3 11:35:34 compute-0 pensive_fermat[544167]: 167 167
Oct  3 11:35:34 compute-0 systemd[1]: libpod-cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2.scope: Deactivated successfully.
Oct  3 11:35:34 compute-0 conmon[544167]: conmon cbf8f2daea99887cfbdf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2.scope/container/memory.events
Oct  3 11:35:34 compute-0 podman[544151]: 2025-10-03 11:35:34.867305356 +0000 UTC m=+0.237158902 container died cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-add93c224256a3d73af237c655f2b1ecf864258ab9e469c9112ba2fb7d477b79-merged.mount: Deactivated successfully.
Oct  3 11:35:34 compute-0 podman[544151]: 2025-10-03 11:35:34.925227772 +0000 UTC m=+0.295081308 container remove cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_fermat, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 11:35:34 compute-0 systemd[1]: libpod-conmon-cbf8f2daea99887cfbdfb4fba535fb21fc5e43dd3d17412874f3d7c198d53ee2.scope: Deactivated successfully.
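That create → init → start → attach → died → remove sequence is cephadm launching a throwaway helper container (`pensive_fermat`, auto-named) that runs one command and is removed on exit. The whole lifecycle collapses into a single one-shot invocation (image digest taken from the events above; the command is illustrative):

```python
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
# --rm reproduces the died-then-removed pattern seen in the journal.
out = subprocess.run(["podman", "run", "--rm", image, "id", "-u"],
                     capture_output=True, text=True)
print(out.stdout.strip())
```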
Oct  3 11:35:35 compute-0 podman[544189]: 2025-10-03 11:35:35.1903968 +0000 UTC m=+0.087028270 container create d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 11:35:35 compute-0 podman[544189]: 2025-10-03 11:35:35.160175832 +0000 UTC m=+0.056807382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:35:35 compute-0 systemd[1]: Started libpod-conmon-d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094.scope.
Oct  3 11:35:35 compute-0 nova_compute[351685]: 2025-10-03 11:35:35.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ae20b33edeae2803e8047d73445e158e7d86e50d27d15a1f5207bad8f1f91d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ae20b33edeae2803e8047d73445e158e7d86e50d27d15a1f5207bad8f1f91d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ae20b33edeae2803e8047d73445e158e7d86e50d27d15a1f5207bad8f1f91d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ae20b33edeae2803e8047d73445e158e7d86e50d27d15a1f5207bad8f1f91d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ae20b33edeae2803e8047d73445e158e7d86e50d27d15a1f5207bad8f1f91d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:35 compute-0 podman[544189]: 2025-10-03 11:35:35.306395728 +0000 UTC m=+0.203027198 container init d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:35:35 compute-0 podman[544189]: 2025-10-03 11:35:35.320673436 +0000 UTC m=+0.217304896 container start d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  3 11:35:35 compute-0 podman[544189]: 2025-10-03 11:35:35.326022437 +0000 UTC m=+0.222653907 container attach d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_grothendieck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:35:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3853: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:36 compute-0 gifted_grothendieck[544206]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:35:36 compute-0 gifted_grothendieck[544206]: --> relative data size: 1.0
Oct  3 11:35:36 compute-0 gifted_grothendieck[544206]: --> All data devices are unavailable
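`gifted_grothendieck` is cephadm running a `ceph-volume lvm batch` dry run; "0 physical, 3 LVM" plus "All data devices are unavailable" means the three LVM devices it was offered already carry OSDs, so there is nothing new to create. A sketch of requesting the same report explicitly (device paths illustrative; normally run inside a `cephadm shell` as the container here does):

```python
import json
import subprocess

cmd = ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
       "/dev/ceph_vg/osd0", "/dev/ceph_vg/osd1", "/dev/ceph_vg/osd2"]
report = json.loads(subprocess.check_output(cmd))
print(report)
```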
Oct  3 11:35:36 compute-0 systemd[1]: libpod-d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094.scope: Deactivated successfully.
Oct  3 11:35:36 compute-0 systemd[1]: libpod-d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094.scope: Consumed 1.106s CPU time.
Oct  3 11:35:36 compute-0 podman[544189]: 2025-10-03 11:35:36.51895534 +0000 UTC m=+1.415586830 container died d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_grothendieck, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  3 11:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7ae20b33edeae2803e8047d73445e158e7d86e50d27d15a1f5207bad8f1f91d-merged.mount: Deactivated successfully.
Oct  3 11:35:36 compute-0 podman[544189]: 2025-10-03 11:35:36.628970957 +0000 UTC m=+1.525602407 container remove d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_grothendieck, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:35:36 compute-0 systemd[1]: libpod-conmon-d6ec23c9fc32384d82d689e43c5fed43494df0ad105fea276a869699dfcd2094.scope: Deactivated successfully.
Oct  3 11:35:36 compute-0 podman[544242]: 2025-10-03 11:35:36.694864018 +0000 UTC m=+0.133870351 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:35:36 compute-0 podman[544257]: 2025-10-03 11:35:36.711694958 +0000 UTC m=+0.105319407 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0)
Oct  3 11:35:36 compute-0 podman[544236]: 2025-10-03 11:35:36.717570856 +0000 UTC m=+0.160727253 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:35:36 compute-0 podman[544244]: 2025-10-03 11:35:36.727435822 +0000 UTC m=+0.154477811 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:35:36 compute-0 podman[544273]: 2025-10-03 11:35:36.733531277 +0000 UTC m=+0.138556291 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible)
Oct  3 11:35:36 compute-0 podman[544245]: 2025-10-03 11:35:36.746018628 +0000 UTC m=+0.177707437 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  3 11:35:36 compute-0 podman[544271]: 2025-10-03 11:35:36.782339922 +0000 UTC m=+0.179758002 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
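The six health_status entries above share one shape: a 64-hex container id followed by a parenthesized key=value list in which image, name and health_status come first. A minimal Python sketch, assuming journal lines shaped exactly like those entries (the regex and helper name are illustrative, not part of podman), that reduces such a stream to a per-container health overview:

import re

# Assumes podman "container health_status" journal lines like the ones
# above, where image, name and health_status lead the key=value list.
HEALTH_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,)]+)"
)

def summarize(lines):
    """Map container name -> (short id, reported health)."""
    out = {}
    for line in lines:
        m = HEALTH_RE.search(line)
        if m:
            out[m.group("name")] = (m.group("cid")[:12], m.group("status"))
    return out

Run over the six lines above, summarize() would report ceilometer_agent_compute, node_exporter, ovn_metadata_agent, iscsid, multipathd and ovn_controller all as healthy.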
Oct  3 11:35:37 compute-0 podman[544517]: 2025-10-03 11:35:37.513583928 +0000 UTC m=+0.079569242 container create d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 11:35:37 compute-0 podman[544517]: 2025-10-03 11:35:37.489584358 +0000 UTC m=+0.055569712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:35:37 compute-0 systemd[1]: Started libpod-conmon-d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f.scope.
Oct  3 11:35:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:35:37 compute-0 podman[544517]: 2025-10-03 11:35:37.644763382 +0000 UTC m=+0.210748776 container init d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 11:35:37 compute-0 podman[544517]: 2025-10-03 11:35:37.662406317 +0000 UTC m=+0.228391651 container start d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:35:37 compute-0 podman[544517]: 2025-10-03 11:35:37.667773139 +0000 UTC m=+0.233758443 container attach d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:35:37 compute-0 romantic_shirley[544532]: 167 167
Oct  3 11:35:37 compute-0 systemd[1]: libpod-d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f.scope: Deactivated successfully.
Oct  3 11:35:37 compute-0 podman[544517]: 2025-10-03 11:35:37.67497817 +0000 UTC m=+0.240963474 container died d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct  3 11:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1761701d10d867294bec72ed4383ce54aafffb180663623f91f5affccbf2b831-merged.mount: Deactivated successfully.
Oct  3 11:35:37 compute-0 podman[544517]: 2025-10-03 11:35:37.728924949 +0000 UTC m=+0.294910253 container remove d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  3 11:35:37 compute-0 systemd[1]: libpod-conmon-d741411b26fc854351282ebd7355d9e4aa2b12b1fbf1f8000ec5b03c282da93f.scope: Deactivated successfully.
Oct  3 11:35:37 compute-0 podman[544556]: 2025-10-03 11:35:37.986749583 +0000 UTC m=+0.080676308 container create 5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:35:38 compute-0 podman[544556]: 2025-10-03 11:35:37.958463396 +0000 UTC m=+0.052390091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:35:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3854: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:38 compute-0 systemd[1]: Started libpod-conmon-5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035.scope.
Oct  3 11:35:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931ff1813784f3cb4930a3174eb3ff2aa1eb0858e962255be238a106e6f2693d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931ff1813784f3cb4930a3174eb3ff2aa1eb0858e962255be238a106e6f2693d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931ff1813784f3cb4930a3174eb3ff2aa1eb0858e962255be238a106e6f2693d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/931ff1813784f3cb4930a3174eb3ff2aa1eb0858e962255be238a106e6f2693d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
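The kernel's "supports timestamps until 2038 (0x7fffffff)" notices above refer to the signed 32-bit seconds-since-epoch counter used by xfs inodes formatted without the bigtime feature; converting that constant shows the exact cutoff. A minimal sketch:

from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit count of seconds since the
# Unix epoch, i.e. the y2038 limit the kernel messages above mention.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00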
Oct  3 11:35:38 compute-0 podman[544556]: 2025-10-03 11:35:38.136679227 +0000 UTC m=+0.230605982 container init 5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 11:35:38 compute-0 podman[544556]: 2025-10-03 11:35:38.157651929 +0000 UTC m=+0.251578604 container start 5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kowalevski, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:35:38 compute-0 podman[544556]: 2025-10-03 11:35:38.162469734 +0000 UTC m=+0.256396409 container attach 5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 11:35:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:38 compute-0 nova_compute[351685]: 2025-10-03 11:35:38.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]: {
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:    "0": [
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:        {
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "devices": [
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "/dev/loop3"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            ],
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_name": "ceph_lv0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_size": "21470642176",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "name": "ceph_lv0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "tags": {
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cluster_name": "ceph",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.crush_device_class": "",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.encrypted": "0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osd_id": "0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.type": "block",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.vdo": "0"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            },
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "type": "block",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "vg_name": "ceph_vg0"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:        }
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:    ],
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:    "1": [
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:        {
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "devices": [
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "/dev/loop4"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            ],
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_name": "ceph_lv1",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_size": "21470642176",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "name": "ceph_lv1",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "tags": {
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cluster_name": "ceph",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.crush_device_class": "",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.encrypted": "0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osd_id": "1",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.type": "block",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.vdo": "0"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            },
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "type": "block",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "vg_name": "ceph_vg1"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:        }
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:    ],
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:    "2": [
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:        {
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "devices": [
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "/dev/loop5"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            ],
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_name": "ceph_lv2",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_size": "21470642176",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "name": "ceph_lv2",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "tags": {
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.cluster_name": "ceph",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.crush_device_class": "",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.encrypted": "0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osd_id": "2",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.type": "block",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:                "ceph.vdo": "0"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            },
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "type": "block",
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:            "vg_name": "ceph_vg2"
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:        }
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]:    ]
Oct  3 11:35:39 compute-0 jovial_kowalevski[544572]: }
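The JSON that jovial_kowalevski printed above is keyed by OSD id and, by its shape, appears to be ceph-volume lvm list output in JSON form. A minimal sketch, assuming payload holds that JSON verbatim (the helper name is illustrative), that maps each OSD to its logical volume and backing device:

import json

def osd_devices(payload: str) -> dict:
    """Map OSD id -> lv_path, backing devices and osd_fsid."""
    data = json.loads(payload)
    result = {}
    for osd_id, lvs in data.items():
        for lv in lvs:
            result[int(osd_id)] = {
                "lv_path": lv["lv_path"],
                "devices": lv["devices"],
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
            }
    return result

# For the payload above: osd_devices(payload)[0]["devices"] -> ["/dev/loop3"]

On this node that yields OSDs 0, 1 and 2 on /dev/loop3, /dev/loop4 and /dev/loop5, each backing a 21470642176-byte LV in its own volume group.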
Oct  3 11:35:39 compute-0 systemd[1]: libpod-5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035.scope: Deactivated successfully.
Oct  3 11:35:39 compute-0 podman[544581]: 2025-10-03 11:35:39.126209041 +0000 UTC m=+0.050603702 container died 5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 11:35:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-931ff1813784f3cb4930a3174eb3ff2aa1eb0858e962255be238a106e6f2693d-merged.mount: Deactivated successfully.
Oct  3 11:35:39 compute-0 podman[544581]: 2025-10-03 11:35:39.271850768 +0000 UTC m=+0.196245329 container remove 5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kowalevski, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  3 11:35:39 compute-0 systemd[1]: libpod-conmon-5e65f7451eb4ede4e53983f97a75f0007e13b383ca01e86809d48b481f5c0035.scope: Deactivated successfully.
Oct  3 11:35:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3855: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:40 compute-0 podman[544731]: 2025-10-03 11:35:40.227533607 +0000 UTC m=+0.071449380 container create 040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  3 11:35:40 compute-0 nova_compute[351685]: 2025-10-03 11:35:40.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:40 compute-0 systemd[1]: Started libpod-conmon-040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc.scope.
Oct  3 11:35:40 compute-0 podman[544731]: 2025-10-03 11:35:40.210605655 +0000 UTC m=+0.054521448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:35:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:35:40 compute-0 podman[544731]: 2025-10-03 11:35:40.332400609 +0000 UTC m=+0.176316402 container init 040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:35:40 compute-0 podman[544731]: 2025-10-03 11:35:40.34274135 +0000 UTC m=+0.186657123 container start 040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 11:35:40 compute-0 podman[544731]: 2025-10-03 11:35:40.346670166 +0000 UTC m=+0.190585939 container attach 040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:35:40 compute-0 wonderful_banach[544747]: 167 167
Oct  3 11:35:40 compute-0 systemd[1]: libpod-040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc.scope: Deactivated successfully.
Oct  3 11:35:40 compute-0 conmon[544747]: conmon 040b4d3fe5a263aeaafd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc.scope/container/memory.events
Oct  3 11:35:40 compute-0 podman[544731]: 2025-10-03 11:35:40.358015429 +0000 UTC m=+0.201931202 container died 040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 11:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc1a4721c9bd45ac543e97d7d3cccc3d05ee93d0458572d0d0fd32bc7c8126c6-merged.mount: Deactivated successfully.
Oct  3 11:35:40 compute-0 podman[544731]: 2025-10-03 11:35:40.409307883 +0000 UTC m=+0.253223666 container remove 040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:35:40 compute-0 systemd[1]: libpod-conmon-040b4d3fe5a263aeaafd2574050ef465856002d1941e50e2bc5bdb8e7eb300bc.scope: Deactivated successfully.
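romantic_shirley and wonderful_banach above each ran the full podman lifecycle (create, init, start, attach, died, remove) in well under a second, the usual signature of short-lived one-shot helper containers. A minimal sketch, assuming journal lines shaped like the podman entries above (regex and helper are illustrative), that groups those events per container:

import re
from collections import defaultdict

# Matches podman lifecycle events as logged above, e.g.
# "container create d741411b26fc..." or "container died 040b4d3fe5a2...".
EVENT_RE = re.compile(
    r"container (?P<event>create|init|start|attach|died|remove) "
    r"(?P<cid>[0-9a-f]{64})"
)

def lifecycles(lines):
    """Map short container id -> ordered list of lifecycle events."""
    events = defaultdict(list)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events[m.group("cid")[:12]].append(m.group("event"))
    return dict(events)

A container whose list ends in "died", "remove" within the same second, as here, completed its task rather than crashing out of a long-running service.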
Oct  3 11:35:40 compute-0 podman[544770]: 2025-10-03 11:35:40.647843138 +0000 UTC m=+0.082628150 container create 9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:35:40 compute-0 podman[544770]: 2025-10-03 11:35:40.601370149 +0000 UTC m=+0.036155161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:35:40 compute-0 systemd[1]: Started libpod-conmon-9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982.scope.
Oct  3 11:35:40 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec4066f2dc5705fc8998c6e0aee8b8542c19914cb14c9dca429bbb9714023a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec4066f2dc5705fc8998c6e0aee8b8542c19914cb14c9dca429bbb9714023a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec4066f2dc5705fc8998c6e0aee8b8542c19914cb14c9dca429bbb9714023a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec4066f2dc5705fc8998c6e0aee8b8542c19914cb14c9dca429bbb9714023a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:35:40 compute-0 podman[544770]: 2025-10-03 11:35:40.777978488 +0000 UTC m=+0.212763510 container init 9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:35:40 compute-0 podman[544770]: 2025-10-03 11:35:40.787154352 +0000 UTC m=+0.221939364 container start 9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:35:40 compute-0 podman[544770]: 2025-10-03 11:35:40.792631228 +0000 UTC m=+0.227416260 container attach 9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.909 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.911 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.920 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.920 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.920 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.920 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.921 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.922 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.922 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.922 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.923 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.923 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.923 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.923 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.924 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.924 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.924 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.924 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.924 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.925 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.925 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.926 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.926 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a95f1cf50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
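The registration lines above all reuse the same ThreadPoolExecutor (0x7f1a95f1cf50) and start each pollster with empty cache, history, and discovery-cache dicts. A minimal sketch of that bookkeeping, with illustrative names rather than ceilometer's actual internals:

    # Hedged sketch of what register_pollster_execution (manager.py:276)
    # appears to record per stevedore extension; names are illustrative.
    from concurrent.futures import ThreadPoolExecutor

    class PollsterRegistry:
        def __init__(self, max_workers=10):
            # One shared executor, mirroring the single ThreadPoolExecutor
            # object reused across every registration line above.
            self.executor = ThreadPoolExecutor(max_workers=max_workers)
            self.registrations = []

        def register(self, extension, source):
            # Each pollster starts with empty per-cycle state, matching the
            # logged "cache [{}], pollster history [{}], discovery cache [{}]".
            self.registrations.append({
                'extension': extension,
                'source': source,
                'cache': {},
                'history': {},
                'discovery_cache': {},
            })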
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.926 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '443e486d-1bf2-4550-a4ae-32f0f8f4af19', 'name': 'te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.931 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.935 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'name': 'te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
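discover_libvirt_polling (discovery.py:315) emits one dict per running domain, and the three records above share the same shape. A hedged sketch of that record, with field names copied from the log; the helper itself is purely illustrative:

    # Minimal sketch of the per-instance record logged at discovery.py:315.
    def make_instance_record(domain_uuid, name, flavor, image_id, host,
                             tenant_id, user_id, host_id, metadata=None):
        return {
            'id': domain_uuid,
            'name': name,
            'flavor': flavor,            # id/name/vcpus/ram/disk/ephemeral/swap
            'image': {'id': image_id},
            'os_type': 'hvm',
            'architecture': 'x86_64',
            'OS-EXT-SRV-ATTR:host': host,
            'OS-EXT-STS:vm_state': 'running',
            'tenant_id': tenant_id,
            'user_id': user_id,
            'hostId': host_id,
            'status': 'active',
            'metadata': metadata or {},
        }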
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.936 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
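manager.py:333/355 shows the per-pollster coordination gate: with no coordination group configured and hashrings [None], the agent polls everything locally. A sketch of that decision under those assumptions, with the tooz hash-ring lookup elided:

    def needs_coordination(source_group, hashrings):
        # manager.py:333: only sources that name a coordination group
        # consult a hash ring; here the group name is None for every pollster.
        if source_group is None:
            return False
        # manager.py:355 would otherwise check ring membership.
        return source_group in (hashrings or {})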
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.937 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.938 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:35:40.937116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
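Note the PID switch: worker 14 logs the heartbeat, and process 12 logs "Updated heartbeat" a moment later, suggesting a handoff to a separate status keeper. A speculative sketch of that handoff; the queue-based design and names are assumptions, not ceilometer's implementation:

    import datetime

    def heartbeat(status_queue, meter):
        # Worker side (PID suffix 14 above): stamp the meter and hand it off.
        status_queue.put((meter, datetime.datetime.now(datetime.timezone.utc)))

    def record_heartbeat(status_queue, status_table):
        # Status side (PID suffix 12): persist the stamp, producing the
        # "Updated heartbeat for <meter> (<timestamp>)" line.
        meter, when = status_queue.get()
        status_table[meter] = when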
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.945 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.950 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.955 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
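_stats_to_sample (pollsters/__init__.py:108) prints "<instance>/<meter> volume: N" for each statistic it converts. A minimal sketch of that conversion; the Sample shape here is illustrative:

    import dataclasses
    import datetime

    @dataclasses.dataclass
    class Sample:
        name: str
        volume: int
        unit: str
        resource_id: str
        timestamp: str

    def stats_to_sample(instance_id, meter, unit, value):
        # Mirrors the "<instance>/<meter> volume: N" debug line above.
        print(f"{instance_id}/{meter} volume: {value}")
        return Sample(meter, value, unit, instance_id,
                      datetime.datetime.now(datetime.timezone.utc).isoformat())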
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.956 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.957 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.957 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.958 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.958 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.960 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:35:40.957620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.961 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:35:40.960908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.986 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:40.987 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.040 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.041 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
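Each instance reports one capacity value per attached block device: the 1073741824-byte entries match the flavors' 1 GB root disks, the ~498 KiB entries (509952, 485376) look like config-drive devices, and test_0 reports three devices, consistent with its ephemeral disk. libvirt's blockInfo() returns a (capacity, allocation, physical) triple per device, which would back the capacity/allocation/usage meters polled in this cycle, though which tuple member feeds which meter is left open here. A hedged sketch; the connection URI and XML parsing are assumptions:

    import libvirt
    from xml.etree import ElementTree

    def device_block_info(conn_uri, instance_name):
        conn = libvirt.open(conn_uri)              # e.g. 'qemu:///system' (assumption)
        dom = conn.lookupByName(instance_name)     # e.g. 'instance-00000001'
        tree = ElementTree.fromstring(dom.XMLDesc())
        for target in tree.findall('./devices/disk/target'):
            dev = target.get('dev')                # 'vda', 'vdb', ...
            capacity, allocation, physical = dom.blockInfo(dev)
            yield dev, capacity, allocation, physical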
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.042 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:35:41.043029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.070 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 29019136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.070 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.110 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.110 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.111 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.131 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 31861248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.132 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.133 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.134 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 2011591932 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.135 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 159834513 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.135 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.136 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:35:41.134426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.136 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.137 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 2658882306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.137 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 170448087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
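The read.latency volumes are on the order of 10^9, which fits cumulative nanosecond counters rather than an instantaneous latency. libvirt's blockStatsFlags() exposes such counters (e.g. 'rd_total_times'); a sketch of reading one, with dom standing for a libvirt.virDomain:

    def read_latency_ns(dom, dev):
        # 'rd_total_times' is cumulative time spent on reads in nanoseconds,
        # matching the magnitude of the volumes logged above.
        stats = dom.blockStatsFlags(dev)
        return stats.get('rd_total_times', 0)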
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.138 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.139 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.140 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 1049 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.140 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.141 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.141 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.142 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.142 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 1164 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.142 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:35:41.139746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.144 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.145 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.145 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.146 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.146 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.146 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.147 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:35:41.145105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.148 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.148 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.149 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.149 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.150 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.150 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:35:41.150320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.150 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.151 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.151 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.152 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.152 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.153 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.153 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.154 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.155 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.156 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.156 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.157 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:35:41.155753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.157 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.158 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.158 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.159 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.160 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.161 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:35:41.161815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.185 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.205 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.228 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
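power.state reports volume 1 for all three instances, matching the running state (libvirt's VIR_DOMAIN_RUNNING is also 1). A sketch of the mapping; the non-running values follow the nova power-state convention and are assumptions here:

    import libvirt

    # Nova-style numeric power states; only RUNNING -> 1 is confirmed by the log.
    POWER_STATE = {
        libvirt.VIR_DOMAIN_RUNNING: 1,
        libvirt.VIR_DOMAIN_PAUSED: 3,
        libvirt.VIR_DOMAIN_SHUTOFF: 4,
    }

    def power_state(dom):
        state, _reason = dom.state()
        return POWER_STATE.get(state, 0)   # 0 = NOSTATE for anything unmapped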
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.229 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.231 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 9738524142 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:35:41.230834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.231 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.232 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.232 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.233 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.233 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 11038555047 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.234 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.235 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.235 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.236 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.236 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.236 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:35:41.236443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.236 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 305 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.237 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.237 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.238 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.238 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.239 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 349 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.239 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.240 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.240 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.241 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.241 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.241 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.242 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.242 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.242 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:35:41.241983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.243 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.243 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.244 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
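A .delta meter reports the change in a cumulative counter since the previous cycle, so idle interfaces log "volume: 0" as above. A sketch of that subtraction, with the cache dict standing in for the pollster's per-resource history:

    def bytes_delta(cache, resource_id, current_total):
        # Diff against the previous cumulative reading; first sight yields 0.
        previous = cache.get(resource_id)
        cache[resource_id] = current_total
        if previous is None:
            return 0
        return max(current_total - previous, 0)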
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.245 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.245 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
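Unlike the meters above, the .rate pollster is skipped outright when discovery turns up nothing new this cycle (manager.py:321). A speculative sketch of that gate; the exact skip semantics are inferred from the log message, not from ceilometer's source:

    def resources_to_poll(discovered, discovery_cache):
        # Keep only resources not seen before; an empty result means the
        # pollster is skipped for this cycle.
        new = [r for r in discovered if r['id'] not in discovery_cache]
        for r in new:
            discovery_cache[r['id']] = r
        return new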
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.246 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.246 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.248 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.248 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:35:41.248200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.249 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.251 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.251 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.252 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.252 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.253 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:35:41.252587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.253 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.254 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.254 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.255 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.256 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.258 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.258 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:35:41.258143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.260 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.262 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:35:41.262007) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.262 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.263 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.264 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.266 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:35:41.267505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.268 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/cpu volume: 294570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.268 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 114330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.269 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/cpu volume: 335970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
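
The cpu volumes just above (for example 294570000000 for instance 443e486d) are cumulative guest CPU time in nanoseconds as exposed by libvirt, not an instantaneous utilization; a consumer derives a rate from two successive samples. A minimal sketch of that conversion, with every input value assumed rather than taken from this log:

    # Sketch: derive CPU utilization from two cumulative "cpu" samples.
    # All values below are illustrative, not read from the log.
    def cpu_util_percent(t0, ns0, t1, ns1, vcpus):
        """t0/t1: sample times in seconds; ns0/ns1: cumulative CPU ns."""
        wall_ns = (t1 - t0) * 1e9
        if wall_ns <= 0:
            raise ValueError("samples must be time-ordered")
        # fraction of available CPU time consumed, normalized by vCPU count
        return 100.0 * (ns1 - ns0) / (wall_ns * vcpus)

    # 60 ms of CPU consumed over 10 s of wall time on 1 vCPU -> 0.6%
    print(cpu_util_percent(0, 294_570_000_000, 10, 294_630_000_000, 1))
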
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.271 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.272 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:35:41.272662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.273 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.274 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.274 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.275 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.275 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.276 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.277 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.277 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:35:41.276820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.278 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.279 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.279 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.280 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.280 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.281 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:35:41.280726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.281 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.282 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.283 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.284 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.284 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.284 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:35:41.284450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.285 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/memory.usage volume: 43.33984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.285 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.286 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/memory.usage volume: 42.3671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.286 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.287 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.288 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.289 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:35:41.289014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.289 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.290 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.290 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes volume: 1820 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.291 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.292 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.293 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:35:41.292827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.293 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.294 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:35:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:35:41.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
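
The run of "Finished processing pollster" lines closes one polling task: for each pollster the manager executes discovery, skips the pollster when discovery finds nothing (as with network.incoming.bytes.rate and network.outgoing.bytes.rate above), and otherwise emits one sample per discovered resource. A heavily simplified sketch of that control flow, using illustrative names rather than ceilometer's real classes:

    # Illustrative sketch of the polling cycle visible in this log;
    # "pollster" objects here are hypothetical duck-typed stand-ins.
    def run_polling_task(pollsters, discover):
        samples = []
        for pollster in pollsters:
            resources = discover("local_instances")
            if not resources:
                print(f"Skip pollster {pollster.name}, no new resources found this cycle")
                continue
            print(f"Polling pollster {pollster.name}")
            samples.extend(pollster.get_samples(resources))  # one sample per instance
            print(f"Finished polling pollster {pollster.name}")
        return samples
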
Oct  3 11:35:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:35:41.714 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:35:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:35:41.715 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:35:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:35:41.716 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
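
The three ovn_metadata_agent lines above are oslo.concurrency's standard trace around a synchronized section: how long the caller waited for the named in-process lock and how long it was held. The same instrumentation comes for free from the real lockutils API; the lock name below is taken from the log, the empty body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # runs with the named lock held; oslo.concurrency emits the
        # "Acquiring"/"acquired"/"released" lines with wait/held times
        pass

    _check_child_processes()
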
Oct  3 11:35:41 compute-0 tender_noether[544784]: {
Oct  3 11:35:41 compute-0 tender_noether[544784]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "osd_id": 1,
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "type": "bluestore"
Oct  3 11:35:41 compute-0 tender_noether[544784]:    },
Oct  3 11:35:41 compute-0 tender_noether[544784]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "osd_id": 2,
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "type": "bluestore"
Oct  3 11:35:41 compute-0 tender_noether[544784]:    },
Oct  3 11:35:41 compute-0 tender_noether[544784]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "osd_id": 0,
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:35:41 compute-0 tender_noether[544784]:        "type": "bluestore"
Oct  3 11:35:41 compute-0 tender_noether[544784]:    }
Oct  3 11:35:41 compute-0 tender_noether[544784]: }
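
The JSON block printed by the short-lived tender_noether container (a randomly named podman container that exits right after, below) has the shape of ceph-volume lvm list --format json output as gathered by cephadm: one entry per OSD, keyed by osd_uuid. A small sketch parsing a re-typed fragment of it into an osd_id-to-device map; in practice the JSON would be captured from the container's stdout:

    import json

    raw = """{
      "16cef594-0067-4499-9298-5d83edf70190": {
        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
        "type": "bluestore"
      }
    }"""
    by_osd = {e["osd_id"]: e["device"] for e in json.loads(raw).values()}
    print(by_osd)  # {1: '/dev/mapper/ceph_vg1-ceph_lv1'}
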
Oct  3 11:35:41 compute-0 systemd[1]: libpod-9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982.scope: Deactivated successfully.
Oct  3 11:35:41 compute-0 systemd[1]: libpod-9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982.scope: Consumed 1.117s CPU time.
Oct  3 11:35:41 compute-0 podman[544820]: 2025-10-03 11:35:41.993131493 +0000 UTC m=+0.033058400 container died 9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noether, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec4066f2dc5705fc8998c6e0aee8b8542c19914cb14c9dca429bbb9714023a6-merged.mount: Deactivated successfully.
Oct  3 11:35:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3856: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:42 compute-0 podman[544820]: 2025-10-03 11:35:42.087333893 +0000 UTC m=+0.127260810 container remove 9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noether, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:35:42 compute-0 systemd[1]: libpod-conmon-9815d74160289e989b19fcb1b27066a1d70d7e2b481b625558da5006a958f982.scope: Deactivated successfully.
Oct  3 11:35:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:35:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:42 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:35:42 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev fd6e696d-7a01-4234-988b-e364180647d6 does not exist
Oct  3 11:35:42 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 28867ea7-46d0-40d8-abe1-148a720f6cd3 does not exist
Oct  3 11:35:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:43 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:35:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:43 compute-0 nova_compute[351685]: 2025-10-03 11:35:43.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3857: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:45 compute-0 nova_compute[351685]: 2025-10-03 11:35:45.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3858: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:35:46
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.log', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Oct  3 11:35:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:35:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:35:47 compute-0 podman[544885]: 2025-10-03 11:35:47.832608325 +0000 UTC m=+0.088075214 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:35:47 compute-0 podman[544886]: 2025-10-03 11:35:47.848918198 +0000 UTC m=+0.092743134 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git)
Oct  3 11:35:47 compute-0 podman[544887]: 2025-10-03 11:35:47.862898426 +0000 UTC m=+0.113038105 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001)
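
The health_status=healthy events above come from podman's healthcheck timer running each container's configured test (the '/openstack/healthcheck ...' commands embedded in config_data). The same check can be driven by hand with the podman healthcheck run subcommand, which exits 0 when the test passes; a sketch, with the container name taken from the log:

    import subprocess

    def is_healthy(name: str) -> bool:
        # "podman healthcheck run" executes the container's configured
        # healthcheck test and returns 0 on success
        return subprocess.run(["podman", "healthcheck", "run", name],
                              capture_output=True).returncode == 0

    print(is_healthy("podman_exporter"))
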
Oct  3 11:35:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3859: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:48 compute-0 nova_compute[351685]: 2025-10-03 11:35:48.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3860: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:50 compute-0 nova_compute[351685]: 2025-10-03 11:35:50.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3861: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:53 compute-0 nova_compute[351685]: 2025-10-03 11:35:53.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:35:53 compute-0 nova_compute[351685]: 2025-10-03 11:35:53.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:35:53 compute-0 nova_compute[351685]: 2025-10-03 11:35:53.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:35:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3862: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:35:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/305858216' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:35:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:35:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/305858216' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
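
These two audit entries show client.openstack (from 192.168.122.10, plausibly a control-plane Cinder service, given that it is sizing the 'volumes' pool) polling cluster capacity. The same queries can be reproduced by hand as `ceph df -f json` and `ceph osd pool get-quota volumes -f json` with the openstack keyring.
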
Oct  3 11:35:54 compute-0 nova_compute[351685]: 2025-10-03 11:35:54.421 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:35:54 compute-0 nova_compute[351685]: 2025-10-03 11:35:54.422 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:35:54 compute-0 nova_compute[351685]: 2025-10-03 11:35:54.423 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:35:55 compute-0 nova_compute[351685]: 2025-10-03 11:35:55.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:35:55 compute-0 nova_compute[351685]: 2025-10-03 11:35:55.788 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updating instance_info_cache with network_info: [{"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:35:55 compute-0 nova_compute[351685]: 2025-10-03 11:35:55.812 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:35:55 compute-0 nova_compute[351685]: 2025-10-03 11:35:55.813 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
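
The 11:35:53–11:35:55 sequence above is one complete pass of the `_heal_instance_info_cache` periodic task: take the per-instance `refresh_cache-<uuid>` lock, rebuild network_info from Neutron, write it back to the instance's info cache, release. A minimal sketch of that lock pattern, assuming a placeholder `neutron_api` client and context `ctxt` (the real logic lives in the nova/compute/manager.py and nova/network/neutron.py paths cited in the log):

    from oslo_concurrency import lockutils

    def heal_one(ctxt, instance):
        # Mirrors the "Acquiring/Acquired/Releasing lock" lines above
        with lockutils.lock('refresh_cache-' + instance.uuid):
            # Forcefully refresh from Neutron, as in _get_instance_nw_info
            nw_info = neutron_api.get_instance_nw_info(ctxt, instance)  # placeholder call
            instance.info_cache.network_info = nw_info
            instance.info_cache.save()
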
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3863: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00207013190238979 of space, bias 1.0, pg target 0.6210395707169369 quantized to 32 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:35:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
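
The autoscaler arithmetic above is internally consistent: each logged pg target equals capacity_ratio × bias × 300, where the constant 300 plausibly decomposes as mon_target_pg_per_osd (default 100) times the three OSDs behind this 60 GiB cluster; the result is then quantized to a power of two, with the floors and hysteresis visible in the "quantized to ... (current ...)" pairs. A quick check in Python — the ×300 decomposition is inferred from these numbers, not quoted from the pg_autoscaler source:

    >>> 0.00207013190238979 * 1.0 * 300      # pool 'vms'
    0.621039570716937
    >>> 5.087256625643029e-07 * 4.0 * 300    # 'cephfs.cephfs.meta', bias 4.0
    0.0006104707950771635

Both reproduce the logged pg targets exactly.
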
Oct  3 11:35:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3864: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:35:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:35:58 compute-0 nova_compute[351685]: 2025-10-03 11:35:58.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:35:59 compute-0 nova_compute[351685]: 2025-10-03 11:35:59.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:35:59 compute-0 nova_compute[351685]: 2025-10-03 11:35:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:35:59 compute-0 podman[157165]: time="2025-10-03T11:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:35:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:35:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9600 "" "Go-http-client/1.1"
Oct  3 11:36:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3865: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:00 compute-0 nova_compute[351685]: 2025-10-03 11:36:00.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:36:01 compute-0 openstack_network_exporter[367524]: ERROR   11:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:36:01 compute-0 openstack_network_exporter[367524]: ERROR   11:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:36:01 compute-0 openstack_network_exporter[367524]: ERROR   11:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:36:01 compute-0 openstack_network_exporter[367524]: ERROR   11:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:36:01 compute-0 openstack_network_exporter[367524]: ERROR   11:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
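
These exporter errors are expected on a compute node rather than signs of breakage: ovn-northd and the OVN databases run on the control plane, so no ovn-northd control socket exists here for appctl to find, and the dpif-netdev/* queries only answer on a userspace (DPDK) datapath, whereas the port binding logged at 11:35:55 shows this host on the kernel datapath ("datapath_type": "system"). The ovsdb-server complaint likewise suggests the exporter's configured control-socket path does not match where this host's ovsdb-server exposes it.
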
Oct  3 11:36:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3866: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:03 compute-0 nova_compute[351685]: 2025-10-03 11:36:03.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:36:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3867: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:04 compute-0 nova_compute[351685]: 2025-10-03 11:36:04.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:36:04 compute-0 nova_compute[351685]: 2025-10-03 11:36:04.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:36:04 compute-0 nova_compute[351685]: 2025-10-03 11:36:04.761 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:36:04 compute-0 nova_compute[351685]: 2025-10-03 11:36:04.762 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:36:04 compute-0 nova_compute[351685]: 2025-10-03 11:36:04.762 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:36:04 compute-0 nova_compute[351685]: 2025-10-03 11:36:04.763 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:36:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:36:05 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454260627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.288 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
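
This CMD line is the other half of the mon audit entry just above: the resource tracker shells out to `ceph df` to size the RBD pool before reporting disk capacity. A minimal sketch of the round trip, assuming the `total_bytes`/`total_avail_bytes` field names of the `ceph df -f json` schema (the production code is nova's libvirt RBD storage driver):

    import json
    from oslo_concurrency import processutils

    # Same command the log shows returning 0 in 0.525s
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    total_gb = stats['stats']['total_bytes'] / (1 << 30)        # ~60 GiB per the pgmap lines
    avail_gb = stats['stats']['total_avail_bytes'] / (1 << 30)  # field names are an assumption
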
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.404 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.405 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.410 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.410 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.411 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.415 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.416 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.820 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.821 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3174MB free_disk=59.864097595214844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.822 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.822 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.929 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.929 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.929 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.930 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:36:05 compute-0 nova_compute[351685]: 2025-10-03 11:36:05.930 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:36:06 compute-0 nova_compute[351685]: 2025-10-03 11:36:06.029 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:36:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3868: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:36:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371125811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:36:06 compute-0 nova_compute[351685]: 2025-10-03 11:36:06.555 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:36:06 compute-0 nova_compute[351685]: 2025-10-03 11:36:06.563 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:36:06 compute-0 nova_compute[351685]: 2025-10-03 11:36:06.582 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:36:06 compute-0 nova_compute[351685]: 2025-10-03 11:36:06.583 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:36:06 compute-0 nova_compute[351685]: 2025-10-03 11:36:06.584 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
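
Given the inventory just reported, placement's usable capacity per resource class is (total - reserved) × allocation_ratio, which reconciles the "Final resource view" with the three instance allocations above: 3 vCPUs are consumed out of an effective 32, not out of the 8 physical. A quick check, with values copied from the inventory line:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
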
Oct  3 11:36:06 compute-0 podman[544988]: 2025-10-03 11:36:06.8322235 +0000 UTC m=+0.099809589 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Oct  3 11:36:06 compute-0 podman[544989]: 2025-10-03 11:36:06.851841239 +0000 UTC m=+0.102479725 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, config_id=edpm, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  3 11:36:06 compute-0 podman[545026]: 2025-10-03 11:36:06.948692443 +0000 UTC m=+0.086281756 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:36:06 compute-0 podman[545029]: 2025-10-03 11:36:06.954304673 +0000 UTC m=+0.081691629 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:36:06 compute-0 podman[545028]: 2025-10-03 11:36:06.974563992 +0000 UTC m=+0.095920935 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  3 11:36:06 compute-0 podman[545031]: 2025-10-03 11:36:06.987142636 +0000 UTC m=+0.104470190 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid)
Oct  3 11:36:06 compute-0 podman[545030]: 2025-10-03 11:36:06.990346719 +0000 UTC m=+0.117751665 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  3 11:36:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3869: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:08 compute-0 nova_compute[351685]: 2025-10-03 11:36:08.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:36:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3870: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:10 compute-0 nova_compute[351685]: 2025-10-03 11:36:10.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:36:10 compute-0 nova_compute[351685]: 2025-10-03 11:36:10.584 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:36:10 compute-0 nova_compute[351685]: 2025-10-03 11:36:10.585 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
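
`_reclaim_queued_deletes` short-circuits here because `reclaim_instance_interval` is at its default of 0. With a positive value, instance deletion becomes a soft delete and this task later purges SOFT_DELETED instances older than the interval; the knob (name and default per nova's configuration reference) lives in nova.conf:

    [DEFAULT]
    # Seconds to retain soft-deleted instances before reclaim; <= 0 disables soft delete
    reclaim_instance_interval = 0
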
Oct  3 11:36:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3871: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:13 compute-0 nova_compute[351685]: 2025-10-03 11:36:13.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:36:13 compute-0 nova_compute[351685]: 2025-10-03 11:36:13.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:36:13 compute-0 nova_compute[351685]: 2025-10-03 11:36:13.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:36:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3872: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:14 compute-0 nova_compute[351685]: 2025-10-03 11:36:14.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:36:15 compute-0 nova_compute[351685]: 2025-10-03 11:36:15.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:36:15 compute-0 nova_compute[351685]: 2025-10-03 11:36:15.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:36:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3873: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:36:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:36:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3874: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.147678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491378147700, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 251, "total_data_size": 3488145, "memory_usage": 3532184, "flush_reason": "Manual Compaction"}
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491378170539, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 3410882, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76279, "largest_seqno": 78326, "table_properties": {"data_size": 3401438, "index_size": 6001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18543, "raw_average_key_size": 20, "raw_value_size": 3382862, "raw_average_value_size": 3657, "num_data_blocks": 267, "num_entries": 925, "num_filter_entries": 925, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759491150, "oldest_key_time": 1759491150, "file_creation_time": 1759491378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 22935 microseconds, and 8232 cpu microseconds.
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.170607) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 3410882 bytes OK
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.170630) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.172466) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.172481) EVENT_LOG_v1 {"time_micros": 1759491378172476, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.172499) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 3479582, prev total WAL file size 3479582, number of live WAL files 2.
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.173819) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(3330KB)], [188(8654KB)]
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491378173877, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 12272902, "oldest_snapshot_seqno": -1}
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 8596 keys, 10577941 bytes, temperature: kUnknown
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491378236186, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 10577941, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10525554, "index_size": 29810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 226840, "raw_average_key_size": 26, "raw_value_size": 10375003, "raw_average_value_size": 1206, "num_data_blocks": 1166, "num_entries": 8596, "num_filter_entries": 8596, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759491378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.236472) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 10577941 bytes
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.238163) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 169.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 9110, records dropped: 514 output_compression: NoCompression
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.238204) EVENT_LOG_v1 {"time_micros": 1759491378238172, "job": 118, "event": "compaction_finished", "compaction_time_micros": 62420, "compaction_time_cpu_micros": 39739, "output_level": 6, "num_output_files": 1, "total_output_size": 10577941, "num_input_records": 9110, "num_output_records": 8596, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491378239054, "job": 118, "event": "table_file_deletion", "file_number": 190}
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491378240766, "job": 118, "event": "table_file_deletion", "file_number": 188}
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.173691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.240957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.240964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.240966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.240968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:36:18 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:36:18.240970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
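
The JOB 118 numbers above are worth decoding, since every headline figure in the "compacted to" summary derives from the event-log byte counts (and B/µs equals MB/s):

    in_bytes  = 12272902   # input_data_size in the compaction_started event
    out_bytes = 10577941   # table #191 file_size
    t_us      = 62420      # compaction_time_micros
    print(in_bytes / t_us, out_bytes / t_us)   # ~196.6, ~169.5 -> "MB/sec: 196.6 rd, 169.5 wr"
    l0_mb, l6_mb, out_mb = 3.3, 8.5, 10.1      # from "in(3.3, 8.5 ...) out(10.1 ...)"
    print(out_mb / l0_mb)                      # ~3.1 -> write-amplify(3.1)
    print((l0_mb + l6_mb + out_mb) / l0_mb)    # ~6.6 -> read-write-amplify(6.7) on exact bytes
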
Oct  3 11:36:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:18 compute-0 nova_compute[351685]: 2025-10-03 11:36:18.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:36:18 compute-0 podman[545127]: 2025-10-03 11:36:18.885580692 +0000 UTC m=+0.104730848 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi)
Oct  3 11:36:18 compute-0 podman[545125]: 2025-10-03 11:36:18.900398286 +0000 UTC m=+0.137241440 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:36:18 compute-0 podman[545126]: 2025-10-03 11:36:18.923013121 +0000 UTC m=+0.155002418 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Oct  3 11:36:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3875: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 85 B/s wr, 1 op/s
Oct  3 11:36:20 compute-0 nova_compute[351685]: 2025-10-03 11:36:20.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3876: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 170 B/s wr, 3 op/s
Oct  3 11:36:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:23 compute-0 nova_compute[351685]: 2025-10-03 11:36:23.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3877: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Oct  3 11:36:25 compute-0 nova_compute[351685]: 2025-10-03 11:36:25.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3878: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 938 B/s wr, 5 op/s
Oct  3 11:36:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3879: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 938 B/s wr, 5 op/s
Oct  3 11:36:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:28 compute-0 nova_compute[351685]: 2025-10-03 11:36:28.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:29 compute-0 podman[157165]: time="2025-10-03T11:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:36:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:36:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9602 "" "Go-http-client/1.1"
Oct  3 11:36:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3880: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Oct  3 11:36:30 compute-0 nova_compute[351685]: 2025-10-03 11:36:30.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:31 compute-0 openstack_network_exporter[367524]: ERROR   11:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:36:31 compute-0 openstack_network_exporter[367524]: ERROR   11:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:36:31 compute-0 openstack_network_exporter[367524]: ERROR   11:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:36:31 compute-0 openstack_network_exporter[367524]: ERROR   11:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:36:31 compute-0 openstack_network_exporter[367524]: ERROR   11:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:36:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3881: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 8.5 KiB/s wr, 3 op/s
Oct  3 11:36:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:33 compute-0 nova_compute[351685]: 2025-10-03 11:36:33.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3882: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 8.4 KiB/s wr, 2 op/s
Oct  3 11:36:35 compute-0 nova_compute[351685]: 2025-10-03 11:36:35.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3883: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 8.4 KiB/s wr, 0 op/s
Oct  3 11:36:37 compute-0 podman[545184]: 2025-10-03 11:36:37.84766647 +0000 UTC m=+0.111037359 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:36:37 compute-0 podman[545192]: 2025-10-03 11:36:37.868207919 +0000 UTC m=+0.103049414 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 11:36:37 compute-0 podman[545194]: 2025-10-03 11:36:37.872050572 +0000 UTC m=+0.105347568 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct  3 11:36:37 compute-0 podman[545205]: 2025-10-03 11:36:37.8954088 +0000 UTC m=+0.109447879 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:36:37 compute-0 podman[545189]: 2025-10-03 11:36:37.896305869 +0000 UTC m=+0.132710164 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:36:37 compute-0 podman[545185]: 2025-10-03 11:36:37.898088496 +0000 UTC m=+0.136897148 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Oct  3 11:36:37 compute-0 podman[545197]: 2025-10-03 11:36:37.939579457 +0000 UTC m=+0.164111112 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  3 11:36:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3884: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.7 KiB/s wr, 0 op/s
Oct  3 11:36:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:38 compute-0 nova_compute[351685]: 2025-10-03 11:36:38.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3885: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 8.0 KiB/s wr, 0 op/s
Oct  3 11:36:40 compute-0 nova_compute[351685]: 2025-10-03 11:36:40.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:36:41.715 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:36:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:36:41.716 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:36:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:36:41.716 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:36:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3886: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 341 B/s wr, 0 op/s
Oct  3 11:36:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:43 compute-0 nova_compute[351685]: 2025-10-03 11:36:43.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3887: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct  3 11:36:44 compute-0 podman[545586]: 2025-10-03 11:36:44.504204069 +0000 UTC m=+0.067300217 container create cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_robinson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:36:44 compute-0 systemd[1]: Started libpod-conmon-cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce.scope.
Oct  3 11:36:44 compute-0 podman[545586]: 2025-10-03 11:36:44.476042087 +0000 UTC m=+0.039138285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:36:44 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:36:44 compute-0 podman[545586]: 2025-10-03 11:36:44.651593313 +0000 UTC m=+0.214689421 container init cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_robinson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:36:44 compute-0 podman[545586]: 2025-10-03 11:36:44.66396245 +0000 UTC m=+0.227058548 container start cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:36:44 compute-0 podman[545586]: 2025-10-03 11:36:44.669399234 +0000 UTC m=+0.232495392 container attach cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:36:44 compute-0 blissful_robinson[545602]: 167 167
Oct  3 11:36:44 compute-0 systemd[1]: libpod-cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce.scope: Deactivated successfully.
Oct  3 11:36:44 compute-0 podman[545586]: 2025-10-03 11:36:44.678093383 +0000 UTC m=+0.241189551 container died cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_robinson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:36:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1468d87850bb8355c60cdc2077718fe416dc7555ae4a8f069bb5c6175b5b393-merged.mount: Deactivated successfully.
Oct  3 11:36:44 compute-0 podman[545586]: 2025-10-03 11:36:44.747462745 +0000 UTC m=+0.310558863 container remove cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_robinson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:36:44 compute-0 systemd[1]: libpod-conmon-cb1f1503b4809f3ba320cd3674afd8c09f12924ea2524763ec4341a32e9764ce.scope: Deactivated successfully.
Oct  3 11:36:44 compute-0 podman[545625]: 2025-10-03 11:36:44.992890111 +0000 UTC m=+0.070415428 container create 2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:36:45 compute-0 podman[545625]: 2025-10-03 11:36:44.959517091 +0000 UTC m=+0.037042448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:36:45 compute-0 systemd[1]: Started libpod-conmon-2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c.scope.
Oct  3 11:36:45 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc5db258a4bb1b0324c1c0280ad3bcee671fe60840bb7108923a5f8322ca12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc5db258a4bb1b0324c1c0280ad3bcee671fe60840bb7108923a5f8322ca12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc5db258a4bb1b0324c1c0280ad3bcee671fe60840bb7108923a5f8322ca12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc5db258a4bb1b0324c1c0280ad3bcee671fe60840bb7108923a5f8322ca12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:45 compute-0 podman[545625]: 2025-10-03 11:36:45.16945124 +0000 UTC m=+0.246976567 container init 2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 11:36:45 compute-0 podman[545625]: 2025-10-03 11:36:45.187563291 +0000 UTC m=+0.265088598 container start 2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  3 11:36:45 compute-0 podman[545625]: 2025-10-03 11:36:45.193138719 +0000 UTC m=+0.270664026 container attach 2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:36:45 compute-0 nova_compute[351685]: 2025-10-03 11:36:45.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3888: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:36:46
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'images', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control']
Oct  3 11:36:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:36:47 compute-0 zen_morse[545641]: [
Oct  3 11:36:47 compute-0 zen_morse[545641]:    {
Oct  3 11:36:47 compute-0 zen_morse[545641]:        "available": false,
Oct  3 11:36:47 compute-0 zen_morse[545641]:        "ceph_device": false,
Oct  3 11:36:47 compute-0 zen_morse[545641]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  3 11:36:47 compute-0 zen_morse[545641]:        "lsm_data": {},
Oct  3 11:36:47 compute-0 zen_morse[545641]:        "lvs": [],
Oct  3 11:36:47 compute-0 zen_morse[545641]:        "path": "/dev/sr0",
Oct  3 11:36:47 compute-0 zen_morse[545641]:        "rejected_reasons": [
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "Has a FileSystem",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "Insufficient space (<5GB)"
Oct  3 11:36:47 compute-0 zen_morse[545641]:        ],
Oct  3 11:36:47 compute-0 zen_morse[545641]:        "sys_api": {
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "actuators": null,
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "device_nodes": "sr0",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "devname": "sr0",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "human_readable_size": "482.00 KB",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "id_bus": "ata",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "model": "QEMU DVD-ROM",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "nr_requests": "2",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "parent": "/dev/sr0",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "partitions": {},
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "path": "/dev/sr0",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "removable": "1",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "rev": "2.5+",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "ro": "0",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "rotational": "0",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "sas_address": "",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "sas_device_handle": "",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "scheduler_mode": "mq-deadline",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "sectors": 0,
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "sectorsize": "2048",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "size": 493568.0,
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "support_discard": "2048",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "type": "disk",
Oct  3 11:36:47 compute-0 zen_morse[545641]:            "vendor": "QEMU"
Oct  3 11:36:47 compute-0 zen_morse[545641]:        }
Oct  3 11:36:47 compute-0 zen_morse[545641]:    }
Oct  3 11:36:47 compute-0 zen_morse[545641]: ]
Oct  3 11:36:47 compute-0 systemd[1]: libpod-2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c.scope: Deactivated successfully.
Oct  3 11:36:47 compute-0 systemd[1]: libpod-2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c.scope: Consumed 2.383s CPU time.
Oct  3 11:36:47 compute-0 podman[548111]: 2025-10-03 11:36:47.581929808 +0000 UTC m=+0.046172711 container died 2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:36:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4fc5db258a4bb1b0324c1c0280ad3bcee671fe60840bb7108923a5f8322ca12-merged.mount: Deactivated successfully.
Oct  3 11:36:47 compute-0 podman[548111]: 2025-10-03 11:36:47.670866859 +0000 UTC m=+0.135109742 container remove 2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct  3 11:36:47 compute-0 systemd[1]: libpod-conmon-2d9d3772e47690d609019865d6c3b3c94730010a7105ffccc53d00b49d5b878c.scope: Deactivated successfully.
Oct  3 11:36:47 compute-0 nova_compute[351685]: 2025-10-03 11:36:47.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:36:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:36:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:36:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:36:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:36:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev db471a72-4d06-4a5b-94c7-129408324cac does not exist
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9956ccfa-8433-497d-a2db-439ce55e7a73 does not exist
Oct  3 11:36:47 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev a968d257-daee-4c7c-9307-9100be6db0e5 does not exist
Oct  3 11:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:36:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:36:47 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:36:47 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:36:47 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:36:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3889: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct  3 11:36:48 compute-0 podman[548264]: 2025-10-03 11:36:48.572579997 +0000 UTC m=+0.051884333 container create f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:36:48 compute-0 systemd[1]: Started libpod-conmon-f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da.scope.
Oct  3 11:36:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:48 compute-0 podman[548264]: 2025-10-03 11:36:48.554981864 +0000 UTC m=+0.034286220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:36:48 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:36:48 compute-0 podman[548264]: 2025-10-03 11:36:48.693605956 +0000 UTC m=+0.172910312 container init f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct  3 11:36:48 compute-0 podman[548264]: 2025-10-03 11:36:48.711787479 +0000 UTC m=+0.191091825 container start f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:36:48 compute-0 podman[548264]: 2025-10-03 11:36:48.717878175 +0000 UTC m=+0.197182521 container attach f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:36:48 compute-0 vigilant_allen[548280]: 167 167
Oct  3 11:36:48 compute-0 systemd[1]: libpod-f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da.scope: Deactivated successfully.
Oct  3 11:36:48 compute-0 podman[548264]: 2025-10-03 11:36:48.720595752 +0000 UTC m=+0.199900088 container died f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:36:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:36:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:48 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-72af5e28d871de0a0d4291321f2c23f6d784de18c1115c2a3cb3a062838c2122-merged.mount: Deactivated successfully.
Oct  3 11:36:48 compute-0 podman[548264]: 2025-10-03 11:36:48.782887698 +0000 UTC m=+0.262192034 container remove f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:36:48 compute-0 systemd[1]: libpod-conmon-f76c53fd6643d09800b376ce7f811eb9093d9dbd74b962dd7f3af65664e222da.scope: Deactivated successfully.
Oct  3 11:36:48 compute-0 nova_compute[351685]: 2025-10-03 11:36:48.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:49 compute-0 podman[548303]: 2025-10-03 11:36:49.02635432 +0000 UTC m=+0.065962294 container create 64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  3 11:36:49 compute-0 systemd[1]: Started libpod-conmon-64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370.scope.
Oct  3 11:36:49 compute-0 podman[548303]: 2025-10-03 11:36:49.002661712 +0000 UTC m=+0.042269726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:36:49 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4684ce4c0668f894cf73ae8085276eab90111c527951bd8997de1f1904e3d9a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4684ce4c0668f894cf73ae8085276eab90111c527951bd8997de1f1904e3d9a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4684ce4c0668f894cf73ae8085276eab90111c527951bd8997de1f1904e3d9a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4684ce4c0668f894cf73ae8085276eab90111c527951bd8997de1f1904e3d9a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4684ce4c0668f894cf73ae8085276eab90111c527951bd8997de1f1904e3d9a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
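The recurring xfs warnings above are the kernel noting that these overlay-backed mounts carry 32-bit inode timestamps: without the xfs bigtime feature, on-disk timestamps saturate at the 0x7fffffff printed in each line. Where that ceiling lands can be checked directly from the hex value (plain Python; nothing assumed beyond the constant in the log):

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, as printed in the
    # kernel's "supports timestamps until 2038" warnings above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00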
Oct  3 11:36:49 compute-0 podman[548303]: 2025-10-03 11:36:49.135942263 +0000 UTC m=+0.175550257 container init 64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:36:49 compute-0 podman[548303]: 2025-10-03 11:36:49.152597877 +0000 UTC m=+0.192205831 container start 64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:36:49 compute-0 podman[548303]: 2025-10-03 11:36:49.157121652 +0000 UTC m=+0.196729616 container attach 64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:36:49 compute-0 podman[548322]: 2025-10-03 11:36:49.190157231 +0000 UTC m=+0.097299349 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  3 11:36:49 compute-0 podman[548316]: 2025-10-03 11:36:49.190793461 +0000 UTC m=+0.107724934 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:36:49 compute-0 podman[548321]: 2025-10-03 11:36:49.221915228 +0000 UTC m=+0.138832510 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9)
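The three health_status=healthy records above come from podman's periodic healthcheck timers for the edpm-managed containers (ceilometer_agent_ipmi, podman_exporter, kepler), each configured through the 'healthcheck' key visible in config_data. The same status and failing-streak fields those records report can be read back out of band; a minimal sketch, assuming the container names from the log still exist on this host and that podman exposes its Docker-compatible Health block in inspect output:

    import json
    import subprocess

    # Container names taken from the health_status records above.
    for name in ("ceilometer_agent_ipmi", "podman_exporter", "kepler"):
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        health = json.loads(out)[0]["State"]["Health"]
        print(name, health["Status"], "failing_streak:", health["FailingStreak"])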
Oct  3 11:36:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3890: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 0 op/s
Oct  3 11:36:50 compute-0 nova_compute[351685]: 2025-10-03 11:36:50.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:50 compute-0 inspiring_shaw[548333]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:36:50 compute-0 inspiring_shaw[548333]: --> relative data size: 1.0
Oct  3 11:36:50 compute-0 inspiring_shaw[548333]: --> All data devices are unavailable
Oct  3 11:36:50 compute-0 systemd[1]: libpod-64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370.scope: Deactivated successfully.
Oct  3 11:36:50 compute-0 systemd[1]: libpod-64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370.scope: Consumed 1.120s CPU time.
Oct  3 11:36:50 compute-0 podman[548303]: 2025-10-03 11:36:50.373975361 +0000 UTC m=+1.413583335 container died 64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4684ce4c0668f894cf73ae8085276eab90111c527951bd8997de1f1904e3d9a5-merged.mount: Deactivated successfully.
Oct  3 11:36:50 compute-0 podman[548303]: 2025-10-03 11:36:50.445367729 +0000 UTC m=+1.484975703 container remove 64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Oct  3 11:36:50 compute-0 systemd[1]: libpod-conmon-64654560dcb66ac2b0a0b03602089e7ab3e181e12d06b27ea3dd1324177e6370.scope: Deactivated successfully.
Oct  3 11:36:51 compute-0 podman[548563]: 2025-10-03 11:36:51.335470907 +0000 UTC m=+0.077545697 container create e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:36:51 compute-0 systemd[1]: Started libpod-conmon-e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869.scope.
Oct  3 11:36:51 compute-0 podman[548563]: 2025-10-03 11:36:51.301205359 +0000 UTC m=+0.043280259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:36:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:36:51 compute-0 podman[548563]: 2025-10-03 11:36:51.455766652 +0000 UTC m=+0.197841482 container init e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:36:51 compute-0 podman[548563]: 2025-10-03 11:36:51.47569705 +0000 UTC m=+0.217771840 container start e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:36:51 compute-0 podman[548563]: 2025-10-03 11:36:51.481020962 +0000 UTC m=+0.223095792 container attach e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:36:51 compute-0 charming_archimedes[548578]: 167 167
Oct  3 11:36:51 compute-0 systemd[1]: libpod-e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869.scope: Deactivated successfully.
Oct  3 11:36:51 compute-0 podman[548563]: 2025-10-03 11:36:51.488507751 +0000 UTC m=+0.230582541 container died e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-961a59d6a5898adbc26200d71fe0588744e622d6c2c737668dfb6c79bc60ca2f-merged.mount: Deactivated successfully.
Oct  3 11:36:51 compute-0 podman[548563]: 2025-10-03 11:36:51.551367435 +0000 UTC m=+0.293442226 container remove e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:36:51 compute-0 systemd[1]: libpod-conmon-e3468c8caa99d53496b921a31434be14b7daf565d750e7182f44d64dc0326869.scope: Deactivated successfully.
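Each of the throwaway containers in this stretch (vigilant_allen, inspiring_shaw, charming_archimedes, and those that follow) leaves the same footprint: create, image pull, init, start, attach, died, remove, with the conmon and libpod scopes deactivated afterwards. That is the signature of one-shot `podman run --rm` invocations, which is how cephadm typically drives its per-probe ceph containers; the bare `167 167` output lines are consistent with a uid/gid probe for the ceph user and group inside the image. A sketch of the pattern, assuming only the image digest taken from the log and that `stat` is the probe being run:

    import subprocess

    # Image digest copied from the podman records above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def one_shot(*args: str) -> str:
        # A disposable container like this produces exactly the
        # create/init/start/attach/died/remove journald sequence above.
        return subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE, *args],
            check=True, capture_output=True, text=True,
        ).stdout

    # Hypothetical uid/gid probe; prints "167 167" as in the log if the
    # image's ceph user and group are both 167.
    print(one_shot("-c", "%u %g", "/var/lib/ceph"))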
Oct  3 11:36:51 compute-0 podman[548603]: 2025-10-03 11:36:51.835195542 +0000 UTC m=+0.083216617 container create 13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:36:51 compute-0 podman[548603]: 2025-10-03 11:36:51.798722714 +0000 UTC m=+0.046743849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:36:51 compute-0 systemd[1]: Started libpod-conmon-13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f.scope.
Oct  3 11:36:51 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7ba488c7e44728db0b45995655e3e6325dcfc7cbbc8dc01efa8ca237fdf9f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7ba488c7e44728db0b45995655e3e6325dcfc7cbbc8dc01efa8ca237fdf9f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7ba488c7e44728db0b45995655e3e6325dcfc7cbbc8dc01efa8ca237fdf9f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7ba488c7e44728db0b45995655e3e6325dcfc7cbbc8dc01efa8ca237fdf9f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:51 compute-0 podman[548603]: 2025-10-03 11:36:51.997193624 +0000 UTC m=+0.245214669 container init 13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chatelet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  3 11:36:52 compute-0 podman[548603]: 2025-10-03 11:36:52.016065539 +0000 UTC m=+0.264086594 container start 13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chatelet, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:36:52 compute-0 podman[548603]: 2025-10-03 11:36:52.022501385 +0000 UTC m=+0.270522460 container attach 13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:36:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3891: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]: {
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:    "0": [
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:        {
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "devices": [
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "/dev/loop3"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            ],
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_name": "ceph_lv0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_size": "21470642176",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "name": "ceph_lv0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "tags": {
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cluster_name": "ceph",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.crush_device_class": "",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.encrypted": "0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osd_id": "0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.type": "block",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.vdo": "0"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            },
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "type": "block",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "vg_name": "ceph_vg0"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:        }
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:    ],
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:    "1": [
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:        {
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "devices": [
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "/dev/loop4"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            ],
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_name": "ceph_lv1",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_size": "21470642176",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "name": "ceph_lv1",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "tags": {
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cluster_name": "ceph",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.crush_device_class": "",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.encrypted": "0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osd_id": "1",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.type": "block",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.vdo": "0"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            },
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "type": "block",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "vg_name": "ceph_vg1"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:        }
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:    ],
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:    "2": [
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:        {
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "devices": [
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "/dev/loop5"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            ],
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_name": "ceph_lv2",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_size": "21470642176",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "name": "ceph_lv2",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "tags": {
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.cluster_name": "ceph",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.crush_device_class": "",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.encrypted": "0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osd_id": "2",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.type": "block",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:                "ceph.vdo": "0"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            },
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "type": "block",
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:            "vg_name": "ceph_vg2"
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:        }
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]:    ]
Oct  3 11:36:52 compute-0 blissful_chatelet[548618]: }
Oct  3 11:36:52 compute-0 systemd[1]: libpod-13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f.scope: Deactivated successfully.
Oct  3 11:36:52 compute-0 podman[548627]: 2025-10-03 11:36:52.930591449 +0000 UTC m=+0.028980210 container died 13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-da7ba488c7e44728db0b45995655e3e6325dcfc7cbbc8dc01efa8ca237fdf9f6-merged.mount: Deactivated successfully.
Oct  3 11:36:53 compute-0 podman[548627]: 2025-10-03 11:36:53.012532475 +0000 UTC m=+0.110921206 container remove 13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:36:53 compute-0 systemd[1]: libpod-conmon-13ed89d772245bf766bb8ad3b1d3cf41b93f906d2a7128748194601f5e79a72f.scope: Deactivated successfully.
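The JSON that blissful_chatelet printed above matches the shape of ceph-volume's LVM listing, keyed by OSD id: three loop-backed logical volumes, ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2, all tagged with cluster fsid 9b4e8c9a-5555-5510-a631-4742a1182561. A minimal sketch reducing it to a device map, assuming the payload was captured to lvm_list.json with the journald prefixes stripped:

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"pv={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

    # Expected from the log: osd.0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3),
    # osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and so on.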
Oct  3 11:36:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:53 compute-0 nova_compute[351685]: 2025-10-03 11:36:53.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:54 compute-0 podman[548777]: 2025-10-03 11:36:54.044389206 +0000 UTC m=+0.059950203 container create 6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:36:54 compute-0 systemd[1]: Started libpod-conmon-6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b.scope.
Oct  3 11:36:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3892: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Oct  3 11:36:54 compute-0 podman[548777]: 2025-10-03 11:36:54.02547945 +0000 UTC m=+0.041040467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:36:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:36:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:36:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/31593936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:36:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:36:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/31593936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:36:54 compute-0 podman[548777]: 2025-10-03 11:36:54.148770741 +0000 UTC m=+0.164331738 container init 6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 11:36:54 compute-0 podman[548777]: 2025-10-03 11:36:54.163685909 +0000 UTC m=+0.179246896 container start 6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  3 11:36:54 compute-0 podman[548777]: 2025-10-03 11:36:54.169073921 +0000 UTC m=+0.184634918 container attach 6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:36:54 compute-0 keen_hodgkin[548793]: 167 167
Oct  3 11:36:54 compute-0 systemd[1]: libpod-6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b.scope: Deactivated successfully.
Oct  3 11:36:54 compute-0 podman[548777]: 2025-10-03 11:36:54.174141175 +0000 UTC m=+0.189702172 container died 6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  3 11:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-339ef8c886aab59a3860a9cfa7eb320af838a54cb2566be29790a2c4bcdc84a7-merged.mount: Deactivated successfully.
Oct  3 11:36:54 compute-0 podman[548777]: 2025-10-03 11:36:54.21581414 +0000 UTC m=+0.231375127 container remove 6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:36:54 compute-0 systemd[1]: libpod-conmon-6543a38a8159ea186884eb1b2f22e25869d0644067942b8c93faa5de274d271b.scope: Deactivated successfully.
Oct  3 11:36:54 compute-0 podman[548816]: 2025-10-03 11:36:54.431154891 +0000 UTC m=+0.059967303 container create f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct  3 11:36:54 compute-0 systemd[1]: Started libpod-conmon-f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b.scope.
Oct  3 11:36:54 compute-0 podman[548816]: 2025-10-03 11:36:54.410499459 +0000 UTC m=+0.039311901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:36:54 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8481e644fa7d8cd3b2641bbbf0e3501b5548ae26c53433a8d3cb51a97e30611/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8481e644fa7d8cd3b2641bbbf0e3501b5548ae26c53433a8d3cb51a97e30611/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8481e644fa7d8cd3b2641bbbf0e3501b5548ae26c53433a8d3cb51a97e30611/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8481e644fa7d8cd3b2641bbbf0e3501b5548ae26c53433a8d3cb51a97e30611/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:36:54 compute-0 podman[548816]: 2025-10-03 11:36:54.552134429 +0000 UTC m=+0.180946861 container init f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_proskuriakova, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 11:36:54 compute-0 podman[548816]: 2025-10-03 11:36:54.564702471 +0000 UTC m=+0.193514913 container start f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:36:54 compute-0 podman[548816]: 2025-10-03 11:36:54.572138779 +0000 UTC m=+0.200951221 container attach f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:36:55 compute-0 nova_compute[351685]: 2025-10-03 11:36:55.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]: {
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "osd_id": 1,
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "type": "bluestore"
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:    },
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "osd_id": 2,
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "type": "bluestore"
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:    },
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "osd_id": 0,
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:        "type": "bluestore"
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]:    }
Oct  3 11:36:55 compute-0 nice_proskuriakova[548832]: }
Oct  3 11:36:55 compute-0 systemd[1]: libpod-f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b.scope: Deactivated successfully.
Oct  3 11:36:55 compute-0 systemd[1]: libpod-f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b.scope: Consumed 1.116s CPU time.
Oct  3 11:36:55 compute-0 podman[548816]: 2025-10-03 11:36:55.686860166 +0000 UTC m=+1.315672598 container died f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  3 11:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8481e644fa7d8cd3b2641bbbf0e3501b5548ae26c53433a8d3cb51a97e30611-merged.mount: Deactivated successfully.
Oct  3 11:36:55 compute-0 nova_compute[351685]: 2025-10-03 11:36:55.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:36:55 compute-0 nova_compute[351685]: 2025-10-03 11:36:55.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:36:55 compute-0 nova_compute[351685]: 2025-10-03 11:36:55.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  3 11:36:55 compute-0 podman[548816]: 2025-10-03 11:36:55.752237321 +0000 UTC m=+1.381049733 container remove f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_proskuriakova, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  3 11:36:55 compute-0 systemd[1]: libpod-conmon-f7b080698efcc2855dc4bbbc9771defac8a7a3ca2b3601f7cdbda8e203365e6b.scope: Deactivated successfully.
Oct  3 11:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:36:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:36:55 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 95f16d87-97bb-4c96-a19a-c53e394c7db3 does not exist
Oct  3 11:36:55 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f3259305-327b-4a28-9ddb-2c9b19a4c717 does not exist
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3893: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:36:56 compute-0 nova_compute[351685]: 2025-10-03 11:36:56.437 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:36:56 compute-0 nova_compute[351685]: 2025-10-03 11:36:56.439 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:36:56 compute-0 nova_compute[351685]: 2025-10-03 11:36:56.440 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:36:56 compute-0 nova_compute[351685]: 2025-10-03 11:36:56.440 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002073120665657355 of space, bias 1.0, pg target 0.6219361996972065 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:36:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:36:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:56 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:36:57 compute-0 nova_compute[351685]: 2025-10-03 11:36:57.541 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:36:57 compute-0 nova_compute[351685]: 2025-10-03 11:36:57.590 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:36:57 compute-0 nova_compute[351685]: 2025-10-03 11:36:57.590 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:36:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3894: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:36:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:36:58 compute-0 nova_compute[351685]: 2025-10-03 11:36:58.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:36:59 compute-0 nova_compute[351685]: 2025-10-03 11:36:59.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:36:59 compute-0 nova_compute[351685]: 2025-10-03 11:36:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:36:59 compute-0 podman[157165]: time="2025-10-03T11:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:36:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:36:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9606 "" "Go-http-client/1.1"
Oct  3 11:37:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3895: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:37:00 compute-0 nova_compute[351685]: 2025-10-03 11:37:00.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:01 compute-0 openstack_network_exporter[367524]: ERROR   11:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:37:01 compute-0 openstack_network_exporter[367524]: ERROR   11:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:37:01 compute-0 openstack_network_exporter[367524]: ERROR   11:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:37:01 compute-0 openstack_network_exporter[367524]: ERROR   11:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:37:01 compute-0 openstack_network_exporter[367524]: ERROR   11:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:37:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3896: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:37:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:03 compute-0 nova_compute[351685]: 2025-10-03 11:37:03.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3897: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:37:05 compute-0 nova_compute[351685]: 2025-10-03 11:37:05.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:05 compute-0 nova_compute[351685]: 2025-10-03 11:37:05.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:05 compute-0 nova_compute[351685]: 2025-10-03 11:37:05.758 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:37:05 compute-0 nova_compute[351685]: 2025-10-03 11:37:05.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:37:05 compute-0 nova_compute[351685]: 2025-10-03 11:37:05.759 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:37:05 compute-0 nova_compute[351685]: 2025-10-03 11:37:05.760 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:37:05 compute-0 nova_compute[351685]: 2025-10-03 11:37:05.760 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:37:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3898: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:37:06 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:37:06 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885291436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.234 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.346 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.347 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.355 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.356 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.356 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.364 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.365 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.875 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.876 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3148MB free_disk=59.863914489746094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.877 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.877 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.972 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.973 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.973 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.974 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:37:06 compute-0 nova_compute[351685]: 2025-10-03 11:37:06.974 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:37:07 compute-0 nova_compute[351685]: 2025-10-03 11:37:07.052 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:37:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:37:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/202566040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:37:07 compute-0 nova_compute[351685]: 2025-10-03 11:37:07.538 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:37:07 compute-0 nova_compute[351685]: 2025-10-03 11:37:07.546 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:37:07 compute-0 nova_compute[351685]: 2025-10-03 11:37:07.561 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:37:07 compute-0 nova_compute[351685]: 2025-10-03 11:37:07.563 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:37:07 compute-0 nova_compute[351685]: 2025-10-03 11:37:07.564 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:37:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3899: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 1 op/s
Oct  3 11:37:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:08 compute-0 podman[548989]: 2025-10-03 11:37:08.883369464 +0000 UTC m=+0.110612085 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 11:37:08 compute-0 podman[548974]: 2025-10-03 11:37:08.901105823 +0000 UTC m=+0.136016510 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2)
Oct  3 11:37:08 compute-0 podman[548970]: 2025-10-03 11:37:08.904333127 +0000 UTC m=+0.150374220 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:37:08 compute-0 podman[548971]: 2025-10-03 11:37:08.90443862 +0000 UTC m=+0.157827949 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm)
Oct  3 11:37:08 compute-0 podman[548972]: 2025-10-03 11:37:08.907207009 +0000 UTC m=+0.157698395 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  3 11:37:08 compute-0 podman[548973]: 2025-10-03 11:37:08.910642359 +0000 UTC m=+0.145649549 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:37:08 compute-0 nova_compute[351685]: 2025-10-03 11:37:08.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:08 compute-0 podman[548978]: 2025-10-03 11:37:08.943537553 +0000 UTC m=+0.173673857 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  3 11:37:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3900: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Oct  3 11:37:10 compute-0 nova_compute[351685]: 2025-10-03 11:37:10.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:11 compute-0 nova_compute[351685]: 2025-10-03 11:37:11.564 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:11 compute-0 nova_compute[351685]: 2025-10-03 11:37:11.565 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:37:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3901: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:37:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:13 compute-0 nova_compute[351685]: 2025-10-03 11:37:13.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:13 compute-0 nova_compute[351685]: 2025-10-03 11:37:13.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3902: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:37:14 compute-0 nova_compute[351685]: 2025-10-03 11:37:14.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:14 compute-0 nova_compute[351685]: 2025-10-03 11:37:14.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:15 compute-0 nova_compute[351685]: 2025-10-03 11:37:15.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3903: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:37:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:37:16 compute-0 nova_compute[351685]: 2025-10-03 11:37:16.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3904: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:37:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:18 compute-0 nova_compute[351685]: 2025-10-03 11:37:18.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:19 compute-0 podman[549111]: 2025-10-03 11:37:19.846763738 +0000 UTC m=+0.094244441 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  3 11:37:19 compute-0 podman[549109]: 2025-10-03 11:37:19.855167798 +0000 UTC m=+0.108048344 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:37:19 compute-0 podman[549110]: 2025-10-03 11:37:19.857670988 +0000 UTC m=+0.100100579 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 11:37:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3905: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Oct  3 11:37:20 compute-0 nova_compute[351685]: 2025-10-03 11:37:20.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3906: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:23 compute-0 nova_compute[351685]: 2025-10-03 11:37:23.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3907: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:25 compute-0 nova_compute[351685]: 2025-10-03 11:37:25.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3908: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3909: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.658888) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491448658973, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 250, "total_data_size": 1097443, "memory_usage": 1123432, "flush_reason": "Manual Compaction"}
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491448666428, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 694985, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78327, "largest_seqno": 79140, "table_properties": {"data_size": 691536, "index_size": 1228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9066, "raw_average_key_size": 20, "raw_value_size": 684295, "raw_average_value_size": 1555, "num_data_blocks": 56, "num_entries": 440, "num_filter_entries": 440, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759491379, "oldest_key_time": 1759491379, "file_creation_time": 1759491448, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 7551 microseconds, and 2814 cpu microseconds.
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.666469) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 694985 bytes OK
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.666482) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.668870) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.668887) EVENT_LOG_v1 {"time_micros": 1759491448668881, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.668901) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1093391, prev total WAL file size 1093391, number of live WAL files 2.
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.670263) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353034' seq:72057594037927935, type:22 .. '6D6772737461740033373535' seq:0, type:0; will stop at (end)
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(678KB)], [191(10MB)]
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491448670306, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 11272926, "oldest_snapshot_seqno": -1}
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 8553 keys, 8354678 bytes, temperature: kUnknown
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491448708948, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 8354678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8306431, "index_size": 25734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 226093, "raw_average_key_size": 26, "raw_value_size": 8160525, "raw_average_value_size": 954, "num_data_blocks": 998, "num_entries": 8553, "num_filter_entries": 8553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759491448, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.709397) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 8354678 bytes
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.711829) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 290.7 rd, 215.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.1 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(28.2) write-amplify(12.0) OK, records in: 9036, records dropped: 483 output_compression: NoCompression
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.711862) EVENT_LOG_v1 {"time_micros": 1759491448711847, "job": 120, "event": "compaction_finished", "compaction_time_micros": 38773, "compaction_time_cpu_micros": 22328, "output_level": 6, "num_output_files": 1, "total_output_size": 8354678, "num_input_records": 9036, "num_output_records": 8553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491448712341, "job": 120, "event": "table_file_deletion", "file_number": 193}
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491448716797, "job": 120, "event": "table_file_deletion", "file_number": 191}
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.670105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.716947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.716953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.716954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.716956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:37:28 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:37:28.716957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
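[editor note] The RocksDB lines above record one complete flush (JOB 119) and manual compaction (JOB 120) cycle in the ceph-mon store, and the EVENT_LOG_v1 payloads are machine-readable JSON carrying the byte counts and timings. A minimal sketch for summarizing those events from a saved copy of this journal; the file name mon-compute-0.log is a placeholder, and the regex assumes the single-line EVENT_LOG_v1 format shown above:

    import json
    import re

    # Pull the JSON payload out of lines like:
    #   ... rocksdb: EVENT_LOG_v1 {"time_micros": ..., "event": "flush_finished", ...}
    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

    def rocksdb_events(path):
        with open(path) as fh:
            for line in fh:
                match = EVENT_RE.search(line)
                if match:
                    yield json.loads(match.group(1))

    table_bytes = 0       # bytes in newly created SST files (flush + compaction)
    compaction_bytes = 0  # bytes emitted by finished compactions
    for event in rocksdb_events("mon-compute-0.log"):  # placeholder file name
        if event.get("event") == "table_file_creation":
            table_bytes += event.get("file_size", 0)
        elif event.get("event") == "compaction_finished":
            compaction_bytes += event.get("total_output_size", 0)
    print(f"table files written: {table_bytes} B, compaction output: {compaction_bytes} B")

Run against the slice above, this would count the ~695 KB L0 table from JOB 119 plus the ~8.4 MB L6 table from JOB 120, consistent with the write-amplify(12.0) figure the compaction itself logs (8354678 / 694985 ≈ 12.0).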
Oct  3 11:37:28 compute-0 nova_compute[351685]: 2025-10-03 11:37:28.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:29 compute-0 podman[157165]: time="2025-10-03T11:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:37:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:37:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9606 "" "Go-http-client/1.1"
Oct  3 11:37:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3910: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:30 compute-0 nova_compute[351685]: 2025-10-03 11:37:30.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:30 compute-0 nova_compute[351685]: 2025-10-03 11:37:30.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:31 compute-0 openstack_network_exporter[367524]: ERROR   11:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:37:31 compute-0 openstack_network_exporter[367524]: ERROR   11:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:37:31 compute-0 openstack_network_exporter[367524]: ERROR   11:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:37:31 compute-0 openstack_network_exporter[367524]: ERROR   11:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:37:31 compute-0 openstack_network_exporter[367524]: ERROR   11:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
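[editor note] The openstack_network_exporter errors above mean it could not find the appctl control sockets it probes for ovsdb-server and ovn-northd. A quick check, sketched in Python, for the sockets under the usual runtime directories; the directories and the <daemon>.<pid>.ctl patterns are assumptions based on common OVS/OVN defaults (the containers in this log mount /run/openvswitch and /run/ovn):

    import glob
    import os

    # appctl-style tools locate daemons through <daemon>.<pid>.ctl control
    # sockets in the runtime directory; the errors above mean none were
    # visible to the exporter. Paths here are assumed defaults.
    for rundir in (os.environ.get("OVS_RUNDIR", "/run/openvswitch"), "/run/ovn"):
        for pattern in ("ovs-vswitchd.*.ctl", "ovsdb-server.*.ctl", "ovn-northd.*.ctl"):
            hits = glob.glob(os.path.join(rundir, pattern))
            print(f"{rundir}/{pattern}: {hits or 'not found'}")

On a compute node ovn-northd normally runs on the control plane rather than locally, so that probe failing here is unsurprising; the dpif-netdev errors likewise suggest this host simply has no userspace (netdev) datapath.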
Oct  3 11:37:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3911: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:33 compute-0 nova_compute[351685]: 2025-10-03 11:37:33.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3912: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:35 compute-0 nova_compute[351685]: 2025-10-03 11:37:35.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3913: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3914: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:38 compute-0 nova_compute[351685]: 2025-10-03 11:37:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:39 compute-0 podman[549171]: 2025-10-03 11:37:39.879366015 +0000 UTC m=+0.112765635 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git)
Oct  3 11:37:39 compute-0 podman[549170]: 2025-10-03 11:37:39.881110461 +0000 UTC m=+0.137145647 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:37:39 compute-0 podman[549179]: 2025-10-03 11:37:39.888798567 +0000 UTC m=+0.105527233 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:37:39 compute-0 podman[549180]: 2025-10-03 11:37:39.893776247 +0000 UTC m=+0.127820778 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Oct  3 11:37:39 compute-0 podman[549172]: 2025-10-03 11:37:39.902520367 +0000 UTC m=+0.132600441 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:37:39 compute-0 podman[549183]: 2025-10-03 11:37:39.918910253 +0000 UTC m=+0.138060947 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:37:39 compute-0 podman[549181]: 2025-10-03 11:37:39.926484115 +0000 UTC m=+0.144716649 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, config_id=ovn_controller)
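[editor note] The podman[549xxx] entries above are the periodic container health checks, and the earlier GET /v4.9.3/libpod/containers/... requests show the same libpod REST API being queried over the local socket. A minimal stdlib sketch of that query; the socket path /run/podman/podman.sock is an assumption (rootless setups use a per-user path):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that talks to a unix socket, as the
        libpod clients in this log do."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    # Same endpoint as the "List containers" request logged above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))

The same information is available from the CLI as podman ps --format json; the sketch just mirrors the raw API traffic visible in the log.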
Oct  3 11:37:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3915: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:40 compute-0 nova_compute[351685]: 2025-10-03 11:37:40.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.910 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.911 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.911 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a98fabf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.918 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '443e486d-1bf2-4550-a4ae-32f0f8f4af19', 'name': 'te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.924 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.928 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'name': 'te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.928 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.929 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:37:40.929357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.934 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.939 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.944 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.945 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.945 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.946 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.946 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.946 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.946 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.947 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:37:40.946179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.947 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.948 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:37:40.948479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.967 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.968 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.994 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:40.995 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.011 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.012 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:37:41.013519) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.037 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 30247424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.038 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.088 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.089 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.113 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 31861248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.114 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.115 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.115 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.115 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 2094347087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.115 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 184046568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.116 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.116 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.116 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.116 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 2658882306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.116 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 170448087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:37:41.115525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
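disk.device.read.latency volumes such as 2094347087 are cumulative counters, the total time in nanoseconds the device has spent servicing reads since the instance started, not instantaneous latencies. Getting a mean per-request latency therefore needs two polls. A minimal sketch follows; the first reading and the request count borrow the log's magnitudes, while the second reading is invented for the example.

```python
# Mean per-request read latency from two polls of cumulative counters.
def mean_read_latency_ns(prev, curr):
    d_time = curr["read_time_ns"] - prev["read_time_ns"]
    d_reqs = curr["read_requests"] - prev["read_requests"]
    return d_time / d_reqs if d_reqs else 0.0


prev = {"read_time_ns": 2_094_347_087, "read_requests": 1_090}  # earlier poll
curr = {"read_time_ns": 2_094_500_000, "read_requests": 1_096}  # this poll
print(mean_read_latency_ns(prev, curr))  # ~25486 ns per read in the interval
```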
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.117 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.118 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.118 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 1096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.118 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.118 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.119 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:37:41.118162) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.119 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.119 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 1164 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.119 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
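Note the two worker IDs interleaved through this cycle: worker 14 emits the polling and sample lines, while worker 12 logs the "Updated heartbeat for ..." status updates. That is why heartbeat lines can land between sample lines, and occasionally slightly out of timestamp order (as with the disk.ephemeral.size heartbeat further down). A toy model of that producer/consumer split, purely illustrative and not ceilometer's implementation:

```python
# Poller pushes heartbeat events; a status worker consumes and logs them.
import datetime
import queue
import threading

events = queue.Queue()


def status_worker():
    while True:
        item = events.get()
        if item is None:
            break
        meter, ts = item
        print(f"Updated heartbeat for {meter} ({ts.isoformat()})")


t = threading.Thread(target=status_worker)
t.start()
for meter in ("disk.device.read.requests", "disk.device.usage"):
    events.put((meter, datetime.datetime.now(datetime.timezone.utc)))
events.put(None)  # sentinel: shut the status worker down
t.join()
```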
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.120 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.120 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.120 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.120 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.121 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.121 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.122 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
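The disk.device.usage values fall into two groups: 1073741824 is exactly 1 GiB (2^30 bytes), consistent with a 1 GB root device, while 509952 and 485376 are a few hundred KiB, which is config-drive territory. Reading the small device on each instance as a config drive is an assumption, not something the log states. A quick unit check:

```python
# Unit check on the disk.device.usage volumes above.
def human(n: float) -> str:
    for unit in ("B", "KiB", "MiB", "GiB"):
        if n < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} TiB"


print(human(1073741824))  # 1.0 GiB  (exactly 2**30 bytes)
print(human(509952))      # 498.0 KiB
print(human(485376))      # 474.0 KiB
```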
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.122 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.123 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.123 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.123 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.123 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.123 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.124 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.124 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.124 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.125 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:37:41.120707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.125 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:37:41.123011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.125 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:37:41.125573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.126 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.126 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.126 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.127 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.127 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.127 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
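Like the read-side meters, disk.device.write.bytes is a lifetime counter, so anything computing rates from successive samples has to tolerate the counter dropping back toward zero (for example after an instance reboot or migration). A reset-tolerant delta is sketched below; the "counter went backwards means reset" heuristic is an assumption here, not ceilometer's documented behaviour, and the numbers only borrow the log's magnitudes.

```python
# Reset-tolerant delta for lifetime counters such as disk.device.write.bytes.
def counter_delta(prev: int, curr: int) -> int:
    if curr >= prev:
        return curr - prev
    return curr  # counter reset: count only what accumulated since the reset


print(counter_delta(73_154_560, 73_162_752))  # normal growth: 8192
print(counter_delta(73_162_752, 4_096))       # reset detected: 4096
```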
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.127 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.128 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:37:41.128166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.150 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.173 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.201 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
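All three power.state samples report volume 1, which is the "running" state in Nova's power-state enum (nova.compute.power_state); libvirt's virDomainState uses the same value for a running domain. A lookup table for reading this meter:

```python
# Interpreting power.state volumes via Nova's power-state enum,
# in which 1 means RUNNING.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

for uuid in ("443e486d-1bf2-4550-a4ae-32f0f8f4af19",
             "b43db93c-a4fe-46e9-8418-eedf4f5c135a",
             "83fc22ce-d2e4-468a-b166-04f2743fa68d"):
    print(uuid, POWER_STATES.get(1, "UNKNOWN"))  # all three report volume 1
```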
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.202 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.203 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.203 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 10137118849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.203 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.204 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.204 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:37:41.203065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.204 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.205 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 11038555047 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.205 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.206 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.206 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.207 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:37:41.206820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.207 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.207 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.208 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.208 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.208 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 349 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.209 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.209 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.210 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.210 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.210 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.211 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:37:41.210368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.211 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
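Unlike the cumulative network counters, the .delta meters publish the change since the previous poll, which is why two of the three instances report 0 here and only 83fc22ce-d2e4-468a-b166-04f2743fa68d saw traffic (630 bytes) in the interval. A sketch of the cache-the-last-reading behaviour this implies; the caching detail is an assumption and the vif names are invented:

```python
# A .delta meter: publish the change since the previous poll, caching
# the last cumulative reading per vif.
_last: dict = {}


def incoming_bytes_delta(vif: str, cumulative: int) -> int:
    prev = _last.get(vif, cumulative)  # first poll publishes 0
    _last[vif] = cumulative
    return cumulative - prev


print(incoming_bytes_delta("vif-83fc22ce", 10_000))  # first poll -> 0
print(incoming_bytes_delta("vif-83fc22ce", 10_630))  # next poll  -> 630
```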
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.212 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
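network.incoming.bytes.rate takes a different exit: the manager skips the sampling step entirely when discovery hands it no resources. Why the set is empty for this particular meter (for instance, it not being listed in any polling source) is not visible in the log; the sketch below only mirrors the skip branch logged from manager.py:321.

```python
# Toy version of the skip branch: an empty resource set means the
# sampling step never runs. The cause of the empty set is an assumption.
def poll_or_skip(pollster: str, resources: list) -> None:
    if not resources:
        print(f"Skip pollster {pollster}, no new resources found this cycle")
        return
    print(f"Polling pollster {pollster} in the context of pollsters")


poll_or_skip("network.incoming.bytes.delta", ["inst-a", "inst-b"])
poll_or_skip("network.incoming.bytes.rate", [])
```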
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.212 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.212 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.212 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
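disk.ephemeral.size finishes without any _stats_to_sample debug lines, and disk.root.size behaves the same way below. That is consistent with these meters coming from the instance's flavor metadata rather than from libvirt device statistics, though the log itself does not say so. A minimal sketch under that assumption, using Nova-style flavor fields:

```python
# Flavor-metadata assumption: these meters would be read straight from
# Nova-style flavor fields, with no libvirt stats involved.
FLAVOR = {"root_gb": 1, "ephemeral_gb": 0}  # invented example flavor

disk_root_size = FLAVOR["root_gb"]            # unit: GB
disk_ephemeral_size = FLAVOR["ephemeral_gb"]  # unit: GB
print(disk_root_size, disk_ephemeral_size)    # 1 0
```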
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.213 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.214 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:37:41.212652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.214 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.214 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.214 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:37:41.214072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.214 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.215 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.216 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.216 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.216 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.216 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.217 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:37:41.217030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.218 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.218 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.218 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.218 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.218 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.218 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.219 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.219 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:37:41.218754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.220 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.220 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.221 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/cpu volume: 333700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:37:41.220694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.221 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 115950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.221 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/cpu volume: 337650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.221 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
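The cpu volumes are cumulative CPU time in nanoseconds, so 333700000000 means instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 has consumed about 333.7 seconds of CPU since it started. Turning that into a utilisation percentage takes two samples, the wall-clock gap between them, and the vCPU count; in the sketch below the second reading, the 60 s gap, and vcpus=1 are all assumed.

```python
# CPU utilisation percentage from two cumulative cpu samples (ns of CPU time).
def cpu_util_percent(prev_ns: int, curr_ns: int,
                     wall_seconds: float, vcpus: int = 1) -> float:
    return (curr_ns - prev_ns) / (wall_seconds * 1e9 * vcpus) * 100.0


# 333_700_000_000 ns is ~333.7 s of accumulated CPU time (from the log).
print(cpu_util_percent(333_700_000_000, 333_760_000_000, 60.0))  # -> 0.1
```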
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.222 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.222 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.222 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.222 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.222 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.223 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.223 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:37:41.222578) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.223 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.224 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.224 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.224 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.224 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.224 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.224 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.224 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.225 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.226 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:37:41.224476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.226 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:37:41.226447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.227 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.227 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.227 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
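
Note on the ".delta" meters above: network.outgoing.bytes is a cumulative interface counter (2250 for instance 443e486d this cycle), while network.outgoing.bytes.delta reports only the growth since the previous poll (630 here; if the delta is simply the difference between consecutive cumulative readings, the previous reading was 1620). A minimal sketch of that derivation, assuming a simple per-resource cache; the class name and cache are illustrative, not ceilometer's actual implementation:

    # Minimal sketch: derive a ".delta" sample from consecutive cumulative
    # readings, as the network.outgoing.bytes.delta samples above suggest.
    class DeltaMeter:
        def __init__(self):
            self.prev = {}  # resource_id -> last cumulative value

        def sample(self, resource_id, cumulative):
            last = self.prev.get(resource_id)
            self.prev[resource_id] = cumulative
            if last is None or cumulative < last:  # first poll, or counter reset
                return 0
            return cumulative - last

    m = DeltaMeter()
    m.sample("443e486d", 1620)         # earlier poll (baseline)
    print(m.sample("443e486d", 2250))  # -> 630, matching the log above
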
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.227 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.228 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.228 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/memory.usage volume: 42.1796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.228 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:37:41.228353) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.229 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/memory.usage volume: 42.3671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.229 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.230 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.230 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.230 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.231 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes volume: 2450 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.231 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.231 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.232 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.232 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:37:41.230473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.232 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.232 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.232 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.233 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.233 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:37:41.232420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.234 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.235 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.236 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.236 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.236 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.236 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.236 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.236 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:37:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:37:41.236 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
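
Taken together, the ceilometer DEBUG lines above trace one full polling cycle per meter: discovery via local_instances, a coordination check (no hashring is configured here), a heartbeat update, one sample per discovered instance, then a "Finished processing pollster [...]" summary for every meter in the task. A condensed sketch of that control flow, with hypothetical stand-ins for the AgentManager internals rather than the actual ceilometer.polling.manager API:

    # Condensed sketch of the per-meter polling cycle traced above.
    # manager/pollster/source are hypothetical stand-ins.
    def run_pollster(manager, pollster, source):
        # "Executing discovery process ... discovery method [local_instances]"
        resources = manager.discover(["local_instances"])
        if not resources:
            # "Skip pollster <name>, no new resources found this cycle"
            return []
        if source.coordination_group is None:
            # "... not configured in a source for polling that requires
            # coordination": poll everything locally, no hashring filter
            pass
        manager.heartbeat(pollster.name)  # "Pollster heartbeat update: <name>"
        samples = []
        for res in resources:
            # each sample logs "<resource_id>/<meter> volume: <value>"
            samples.extend(pollster.get_samples(manager, {}, [res]))
        return samples
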
Oct  3 11:37:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:37:41.716 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:37:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:37:41.717 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:37:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:37:41.718 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
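
The three ovn_metadata_agent lines above are the standard oslo.concurrency pattern: _check_child_processes runs under a named in-process lock, and oslo logs how long the caller waited for the lock and how long it held it (0.001s each here). A minimal sketch using the real oslo_concurrency.lockutils decorator; the function body is illustrative:

    from oslo_concurrency import lockutils

    # Named in-process lock; the acquiring/acquired/released messages
    # (with wait and hold times) are logged exactly as in the lines above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # illustrative body: inspect monitored children, respawn dead ones
        pass
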
Oct  3 11:37:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3916: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:43 compute-0 nova_compute[351685]: 2025-10-03 11:37:43.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3917: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:45 compute-0 nova_compute[351685]: 2025-10-03 11:37:45.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3918: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:37:46
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta']
Oct  3 11:37:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
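
The balancer block above is the periodic auto-optimization pass: in upmap mode it only acts while less than 5% of objects are misplaced, and this pass prepared 0 of its budget of 10 changes because the listed pools are already balanced. A heavily hedged sketch of that gate, not the mgr balancer module itself; the names and budget handling are illustrative:

    # Hedged sketch of the balancer gate traced above; illustrative only.
    MAX_MISPLACED = 0.05            # "Mode upmap, max misplaced 0.050000"

    def auto_balance(misplaced_ratio, candidate_changes, budget=10):
        if misplaced_ratio >= MAX_MISPLACED:
            return []               # too much data already in flight; wait
        plan = candidate_changes[:budget]
        print(f"prepared {len(plan)}/{budget} changes")  # "prepared 0/10 changes"
        return plan
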
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:37:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:37:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3919: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:48 compute-0 nova_compute[351685]: 2025-10-03 11:37:48.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3920: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:50 compute-0 nova_compute[351685]: 2025-10-03 11:37:50.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:50 compute-0 podman[549308]: 2025-10-03 11:37:50.813282914 +0000 UTC m=+0.069842369 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:37:50 compute-0 podman[549309]: 2025-10-03 11:37:50.866832141 +0000 UTC m=+0.119113098 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Oct  3 11:37:50 compute-0 podman[549310]: 2025-10-03 11:37:50.874022932 +0000 UTC m=+0.110126071 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  3 11:37:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3921: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:53 compute-0 nova_compute[351685]: 2025-10-03 11:37:53.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3922: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:37:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357332983' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:37:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:37:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3357332983' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:37:55 compute-0 nova_compute[351685]: 2025-10-03 11:37:55.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:55 compute-0 nova_compute[351685]: 2025-10-03 11:37:55.743 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:55 compute-0 nova_compute[351685]: 2025-10-03 11:37:55.744 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  3 11:37:55 compute-0 nova_compute[351685]: 2025-10-03 11:37:55.761 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
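
The _run_pending_deletes lines above are one of nova-compute's oslo.service periodic tasks firing: the task ran on its timer and found nothing to clean. Periodic tasks are plain manager methods registered with a decorator and driven by run_periodic_tasks(); a minimal sketch using the real oslo_service.periodic_task API (the manager class and spacing value are illustrative):

    from oslo_service import periodic_task

    # Sketch only: PeriodicTasks.__init__ takes a conf object in real use.
    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=300)   # spacing value illustrative
        def _run_pending_deletes(self, context):
            # logs "Cleaning up deleted instances" and then
            # "There are N instances to clean", as above
            pass

    # The service loop calls manager.run_periodic_tasks(context) on a timer,
    # which produces the "Running periodic task ..." DEBUG lines above.
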
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3923: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020731842563651757 of space, bias 1.0, pg target 0.6219552769095527 quantized to 32 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:37:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
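
The pg_autoscaler numbers above are self-consistent and let the target formula be checked directly: ideal PGs = capacity ratio x pool bias x total PG budget, with the result then quantized to a power of two subject to per-pool minimums (which is why a target of 0.62 still quantizes to 32). Assuming the 3 OSDs on this node and the default mon_target_pg_per_osd of 100, the budget is 300, which reproduces the logged targets exactly:

    # Worked check of the pg_autoscaler targets logged above.
    # Assumption: 3 OSDs x mon_target_pg_per_osd=100 -> a budget of 300 PGs.
    PG_BUDGET = 3 * 100

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))  # .mgr -> 0.0021557249951162337
    print(pg_target(0.0020731842563651757, 1.0))  # vms  -> 0.6219552769095527
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> 0.0006104707950771635
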
Oct  3 11:37:56 compute-0 nova_compute[351685]: 2025-10-03 11:37:56.749 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:56 compute-0 nova_compute[351685]: 2025-10-03 11:37:56.751 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:37:57 compute-0 nova_compute[351685]: 2025-10-03 11:37:57.064 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:37:57 compute-0 nova_compute[351685]: 2025-10-03 11:37:57.065 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:37:57 compute-0 nova_compute[351685]: 2025-10-03 11:37:57.065 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:37:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:37:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:37:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:37:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:37:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:37:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:37:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev c3a583eb-f232-461a-8833-0dcb024e82c3 does not exist
Oct  3 11:37:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 6396c2a5-1009-45f4-aa07-d273c4168707 does not exist
Oct  3 11:37:57 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 9e2430d5-7342-40f7-bf15-1fe418a34a10 does not exist
Oct  3 11:37:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:37:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:37:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:37:57 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:37:57 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:37:57 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:37:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:37:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:37:57 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:37:58 compute-0 podman[549639]: 2025-10-03 11:37:58.13165182 +0000 UTC m=+0.065093428 container create 853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:37:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3924: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:37:58 compute-0 systemd[1]: Started libpod-conmon-853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f.scope.
Oct  3 11:37:58 compute-0 podman[549639]: 2025-10-03 11:37:58.110929176 +0000 UTC m=+0.044370814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:37:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:37:58 compute-0 podman[549639]: 2025-10-03 11:37:58.287798014 +0000 UTC m=+0.221239652 container init 853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carver, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:37:58 compute-0 podman[549639]: 2025-10-03 11:37:58.300784221 +0000 UTC m=+0.234225869 container start 853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carver, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  3 11:37:58 compute-0 podman[549639]: 2025-10-03 11:37:58.308971233 +0000 UTC m=+0.242412851 container attach 853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  3 11:37:58 compute-0 brave_carver[549654]: 167 167
Oct  3 11:37:58 compute-0 systemd[1]: libpod-853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f.scope: Deactivated successfully.
Oct  3 11:37:58 compute-0 podman[549639]: 2025-10-03 11:37:58.317422454 +0000 UTC m=+0.250864072 container died 853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carver, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:37:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-37202bfd2541ac03b9e7df1afb1a2140b817f907ca512ed18b801bde7a027ada-merged.mount: Deactivated successfully.
Oct  3 11:37:58 compute-0 podman[549639]: 2025-10-03 11:37:58.373740269 +0000 UTC m=+0.307181877 container remove 853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:37:58 compute-0 systemd[1]: libpod-conmon-853a20c098daadc0024d7de0b6aa113fdbf429ce4f5ba0b8019d0c52f39c5c2f.scope: Deactivated successfully.
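
The brave_carver lines above form one complete short-lived container run as podman records it: image pull, create, init, start, attach, the container's one-shot output ("167 167", likely a uid/gid probe, 167 being the ceph user on these images), then died and remove, with systemd tearing down the matching libpod-* and libpod-conmon-* scopes. This is the event trail a `podman run --rm` leaves; a sketch driving an equivalent one-shot run from Python (the probe command is hypothetical, the image digest is from the log):

    import subprocess

    # One-shot container run equivalent to the brave_carver lifecycle above:
    # create+start+attach, wait for exit, auto-remove (--rm).
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True)      # hypothetical probe command
    print(result.stdout.strip())             # e.g. "167 167"
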
Oct  3 11:37:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:37:58 compute-0 podman[549678]: 2025-10-03 11:37:58.668902338 +0000 UTC m=+0.079050174 container create d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  3 11:37:58 compute-0 podman[549678]: 2025-10-03 11:37:58.633493594 +0000 UTC m=+0.043641420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:37:58 compute-0 systemd[1]: Started libpod-conmon-d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a.scope.
Oct  3 11:37:58 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b7e55c576d39c7980d5dfd1bd31d98fe3e2e842f7ffc5a715301c808eff6682/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b7e55c576d39c7980d5dfd1bd31d98fe3e2e842f7ffc5a715301c808eff6682/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b7e55c576d39c7980d5dfd1bd31d98fe3e2e842f7ffc5a715301c808eff6682/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b7e55c576d39c7980d5dfd1bd31d98fe3e2e842f7ffc5a715301c808eff6682/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b7e55c576d39c7980d5dfd1bd31d98fe3e2e842f7ffc5a715301c808eff6682/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
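
The "timestamps until 2038 (0x7fffffff)" kernel notes mean these xfs overlay mounts lack the bigtime feature, so inode timestamps are capped at the classic 32-bit time_t ceiling. The cutoff itself is plain epoch arithmetic:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch: the 32-bit time_t ceiling the
    # kernel warns about for these non-bigtime xfs mounts.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
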
Oct  3 11:37:58 compute-0 podman[549678]: 2025-10-03 11:37:58.83555634 +0000 UTC m=+0.245704166 container init d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  3 11:37:58 compute-0 podman[549678]: 2025-10-03 11:37:58.862461712 +0000 UTC m=+0.272609558 container start d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  3 11:37:58 compute-0 podman[549678]: 2025-10-03 11:37:58.869375064 +0000 UTC m=+0.279522890 container attach d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:37:58 compute-0 nova_compute[351685]: 2025-10-03 11:37:58.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:37:59 compute-0 nova_compute[351685]: 2025-10-03 11:37:59.019 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updating instance_info_cache with network_info: [{"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:37:59 compute-0 nova_compute[351685]: 2025-10-03 11:37:59.040 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-83fc22ce-d2e4-468a-b166-04f2743fa68d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:37:59 compute-0 nova_compute[351685]: 2025-10-03 11:37:59.040 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:37:59 compute-0 nova_compute[351685]: 2025-10-03 11:37:59.042 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:37:59 compute-0 nova_compute[351685]: 2025-10-03 11:37:59.042 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  3 11:37:59 compute-0 podman[157165]: time="2025-10-03T11:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:37:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 49206 "" "Go-http-client/1.1"
Oct  3 11:37:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 10023 "" "Go-http-client/1.1"
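The two `@ - -` lines are the podman service's HTTP access log for its REST API over the unix socket (the `@` stands in for the socket peer address); a collector is listing all containers and then sampling their stats. A sketch of issuing the same libpod call from Python with only the standard library; the socket path is an assumption (the rootful default), while the versioned path and query string are copied from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection that dials a unix socket instead of TCP.
        def __init__(self, path):
            super().__init__("localhost")  # host is only used for the Host: header
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Id"][:12], c.get("Names"), c.get("State"))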
Oct  3 11:38:00 compute-0 stoic_sanderson[549693]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:38:00 compute-0 stoic_sanderson[549693]: --> relative data size: 1.0
Oct  3 11:38:00 compute-0 stoic_sanderson[549693]: --> All data devices are unavailable
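The stoic_sanderson lines are consistent with the tail of a ceph-volume `lvm batch` pass that cephadm runs in a short-lived container: three LVM data devices were passed and all are reported unavailable, which fits devices that already carry the OSDs listed further down in this log, so the batch deploys nothing new. A hedged sketch of requesting the equivalent dry-run report by hand; the command shape and the VG/LV names are assumptions taken from the LVM listing below:

    import subprocess

    # Dry-run report only; nothing is created. Run on the host so cephadm can
    # launch ceph-volume inside the ceph container.
    cmd = ["cephadm", "ceph-volume", "--", "lvm", "batch", "--report",
           "--format", "json",
           "ceph_vg0/ceph_lv0", "ceph_vg1/ceph_lv1", "ceph_vg2/ceph_lv2"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)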
Oct  3 11:38:00 compute-0 systemd[1]: libpod-d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a.scope: Deactivated successfully.
Oct  3 11:38:00 compute-0 systemd[1]: libpod-d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a.scope: Consumed 1.156s CPU time.
Oct  3 11:38:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3925: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:00 compute-0 podman[549723]: 2025-10-03 11:38:00.164365567 +0000 UTC m=+0.041517561 container died d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:38:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b7e55c576d39c7980d5dfd1bd31d98fe3e2e842f7ffc5a715301c808eff6682-merged.mount: Deactivated successfully.
Oct  3 11:38:00 compute-0 podman[549723]: 2025-10-03 11:38:00.249515427 +0000 UTC m=+0.126667391 container remove d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:38:00 compute-0 systemd[1]: libpod-conmon-d1218569aff84eadb401b27c61376fc29e334525ebc821f6a81752473dddb24a.scope: Deactivated successfully.
Oct  3 11:38:00 compute-0 nova_compute[351685]: 2025-10-03 11:38:00.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:38:01 compute-0 podman[549875]: 2025-10-03 11:38:01.15201207 +0000 UTC m=+0.061959606 container create bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:38:01 compute-0 systemd[1]: Started libpod-conmon-bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c.scope.
Oct  3 11:38:01 compute-0 podman[549875]: 2025-10-03 11:38:01.134146849 +0000 UTC m=+0.044094405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:38:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:38:01 compute-0 podman[549875]: 2025-10-03 11:38:01.293181545 +0000 UTC m=+0.203129151 container init bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:38:01 compute-0 podman[549875]: 2025-10-03 11:38:01.310513331 +0000 UTC m=+0.220460877 container start bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:38:01 compute-0 podman[549875]: 2025-10-03 11:38:01.315464409 +0000 UTC m=+0.225412035 container attach bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  3 11:38:01 compute-0 nice_bassi[549890]: 167 167
Oct  3 11:38:01 compute-0 systemd[1]: libpod-bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c.scope: Deactivated successfully.
Oct  3 11:38:01 compute-0 podman[549895]: 2025-10-03 11:38:01.379073948 +0000 UTC m=+0.048354131 container died bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:38:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a9f82cfea34325c95251daaeeb802a5466e6e2df4fc6bf739e2040b32e87d96-merged.mount: Deactivated successfully.
Oct  3 11:38:01 compute-0 openstack_network_exporter[367524]: ERROR   11:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:38:01 compute-0 openstack_network_exporter[367524]: ERROR   11:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:38:01 compute-0 openstack_network_exporter[367524]: ERROR   11:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:38:01 compute-0 openstack_network_exporter[367524]: ERROR   11:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:38:01 compute-0 openstack_network_exporter[367524]: ERROR   11:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
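The exporter noise above is explainable from this same log: openstack_network_exporter probes ovn-northd and the OVS appctl control sockets, but a compute node runs ovn-controller rather than ovn-northd, and the dpif-netdev PMD queries only apply to a userspace datapath, while the VIF plugged earlier reports "datapath_type": "system". A quick sketch to see which control sockets actually exist; the paths are the usual OVS/OVN rundir defaults and may differ on this host:

    import glob

    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",      # absent on compute nodes
                    "/var/run/ovn/ovn-controller.*.ctl",  # expected here
                    "/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/openvswitch/ovs-vswitchd.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "missing")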
Oct  3 11:38:01 compute-0 podman[549895]: 2025-10-03 11:38:01.435055642 +0000 UTC m=+0.104335795 container remove bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bassi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 11:38:01 compute-0 systemd[1]: libpod-conmon-bca04fd52ed52bb03e7889dc4bda9948130681848dda491b0a317c384c09b08c.scope: Deactivated successfully.
Oct  3 11:38:01 compute-0 podman[549916]: 2025-10-03 11:38:01.661565441 +0000 UTC m=+0.068673721 container create 4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:38:01 compute-0 podman[549916]: 2025-10-03 11:38:01.627094687 +0000 UTC m=+0.034203017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:38:01 compute-0 systemd[1]: Started libpod-conmon-4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158.scope.
Oct  3 11:38:01 compute-0 nova_compute[351685]: 2025-10-03 11:38:01.745 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:38:01 compute-0 nova_compute[351685]: 2025-10-03 11:38:01.747 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:38:01 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29db413f988b339a6b13bc5c6959845325247c17efd1fd546282f1d46000812d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29db413f988b339a6b13bc5c6959845325247c17efd1fd546282f1d46000812d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29db413f988b339a6b13bc5c6959845325247c17efd1fd546282f1d46000812d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:38:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29db413f988b339a6b13bc5c6959845325247c17efd1fd546282f1d46000812d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:38:01 compute-0 podman[549916]: 2025-10-03 11:38:01.825096723 +0000 UTC m=+0.232205013 container init 4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:38:01 compute-0 podman[549916]: 2025-10-03 11:38:01.843314436 +0000 UTC m=+0.250422736 container start 4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  3 11:38:01 compute-0 podman[549916]: 2025-10-03 11:38:01.848577385 +0000 UTC m=+0.255685705 container attach 4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:38:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3926: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]: {
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:    "0": [
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:        {
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "devices": [
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "/dev/loop3"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            ],
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_name": "ceph_lv0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_size": "21470642176",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "name": "ceph_lv0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "tags": {
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cluster_name": "ceph",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.crush_device_class": "",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.encrypted": "0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osd_id": "0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.type": "block",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.vdo": "0"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            },
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "type": "block",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "vg_name": "ceph_vg0"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:        }
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:    ],
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:    "1": [
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:        {
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "devices": [
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "/dev/loop4"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            ],
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_name": "ceph_lv1",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_size": "21470642176",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "name": "ceph_lv1",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "tags": {
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cluster_name": "ceph",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.crush_device_class": "",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.encrypted": "0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osd_id": "1",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.type": "block",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.vdo": "0"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            },
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "type": "block",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "vg_name": "ceph_vg1"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:        }
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:    ],
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:    "2": [
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:        {
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "devices": [
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "/dev/loop5"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            ],
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_name": "ceph_lv2",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_size": "21470642176",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "name": "ceph_lv2",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "tags": {
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.cluster_name": "ceph",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.crush_device_class": "",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.encrypted": "0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osd_id": "2",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.type": "block",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:                "ceph.vdo": "0"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            },
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "type": "block",
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:            "vg_name": "ceph_vg2"
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:        }
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]:    ]
Oct  3 11:38:02 compute-0 admiring_khayyam[549931]: }
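The JSON block emitted by admiring_khayyam matches ceph-volume `lvm list --format json` output: keyed by OSD id, one logical volume per OSD, all three 21470642176-byte (about 20 GiB) LVs backed by /dev/loop3-5 and tagged with the same cluster fsid. A sketch that flattens it to one line per OSD, assuming the block has been saved to a file:

    import json

    # Assumed dump, e.g.:
    #   cephadm ceph-volume -- lvm list --format json > lvm-list.json
    lvm_list = json.load(open("lvm-list.json"))
    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"({size_gib:.1f} GiB, osd_fsid {lv['tags']['ceph.osd_fsid']})")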
Oct  3 11:38:02 compute-0 systemd[1]: libpod-4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158.scope: Deactivated successfully.
Oct  3 11:38:02 compute-0 podman[549916]: 2025-10-03 11:38:02.753688743 +0000 UTC m=+1.160797023 container died 4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 11:38:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-29db413f988b339a6b13bc5c6959845325247c17efd1fd546282f1d46000812d-merged.mount: Deactivated successfully.
Oct  3 11:38:02 compute-0 podman[549916]: 2025-10-03 11:38:02.821720714 +0000 UTC m=+1.228829004 container remove 4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  3 11:38:02 compute-0 systemd[1]: libpod-conmon-4ae3e67ef097ab1f457ffd6457cbc9b1744e33af4ba9d061ef8df8fa3cf48158.scope: Deactivated successfully.
Oct  3 11:38:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:03 compute-0 podman[550089]: 2025-10-03 11:38:03.807352313 +0000 UTC m=+0.081145252 container create c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 11:38:03 compute-0 podman[550089]: 2025-10-03 11:38:03.768055223 +0000 UTC m=+0.041848232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:38:03 compute-0 systemd[1]: Started libpod-conmon-c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f.scope.
Oct  3 11:38:03 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:38:03 compute-0 podman[550089]: 2025-10-03 11:38:03.949120016 +0000 UTC m=+0.222913015 container init c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 11:38:03 compute-0 nova_compute[351685]: 2025-10-03 11:38:03.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:38:03 compute-0 podman[550089]: 2025-10-03 11:38:03.964610603 +0000 UTC m=+0.238403512 container start c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  3 11:38:03 compute-0 podman[550089]: 2025-10-03 11:38:03.969568102 +0000 UTC m=+0.243361021 container attach c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:38:03 compute-0 mystifying_dewdney[550105]: 167 167
Oct  3 11:38:03 compute-0 systemd[1]: libpod-c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f.scope: Deactivated successfully.
Oct  3 11:38:03 compute-0 podman[550089]: 2025-10-03 11:38:03.975065628 +0000 UTC m=+0.248858557 container died c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  3 11:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce289e919de0ce390a66afe5a5bee71d975b8c206a45c13a9033c11c58ba3e27-merged.mount: Deactivated successfully.
Oct  3 11:38:04 compute-0 podman[550089]: 2025-10-03 11:38:04.034026608 +0000 UTC m=+0.307819527 container remove c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  3 11:38:04 compute-0 systemd[1]: libpod-conmon-c50b13364b2d89c9564e23e24986154dfb1b4a8c58f3cfa14b09ff183616050f.scope: Deactivated successfully.
Oct  3 11:38:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3927: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:04 compute-0 podman[550128]: 2025-10-03 11:38:04.298815674 +0000 UTC m=+0.077978020 container create 3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:38:04 compute-0 podman[550128]: 2025-10-03 11:38:04.271378564 +0000 UTC m=+0.050540920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:38:04 compute-0 systemd[1]: Started libpod-conmon-3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1.scope.
Oct  3 11:38:04 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948bc2286427512f06d79615dc1082a804a7b499f1cc58a7ef7cd27d6c2e18ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948bc2286427512f06d79615dc1082a804a7b499f1cc58a7ef7cd27d6c2e18ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948bc2286427512f06d79615dc1082a804a7b499f1cc58a7ef7cd27d6c2e18ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:38:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/948bc2286427512f06d79615dc1082a804a7b499f1cc58a7ef7cd27d6c2e18ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:38:04 compute-0 podman[550128]: 2025-10-03 11:38:04.438850672 +0000 UTC m=+0.218013078 container init 3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 11:38:04 compute-0 podman[550128]: 2025-10-03 11:38:04.458391718 +0000 UTC m=+0.237554044 container start 3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  3 11:38:04 compute-0 podman[550128]: 2025-10-03 11:38:04.463869433 +0000 UTC m=+0.243031809 container attach 3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:38:05 compute-0 nova_compute[351685]: 2025-10-03 11:38:05.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:38:05 compute-0 funny_haibt[550144]: {
Oct  3 11:38:05 compute-0 funny_haibt[550144]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "osd_id": 1,
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "type": "bluestore"
Oct  3 11:38:05 compute-0 funny_haibt[550144]:    },
Oct  3 11:38:05 compute-0 funny_haibt[550144]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "osd_id": 2,
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "type": "bluestore"
Oct  3 11:38:05 compute-0 funny_haibt[550144]:    },
Oct  3 11:38:05 compute-0 funny_haibt[550144]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "osd_id": 0,
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:38:05 compute-0 funny_haibt[550144]:        "type": "bluestore"
Oct  3 11:38:05 compute-0 funny_haibt[550144]:    }
Oct  3 11:38:05 compute-0 funny_haibt[550144]: }
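The funny_haibt block reads like ceph-volume `raw list` output: keyed by OSD fsid, mapping each bluestore OSD to its device-mapper node. The two listings should agree, which is easy to spot-check; the dicts below are trimmed copies of the data in the two JSON blocks (the first reduced to osd_id -> osd_fsid):

    lvm_list = {
        "0": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
        "1": "16cef594-0067-4499-9298-5d83edf70190",
        "2": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
    }
    raw_list = {
        "25b10821-47d4-4e0b-9b6d-d16a0463c4d0":
            {"osd_id": 0, "device": "/dev/mapper/ceph_vg0-ceph_lv0", "type": "bluestore"},
        "16cef594-0067-4499-9298-5d83edf70190":
            {"osd_id": 1, "device": "/dev/mapper/ceph_vg1-ceph_lv1", "type": "bluestore"},
        "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0":
            {"osd_id": 2, "device": "/dev/mapper/ceph_vg2-ceph_lv2", "type": "bluestore"},
    }
    for osd_id, fsid in sorted(lvm_list.items()):
        entry = raw_list[fsid]
        assert entry["osd_id"] == int(osd_id) and entry["type"] == "bluestore"
        print(f"osd.{osd_id}: {fsid} -> {entry['device']}")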
Oct  3 11:38:05 compute-0 systemd[1]: libpod-3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1.scope: Deactivated successfully.
Oct  3 11:38:05 compute-0 systemd[1]: libpod-3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1.scope: Consumed 1.100s CPU time.
Oct  3 11:38:05 compute-0 conmon[550144]: conmon 3cfdf86fb0526875015c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1.scope/container/memory.events
Oct  3 11:38:05 compute-0 podman[550128]: 2025-10-03 11:38:05.561790191 +0000 UTC m=+1.340952547 container died 3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  3 11:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-948bc2286427512f06d79615dc1082a804a7b499f1cc58a7ef7cd27d6c2e18ed-merged.mount: Deactivated successfully.
Oct  3 11:38:05 compute-0 podman[550128]: 2025-10-03 11:38:05.677670996 +0000 UTC m=+1.456833312 container remove 3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 11:38:05 compute-0 systemd[1]: libpod-conmon-3cfdf86fb0526875015cad0fc289ed484b13b1a401f6f16804ecf574a8fd1dc1.scope: Deactivated successfully.
Oct  3 11:38:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:38:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:38:05 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:38:05 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:38:05 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f0d9d868-42a6-4d32-8ed0-a87f98622a0d does not exist
Oct  3 11:38:05 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 0c7cbe45-c22c-47fd-b93d-9fcf07b87a42 does not exist
Oct  3 11:38:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3928: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:06 compute-0 nova_compute[351685]: 2025-10-03 11:38:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:38:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:38:06 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:38:06 compute-0 nova_compute[351685]: 2025-10-03 11:38:06.756 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:38:06 compute-0 nova_compute[351685]: 2025-10-03 11:38:06.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:38:06 compute-0 nova_compute[351685]: 2025-10-03 11:38:06.757 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:38:06 compute-0 nova_compute[351685]: 2025-10-03 11:38:06.757 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:38:06 compute-0 nova_compute[351685]: 2025-10-03 11:38:06.757 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:38:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:38:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2670325246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.280 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
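Nova's update_available_resource audit shells out to `ceph df` because the instance disks live on RBD; note that it runs the same command again a few lines below, so each audit costs two roughly half-second monitor round trips. Reproducing the probe by hand, with the cluster-wide totals read from the documented `ceph df --format=json` schema:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    # The pgmap lines in this log show the same 60 GiB raw capacity.
    print(f"total {stats['total_bytes'] / 2**30:.0f} GiB, "
          f"avail {stats['total_avail_bytes'] / 2**30:.0f} GiB")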
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.413 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.414 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.421 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.421 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.422 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.428 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.428 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.921 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.922 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3134MB free_disk=59.863914489746094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.923 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:38:07 compute-0 nova_compute[351685]: 2025-10-03 11:38:07.923 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.000 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.000 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.001 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.001 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.001 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
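
The "Final resource view" can be reconciled against the three allocation lines just above it: used_ram is a 512 MB host reserve (matching the MEMORY_MB "reserved" value reported to Placement a few lines below) plus the instances' 512+128+128 MB. A minimal Python sketch, with every number copied from this log:

    # Sketch: reconciling the "Final resource view" with the per-instance
    # placement allocations logged above.
    allocations = [
        {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},  # b43db93c-...
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # 83fc22ce-...
        {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1},  # 443e486d-...
    ]
    reserved_ram_mb = 512  # MEMORY_MB "reserved" in the inventory below
    used_ram = reserved_ram_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)
    used_disk = sum(a["DISK_GB"] for a in allocations)
    assert (used_ram, used_vcpus, used_disk) == (1280, 3, 4)  # as logged
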
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.095 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:38:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3929: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:38:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200294348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.600 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
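
This CMD line closes the `ceph df` probe started at 11:38:08.095 (and audited by the monitor in between). The same call can be reproduced outside Nova with subprocess; a minimal sketch, assuming the cluster is reachable and the client.openstack keyring is readable, with top-level field names as emitted by recent Ceph releases:

    # Sketch: the disk-stats probe Nova just ran, issued as a standalone
    # command. Command line copied from the log above.
    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"], "of", stats["total_bytes"], "bytes free")
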
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.611 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.626 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
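
Placement treats each resource class's usable capacity as (total - reserved) * allocation_ratio, so the inventory above advertises 32 VCPU, 7167 MB of RAM and 52.2 GB of disk to the scheduler. Worked out in a short sketch (values copied from the log line):

    # Sketch: effective capacity per resource class as Placement computes it,
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
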
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.628 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.628 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
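
The Acquiring / acquired :: waited / "released" :: held triple bracketing _update_available_resource (waited 0.001s at 11:38:07.923, held 0.705s here) is oslo.concurrency's standard lock bookkeeping. A stand-in sketch built on a plain threading.Lock that reproduces the same three lines; illustrative only, not Nova's implementation:

    # Sketch: lock bookkeeping in the style of the lockutils lines above.
    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def tracked_lock(name, owner):
        lock = _locks.setdefault(name, threading.Lock())
        print(f'Acquiring lock "{name}" by "{owner}"')
        t0 = time.monotonic()
        lock.acquire()
        print(f'Lock "{name}" acquired by "{owner}" :: '
              f'waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" "released" by "{owner}" :: '
                  f'held {time.monotonic() - t1:.3f}s')

    with tracked_lock("compute_resources", "_update_available_resource"):
        time.sleep(0.1)  # stands in for the resource-tracker update
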
Oct  3 11:38:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:08 compute-0 nova_compute[351685]: 2025-10-03 11:38:08.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3930: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:10 compute-0 nova_compute[351685]: 2025-10-03 11:38:10.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:10 compute-0 podman[550285]: 2025-10-03 11:38:10.85981621 +0000 UTC m=+0.106118262 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.)
Oct  3 11:38:10 compute-0 podman[550284]: 2025-10-03 11:38:10.86700362 +0000 UTC m=+0.113213949 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:38:10 compute-0 podman[550287]: 2025-10-03 11:38:10.872307219 +0000 UTC m=+0.101454772 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:38:10 compute-0 podman[550305]: 2025-10-03 11:38:10.880539533 +0000 UTC m=+0.097471924 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:38:10 compute-0 podman[550286]: 2025-10-03 11:38:10.889131259 +0000 UTC m=+0.130028258 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  3 11:38:10 compute-0 podman[550288]: 2025-10-03 11:38:10.890969938 +0000 UTC m=+0.117105295 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:38:10 compute-0 podman[550299]: 2025-10-03 11:38:10.908900472 +0000 UTC m=+0.134809391 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  3 11:38:11 compute-0 nova_compute[351685]: 2025-10-03 11:38:11.628 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:38:11 compute-0 nova_compute[351685]: 2025-10-03 11:38:11.629 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:38:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3931: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:13 compute-0 nova_compute[351685]: 2025-10-03 11:38:13.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3932: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:14 compute-0 nova_compute[351685]: 2025-10-03 11:38:14.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:38:15 compute-0 nova_compute[351685]: 2025-10-03 11:38:15.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:15 compute-0 nova_compute[351685]: 2025-10-03 11:38:15.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:38:15 compute-0 nova_compute[351685]: 2025-10-03 11:38:15.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:38:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3933: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:38:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:38:16 compute-0 nova_compute[351685]: 2025-10-03 11:38:16.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:38:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3934: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:18 compute-0 nova_compute[351685]: 2025-10-03 11:38:18.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3935: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:20 compute-0 nova_compute[351685]: 2025-10-03 11:38:20.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:21 compute-0 podman[550420]: 2025-10-03 11:38:21.823912167 +0000 UTC m=+0.076729309 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:38:21 compute-0 podman[550421]: 2025-10-03 11:38:21.831212302 +0000 UTC m=+0.077164395 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, distribution-scope=public, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Oct  3 11:38:21 compute-0 podman[550422]: 2025-10-03 11:38:21.84989131 +0000 UTC m=+0.105359067 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:38:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3936: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:23 compute-0 nova_compute[351685]: 2025-10-03 11:38:23.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3937: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:25 compute-0 nova_compute[351685]: 2025-10-03 11:38:25.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3938: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3939: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:28 compute-0 nova_compute[351685]: 2025-10-03 11:38:28.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:29 compute-0 podman[157165]: time="2025-10-03T11:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:38:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:38:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9605 "" "Go-http-client/1.1"
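
These two access-log lines are the libpod REST API answering over the podman socket that the podman_exporter config above points at (CONTAINER_HOST=unix:///run/podman/podman.sock). A hedged sketch replaying the containers/json query over the Unix socket with a bare HTTP/1.0 request (query string abbreviated from the logged one; assumes the API socket is enabled and that an HTTP/1.0 reply arrives unchunked):

    # Sketch: replaying the logged libpod containers/json query over the
    # podman API socket, without third-party HTTP clients.
    import json
    import socket

    req = b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n\r\n"
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/podman/podman.sock")
        s.sendall(req)
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk
    body = raw.partition(b"\r\n\r\n")[2]  # drop the status line and headers
    print(len(json.loads(body)), "containers")
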
Oct  3 11:38:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3940: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:30 compute-0 nova_compute[351685]: 2025-10-03 11:38:30.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:31 compute-0 openstack_network_exporter[367524]: ERROR   11:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:38:31 compute-0 openstack_network_exporter[367524]: ERROR   11:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:38:31 compute-0 openstack_network_exporter[367524]: ERROR   11:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:38:31 compute-0 openstack_network_exporter[367524]: ERROR   11:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:38:31 compute-0 openstack_network_exporter[367524]: ERROR   11:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
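
These ERROR lines come from appctl-style probes: each needs a <daemon>.<pid>.ctl control socket in the daemon's run directory, and on this compute node ovn-northd does not run at all while the dpif-netdev calls fail because there is no userspace datapath. A sketch of the socket lookup that comes up empty (run directories follow the exporter's volume config logged earlier; the probe itself is illustrative):

    # Sketch: checking for the *.ctl control sockets the exporter's
    # ovs-appctl-style calls depend on.
    import glob

    for rundir, daemon in [("/run/openvswitch", "ovsdb-server"),
                           ("/run/ovn", "ovn-northd")]:
        socks = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "control sockets:", socks or "none found")
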
Oct  3 11:38:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3941: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:33 compute-0 nova_compute[351685]: 2025-10-03 11:38:33.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3942: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:35 compute-0 nova_compute[351685]: 2025-10-03 11:38:35.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3943: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3944: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:38 compute-0 nova_compute[351685]: 2025-10-03 11:38:38.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3945: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:40 compute-0 nova_compute[351685]: 2025-10-03 11:38:40.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:38:41.718 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:38:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:38:41.719 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:38:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:38:41.720 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:38:41 compute-0 podman[550481]: 2025-10-03 11:38:41.878676306 +0000 UTC m=+0.133711906 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, version=9.6)
Oct  3 11:38:41 compute-0 podman[550480]: 2025-10-03 11:38:41.883752199 +0000 UTC m=+0.131544617 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  3 11:38:41 compute-0 podman[550501]: 2025-10-03 11:38:41.884144052 +0000 UTC m=+0.101006529 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  3 11:38:41 compute-0 podman[550482]: 2025-10-03 11:38:41.898830672 +0000 UTC m=+0.132886540 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:38:41 compute-0 podman[550492]: 2025-10-03 11:38:41.918063649 +0000 UTC m=+0.136660691 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct  3 11:38:41 compute-0 podman[550487]: 2025-10-03 11:38:41.927931164 +0000 UTC m=+0.148448298 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd)
Oct  3 11:38:41 compute-0 podman[550494]: 2025-10-03 11:38:41.938698659 +0000 UTC m=+0.163888323 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:38:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3946: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:43 compute-0 nova_compute[351685]: 2025-10-03 11:38:43.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3947: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:45 compute-0 nova_compute[351685]: 2025-10-03 11:38:45.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3948: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:38:46
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'backups']
Oct  3 11:38:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:38:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:38:47 compute-0 nova_compute[351685]: 2025-10-03 11:38:47.724 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:38:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3949: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:48 compute-0 nova_compute[351685]: 2025-10-03 11:38:48.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3950: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:50 compute-0 nova_compute[351685]: 2025-10-03 11:38:50.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3951: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:52 compute-0 podman[550617]: 2025-10-03 11:38:52.558318056 +0000 UTC m=+0.073639121 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct  3 11:38:52 compute-0 podman[550619]: 2025-10-03 11:38:52.599566258 +0000 UTC m=+0.091940708 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:38:52 compute-0 podman[550618]: 2025-10-03 11:38:52.604657102 +0000 UTC m=+0.098272721 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, com.redhat.component=ubi9-container, name=ubi9)
Oct  3 11:38:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:53 compute-0 nova_compute[351685]: 2025-10-03 11:38:53.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:38:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627509961' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:38:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:38:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627509961' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:38:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3952: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:55 compute-0 nova_compute[351685]: 2025-10-03 11:38:55.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3953: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020731842563651757 of space, bias 1.0, pg target 0.6219552769095527 quantized to 32 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:38:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:38:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3954: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:38:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:38:58 compute-0 nova_compute[351685]: 2025-10-03 11:38:58.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:38:58 compute-0 nova_compute[351685]: 2025-10-03 11:38:58.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  3 11:38:59 compute-0 nova_compute[351685]: 2025-10-03 11:38:58.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:38:59 compute-0 nova_compute[351685]: 2025-10-03 11:38:59.504 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  3 11:38:59 compute-0 nova_compute[351685]: 2025-10-03 11:38:59.505 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  3 11:38:59 compute-0 nova_compute[351685]: 2025-10-03 11:38:59.505 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  3 11:38:59 compute-0 podman[157165]: time="2025-10-03T11:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:38:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:38:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9617 "" "Go-http-client/1.1"
Oct  3 11:39:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3955: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:00 compute-0 nova_compute[351685]: 2025-10-03 11:39:00.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:01 compute-0 nova_compute[351685]: 2025-10-03 11:39:01.139 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updating instance_info_cache with network_info: [{"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  3 11:39:01 compute-0 nova_compute[351685]: 2025-10-03 11:39:01.154 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-443e486d-1bf2-4550-a4ae-32f0f8f4af19" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  3 11:39:01 compute-0 nova_compute[351685]: 2025-10-03 11:39:01.155 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  3 11:39:01 compute-0 openstack_network_exporter[367524]: ERROR   11:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:39:01 compute-0 openstack_network_exporter[367524]: ERROR   11:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:39:01 compute-0 openstack_network_exporter[367524]: ERROR   11:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:39:01 compute-0 openstack_network_exporter[367524]: ERROR   11:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:39:01 compute-0 openstack_network_exporter[367524]: ERROR   11:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:39:01 compute-0 nova_compute[351685]: 2025-10-03 11:39:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:39:01 compute-0 nova_compute[351685]: 2025-10-03 11:39:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:39:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3956: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:04 compute-0 nova_compute[351685]: 2025-10-03 11:39:04.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3957: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:05 compute-0 nova_compute[351685]: 2025-10-03 11:39:05.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3958: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:06 compute-0 nova_compute[351685]: 2025-10-03 11:39:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:39:06 compute-0 nova_compute[351685]: 2025-10-03 11:39:06.754 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:39:06 compute-0 nova_compute[351685]: 2025-10-03 11:39:06.754 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:39:06 compute-0 nova_compute[351685]: 2025-10-03 11:39:06.755 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:39:06 compute-0 nova_compute[351685]: 2025-10-03 11:39:06.755 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  3 11:39:06 compute-0 nova_compute[351685]: 2025-10-03 11:39:06.755 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:39:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:39:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:39:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:39:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:39:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:39:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:39:07 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev bfd6a4b6-9d19-4e32-b537-b63f8c48aec9 does not exist
Oct  3 11:39:07 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 083f5786-c1dd-47ab-9b3e-2d5abb2f618a does not exist
Oct  3 11:39:07 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 354840db-30a9-4b85-ab08-6dd6ee2171a1 does not exist
Oct  3 11:39:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:39:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:39:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:39:07 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:39:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:39:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:39:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:39:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539201069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.275 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:39:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:39:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:39:07 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.365 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.365 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.371 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.376 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.377 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.772 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.773 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3176MB free_disk=59.863914489746094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.774 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.856 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.857 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.857 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.858 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.858 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.872 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing inventories for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.891 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating ProviderTree inventory for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.892 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Updating inventory in ProviderTree for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.909 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing aggregate associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  3 11:39:07 compute-0 nova_compute[351685]: 2025-10-03 11:39:07.934 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Refreshing trait associations for resource provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a, traits: COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_FMA3,HW_CPU_X86_F16C,HW_CPU_X86_SVM,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE42,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  3 11:39:07 compute-0 podman[550967]: 2025-10-03 11:39:07.980873207 +0000 UTC m=+0.114475350 container create d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:39:07 compute-0 podman[550967]: 2025-10-03 11:39:07.897141303 +0000 UTC m=+0.030743416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:39:08 compute-0 nova_compute[351685]: 2025-10-03 11:39:08.016 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:39:08 compute-0 systemd[1]: Started libpod-conmon-d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c.scope.
Oct  3 11:39:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:39:08 compute-0 podman[550967]: 2025-10-03 11:39:08.121414911 +0000 UTC m=+0.255017034 container init d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  3 11:39:08 compute-0 podman[550967]: 2025-10-03 11:39:08.13541929 +0000 UTC m=+0.269021393 container start d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  3 11:39:08 compute-0 podman[550967]: 2025-10-03 11:39:08.14042595 +0000 UTC m=+0.274028053 container attach d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:39:08 compute-0 focused_gould[550984]: 167 167
Oct  3 11:39:08 compute-0 systemd[1]: libpod-d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c.scope: Deactivated successfully.
Oct  3 11:39:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3959: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:08 compute-0 podman[550989]: 2025-10-03 11:39:08.205430804 +0000 UTC m=+0.039672773 container died d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  3 11:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ac63b33c2ac1b5cd84d6a511be09d65405651736e96daa2c16f04bfd587360-merged.mount: Deactivated successfully.
Oct  3 11:39:08 compute-0 podman[550989]: 2025-10-03 11:39:08.282144943 +0000 UTC m=+0.116386892 container remove d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:39:08 compute-0 systemd[1]: libpod-conmon-d12318e90e4b985c66b81c30c99178f023eafeeeee45b5010acd0783749c641c.scope: Deactivated successfully.
Oct  3 11:39:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:39:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479055062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:39:08 compute-0 podman[551027]: 2025-10-03 11:39:08.538785658 +0000 UTC m=+0.068133445 container create a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  3 11:39:08 compute-0 nova_compute[351685]: 2025-10-03 11:39:08.546 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:39:08 compute-0 nova_compute[351685]: 2025-10-03 11:39:08.559 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:39:08 compute-0 nova_compute[351685]: 2025-10-03 11:39:08.579 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:39:08 compute-0 nova_compute[351685]: 2025-10-03 11:39:08.581 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:39:08 compute-0 nova_compute[351685]: 2025-10-03 11:39:08.582 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:39:08 compute-0 podman[551027]: 2025-10-03 11:39:08.506602966 +0000 UTC m=+0.035950843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:39:08 compute-0 systemd[1]: Started libpod-conmon-a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444.scope.
Oct  3 11:39:08 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969a70c60676cb5fe6778be4d253126cf726a112c3c627f503abc4d777c79e11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969a70c60676cb5fe6778be4d253126cf726a112c3c627f503abc4d777c79e11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969a70c60676cb5fe6778be4d253126cf726a112c3c627f503abc4d777c79e11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969a70c60676cb5fe6778be4d253126cf726a112c3c627f503abc4d777c79e11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969a70c60676cb5fe6778be4d253126cf726a112c3c627f503abc4d777c79e11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:08 compute-0 podman[551027]: 2025-10-03 11:39:08.682397431 +0000 UTC m=+0.211745308 container init a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  3 11:39:08 compute-0 podman[551027]: 2025-10-03 11:39:08.699501608 +0000 UTC m=+0.228849405 container start a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:39:08 compute-0 podman[551027]: 2025-10-03 11:39:08.704890732 +0000 UTC m=+0.234238539 container attach a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:39:09 compute-0 nova_compute[351685]: 2025-10-03 11:39:09.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:09 compute-0 determined_kowalevski[551044]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:39:09 compute-0 determined_kowalevski[551044]: --> relative data size: 1.0
Oct  3 11:39:09 compute-0 determined_kowalevski[551044]: --> All data devices are unavailable
Oct  3 11:39:10 compute-0 systemd[1]: libpod-a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444.scope: Deactivated successfully.
Oct  3 11:39:10 compute-0 systemd[1]: libpod-a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444.scope: Consumed 1.265s CPU time.
Oct  3 11:39:10 compute-0 podman[551027]: 2025-10-03 11:39:10.034584997 +0000 UTC m=+1.563932824 container died a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-969a70c60676cb5fe6778be4d253126cf726a112c3c627f503abc4d777c79e11-merged.mount: Deactivated successfully.
Oct  3 11:39:10 compute-0 podman[551027]: 2025-10-03 11:39:10.130840392 +0000 UTC m=+1.660188179 container remove a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_kowalevski, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:39:10 compute-0 systemd[1]: libpod-conmon-a7e7963b898fe57cb3060964b660d68fe0334662fbe960a6a25ce253a5c9d444.scope: Deactivated successfully.
Oct  3 11:39:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3960: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:10 compute-0 nova_compute[351685]: 2025-10-03 11:39:10.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:11 compute-0 podman[551225]: 2025-10-03 11:39:11.073780992 +0000 UTC m=+0.051561433 container create 5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  3 11:39:11 compute-0 systemd[1]: Started libpod-conmon-5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56.scope.
Oct  3 11:39:11 compute-0 podman[551225]: 2025-10-03 11:39:11.052578673 +0000 UTC m=+0.030359134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:39:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:39:11 compute-0 podman[551225]: 2025-10-03 11:39:11.219350018 +0000 UTC m=+0.197130479 container init 5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  3 11:39:11 compute-0 podman[551225]: 2025-10-03 11:39:11.232219221 +0000 UTC m=+0.209999672 container start 5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:39:11 compute-0 podman[551225]: 2025-10-03 11:39:11.237170909 +0000 UTC m=+0.214951390 container attach 5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:39:11 compute-0 pedantic_galileo[551241]: 167 167
Oct  3 11:39:11 compute-0 systemd[1]: libpod-5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56.scope: Deactivated successfully.
Oct  3 11:39:11 compute-0 podman[551225]: 2025-10-03 11:39:11.243230034 +0000 UTC m=+0.221010485 container died 5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  3 11:39:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f868445390cd79da92407b20de14eb55659a9aa0b3b2d25a978c2f240f494df-merged.mount: Deactivated successfully.
Oct  3 11:39:11 compute-0 podman[551225]: 2025-10-03 11:39:11.310905943 +0000 UTC m=+0.288686384 container remove 5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:39:11 compute-0 systemd[1]: libpod-conmon-5c03af53da6c8a671db6dc6782f5483f323a7428fb8cad2aef01920780da8d56.scope: Deactivated successfully.
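
The pedantic_galileo container above lives for well under a second (init at m=+0.197, died at m=+0.243, removed at m=+0.288) and prints only "167 167": this looks like cephadm probing the ceph uid/gid inside the image rather than anything failing. A minimal sketch of such a probe, assuming podman is on PATH; the stat invocation is illustrative, not necessarily cephadm's exact command:

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    # Run a throwaway container that prints the uid/gid owning /var/lib/ceph,
    # mirroring the "167 167" seen on pedantic_galileo's stdout above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = (int(x) for x in out.split())
    print(uid, gid)  # Ceph images own /var/lib/ceph as 167:167
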
Oct  3 11:39:11 compute-0 podman[551264]: 2025-10-03 11:39:11.527002119 +0000 UTC m=+0.064793608 container create eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:39:11 compute-0 podman[551264]: 2025-10-03 11:39:11.494174246 +0000 UTC m=+0.031965815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:39:11 compute-0 systemd[1]: Started libpod-conmon-eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb.scope.
Oct  3 11:39:11 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa5eed7a95f03d82bddde9ee8fb5bdca214e551233b441e5e36ebe7ccd1ad4ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa5eed7a95f03d82bddde9ee8fb5bdca214e551233b441e5e36ebe7ccd1ad4ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa5eed7a95f03d82bddde9ee8fb5bdca214e551233b441e5e36ebe7ccd1ad4ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa5eed7a95f03d82bddde9ee8fb5bdca214e551233b441e5e36ebe7ccd1ad4ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
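
The "supports timestamps until 2038" kernel messages here (and again further down) are informational: they are emitted when an xfs filesystem formatted without big timestamps (bigtime) is remounted, as happens for each of these container overlay mounts. A minimal check, assuming an xfsprogs release new enough to report the bigtime flag and using /var/lib/containers as a stand-in for the affected mount:

    import subprocess

    # xfs_info prints "bigtime=1" when the filesystem supports post-2038
    # timestamps; "/var/lib/containers" is a stand-in mount point here.
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("post-2038 timestamps supported:", "bigtime=1" in info)
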
Oct  3 11:39:11 compute-0 podman[551264]: 2025-10-03 11:39:11.667574833 +0000 UTC m=+0.205366372 container init eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:39:11 compute-0 podman[551264]: 2025-10-03 11:39:11.676949084 +0000 UTC m=+0.214740563 container start eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  3 11:39:11 compute-0 podman[551264]: 2025-10-03 11:39:11.681353225 +0000 UTC m=+0.219144734 container attach eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:39:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3961: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]: {
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:    "0": [
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:        {
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "devices": [
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "/dev/loop3"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            ],
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_name": "ceph_lv0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_size": "21470642176",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "name": "ceph_lv0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "tags": {
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cluster_name": "ceph",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.crush_device_class": "",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.encrypted": "0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osd_id": "0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.type": "block",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.vdo": "0"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            },
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "type": "block",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "vg_name": "ceph_vg0"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:        }
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:    ],
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:    "1": [
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:        {
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "devices": [
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "/dev/loop4"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            ],
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_name": "ceph_lv1",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_size": "21470642176",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "name": "ceph_lv1",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "tags": {
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cluster_name": "ceph",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.crush_device_class": "",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.encrypted": "0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osd_id": "1",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.type": "block",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.vdo": "0"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            },
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "type": "block",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "vg_name": "ceph_vg1"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:        }
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:    ],
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:    "2": [
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:        {
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "devices": [
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "/dev/loop5"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            ],
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_name": "ceph_lv2",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_size": "21470642176",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "name": "ceph_lv2",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "tags": {
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.cluster_name": "ceph",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.crush_device_class": "",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.encrypted": "0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osd_id": "2",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.type": "block",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:                "ceph.vdo": "0"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            },
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "type": "block",
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:            "vg_name": "ceph_vg2"
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:        }
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]:    ]
Oct  3 11:39:12 compute-0 wonderful_tesla[551281]: }
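
The JSON that wonderful_tesla just emitted is keyed by OSD id and carries LVM metadata plus ceph.* tags, consistent with "ceph-volume lvm list --format json" run as part of cephadm's device scan. A minimal sketch for extracting the useful fields, assuming the output above was captured to a hypothetical lvm_list.json:

    import json

    # Keys are OSD ids ("0", "1", "2"); values are lists of LV records
    # carrying the ceph.* tags shown in the log output above.
    with open("lvm_list.json") as f:   # hypothetical capture of the output
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")
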
Oct  3 11:39:12 compute-0 systemd[1]: libpod-eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb.scope: Deactivated successfully.
Oct  3 11:39:12 compute-0 podman[551264]: 2025-10-03 11:39:12.532196613 +0000 UTC m=+1.069988102 container died eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  3 11:39:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa5eed7a95f03d82bddde9ee8fb5bdca214e551233b441e5e36ebe7ccd1ad4ec-merged.mount: Deactivated successfully.
Oct  3 11:39:12 compute-0 nova_compute[351685]: 2025-10-03 11:39:12.583 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:39:12 compute-0 nova_compute[351685]: 2025-10-03 11:39:12.584 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:39:12 compute-0 podman[551264]: 2025-10-03 11:39:12.617789247 +0000 UTC m=+1.155580726 container remove eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tesla, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  3 11:39:12 compute-0 systemd[1]: libpod-conmon-eba1575a10466ef90751abb90e7ccf6b05d1351507370aebb4d4c9d4a5eb2dbb.scope: Deactivated successfully.
Oct  3 11:39:12 compute-0 podman[551291]: 2025-10-03 11:39:12.690947172 +0000 UTC m=+0.121883518 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:39:12 compute-0 podman[551306]: 2025-10-03 11:39:12.712193643 +0000 UTC m=+0.110632287 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  3 11:39:12 compute-0 podman[551303]: 2025-10-03 11:39:12.723496305 +0000 UTC m=+0.148368906 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image)
Oct  3 11:39:12 compute-0 podman[551301]: 2025-10-03 11:39:12.72614143 +0000 UTC m=+0.152960253 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  3 11:39:12 compute-0 podman[551299]: 2025-10-03 11:39:12.729167907 +0000 UTC m=+0.130779472 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  3 11:39:12 compute-0 podman[551304]: 2025-10-03 11:39:12.732726861 +0000 UTC m=+0.145646629 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  3 11:39:12 compute-0 podman[551297]: 2025-10-03 11:39:12.735114067 +0000 UTC m=+0.166168127 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc.)
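
The six health_status events above are podman executing each container's configured healthcheck (the 'test' entry under 'healthcheck' in config_data) on its interval; health_status=healthy with health_failing_streak=0 means the probe passed. To surface only failing containers, a minimal sketch, assuming a podman release whose ps command supports the health filter:

    import subprocess

    # List containers whose most recent healthcheck failed;
    # an empty list means every checked container is healthy.
    unhealthy = subprocess.run(
        ["podman", "ps", "--filter", "health=unhealthy",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    print(unhealthy or "all containers healthy")
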
Oct  3 11:39:13 compute-0 podman[551573]: 2025-10-03 11:39:13.628608383 +0000 UTC m=+0.120160492 container create bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:39:13 compute-0 podman[551573]: 2025-10-03 11:39:13.554016762 +0000 UTC m=+0.045568911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:39:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:13 compute-0 systemd[1]: Started libpod-conmon-bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856.scope.
Oct  3 11:39:13 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:39:13 compute-0 podman[551573]: 2025-10-03 11:39:13.891947463 +0000 UTC m=+0.383499672 container init bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:39:13 compute-0 podman[551573]: 2025-10-03 11:39:13.905499827 +0000 UTC m=+0.397051926 container start bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:39:13 compute-0 laughing_northcutt[551589]: 167 167
Oct  3 11:39:13 compute-0 systemd[1]: libpod-bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856.scope: Deactivated successfully.
Oct  3 11:39:13 compute-0 podman[551573]: 2025-10-03 11:39:13.987484585 +0000 UTC m=+0.479036734 container attach bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  3 11:39:13 compute-0 podman[551573]: 2025-10-03 11:39:13.988505647 +0000 UTC m=+0.480057756 container died bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:39:14 compute-0 nova_compute[351685]: 2025-10-03 11:39:14.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05e621283749bf84c84dd63e07b38f48aadf4e94793f2168cd1c90b32e52ad4-merged.mount: Deactivated successfully.
Oct  3 11:39:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3962: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:14 compute-0 podman[551573]: 2025-10-03 11:39:14.272786179 +0000 UTC m=+0.764338308 container remove bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:39:14 compute-0 systemd[1]: libpod-conmon-bbbc730d47bd322e3e0362747fba36a089269f7818ea17b7dc49cb71ca40e856.scope: Deactivated successfully.
Oct  3 11:39:14 compute-0 podman[551613]: 2025-10-03 11:39:14.558082332 +0000 UTC m=+0.087616339 container create 7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  3 11:39:14 compute-0 podman[551613]: 2025-10-03 11:39:14.520104864 +0000 UTC m=+0.049638921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:39:14 compute-0 systemd[1]: Started libpod-conmon-7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487.scope.
Oct  3 11:39:14 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099d4d8d9a6667e88efdaf446b5d19b63d3c8aabac9e97e9c7ac66e2fd6a450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099d4d8d9a6667e88efdaf446b5d19b63d3c8aabac9e97e9c7ac66e2fd6a450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099d4d8d9a6667e88efdaf446b5d19b63d3c8aabac9e97e9c7ac66e2fd6a450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9099d4d8d9a6667e88efdaf446b5d19b63d3c8aabac9e97e9c7ac66e2fd6a450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:39:14 compute-0 podman[551613]: 2025-10-03 11:39:14.731222531 +0000 UTC m=+0.260756528 container init 7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 11:39:14 compute-0 podman[551613]: 2025-10-03 11:39:14.755875281 +0000 UTC m=+0.285409248 container start 7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  3 11:39:14 compute-0 podman[551613]: 2025-10-03 11:39:14.761073307 +0000 UTC m=+0.290607294 container attach 7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:39:15 compute-0 nova_compute[351685]: 2025-10-03 11:39:15.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:15 compute-0 nova_compute[351685]: 2025-10-03 11:39:15.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:39:15 compute-0 nova_compute[351685]: 2025-10-03 11:39:15.731 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]: {
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "osd_id": 1,
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "type": "bluestore"
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:    },
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "osd_id": 2,
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "type": "bluestore"
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:    },
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "osd_id": 0,
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:        "type": "bluestore"
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]:    }
Oct  3 11:39:15 compute-0 adoring_dhawan[551628]: }
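
adoring_dhawan's output maps each OSD fsid to {ceph_fsid, device, osd_id, type=bluestore}, consistent with "ceph-volume raw list". Cross-checking it against the per-OSD LVM listing captured earlier is a quick sanity test; a minimal sketch, assuming both outputs were saved to the hypothetical files below:

    import json

    # raw_list.json: osd_uuid -> {ceph_fsid, device, osd_id, type}
    # lvm_list.json: osd_id -> [LV records with ceph.* tags]
    with open("raw_list.json") as f:
        raw = json.load(f)
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in lvm.values() for lv in lvs}
    for osd_uuid, rec in raw.items():
        ok = osd_uuid in lvm_fsids
        print(f"osd.{rec['osd_id']} ({rec['type']}) on {rec['device']}: "
              f"{'matches lvm tags' if ok else 'MISSING from lvm list'}")
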
Oct  3 11:39:15 compute-0 systemd[1]: libpod-7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487.scope: Deactivated successfully.
Oct  3 11:39:15 compute-0 systemd[1]: libpod-7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487.scope: Consumed 1.071s CPU time.
Oct  3 11:39:15 compute-0 podman[551613]: 2025-10-03 11:39:15.834507541 +0000 UTC m=+1.364041518 container died 7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  3 11:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9099d4d8d9a6667e88efdaf446b5d19b63d3c8aabac9e97e9c7ac66e2fd6a450-merged.mount: Deactivated successfully.
Oct  3 11:39:15 compute-0 podman[551613]: 2025-10-03 11:39:15.897463758 +0000 UTC m=+1.426997725 container remove 7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dhawan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:39:15 compute-0 systemd[1]: libpod-conmon-7c2880791b12a275a0669450983303911bb54079d29071e5773ba4d5f5200487.scope: Deactivated successfully.
Oct  3 11:39:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:39:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:39:15 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:39:15 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:39:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cb4b57cb-c363-4acc-b749-e936b7d9a672 does not exist
Oct  3 11:39:15 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 16220841-e4d3-4290-a5c9-d7e11da51511 does not exist
Oct  3 11:39:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3963: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:39:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:39:16 compute-0 nova_compute[351685]: 2025-10-03 11:39:16.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:39:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:39:16 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:39:17 compute-0 nova_compute[351685]: 2025-10-03 11:39:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:39:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3964: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:19 compute-0 nova_compute[351685]: 2025-10-03 11:39:19.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3965: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:20 compute-0 nova_compute[351685]: 2025-10-03 11:39:20.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3966: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:22 compute-0 podman[551724]: 2025-10-03 11:39:22.823864037 +0000 UTC m=+0.078403244 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:39:22 compute-0 podman[551725]: 2025-10-03 11:39:22.869979865 +0000 UTC m=+0.111491984 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  3 11:39:22 compute-0 podman[551726]: 2025-10-03 11:39:22.882457195 +0000 UTC m=+0.122564629 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
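The three health_status events above come from the edpm-managed telemetry containers: each one mounts /var/lib/openstack/healthchecks/<service> at /openstack read-only and runs /openstack/healthcheck <service> as its healthcheck test, and all three report healthy with a failing streak of 0. A minimal sketch of reading the same status from the host follows; the container names are taken from the log and the JSON layout is standard `podman inspect` output, but the script itself is illustrative, not part of edpm_ansible.

    #!/usr/bin/env python3
    # Sketch: report podman healthcheck status for the telemetry
    # containers seen in the log above.
    import json
    import subprocess

    CONTAINERS = ["podman_exporter", "kepler", "ceilometer_agent_ipmi"]

    def health_status(name: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # Containers without a configured healthcheck have no Health key.
        return (state.get("Health") or {}).get("Status", "no healthcheck")

    if __name__ == "__main__":
        for name in CONTAINERS:
            print(f"{name}: {health_status(name)}")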
Oct  3 11:39:23 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:24 compute-0 nova_compute[351685]: 2025-10-03 11:39:24.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3967: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:25 compute-0 nova_compute[351685]: 2025-10-03 11:39:25.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3968: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3969: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
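The recurring ceph-mgr pgmap debug lines carry the cluster's placement-group state and usage; throughout this window all 321 PGs are active+clean on a 60 GiB cluster with 298 MiB of data. A small parsing sketch, with a regex fitted to this log's exact pgmap format (it may need adjusting for other Ceph releases):

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: "
        r"(?P<states>[^;]+); (?P<data>\S+ \S+) data, "
        r"(?P<used>\S+ \S+) used, (?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("log_channel(cluster) log [DBG] : pgmap v3967: 321 pgs: "
            "321 active+clean; 298 MiB data, 431 MiB used, "
            "60 GiB / 60 GiB avail")

    m = PGMAP_RE.search(line)
    if m:
        print(m.groupdict())
        # {'version': '3967', 'pgs': '321', 'states': '321 active+clean', ...}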
Oct  3 11:39:29 compute-0 nova_compute[351685]: 2025-10-03 11:39:29.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:29 compute-0 podman[157165]: time="2025-10-03T11:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:39:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:39:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9603 "" "Go-http-client/1.1"
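The two GET requests above are libpod REST calls served over the unix socket /run/podman/podman.sock, the same socket the podman_exporter container mounts. Python's stdlib HTTP client has no native unix-socket support, so the sketch below subclasses HTTPConnection; the endpoint path is copied verbatim from the log, and the whole thing is an illustration rather than how the exporter itself (written in Go) does it.

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(f"{len(containers)} containers")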
Oct  3 11:39:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3970: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:30 compute-0 nova_compute[351685]: 2025-10-03 11:39:30.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:31 compute-0 openstack_network_exporter[367524]: ERROR   11:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:39:31 compute-0 openstack_network_exporter[367524]: ERROR   11:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:39:31 compute-0 openstack_network_exporter[367524]: ERROR   11:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:39:31 compute-0 openstack_network_exporter[367524]: ERROR   11:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:39:31 compute-0 openstack_network_exporter[367524]: ERROR   11:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
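These exporter errors mean it found no *.ctl control sockets to talk to: ovs-vswitchd and ovsdb-server normally create sockets named like ovs-vswitchd.<pid>.ctl under the OVS rundir, and ovn-northd would not run on a compute node at all, so the two northd errors are expected here. A quick check from the host, assuming the usual /var/run/openvswitch rundir (packaging may place it elsewhere):

    from pathlib import Path

    RUN_DIR = Path("/var/run/openvswitch")  # assumed default rundir
    sockets = sorted(RUN_DIR.glob("*.ctl")) if RUN_DIR.is_dir() else []

    if sockets:
        for ctl in sockets:
            print("control socket:", ctl)
    else:
        print(f"no control sockets under {RUN_DIR} -- "
              "matches the exporter's complaint")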
Oct  3 11:39:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3971: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:34 compute-0 nova_compute[351685]: 2025-10-03 11:39:34.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3972: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:35 compute-0 nova_compute[351685]: 2025-10-03 11:39:35.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3973: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3974: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:39 compute-0 nova_compute[351685]: 2025-10-03 11:39:39.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3975: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:40 compute-0 nova_compute[351685]: 2025-10-03 11:39:40.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.911 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.912 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
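With only one worker thread, as the two manager lines above report, every pollster in the source queues up on the same executor and the cycle runs serially. A toy reproduction of that sizing check and submission pattern (illustrative only, not ceilometer's actual code):

    import concurrent.futures
    import time

    POLLSTERS = ["network.outgoing.packets.drop",
                 "network.outgoing.packets.error",
                 "disk.device.capacity"]
    THREADS = 1  # as reported in the log

    def poll(name: str) -> str:
        time.sleep(0.1)  # stand-in for libvirt stats collection
        return name

    with concurrent.futures.ThreadPoolExecutor(max_workers=THREADS) as pool:
        if len(POLLSTERS) > THREADS:
            print("more pollsters than worker threads; expect a longer cycle")
        for name in pool.map(poll, POLLSTERS):
            print("polled", name)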
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.912 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.914 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b105f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.920 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '443e486d-1bf2-4550-a4ae-32f0f8f4af19', 'name': 'te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000010', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.925 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.929 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'name': 'te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz', 'flavor': {'id': 'b93eb926-1d95-406e-aec3-a907be067084', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b9c8e0cc-ecf1-4fa8-92ee-328b108123cd'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ebbd19d68501417398b05a6a4b7c22de', 'user_id': '8990c210ba8740dc9714739f27702391', 'hostId': '68f1a860c9647e69f481ba6f1cfa913085c986ba3b27987b76a63364', 'status': 'active', 'metadata': {'metering.server_group': '0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
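Each "instance data:" line above is the metadata dict that compute discovery caches for the cycle; the pollsters key off a handful of its fields. A sketch pulling those fields from one such dict (trimmed from the first discovery line in the log):

    instance = {
        "id": "443e486d-1bf2-4550-a4ae-32f0f8f4af19",
        "name": "te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg",
        "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1},
        "OS-EXT-SRV-ATTR:instance_name": "instance-00000010",
        "OS-EXT-STS:vm_state": "running",
    }

    # Only running instances are worth polling for libvirt stats.
    if instance["OS-EXT-STS:vm_state"] == "running":
        print(f'{instance["id"]} -> libvirt domain '
              f'{instance["OS-EXT-SRV-ATTR:instance_name"]} '
              f'({instance["flavor"]["name"]}, '
              f'{instance["flavor"]["disk"]} GiB disk)')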
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.929 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.930 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:39:40.930148) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.936 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.940 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.944 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.945 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
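The block just completed shows the per-pollster sequence that the rest of this log repeats for every meter: run discovery, check whether the pollster needs coordination (none is configured here), record a heartbeat, then emit one DEBUG sample line per resource. A condensed sketch of that loop, with illustrative names (the real manager lives in ceilometer/polling/manager.py):

    from datetime import datetime, timezone

    def run_pollster(meter, discover, get_samples, heartbeats):
        resources = discover()            # "Executing discovery process ..."
        # no coordination group configured -> no hashring check needed
        heartbeats[meter] = datetime.now(timezone.utc)  # heartbeat update
        for res in resources:
            for sample in get_samples(res):   # one DEBUG line per sample
                print(f"{res}/{meter} volume: {sample}")
        print(f"Finished polling pollster {meter}")

    heartbeats = {}
    run_pollster(
        "network.outgoing.packets.drop",
        discover=lambda: ["443e486d-1bf2-4550-a4ae-32f0f8f4af19"],
        get_samples=lambda res: [0],
        heartbeats=heartbeats,
    )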
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.945 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.946 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.946 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.946 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.946 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:39:40.946013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.948 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:39:40.948773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.962 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.962 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.980 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.981 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.994 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.995 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
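The repeated 1073741824-byte capacity sample is exactly the 1 GiB root disk from the m1.nano/m1.small flavors discovered above; the smaller ~498 KiB device is presumably a config drive (an assumption, since the log does not name the devices). The conversion:

    # 1 GiB = 2**30 bytes; the smaller devices are ~498 and ~474 KiB.
    for volume in (1073741824, 509952, 485376):
        print(f"{volume} B = {volume / 2**30:.6f} GiB ({volume / 1024:.0f} KiB)")
    # 1073741824 B = 1.000000 GiB
    # 509952 B    = 0.000475 GiB (498 KiB)
    # 485376 B    = 0.000452 GiB (474 KiB)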
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.996 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:40.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:39:40.996491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.021 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 30247424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.021 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.068 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.069 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.101 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 31861248 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.101 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.102 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.103 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.103 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 2094347087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.103 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.latency volume: 184046568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.104 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.104 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:39:41.103149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.105 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.105 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 2658882306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.105 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.latency volume: 170448087 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.106 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
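The read.latency volumes above are cumulative counters (total nanoseconds of read time, per libvirt's block stats), not averages, so a single sample says little on its own; a per-interval figure needs two polls. A sketch of that derivation, with illustrative values rather than consecutive samples from this log:

    NS_PER_S = 1e9

    def read_latency_rate(prev_ns: int, curr_ns: int, interval_s: float) -> float:
        """Seconds of read time accumulated per second of wall clock."""
        return (curr_ns - prev_ns) / NS_PER_S / interval_s

    # e.g. a 30 s polling interval between two cumulative samples
    print(read_latency_rate(prev_ns=2094347087, curr_ns=2094500000,
                            interval_s=30.0))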
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.106 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.107 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:39:41.107089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.107 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 1096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.107 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.108 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.109 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.109 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.109 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 1164 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.110 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.111 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.111 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.112 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:39:41.111550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.112 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.113 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.113 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.113 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.114 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.114 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.114 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.115 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.115 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.115 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.116 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.116 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.117 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:39:41.115158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.117 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.118 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.119 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.120 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:39:41.119869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.120 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.121 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.122 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.122 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.123 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.123 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:39:41.123760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.154 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.184 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.212 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.213 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
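power.state came back as 1 for all three instances. The libvirt-based compute pollster reports the hypervisor's numeric domain state here; assuming standard libvirt virDomainState codes, 1 means the guest is running.

    # Assumption: power.state carries libvirt virDomainState codes.
    LIBVIRT_POWER_STATE = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }

    print(LIBVIRT_POWER_STATE.get(1, 'unknown'))  # -> running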
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.213 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.213 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.213 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.213 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.213 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.213 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 10137118849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.214 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.214 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.215 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.215 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.215 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 11038555047 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.216 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.216 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
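The disk.device.write.latency volumes (10137118849 for instance 443e486d, for example) are cumulative nanosecond counters of time spent in writes, so a single reading is not a latency. Dividing this counter's growth by the growth of disk.device.write.requests between two polls gives the average per-request write latency; the earlier pair of values in the sketch below is hypothetical, the later pair comes from this cycle.

    # Earlier poll (hypothetical) vs. the values logged in this cycle
    # (10137118849 ns total write time, 330 write requests).
    prev_latency_ns, prev_requests = 10_000_000_000, 320
    cur_latency_ns, cur_requests = 10_137_118_849, 330

    delta_requests = cur_requests - prev_requests
    avg_ms = ((cur_latency_ns - prev_latency_ns) / delta_requests) / 1e6 if delta_requests else 0.0
    print(f"{avg_ms:.1f} ms per write request")  # -> ~13.7 ms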
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.217 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.217 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.217 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:39:41.213672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.218 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.218 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.218 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:39:41.218046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.219 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.219 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.219 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.219 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 349 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.220 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
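Every pollster in this cycle logs the same pair of coordination checks: no coordination group name is configured ("[None]"), so the hash-ring lookup is skipped and this agent polls all local instances itself. When several agents do share a polling source, ceilometer partitions resources across them with tooz hash rings; the rendezvous-style stand-in below only illustrates the partitioning idea and is not ceilometer's implementation.

    import hashlib

    def owner(resource_id, agents):
        # Each agent scores every resource; the highest score wins. Adding or
        # removing an agent only reassigns the resources it wins or loses.
        return max(agents, key=lambda a: hashlib.sha256(f'{a}:{resource_id}'.encode()).digest())

    agents = ['compute-0', 'compute-1']  # hypothetical second agent
    for uuid in ('443e486d-1bf2-4550-a4ae-32f0f8f4af19',
                 'b43db93c-a4fe-46e9-8418-eedf4f5c135a',
                 '83fc22ce-d2e4-468a-b166-04f2743fa68d'):
        print(uuid[:8], '->', owner(uuid, agents))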
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.220 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.220 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.220 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.220 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.220 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.220 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.221 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.221 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.221 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
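All three network.incoming.bytes.delta samples are 0. The .delta meters report the increase of the matching cumulative counter since the previous poll, so 0 is expected on an idle interface; the cumulative network.incoming.bytes for 443e486d later in this cycle is 2276 and evidently has not moved.

    # Delta meters are derived from cumulative ones (values as logged above;
    # the "previous" reading is assumed equal, matching the 0 delta).
    previous, current = 2276, 2276        # cumulative network.incoming.bytes
    delta = max(current - previous, 0)    # guard: counters reset on instance reboot
    print(delta)                          # -> 0, matching the .delta samples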
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:39:41.220774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.222 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
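network.incoming.bytes.rate is skipped rather than polled. One plausible reading, sketched below with hypothetical names, is that discovery output is shared per cycle and a pollster whose discovery produced nothing this cycle is short-circuited.

    # Illustrative model of the skip behaviour logged above; structure and
    # names are assumptions, not ceilometer internals.
    def run_cycle(pollsters, discover):
        cache = {}
        for name, method, poll in pollsters:
            resources = cache.setdefault(method, discover(method))
            if not resources:
                print(f"Skip pollster {name}, no new resources found this cycle")
                continue
            poll(resources)

    instances = ['443e486d-1bf2-4550-a4ae-32f0f8f4af19']
    run_cycle(
        [('network.incoming.bytes.rate', 'rate_resources', print),
         ('disk.ephemeral.size', 'local_instances', print)],
        lambda method: instances if method == 'local_instances' else [],
    )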
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.222 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.222 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.222 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.222 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.222 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.223 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.223 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.223 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.223 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.224 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.224 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.224 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.224 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.225 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.225 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.225 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.225 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:39:41.222677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.226 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.226 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.226 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:39:41.223910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.227 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.227 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.227 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:39:41.225314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.228 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.228 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/cpu volume: 335410000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.228 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 117660000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:39:41.226395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:39:41.228382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/cpu volume: 339390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
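cpu samples are cumulative guest CPU time in nanoseconds, so 335410000000 is roughly 335 s of CPU time since boot for 443e486d. Utilisation over an interval is the counter's growth divided by wall time times vCPU count; the earlier value, the polling interval, and the vCPU count below are all assumptions.

    prev_ns, cur_ns = 335_100_000_000, 335_410_000_000  # previous poll is hypothetical
    interval_s, vcpus = 300, 1                          # assumed interval and flavor
    util_pct = 100.0 * (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus)
    print(f"{util_pct:.2f}% CPU over the interval")     # -> 0.10%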
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.229 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.230 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.230 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.230 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:39:41.229818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:39:41.231454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.232 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.232 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.232 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.232 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.233 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.233 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.233 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.233 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.234 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.234 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.234 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.234 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/memory.usage volume: 42.1796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:39:41.232894) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:39:41.234699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.235 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.235 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/memory.usage volume: 42.3671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.235 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
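memory.usage is reported in MB, so the three instances sit at roughly 42-49 MB of resident guest memory. Against an assumed 512 MB flavor that is under 10 % of the allocation; the flavor size is not in the log.

    FLAVOR_MB = 512  # assumption; the flavor size is not logged here
    samples = {
        '443e486d': 42.1796875,
        'b43db93c': 48.81640625,
        '83fc22ce': 42.3671875,
    }
    for uuid, used_mb in samples.items():
        print(f"{uuid}: {used_mb:.1f} MB ({100 * used_mb / FLAVOR_MB:.1f}% of flavor)")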
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.incoming.bytes volume: 2276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.236 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.incoming.bytes volume: 2450 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:39:41.236422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.237 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.237 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.237 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.237 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.237 14 DEBUG ceilometer.compute.pollsters [-] 443e486d-1bf2-4550-a4ae-32f0f8f4af19/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.238 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:39:41.237777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.238 14 DEBUG ceilometer.compute.pollsters [-] 83fc22ce-d2e4-468a-b166-04f2743fa68d/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.241 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:39:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:39:41.241 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
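[editor's note] The run of "Finished processing pollster [...]" lines above is a single polling cycle: the compute agent walks every enabled pollster for the same interval and logs one DEBUG line per meter as it completes. A minimal sketch of that loop, with hypothetical pollster objects standing in for ceilometer's real plugin classes (the actual loop lives in ceilometer/polling/manager.py):

    import logging

    LOG = logging.getLogger(__name__)

    def execute_polling_task(pollsters, resources):
        """Run every pollster once per cycle, as the DEBUG lines above show.

        `pollsters` and `resources` are hypothetical stand-ins; in ceilometer
        each pollster exposes get_samples() and is discovered via stevedore.
        """
        samples = []
        for pollster in pollsters:
            try:
                samples.extend(pollster.get_samples(resources))
                LOG.debug("Finished processing pollster [%s].", pollster.name)
            except Exception:
                # A failing meter must not abort the rest of the cycle.
                LOG.exception("Pollster %s failed", pollster.name)
        return samples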
Oct  3 11:39:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:39:41.719 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:39:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:39:41.720 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:39:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:39:41.721 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
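[editor's note] The Acquiring/acquired/released triplet above is the standard oslo.concurrency pattern: ProcessMonitor._check_child_processes is wrapped so only one thread checks child processes at a time, and the waited/held durations are logged on entry and exit. A sketch of the same pattern (the decorator is real oslo.concurrency API; the class body here is a toy):

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # Runs under the in-process lock named above; oslo_concurrency
            # emits the "Acquiring"/"acquired"/"released" DEBUG lines seen
            # in this log around each invocation.
            pass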
Oct  3 11:39:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3976: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:42 compute-0 podman[551791]: 2025-10-03 11:39:42.877106857 +0000 UTC m=+0.115751339 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  3 11:39:42 compute-0 podman[551792]: 2025-10-03 11:39:42.878804442 +0000 UTC m=+0.118372944 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct  3 11:39:42 compute-0 podman[551793]: 2025-10-03 11:39:42.880294239 +0000 UTC m=+0.111067771 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  3 11:39:42 compute-0 podman[551790]: 2025-10-03 11:39:42.886691633 +0000 UTC m=+0.133618799 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:39:42 compute-0 podman[551801]: 2025-10-03 11:39:42.904334956 +0000 UTC m=+0.122971080 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350)
Oct  3 11:39:42 compute-0 podman[551794]: 2025-10-03 11:39:42.92207039 +0000 UTC m=+0.140062995 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930)
Oct  3 11:39:42 compute-0 podman[551812]: 2025-10-03 11:39:42.960036581 +0000 UTC m=+0.163550184 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
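[editor's note] Each podman health_status event above fires when a container's healthcheck timer runs: podman execs the 'test' command from the container's healthcheck config inside the container and records health_status plus health_failing_streak. A rough sketch of how the logged config_data maps onto a health-checked container via podman's --health-cmd flag; values are copied from the iscsid event above and the flag mapping is illustrative (the real containers are created by edpm_ansible, not by hand):

    config_data = {
        'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified',
        'healthcheck': {'test': '/openstack/healthcheck',
                        'mount': '/var/lib/openstack/healthchecks/iscsid'},
    }

    cmd = ['podman', 'run', '-d', '--name', 'iscsid',
           '--health-cmd', config_data['healthcheck']['test'],
           '-v', config_data['healthcheck']['mount'] + ':/openstack:ro,z',
           config_data['image']]
    print(' '.join(cmd))  # illustration only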
Oct  3 11:39:43 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:44 compute-0 nova_compute[351685]: 2025-10-03 11:39:44.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:39:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3977: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:45 compute-0 nova_compute[351685]: 2025-10-03 11:39:45.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3978: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:39:46
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control']
Oct  3 11:39:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:39:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:39:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3979: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:48 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:49 compute-0 nova_compute[351685]: 2025-10-03 11:39:49.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:39:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3980: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:50 compute-0 nova_compute[351685]: 2025-10-03 11:39:50.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:39:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3981: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:53 compute-0 podman[551926]: 2025-10-03 11:39:53.840684689 +0000 UTC m=+0.102938911 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct  3 11:39:53 compute-0 podman[551927]: 2025-10-03 11:39:53.848924822 +0000 UTC m=+0.104776040 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Oct  3 11:39:53 compute-0 podman[551928]: 2025-10-03 11:39:53.85860234 +0000 UTC m=+0.111107362 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  3 11:39:54 compute-0 nova_compute[351685]: 2025-10-03 11:39:54.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:39:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:39:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4101803108' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:39:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:39:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4101803108' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:39:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3982: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:55 compute-0 nova_compute[351685]: 2025-10-03 11:39:55.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3983: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020731842563651757 of space, bias 1.0, pg target 0.6219552769095527 quantized to 32 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:39:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
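[editor's note] Each pg_autoscaler pair above is one pool evaluation: the effective_target_ratio line logs the ratio inputs plus the raw cluster capacity (64411926528 bytes, i.e. the 60 GiB shown in the pgmap lines), and the Pool line reports pg target = capacity_ratio × bias × 300. The factor 300 is read off these lines, not stated in the log; plausibly it is this cluster's PG budget (e.g. a per-OSD PG target times the OSD count), which is an assumption. The raw target is then clamped to a per-pool floor and quantized to a power of two, which is why near-empty pools sit at 32 (16 for the bias-4 metadata pools, 1 for '.mgr'). A quick check in Python against the 'vms' line:

    from math import log2

    def pg_target(capacity_ratio, bias, budget=300, pg_floor=32):
        # budget=300 is inferred from the log lines above, not from ceph docs.
        raw = capacity_ratio * bias * budget
        quantized = max(pg_floor, 2 ** round(log2(raw))) if raw > 0 else pg_floor
        return raw, quantized

    raw, q = pg_target(0.0020731842563651757, 1.0)
    assert abs(raw - 0.6219552769095527) < 1e-15  # matches the logged pg target
    print(q)                                      # 32, as logged ("current 32")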
Oct  3 11:39:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3984: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:39:58 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:39:59 compute-0 nova_compute[351685]: 2025-10-03 11:39:59.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:39:59 compute-0 nova_compute[351685]: 2025-10-03 11:39:59.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:39:59 compute-0 nova_compute[351685]: 2025-10-03 11:39:59.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:39:59 compute-0 nova_compute[351685]: 2025-10-03 11:39:59.730 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:39:59 compute-0 podman[157165]: time="2025-10-03T11:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:39:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:39:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9616 "" "Go-http-client/1.1"
Oct  3 11:40:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3985: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:00 compute-0 nova_compute[351685]: 2025-10-03 11:40:00.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:00 compute-0 nova_compute[351685]: 2025-10-03 11:40:00.544 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:40:00 compute-0 nova_compute[351685]: 2025-10-03 11:40:00.544 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:40:00 compute-0 nova_compute[351685]: 2025-10-03 11:40:00.544 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:40:00 compute-0 nova_compute[351685]: 2025-10-03 11:40:00.544 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:40:01 compute-0 openstack_network_exporter[367524]: ERROR   11:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:40:01 compute-0 openstack_network_exporter[367524]: ERROR   11:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:40:01 compute-0 openstack_network_exporter[367524]: ERROR   11:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:40:01 compute-0 openstack_network_exporter[367524]: ERROR   11:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:40:01 compute-0 openstack_network_exporter[367524]: ERROR   11:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:40:01 compute-0 nova_compute[351685]: 2025-10-03 11:40:01.505 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:40:01 compute-0 nova_compute[351685]: 2025-10-03 11:40:01.601 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:40:01 compute-0 nova_compute[351685]: 2025-10-03 11:40:01.601 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
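[editor's note] The large "Updating instance_info_cache with network_info: [...]" entry at 11:40:01.505 is a JSON dump of the instance's single OVN port: fixed IP 192.168.0.158 on 192.168.0.0/24 with floating IP 192.168.122.250, plugged into br-int with MTU 1442. When mining such entries out of a log, the bracketed payload parses as JSON once the trailing "update_instance_cache_with_nw_info /path..." suffix is cut; a sketch, assuming the line layout shown above:

    import json

    def vif_addresses(log_line):
        """Pull (fixed_ip, [floating_ips]) pairs out of a nova
        'Updating instance_info_cache with network_info: [...]' DEBUG line."""
        payload = log_line.split('network_info: ', 1)[1]
        payload = payload[:payload.rindex(']') + 1]  # drop trailing method/path
        out = []
        for vif in json.loads(payload):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    out.append((ip['address'],
                                [f['address'] for f in ip.get('floating_ips', [])]))
        return out

    # For the entry above this yields [('192.168.0.158', ['192.168.122.250'])].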
Oct  3 11:40:01 compute-0 nova_compute[351685]: 2025-10-03 11:40:01.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:40:01 compute-0 nova_compute[351685]: 2025-10-03 11:40:01.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
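[editor's note] _heal_instance_info_cache, _check_instance_build_time and _poll_rebooting_instances above are all oslo.service periodic tasks: the compute manager subclasses periodic_task.PeriodicTasks and each task is a decorated method that the service loop invokes on its own schedule, producing the "Running periodic task ..." lines. A minimal, self-contained version of the pattern (the 60s spacing is illustrative, not nova's actual interval):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Refresh one instance's network info cache per run, as nova does.
            pass

    # The service loop calls manager.run_periodic_tasks(context) on a timer,
    # logging "Running periodic task ..." for each task that is due.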
Oct  3 11:40:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3986: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:03 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.722521) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491603722572, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1439, "num_deletes": 255, "total_data_size": 2271908, "memory_usage": 2315176, "flush_reason": "Manual Compaction"}
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491603750620, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2239240, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79141, "largest_seqno": 80579, "table_properties": {"data_size": 2232446, "index_size": 3928, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13742, "raw_average_key_size": 19, "raw_value_size": 2218921, "raw_average_value_size": 3156, "num_data_blocks": 176, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759491449, "oldest_key_time": 1759491449, "file_creation_time": 1759491603, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 28168 microseconds, and 13183 cpu microseconds.
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.750688) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2239240 bytes OK
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.750710) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.752486) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.752500) EVENT_LOG_v1 {"time_micros": 1759491603752495, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.752518) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2265564, prev total WAL file size 2265564, number of live WAL files 2.
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.753933) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353136' seq:72057594037927935, type:22 .. '6C6F676D0033373637' seq:0, type:0; will stop at (end)
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2186KB)], [194(8158KB)]
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491603754009, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 10593918, "oldest_snapshot_seqno": -1}
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 8734 keys, 10493483 bytes, temperature: kUnknown
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491603819599, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 10493483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10441212, "index_size": 29310, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 230725, "raw_average_key_size": 26, "raw_value_size": 10289182, "raw_average_value_size": 1178, "num_data_blocks": 1151, "num_entries": 8734, "num_filter_entries": 8734, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759491603, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.819820) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10493483 bytes
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.828544) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.4 rd, 159.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 8.0 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(9.4) write-amplify(4.7) OK, records in: 9256, records dropped: 522 output_compression: NoCompression
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.828575) EVENT_LOG_v1 {"time_micros": 1759491603828562, "job": 122, "event": "compaction_finished", "compaction_time_micros": 65654, "compaction_time_cpu_micros": 37478, "output_level": 6, "num_output_files": 1, "total_output_size": 10493483, "num_input_records": 9256, "num_output_records": 8734, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491603829193, "job": 122, "event": "table_file_deletion", "file_number": 196}
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491603831545, "job": 122, "event": "table_file_deletion", "file_number": 194}
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.753673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.831779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.831784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.831786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.831788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:03 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:03.831790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
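[editor's note] The rocksdb block above is one manual compaction cycle on the mon store: JOB 121 flushes the memtable to L0 table #196 (2239240 bytes, 28168 us), JOB 122 then compacts tables 196 and 194 into L6 table #197, dropping 522 of 9256 records, and the input files are deleted. The EVENT_LOG_v1 payloads are plain JSON after the marker, so the cycle can be summarized mechanically; a sketch:

    import json

    MARK = 'EVENT_LOG_v1 '

    def rocksdb_events(lines):
        """Yield (job, event, payload) for each EVENT_LOG_v1 entry in `lines`."""
        for line in lines:
            i = line.find(MARK)
            if i < 0:
                continue
            ev = json.loads(line[i + len(MARK):])
            yield ev.get('job'), ev.get('event'), ev

    # e.g. the flush above yields (121, 'flush_started', {... 'num_entries': 1439 ...})
    # and the cycle ends with (122, 'compaction_finished',
    # {... 'num_input_records': 9256, 'num_output_records': 8734 ...}).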
Oct  3 11:40:04 compute-0 nova_compute[351685]: 2025-10-03 11:40:04.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3987: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:05 compute-0 nova_compute[351685]: 2025-10-03 11:40:05.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3988: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:06 compute-0 nova_compute[351685]: 2025-10-03 11:40:06.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:40:06 compute-0 nova_compute[351685]: 2025-10-03 11:40:06.842 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:06 compute-0 nova_compute[351685]: 2025-10-03 11:40:06.843 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:06 compute-0 nova_compute[351685]: 2025-10-03 11:40:06.844 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:06 compute-0 nova_compute[351685]: 2025-10-03 11:40:06.845 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:40:06 compute-0 nova_compute[351685]: 2025-10-03 11:40:06.846 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:40:07 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:40:07 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160640359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.337 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
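[editor's note] The update_available_resource audit above shells out to the ceph CLI ("ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf", returning 0 in 0.491s) to size the RBD-backed storage; the mon handle_command/audit lines at 11:40:07 are the server side of the same call. The equivalent call outside nova, assuming the same client keyring is available and that the JSON carries the usual stats.total_avail_bytes field:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)['stats']
    # 'total_avail_bytes' is the customary field name in ceph df JSON output.
    print(stats['total_avail_bytes'] / 2**30, 'GiB free')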
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.516 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.516 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.521 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.522 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.522 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.527 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.528 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.981 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.982 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3188MB free_disk=59.863914489746094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.983 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:07 compute-0 nova_compute[351685]: 2025-10-03 11:40:07.983 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3989: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.639924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491608640001, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 292, "num_deletes": 251, "total_data_size": 96348, "memory_usage": 101760, "flush_reason": "Manual Compaction"}
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491608689984, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 95883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80580, "largest_seqno": 80871, "table_properties": {"data_size": 93911, "index_size": 199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4988, "raw_average_key_size": 18, "raw_value_size": 90092, "raw_average_value_size": 331, "num_data_blocks": 9, "num_entries": 272, "num_filter_entries": 272, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759491604, "oldest_key_time": 1759491604, "file_creation_time": 1759491608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 50153 microseconds, and 1974 cpu microseconds.
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:40:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.690086) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 95883 bytes OK
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.690113) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.748863) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.748924) EVENT_LOG_v1 {"time_micros": 1759491608748907, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.748960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 94197, prev total WAL file size 94197, number of live WAL files 2.
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.749810) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(93KB)], [197(10MB)]
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491608749855, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 10589366, "oldest_snapshot_seqno": -1}
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 8497 keys, 8880941 bytes, temperature: kUnknown
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491608896895, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 8880941, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8831823, "index_size": 26797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21253, "raw_key_size": 226494, "raw_average_key_size": 26, "raw_value_size": 8685435, "raw_average_value_size": 1022, "num_data_blocks": 1036, "num_entries": 8497, "num_filter_entries": 8497, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759483851, "oldest_key_time": 0, "file_creation_time": 1759491608, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "86380f0c-e5ab-4a78-a709-deb613d5683c", "db_session_id": "FRSIUNOLIZ7G5G8L8D2S", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.897358) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 8880941 bytes
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.915142) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.9 rd, 60.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(203.1) write-amplify(92.6) OK, records in: 9006, records dropped: 509 output_compression: NoCompression
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.915203) EVENT_LOG_v1 {"time_micros": 1759491608915178, "job": 124, "event": "compaction_finished", "compaction_time_micros": 147178, "compaction_time_cpu_micros": 35164, "output_level": 6, "num_output_files": 1, "total_output_size": 8880941, "num_input_records": 9006, "num_output_records": 8497, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491608915575, "job": 124, "event": "table_file_deletion", "file_number": 199}
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759491608919603, "job": 124, "event": "table_file_deletion", "file_number": 197}
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.749699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.919761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.919767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.919770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.919773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:08 compute-0 ceph-mon[191783]: rocksdb: (Original Log Time 2025/10/03-11:40:08.919775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.274 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.275 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.275 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.276 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.277 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.486 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  3 11:40:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:40:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470546939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.938 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  3 11:40:09 compute-0 nova_compute[351685]: 2025-10-03 11:40:09.946 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  3 11:40:10 compute-0 nova_compute[351685]: 2025-10-03 11:40:10.034 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  3 11:40:10 compute-0 nova_compute[351685]: 2025-10-03 11:40:10.038 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  3 11:40:10 compute-0 nova_compute[351685]: 2025-10-03 11:40:10.039 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  3 11:40:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3990: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:10 compute-0 nova_compute[351685]: 2025-10-03 11:40:10.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:40:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3991: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:13 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:13 compute-0 podman[552031]: 2025-10-03 11:40:13.850082694 +0000 UTC m=+0.110680128 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:40:13 compute-0 podman[552039]: 2025-10-03 11:40:13.860701343 +0000 UTC m=+0.098273833 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  3 11:40:13 compute-0 podman[552038]: 2025-10-03 11:40:13.870304889 +0000 UTC m=+0.099881595 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  3 11:40:13 compute-0 podman[552032]: 2025-10-03 11:40:13.90612268 +0000 UTC m=+0.142060508 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, vcs-type=git)
Oct  3 11:40:13 compute-0 podman[552043]: 2025-10-03 11:40:13.915943843 +0000 UTC m=+0.119508029 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20250930, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  3 11:40:13 compute-0 podman[552061]: 2025-10-03 11:40:13.922789471 +0000 UTC m=+0.136957415 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  3 11:40:13 compute-0 podman[552063]: 2025-10-03 11:40:13.937201371 +0000 UTC m=+0.128556778 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  3 11:40:14 compute-0 nova_compute[351685]: 2025-10-03 11:40:14.041 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:40:14 compute-0 nova_compute[351685]: 2025-10-03 11:40:14.041 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  3 11:40:14 compute-0 nova_compute[351685]: 2025-10-03 11:40:14.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:40:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3992: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:15 compute-0 nova_compute[351685]: 2025-10-03 11:40:15.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:40:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:40:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3993: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:16 compute-0 nova_compute[351685]: 2025-10-03 11:40:16.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:40:16 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:40:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:17 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:40:17 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:17 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:17 compute-0 nova_compute[351685]: 2025-10-03 11:40:17.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:40:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3994: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:40:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:40:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:40:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:40:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:40:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 65af10a5-95b1-4e7a-a554-0282bbaf8b75 does not exist
Oct  3 11:40:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev f6964a0c-8108-4383-b9b5-3cff0e16e065 does not exist
Oct  3 11:40:18 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3df55a10-7462-4ba6-bcc2-3fde1ef0f98f does not exist
Oct  3 11:40:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:40:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:40:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:40:18 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:40:18 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:40:18 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:40:18 compute-0 nova_compute[351685]: 2025-10-03 11:40:18.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:40:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:40:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:18 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:40:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:19 compute-0 nova_compute[351685]: 2025-10-03 11:40:19.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:40:19 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 11:40:19 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 11:40:19 compute-0 podman[552562]: 2025-10-03 11:40:19.303436382 +0000 UTC m=+0.039017425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:40:19 compute-0 podman[552562]: 2025-10-03 11:40:19.457785131 +0000 UTC m=+0.193366104 container create 9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:40:19 compute-0 nova_compute[351685]: 2025-10-03 11:40:19.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  3 11:40:19 compute-0 systemd[1]: Started libpod-conmon-9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75.scope.
Oct  3 11:40:19 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:40:20 compute-0 podman[552562]: 2025-10-03 11:40:20.002586234 +0000 UTC m=+0.738167287 container init 9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  3 11:40:20 compute-0 podman[552562]: 2025-10-03 11:40:20.021454765 +0000 UTC m=+0.757035758 container start 9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Oct  3 11:40:20 compute-0 pensive_darwin[552580]: 167 167
Oct  3 11:40:20 compute-0 systemd[1]: libpod-9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75.scope: Deactivated successfully.
Oct  3 11:40:20 compute-0 podman[552562]: 2025-10-03 11:40:20.14085952 +0000 UTC m=+0.876440503 container attach 9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 11:40:20 compute-0 podman[552562]: 2025-10-03 11:40:20.141711408 +0000 UTC m=+0.877292411 container died 9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  3 11:40:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3995: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:20 compute-0 nova_compute[351685]: 2025-10-03 11:40:20.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:40:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca0b037cb2672e1284d861ec766253e45662e2d1092c88a2b8a5a5c743deac8a-merged.mount: Deactivated successfully.
Oct  3 11:40:21 compute-0 podman[552562]: 2025-10-03 11:40:21.087610285 +0000 UTC m=+1.823191288 container remove 9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  3 11:40:21 compute-0 systemd[1]: libpod-conmon-9b2760e98cd95d785248f28d940a16c73804dc8e4e5c66d3c2bf17848503dc75.scope: Deactivated successfully.
Oct  3 11:40:21 compute-0 podman[552603]: 2025-10-03 11:40:21.29612932 +0000 UTC m=+0.037448963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:40:21 compute-0 podman[552603]: 2025-10-03 11:40:21.425480163 +0000 UTC m=+0.166799766 container create 55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:40:21 compute-0 systemd[1]: Started libpod-conmon-55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708.scope.
Oct  3 11:40:21 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bc8f41b3133852f4c61bab706788a3c79bd878a8f8ab4cc132111751e0f6c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bc8f41b3133852f4c61bab706788a3c79bd878a8f8ab4cc132111751e0f6c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bc8f41b3133852f4c61bab706788a3c79bd878a8f8ab4cc132111751e0f6c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bc8f41b3133852f4c61bab706788a3c79bd878a8f8ab4cc132111751e0f6c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48bc8f41b3133852f4c61bab706788a3c79bd878a8f8ab4cc132111751e0f6c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:22 compute-0 podman[552603]: 2025-10-03 11:40:22.041815986 +0000 UTC m=+0.783135599 container init 55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:40:22 compute-0 podman[552603]: 2025-10-03 11:40:22.054367735 +0000 UTC m=+0.795687338 container start 55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:40:22 compute-0 podman[552603]: 2025-10-03 11:40:22.140611445 +0000 UTC m=+0.881931038 container attach 55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:40:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3996: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:23 compute-0 boring_fermi[552618]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:40:23 compute-0 boring_fermi[552618]: --> relative data size: 1.0
Oct  3 11:40:23 compute-0 boring_fermi[552618]: --> All data devices are unavailable
Oct  3 11:40:23 compute-0 systemd[1]: libpod-55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708.scope: Deactivated successfully.
Oct  3 11:40:23 compute-0 podman[552603]: 2025-10-03 11:40:23.254961227 +0000 UTC m=+1.996280840 container died 55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:40:23 compute-0 systemd[1]: libpod-55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708.scope: Consumed 1.129s CPU time.
Oct  3 11:40:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-48bc8f41b3133852f4c61bab706788a3c79bd878a8f8ab4cc132111751e0f6c8-merged.mount: Deactivated successfully.
Oct  3 11:40:23 compute-0 podman[552603]: 2025-10-03 11:40:23.976007658 +0000 UTC m=+2.717327301 container remove 55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_fermi, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  3 11:40:24 compute-0 systemd[1]: libpod-conmon-55cf29a0d696044204ee2e0f2abaad9ccdd364f7d9c44099a080d704a6b4a708.scope: Deactivated successfully.
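The short-lived boring_fermi container above is cephadm's OSD-placement probe: "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" means the three LVM data devices are already consumed by existing OSDs, so the dry-run report proposes nothing new. A minimal sketch of reproducing that probe by hand, assuming the usual ceph-volume lvm batch --report entry point and the bind mounts cephadm normally passes (only the image digest and LV names are taken from the log):

    # Hypothetical re-run of the probe: ask ceph-volume inside the ceph
    # container for a dry-run "batch" report over the three LVs. --report
    # only inspects devices; it never creates anything.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    cmd = [
        "podman", "run", "--rm", "--privileged",
        "-v", "/dev:/dev", "-v", "/run/lvm:/run/lvm",
        IMAGE,
        "ceph-volume", "lvm", "batch", "--report", "--format", "json",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
    ]
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)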
Oct  3 11:40:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:24 compute-0 nova_compute[351685]: 2025-10-03 11:40:24.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:24 compute-0 podman[552659]: 2025-10-03 11:40:24.137517465 +0000 UTC m=+0.094878755 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Oct  3 11:40:24 compute-0 podman[552658]: 2025-10-03 11:40:24.145383555 +0000 UTC m=+0.102599950 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:40:24 compute-0 podman[552660]: 2025-10-03 11:40:24.16905604 +0000 UTC m=+0.124512359 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
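The three health_status=healthy events are podman's healthcheck timers firing the test command each container declares in its config_data ('/openstack/healthcheck <service>'). The same checks can be exercised by hand; a minimal sketch, assuming the container names from the log and that podman records the last results under .State.Health:

    # Run each container's declared healthcheck once and read back the
    # recorded status; "podman healthcheck run" exits 0 when healthy.
    import json, subprocess

    for name in ("kepler", "podman_exporter", "ceilometer_agent_ipmi"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        raw = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True).stdout
        status = json.loads(raw)[0]["State"]["Health"]["Status"]
        print(f"{name}: exit={rc} status={status}")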
Oct  3 11:40:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3997: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:24 compute-0 podman[552859]: 2025-10-03 11:40:24.912426991 +0000 UTC m=+0.074096012 container create 3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  3 11:40:24 compute-0 podman[552859]: 2025-10-03 11:40:24.882094164 +0000 UTC m=+0.043763275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:40:24 compute-0 systemd[1]: Started libpod-conmon-3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6.scope.
Oct  3 11:40:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:40:25 compute-0 podman[552859]: 2025-10-03 11:40:25.047858987 +0000 UTC m=+0.209528038 container init 3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:40:25 compute-0 podman[552859]: 2025-10-03 11:40:25.057167384 +0000 UTC m=+0.218836405 container start 3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  3 11:40:25 compute-0 podman[552859]: 2025-10-03 11:40:25.061469511 +0000 UTC m=+0.223138562 container attach 3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  3 11:40:25 compute-0 silly_bardeen[552875]: 167 167
Oct  3 11:40:25 compute-0 systemd[1]: libpod-3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6.scope: Deactivated successfully.
Oct  3 11:40:25 compute-0 conmon[552875]: conmon 3487a24406970ede4687 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6.scope/container/memory.events
Oct  3 11:40:25 compute-0 podman[552880]: 2025-10-03 11:40:25.122215916 +0000 UTC m=+0.037492395 container died 3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  3 11:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aac6717d536644d0fc7da0d5b0fc14d7f8b2a4a5c3790cb43d5f26afdb3b162-merged.mount: Deactivated successfully.
Oct  3 11:40:25 compute-0 podman[552880]: 2025-10-03 11:40:25.169576687 +0000 UTC m=+0.084853156 container remove 3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bardeen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  3 11:40:25 compute-0 systemd[1]: libpod-conmon-3487a24406970ede4687f6bdaa503cae29aa1087387e4eaac36d2f39ddd5b4f6.scope: Deactivated successfully.
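silly_bardeen prints only "167 167" and exits; this is consistent with cephadm probing the image for the numeric uid/gid that owns /var/lib/ceph (167:167 is the ceph user in these images). The exact command is not in the log, so the following is only a plausible equivalent:

    # Hypothetical uid/gid probe; only the "167 167" output is attested above.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True,
    ).stdout.strip()
    uid, gid = out.split()   # expected: 167 167
    print(uid, gid)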
Oct  3 11:40:25 compute-0 podman[552902]: 2025-10-03 11:40:25.433976482 +0000 UTC m=+0.076149228 container create edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_turing, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:40:25 compute-0 nova_compute[351685]: 2025-10-03 11:40:25.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:25 compute-0 podman[552902]: 2025-10-03 11:40:25.402890192 +0000 UTC m=+0.045063008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:40:25 compute-0 systemd[1]: Started libpod-conmon-edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d.scope.
Oct  3 11:40:25 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4714d6ec22148d0f96885d5fbc21e4965069c674bdd6ea5e576b1b87bc12dfb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4714d6ec22148d0f96885d5fbc21e4965069c674bdd6ea5e576b1b87bc12dfb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4714d6ec22148d0f96885d5fbc21e4965069c674bdd6ea5e576b1b87bc12dfb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4714d6ec22148d0f96885d5fbc21e4965069c674bdd6ea5e576b1b87bc12dfb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
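The four xfs notices appear whenever podman bind-mounts the overlay for a new container on this host: the backing XFS was formatted without the bigtime feature, so inode timestamps are capped at 2038-01-19 (0x7fffffff). A quick check, assuming /var/lib/containers sits on the filesystem in question and xfsprogs is installed:

    # Report whether the container store's XFS carries the bigtime feature
    # (timestamps beyond 2038); recent xfs_info prints "bigtime=0|1".
    import subprocess

    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True).stdout
    print("ok past 2038" if "bigtime=1" in info else "timestamps capped at 2038")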
Oct  3 11:40:25 compute-0 podman[552902]: 2025-10-03 11:40:25.584542161 +0000 UTC m=+0.226714917 container init edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:40:25 compute-0 podman[552902]: 2025-10-03 11:40:25.598446344 +0000 UTC m=+0.240619090 container start edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:40:25 compute-0 podman[552902]: 2025-10-03 11:40:25.603183455 +0000 UTC m=+0.245356201 container attach edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  3 11:40:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3998: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:26 compute-0 zen_turing[552916]: {
Oct  3 11:40:26 compute-0 zen_turing[552916]:    "0": [
Oct  3 11:40:26 compute-0 zen_turing[552916]:        {
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "devices": [
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "/dev/loop3"
Oct  3 11:40:26 compute-0 zen_turing[552916]:            ],
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_name": "ceph_lv0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_size": "21470642176",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "name": "ceph_lv0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "tags": {
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cluster_name": "ceph",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.crush_device_class": "",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.encrypted": "0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osd_id": "0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.type": "block",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.vdo": "0"
Oct  3 11:40:26 compute-0 zen_turing[552916]:            },
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "type": "block",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "vg_name": "ceph_vg0"
Oct  3 11:40:26 compute-0 zen_turing[552916]:        }
Oct  3 11:40:26 compute-0 zen_turing[552916]:    ],
Oct  3 11:40:26 compute-0 zen_turing[552916]:    "1": [
Oct  3 11:40:26 compute-0 zen_turing[552916]:        {
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "devices": [
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "/dev/loop4"
Oct  3 11:40:26 compute-0 zen_turing[552916]:            ],
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_name": "ceph_lv1",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_size": "21470642176",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "name": "ceph_lv1",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "tags": {
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cluster_name": "ceph",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.crush_device_class": "",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.encrypted": "0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osd_id": "1",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.type": "block",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.vdo": "0"
Oct  3 11:40:26 compute-0 zen_turing[552916]:            },
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "type": "block",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "vg_name": "ceph_vg1"
Oct  3 11:40:26 compute-0 zen_turing[552916]:        }
Oct  3 11:40:26 compute-0 zen_turing[552916]:    ],
Oct  3 11:40:26 compute-0 zen_turing[552916]:    "2": [
Oct  3 11:40:26 compute-0 zen_turing[552916]:        {
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "devices": [
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "/dev/loop5"
Oct  3 11:40:26 compute-0 zen_turing[552916]:            ],
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_name": "ceph_lv2",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_size": "21470642176",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "name": "ceph_lv2",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "tags": {
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.cluster_name": "ceph",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.crush_device_class": "",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.encrypted": "0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osd_id": "2",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.type": "block",
Oct  3 11:40:26 compute-0 zen_turing[552916]:                "ceph.vdo": "0"
Oct  3 11:40:26 compute-0 zen_turing[552916]:            },
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "type": "block",
Oct  3 11:40:26 compute-0 zen_turing[552916]:            "vg_name": "ceph_vg2"
Oct  3 11:40:26 compute-0 zen_turing[552916]:        }
Oct  3 11:40:26 compute-0 zen_turing[552916]:    ]
Oct  3 11:40:26 compute-0 zen_turing[552916]: }
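The JSON that zen_turing just emitted is keyed by OSD id and lists one logical volume per OSD, with the ceph.* LV tags repeated in parsed form under "tags"; this is the shape ceph-volume lvm list --format json produces. A minimal parse, assuming the blob was captured to a hypothetical file lvm_list.json:

    # Map osd_id -> (lv_path, osd_fsid, type) from the listing above.
    import json

    with open("lvm_list.json") as fh:      # hypothetical capture of the blob
        listing = json.load(fh)

    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], tags["ceph.type"])
    # 0 /dev/ceph_vg0/ceph_lv0 25b10821-47d4-4e0b-9b6d-d16a0463c4d0 block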
Oct  3 11:40:26 compute-0 systemd[1]: libpod-edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d.scope: Deactivated successfully.
Oct  3 11:40:26 compute-0 podman[552902]: 2025-10-03 11:40:26.415421691 +0000 UTC m=+1.057594437 container died edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_turing, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Oct  3 11:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4714d6ec22148d0f96885d5fbc21e4965069c674bdd6ea5e576b1b87bc12dfb5-merged.mount: Deactivated successfully.
Oct  3 11:40:26 compute-0 podman[552902]: 2025-10-03 11:40:26.496840815 +0000 UTC m=+1.139013571 container remove edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:40:26 compute-0 systemd[1]: libpod-conmon-edb8b584fa67e581376f616a1ddb6d1ade7eede06cd8ae6eea17e567b047844d.scope: Deactivated successfully.
Oct  3 11:40:27 compute-0 podman[553074]: 2025-10-03 11:40:27.366223032 +0000 UTC m=+0.071519930 container create e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:40:27 compute-0 podman[553074]: 2025-10-03 11:40:27.341326819 +0000 UTC m=+0.046623677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:40:27 compute-0 systemd[1]: Started libpod-conmon-e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba.scope.
Oct  3 11:40:27 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:40:27 compute-0 podman[553074]: 2025-10-03 11:40:27.518775064 +0000 UTC m=+0.224071932 container init e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ritchie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  3 11:40:27 compute-0 podman[553074]: 2025-10-03 11:40:27.540527688 +0000 UTC m=+0.245824546 container start e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  3 11:40:27 compute-0 tender_ritchie[553090]: 167 167
Oct  3 11:40:27 compute-0 systemd[1]: libpod-e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba.scope: Deactivated successfully.
Oct  3 11:40:27 compute-0 podman[553074]: 2025-10-03 11:40:27.546674283 +0000 UTC m=+0.251971201 container attach e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  3 11:40:27 compute-0 podman[553074]: 2025-10-03 11:40:27.547976095 +0000 UTC m=+0.253272973 container died e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e20b483e8b8b5100f4fd8c3393271371c45f7db0fc20517bb1c6ed21baafad99-merged.mount: Deactivated successfully.
Oct  3 11:40:27 compute-0 podman[553074]: 2025-10-03 11:40:27.647471075 +0000 UTC m=+0.352767933 container remove e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:40:27 compute-0 systemd[1]: libpod-conmon-e8ab4849deabd31e14db6fceda6bfef18013b6a2287e60480a4fbf1804341dba.scope: Deactivated successfully.
Oct  3 11:40:27 compute-0 podman[553115]: 2025-10-03 11:40:27.930772524 +0000 UTC m=+0.081090215 container create a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct  3 11:40:27 compute-0 podman[553115]: 2025-10-03 11:40:27.889784538 +0000 UTC m=+0.040102289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:40:28 compute-0 systemd[1]: Started libpod-conmon-a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d.scope.
Oct  3 11:40:28 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354188bac5a980e187421e365a7c26c62f7b86ebd96bdacbea8ce0114d14d624/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354188bac5a980e187421e365a7c26c62f7b86ebd96bdacbea8ce0114d14d624/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354188bac5a980e187421e365a7c26c62f7b86ebd96bdacbea8ce0114d14d624/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354188bac5a980e187421e365a7c26c62f7b86ebd96bdacbea8ce0114d14d624/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:40:28 compute-0 podman[553115]: 2025-10-03 11:40:28.110952247 +0000 UTC m=+0.261269938 container init a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hawking, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:40:28 compute-0 podman[553115]: 2025-10-03 11:40:28.121183802 +0000 UTC m=+0.271501473 container start a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hawking, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  3 11:40:28 compute-0 podman[553115]: 2025-10-03 11:40:28.12645414 +0000 UTC m=+0.276771891 container attach a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hawking, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:40:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v3999: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:29 compute-0 nova_compute[351685]: 2025-10-03 11:40:29.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]: {
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "osd_id": 1,
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "type": "bluestore"
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:    },
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "osd_id": 2,
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "type": "bluestore"
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:    },
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "osd_id": 0,
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:        "type": "bluestore"
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]:    }
Oct  3 11:40:29 compute-0 compassionate_hawking[553129]: }
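compassionate_hawking reports the complementary view, keyed by OSD uuid with the dm device, osd_id, and bluestore type per entry, matching the shape of ceph-volume raw list. Cross-checking it against the id-keyed listing above should show all three OSDs resolving consistently; a sketch, again assuming both blobs were captured to hypothetical files:

    # Verify osd_id <-> osd_fsid agreement between the two listings above.
    import json

    raw = json.load(open("raw_list.json"))    # hypothetical captures of the
    lvm = json.load(open("lvm_list.json"))    # two JSON blobs in this log

    for uuid, entry in raw.items():
        osd_id = str(entry["osd_id"])
        fsids = {lv["tags"]["ceph.osd_fsid"] for lv in lvm[osd_id]}
        assert uuid in fsids, (uuid, osd_id)
    print("listings agree for", len(raw), "OSDs")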
Oct  3 11:40:29 compute-0 systemd[1]: libpod-a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d.scope: Deactivated successfully.
Oct  3 11:40:29 compute-0 systemd[1]: libpod-a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d.scope: Consumed 1.038s CPU time.
Oct  3 11:40:29 compute-0 podman[553162]: 2025-10-03 11:40:29.211989636 +0000 UTC m=+0.036812764 container died a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hawking, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-354188bac5a980e187421e365a7c26c62f7b86ebd96bdacbea8ce0114d14d624-merged.mount: Deactivated successfully.
Oct  3 11:40:29 compute-0 podman[553162]: 2025-10-03 11:40:29.273125555 +0000 UTC m=+0.097948673 container remove a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  3 11:40:29 compute-0 systemd[1]: libpod-conmon-a6b6c0578deebf1827df6606b8831076d3225a3539a67499fe9c8a069a9a1e4d.scope: Deactivated successfully.
Oct  3 11:40:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:40:29 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:40:29 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:29 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 3743eb21-b7c5-4a2e-aa04-20c41213c7a1 does not exist
Oct  3 11:40:29 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev d4b2c031-9e0e-4ad3-9332-d738335931c8 does not exist
Oct  3 11:40:29 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:40:29 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
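With the scans done, the mgr persists the refreshed inventory through the two config-key set commands audited above (mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). The cached value can be read back with the stock CLI; a sketch, assuming admin credentials on the host:

    # Read back the device inventory cephadm just cached for this host;
    # the key name is taken verbatim from the mon audit line above.
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True).stdout
    print(blob[:400])   # expected to be a JSON inventory blob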
Oct  3 11:40:29 compute-0 podman[157165]: time="2025-10-03T11:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:40:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 47500 "" "Go-http-client/1.1"
Oct  3 11:40:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9615 "" "Go-http-client/1.1"
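The two GETs against /v4.9.3/libpod/... are podman_exporter polling the libpod REST API over the socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). The same query can be replayed directly; a sketch shelling out to curl for the UNIX-socket HTTP:

    # Replay the exporter's container listing against the libpod API socket;
    # the hostname in the URL is arbitrary, the socket does the routing.
    import json, subprocess

    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, text=True,
    ).stdout
    names = sorted(c["Names"][0] for c in json.loads(out))
    print(len(names), "containers:", ", ".join(names[:5]), "...")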
Oct  3 11:40:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4000: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:30 compute-0 nova_compute[351685]: 2025-10-03 11:40:30.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.049 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.050 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.050 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.050 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.050 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.051 2 INFO nova.compute.manager [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Terminating instance#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.052 2 DEBUG nova.compute.manager [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
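The terminate path serializes on two oslo.concurrency locks before _shutdown_instance runs: first the instance UUID, then <uuid>-events while pending events are cleared; the Acquiring/acquired/released lines with waited/held times are lockutils' standard debug trace. A minimal illustration of the same pattern, not nova's actual code:

    # Same locking shape as the trace above: an outer named lock around the
    # operation, a narrower "<name>-events" lock around event clearing.
    from oslo_concurrency import lockutils

    instance_uuid = "83fc22ce-d2e4-468a-b166-04f2743fa68d"

    @lockutils.synchronized(instance_uuid)
    def do_terminate_instance():
        with lockutils.lock(f"{instance_uuid}-events"):
            pass                      # clear pending instance events
        # ... proceed with the hypervisor shutdown

    do_terminate_instance()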
Oct  3 11:40:31 compute-0 kernel: tap226590bd-fa (unregistering): left promiscuous mode
Oct  3 11:40:31 compute-0 NetworkManager[45015]: <info>  [1759491631.1511] device (tap226590bd-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:31 compute-0 ovn_controller[88471]: 2025-10-03T11:40:31Z|00203|binding|INFO|Releasing lport 226590bd-fa92-4e26-8879-8782d015ad61 from this chassis (sb_readonly=0)
Oct  3 11:40:31 compute-0 ovn_controller[88471]: 2025-10-03T11:40:31Z|00204|binding|INFO|Setting lport 226590bd-fa92-4e26-8879-8782d015ad61 down in Southbound
Oct  3 11:40:31 compute-0 ovn_controller[88471]: 2025-10-03T11:40:31Z|00205|binding|INFO|Removing iface tap226590bd-fa ovn-installed in OVS
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.180 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:36:62 10.100.1.141'], port_security=['fa:16:3e:c0:36:62 10.100.1.141'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.141/16', 'neutron:device_id': '83fc22ce-d2e4-468a-b166-04f2743fa68d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9844dad0-501d-443c-9110-8dd633c460de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebbd19d68501417398b05a6a4b7c22de', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6c689562-b70d-4f38-a6f1-f8491b7c8598', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=557eeff1-fb6f-4cc0-9427-7ac15dbf385b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=226590bd-fa92-4e26-8879-8782d015ad61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.182 284328 INFO neutron.agent.ovn.metadata.agent [-] Port 226590bd-fa92-4e26-8879-8782d015ad61 in datapath 9844dad0-501d-443c-9110-8dd633c460de unbound from our chassis#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.186 284328 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9844dad0-501d-443c-9110-8dd633c460de#033[00m
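On teardown, ovn-controller releases the logical port, marks it down in the Southbound DB, and removes the ovn-installed flag from the OVS interface; the metadata agent then sees the Port_Binding update, notes the port is unbound from this chassis, and reconciles the datapath's metadata namespace. The binding row can be inspected directly; a sketch using ovn-sbctl with the lport name from the log:

    # Look up the Port_Binding row the metadata agent matched above.
    import subprocess

    lport = "226590bd-fa92-4e26-8879-8782d015ad61"
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        capture_output=True, text=True,
    ).stdout
    print(out or "port binding already removed")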
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.208 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[6bcf2a1a-cd39-4678-8e8e-696fffeb3d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:31 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct  3 11:40:31 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 7min 10.563s CPU time.
Oct  3 11:40:31 compute-0 systemd-machined[137653]: Machine qemu-13-instance-0000000d terminated.
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.244 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[af4db301-9fe3-4d18-9a13-ab0770cbdaca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.248 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[b5891b2b-7816-4961-934b-651de189647b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.277 412675 DEBUG oslo.privsep.daemon [-] privsep: reply[962d4e89-29ef-47b8-bd72-8a3424bfb748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.298 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0636bf-5569-4fdd-8d86-16c29dca9721]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9844dad0-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:82:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 2260, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 2260, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 999798, 'reachable_time': 27125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 553243, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
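
The privsep reply above is a pyroute2-style netlink dump (RTM_NEWLINK) taken inside the ovnmeta-9844dad0-... namespace named in the message header's 'target' field; the nested 'attrs' lists decode with get_attr(). A minimal sketch of reproducing such a dump, assuming pyroute2 is available:

    from pyroute2 import NetNS

    # Namespace name taken from the 'target' field of the logged reply.
    ns_name = 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de'
    with NetNS(ns_name) as ns:
        for link in ns.get_links():
            name = link.get_attr('IFLA_IFNAME')      # e.g. 'tap9844dad0-51'
            state = link.get_attr('IFLA_OPERSTATE')  # e.g. 'UP'
            stats = link.get_attr('IFLA_STATS64')
            if stats:
                print(name, state, stats['rx_packets'], stats['tx_packets'])
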
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.301 2 INFO nova.virt.libvirt.driver [-] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Instance destroyed successfully.#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.301 2 DEBUG nova.objects.instance [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lazy-loading 'resources' on Instance uuid 83fc22ce-d2e4-468a-b166-04f2743fa68d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.316 2 DEBUG nova.virt.libvirt.vif [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:26:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0355793-asg-bz6ac4extgro-2jy3a4mwqdu7-elxctv2n66cz',id=13,image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:26:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ebbd19d68501417398b05a6a4b7c22de',ramdisk_id='',reservation_id='r-j0c04m1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-298349364',owner_user_name='tempest-PrometheusGabbiTest-298349364-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:26:57Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8990c210ba8740dc9714739f27702391',uuid=83fc22ce-d2e4-468a-b166-04f2743fa68d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.316 2 DEBUG nova.network.os_vif_util [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converting VIF {"id": "226590bd-fa92-4e26-8879-8782d015ad61", "address": "fa:16:3e:c0:36:62", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.141", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap226590bd-fa", "ovs_interfaceid": "226590bd-fa92-4e26-8879-8782d015ad61", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.317 2 DEBUG nova.network.os_vif_util [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:36:62,bridge_name='br-int',has_traffic_filtering=True,id=226590bd-fa92-4e26-8879-8782d015ad61,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap226590bd-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.317 2 DEBUG os_vif [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:36:62,bridge_name='br-int',has_traffic_filtering=True,id=226590bd-fa92-4e26-8879-8782d015ad61,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap226590bd-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.317 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[02ea0efa-0979-462a-8bc5-d1a52dc4b064]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap9844dad0-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 999815, 'tstamp': 999815}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 553249, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9844dad0-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 999820, 'tstamp': 999820}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 553249, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap226590bd-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.319 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9844dad0-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
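
Both transactions above go through ovsdbapp: nova's os-vif removes the instance tap from br-int while the metadata agent removes its own tap from br-ex. A sketch of issuing the same DelPortCommand directly, assuming a local ovsdb-server socket (the endpoint is not shown in the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # if_exists=True makes the delete a no-op when the port is already gone,
    # which is why nova and the agent can race here without erroring.
    ovs.del_port('tap226590bd-fa', bridge='br-int', if_exists=True).execute(
        check_error=True)
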
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.326 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9844dad0-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.327 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.327 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9844dad0-50, col_values=(('external_ids', {'iface-id': '71787e87-58e9-457f-840d-4d7e879d0280'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.328 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.330 2 INFO os_vif [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:36:62,bridge_name='br-int',has_traffic_filtering=True,id=226590bd-fa92-4e26-8879-8782d015ad61,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap226590bd-fa')#033[00m
Oct  3 11:40:31 compute-0 openstack_network_exporter[367524]: ERROR   11:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:40:31 compute-0 openstack_network_exporter[367524]: ERROR   11:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:40:31 compute-0 openstack_network_exporter[367524]: ERROR   11:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:40:31 compute-0 openstack_network_exporter[367524]: ERROR   11:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath

Oct  3 11:40:31 compute-0 openstack_network_exporter[367524]: ERROR   11:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.420 2 DEBUG nova.compute.manager [req-2c89f525-10c0-4d49-bd4c-ad6cefe9128b req-04054191-2584-44dc-bfd7-fd56f8435a37 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received event network-vif-unplugged-226590bd-fa92-4e26-8879-8782d015ad61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.420 2 DEBUG oslo_concurrency.lockutils [req-2c89f525-10c0-4d49-bd4c-ad6cefe9128b req-04054191-2584-44dc-bfd7-fd56f8435a37 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.422 2 DEBUG oslo_concurrency.lockutils [req-2c89f525-10c0-4d49-bd4c-ad6cefe9128b req-04054191-2584-44dc-bfd7-fd56f8435a37 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.423 2 DEBUG oslo_concurrency.lockutils [req-2c89f525-10c0-4d49-bd4c-ad6cefe9128b req-04054191-2584-44dc-bfd7-fd56f8435a37 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.424 2 DEBUG nova.compute.manager [req-2c89f525-10c0-4d49-bd4c-ad6cefe9128b req-04054191-2584-44dc-bfd7-fd56f8435a37 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] No waiting events found dispatching network-vif-unplugged-226590bd-fa92-4e26-8879-8782d015ad61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.425 2 DEBUG nova.compute.manager [req-2c89f525-10c0-4d49-bd4c-ad6cefe9128b req-04054191-2584-44dc-bfd7-fd56f8435a37 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received event network-vif-unplugged-226590bd-fa92-4e26-8879-8782d015ad61 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
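
The Acquiring/acquired/released triple around pop_instance_event is oslo_concurrency's lock logging: nova serializes external events per instance with an in-process lock named '<uuid>-events'. A minimal sketch of the same pattern (helper names hypothetical):

    from oslo_concurrency import lockutils

    def pop_instance_event(events, instance_uuid, event_name):
        # Emits the same Acquiring/acquired/released DEBUG lines seen above.
        with lockutils.lock('%s-events' % instance_uuid):
            return events.get(instance_uuid, {}).pop(event_name, None)
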
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.828 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '1e:73:2e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '4e:70:f7:73:f2:48'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:40:31 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:31.829 284328 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  3 11:40:31 compute-0 nova_compute[351685]: 2025-10-03 11:40:31.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:32 compute-0 nova_compute[351685]: 2025-10-03 11:40:32.182 2 INFO nova.virt.libvirt.driver [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Deleting instance files /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d_del#033[00m
Oct  3 11:40:32 compute-0 nova_compute[351685]: 2025-10-03 11:40:32.182 2 INFO nova.virt.libvirt.driver [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Deletion of /var/lib/nova/instances/83fc22ce-d2e4-468a-b166-04f2743fa68d_del complete#033[00m
Oct  3 11:40:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4001: 321 pgs: 321 active+clean; 298 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:40:32 compute-0 nova_compute[351685]: 2025-10-03 11:40:32.238 2 INFO nova.compute.manager [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:40:32 compute-0 nova_compute[351685]: 2025-10-03 11:40:32.238 2 DEBUG oslo.service.loopingcall [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:40:32 compute-0 nova_compute[351685]: 2025-10-03 11:40:32.239 2 DEBUG nova.compute.manager [-] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:40:32 compute-0 nova_compute[351685]: 2025-10-03 11:40:32.239 2 DEBUG nova.network.neutron [-] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.477 2 DEBUG nova.network.neutron [-] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.597 2 DEBUG nova.compute.manager [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received event network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.597 2 DEBUG oslo_concurrency.lockutils [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.598 2 DEBUG oslo_concurrency.lockutils [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.598 2 DEBUG oslo_concurrency.lockutils [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.598 2 DEBUG nova.compute.manager [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] No waiting events found dispatching network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.598 2 WARNING nova.compute.manager [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received unexpected event network-vif-plugged-226590bd-fa92-4e26-8879-8782d015ad61 for instance with vm_state active and task_state deleting.#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.599 2 DEBUG nova.compute.manager [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Received event network-vif-deleted-226590bd-fa92-4e26-8879-8782d015ad61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.599 2 INFO nova.compute.manager [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Neutron deleted interface 226590bd-fa92-4e26-8879-8782d015ad61; detaching it from the instance and deleting it from the info cache#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.599 2 DEBUG nova.network.neutron [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.624 2 INFO nova.compute.manager [-] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Took 1.39 seconds to deallocate network for instance.#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.667 2 DEBUG nova.compute.manager [req-fb684633-43ee-4b2f-a5d8-c49e0f875026 req-f686b2a2-501e-4234-ac7d-66a7229c14a3 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Detach interface failed, port_id=226590bd-fa92-4e26-8879-8782d015ad61, reason: Instance 83fc22ce-d2e4-468a-b166-04f2743fa68d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  3 11:40:33 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:33.832 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=41fabae1-2dc7-46e2-b697-d9133d158399, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.853 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.854 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:33 compute-0 nova_compute[351685]: 2025-10-03 11:40:33.979 2 DEBUG oslo_concurrency.processutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:40:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4002: 321 pgs: 321 active+clean; 284 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Oct  3 11:40:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:40:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020210092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:40:34 compute-0 nova_compute[351685]: 2025-10-03 11:40:34.510 2 DEBUG oslo_concurrency.processutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
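
The ceph df call above runs through oslo_concurrency.processutils, which logs the command line and its exit code and returns the captured output. A minimal sketch, assuming the same CLI invocation:

    import json
    from oslo_concurrency import processutils

    # Raises ProcessExecutionError on a non-zero exit; returned 0 in 0.531s above.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)  # pool stats feed the DISK_GB inventory reported below
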
Oct  3 11:40:34 compute-0 nova_compute[351685]: 2025-10-03 11:40:34.522 2 DEBUG nova.compute.provider_tree [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:40:34 compute-0 nova_compute[351685]: 2025-10-03 11:40:34.538 2 DEBUG nova.scheduler.client.report [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
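
For reference, Placement treats each inventory as (total - reserved) * allocation_ratio worth of schedulable capacity, so the unchanged inventory above works out as:

    # Schedulable capacity implied by the logged inventory data:
    vcpu_cap = (8 - 0) * 4.0       # 32.0 vCPUs
    ram_cap  = (7679 - 512) * 1.0  # 7167.0 MB
    disk_cap = (59 - 1) * 0.9      # 52.2 GB
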
Oct  3 11:40:34 compute-0 nova_compute[351685]: 2025-10-03 11:40:34.567 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:34 compute-0 nova_compute[351685]: 2025-10-03 11:40:34.592 2 INFO nova.scheduler.client.report [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Deleted allocations for instance 83fc22ce-d2e4-468a-b166-04f2743fa68d#033[00m
Oct  3 11:40:34 compute-0 nova_compute[351685]: 2025-10-03 11:40:34.656 2 DEBUG oslo_concurrency.lockutils [None req-37692bf8-2e4b-4808-8b2f-5c0fc3d3afce 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "83fc22ce-d2e4-468a-b166-04f2743fa68d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:35 compute-0 nova_compute[351685]: 2025-10-03 11:40:35.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4003: 321 pgs: 321 active+clean; 218 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  3 11:40:36 compute-0 nova_compute[351685]: 2025-10-03 11:40:36.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4004: 321 pgs: 321 active+clean; 218 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  3 11:40:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4005: 321 pgs: 321 active+clean; 218 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  3 11:40:40 compute-0 nova_compute[351685]: 2025-10-03 11:40:40.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:41 compute-0 nova_compute[351685]: 2025-10-03 11:40:41.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:41.720 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:41.721 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:41.721 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4006: 321 pgs: 321 active+clean; 218 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.805 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.806 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.807 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.807 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.808 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.810 2 INFO nova.compute.manager [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Terminating instance#033[00m
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.813 2 DEBUG nova.compute.manager [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  3 11:40:42 compute-0 kernel: tapd6a8cc09-54 (unregistering): left promiscuous mode
Oct  3 11:40:42 compute-0 NetworkManager[45015]: <info>  [1759491642.9254] device (tapd6a8cc09-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:42 compute-0 ovn_controller[88471]: 2025-10-03T11:40:42Z|00206|binding|INFO|Releasing lport d6a8cc09-5401-43eb-a552-9e7406a4b201 from this chassis (sb_readonly=0)
Oct  3 11:40:42 compute-0 ovn_controller[88471]: 2025-10-03T11:40:42Z|00207|binding|INFO|Setting lport d6a8cc09-5401-43eb-a552-9e7406a4b201 down in Southbound
Oct  3 11:40:42 compute-0 ovn_controller[88471]: 2025-10-03T11:40:42Z|00208|binding|INFO|Removing iface tapd6a8cc09-54 ovn-installed in OVS
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:42.948 284328 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:f1:a3 10.100.0.169'], port_security=['fa:16:3e:5e:f1:a3 10.100.0.169'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.169/16', 'neutron:device_id': '443e486d-1bf2-4550-a4ae-32f0f8f4af19', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9844dad0-501d-443c-9110-8dd633c460de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ebbd19d68501417398b05a6a4b7c22de', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6c689562-b70d-4f38-a6f1-f8491b7c8598', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=557eeff1-fb6f-4cc0-9427-7ac15dbf385b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>], logical_port=d6a8cc09-5401-43eb-a552-9e7406a4b201) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd3e33cdfd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  3 11:40:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:42.950 284328 INFO neutron.agent.ovn.metadata.agent [-] Port d6a8cc09-5401-43eb-a552-9e7406a4b201 in datapath 9844dad0-501d-443c-9110-8dd633c460de unbound from our chassis#033[00m
Oct  3 11:40:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:42.953 284328 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9844dad0-501d-443c-9110-8dd633c460de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  3 11:40:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:42.954 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[982a5a56-0d19-4e0d-9e65-7bf68f03ad89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:42 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:42.955 284328 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9844dad0-501d-443c-9110-8dd633c460de namespace which is not needed anymore#033[00m
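
Once the last VIF in datapath 9844dad0-... is unbound, the agent tears down the per-network ovnmeta- namespace. A quick check for leftovers, assuming pyroute2 and the 'ovnmeta-<network-uuid>' naming seen in the log:

    from pyroute2 import netns

    leftover = [n for n in netns.listnetns() if n.startswith('ovnmeta-')]
    print(leftover)  # expected to be empty once teardown completes
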
Oct  3 11:40:42 compute-0 nova_compute[351685]: 2025-10-03 11:40:42.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:43 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct  3 11:40:43 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000010.scope: Consumed 6min 44.673s CPU time.
Oct  3 11:40:43 compute-0 systemd-machined[137653]: Machine qemu-17-instance-00000010 terminated.
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.058 2 INFO nova.virt.libvirt.driver [-] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Instance destroyed successfully.#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.058 2 DEBUG nova.objects.instance [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lazy-loading 'resources' on Instance uuid 443e486d-1bf2-4550-a4ae-32f0f8f4af19 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.077 2 DEBUG nova.virt.libvirt.vif [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-03T11:30:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0355793-asg-bz6ac4extgro-yngmy2hkxebf-kzam7sftwtcg',id=16,image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-03T11:30:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='0f5ccd31-0ab5-424c-9868-9c1f9b1ba831'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ebbd19d68501417398b05a6a4b7c22de',ramdisk_id='',reservation_id='r-oaxcr3xw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b9c8e0cc-ecf1-4fa8-92ee-328b108123cd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-298349364',owner_user_name='tempest-PrometheusGabbiTest-298349364-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-03T11:30:41Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='8990c210ba8740dc9714739f27702391',uuid=443e486d-1bf2-4550-a4ae-32f0f8f4af19,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.077 2 DEBUG nova.network.os_vif_util [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converting VIF {"id": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "address": "fa:16:3e:5e:f1:a3", "network": {"id": "9844dad0-501d-443c-9110-8dd633c460de", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ebbd19d68501417398b05a6a4b7c22de", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6a8cc09-54", "ovs_interfaceid": "d6a8cc09-5401-43eb-a552-9e7406a4b201", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.079 2 DEBUG nova.network.os_vif_util [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:f1:a3,bridge_name='br-int',has_traffic_filtering=True,id=d6a8cc09-5401-43eb-a552-9e7406a4b201,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6a8cc09-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.079 2 DEBUG os_vif [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:f1:a3,bridge_name='br-int',has_traffic_filtering=True,id=d6a8cc09-5401-43eb-a552-9e7406a4b201,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6a8cc09-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6a8cc09-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.093 2 INFO os_vif [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:f1:a3,bridge_name='br-int',has_traffic_filtering=True,id=d6a8cc09-5401-43eb-a552-9e7406a4b201,network=Network(9844dad0-501d-443c-9110-8dd633c460de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6a8cc09-54')#033[00m
Oct  3 11:40:43 compute-0 neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de[531917]: [NOTICE]   (531926) : haproxy version is 2.8.14-c23fe91
Oct  3 11:40:43 compute-0 neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de[531917]: [NOTICE]   (531926) : path to executable is /usr/sbin/haproxy
Oct  3 11:40:43 compute-0 neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de[531917]: [WARNING]  (531926) : Exiting Master process...
Oct  3 11:40:43 compute-0 neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de[531917]: [ALERT]    (531926) : Current worker (531930) exited with code 143 (Terminated)
Oct  3 11:40:43 compute-0 neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de[531917]: [WARNING]  (531926) : All workers exited. Exiting... (0)
Oct  3 11:40:43 compute-0 systemd[1]: libpod-2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a.scope: Deactivated successfully.
Oct  3 11:40:43 compute-0 podman[553324]: 2025-10-03 11:40:43.177923647 +0000 UTC m=+0.068539655 container died 2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  3 11:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a-userdata-shm.mount: Deactivated successfully.
Oct  3 11:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d7cc04b6a4e1214966669a7868dca073ef1348bef424041b383eb294ac96b35-merged.mount: Deactivated successfully.
Oct  3 11:40:43 compute-0 podman[553324]: 2025-10-03 11:40:43.238688293 +0000 UTC m=+0.129304311 container cleanup 2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:40:43 compute-0 systemd[1]: libpod-conmon-2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a.scope: Deactivated successfully.
Oct  3 11:40:43 compute-0 podman[553368]: 2025-10-03 11:40:43.330566252 +0000 UTC m=+0.063746493 container remove 2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.345 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[b2463bd4-8549-4ab6-ae18-d383672ff651]: (4, ('Fri Oct  3 11:40:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de (2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a)\n2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a\nFri Oct  3 11:40:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9844dad0-501d-443c-9110-8dd633c460de (2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a)\n2ef57576c1edb1b4c2226583eeaaa7dbf5e5e1f0c4e73681eb9ddf25417b077a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.348 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[35d5f9b8-7694-4963-a2b6-3174174453d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.350 284328 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9844dad0-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:43 compute-0 kernel: tap9844dad0-50: left promiscuous mode
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.377 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4d894b-f383-4f1d-805e-fad791d9484c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.400 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[173abf0c-13ea-454a-8d9d-9a6ade399ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.401 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[cd01e922-bfd9-4dc1-aa32-56581a74f8ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.420 412583 DEBUG oslo.privsep.daemon [-] privsep: reply[dcad8fb6-0e32-4b53-bb0a-c7a8bfb8d12b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 999790, 'reachable_time': 22629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 553384, 'error': None, 'target': 'ovnmeta-9844dad0-501d-443c-9110-8dd633c460de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.424 284439 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9844dad0-501d-443c-9110-8dd633c460de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  3 11:40:43 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:40:43.424 284439 DEBUG oslo.privsep.daemon [-] privsep: reply[2a537062-75e2-4b2c-a0b7-dc818aea3926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  3 11:40:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d9844dad0\x2d501d\x2d443c\x2d9110\x2d8dd633c460de.mount: Deactivated successfully.
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.753 2 DEBUG nova.compute.manager [req-d3844427-0cb8-4cb5-bbb1-cfdf536a4eda req-21577562-06e2-413f-bd90-870090f39ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received event network-vif-unplugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.753 2 DEBUG oslo_concurrency.lockutils [req-d3844427-0cb8-4cb5-bbb1-cfdf536a4eda req-21577562-06e2-413f-bd90-870090f39ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.753 2 DEBUG oslo_concurrency.lockutils [req-d3844427-0cb8-4cb5-bbb1-cfdf536a4eda req-21577562-06e2-413f-bd90-870090f39ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.754 2 DEBUG oslo_concurrency.lockutils [req-d3844427-0cb8-4cb5-bbb1-cfdf536a4eda req-21577562-06e2-413f-bd90-870090f39ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.754 2 DEBUG nova.compute.manager [req-d3844427-0cb8-4cb5-bbb1-cfdf536a4eda req-21577562-06e2-413f-bd90-870090f39ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] No waiting events found dispatching network-vif-unplugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.754 2 DEBUG nova.compute.manager [req-d3844427-0cb8-4cb5-bbb1-cfdf536a4eda req-21577562-06e2-413f-bd90-870090f39ca0 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received event network-vif-unplugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.857 2 INFO nova.virt.libvirt.driver [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Deleting instance files /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19_del#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.858 2 INFO nova.virt.libvirt.driver [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Deletion of /var/lib/nova/instances/443e486d-1bf2-4550-a4ae-32f0f8f4af19_del complete#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.920 2 INFO nova.compute.manager [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.921 2 DEBUG oslo.service.loopingcall [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.922 2 DEBUG nova.compute.manager [-] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  3 11:40:43 compute-0 nova_compute[351685]: 2025-10-03 11:40:43.922 2 DEBUG nova.network.neutron [-] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  3 11:40:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4007: 321 pgs: 321 active+clean; 218 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  3 11:40:44 compute-0 podman[553409]: 2025-10-03 11:40:44.797613366 +0000 UTC m=+0.083172551 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:40:44 compute-0 podman[553387]: 2025-10-03 11:40:44.812508881 +0000 UTC m=+0.121695149 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Oct  3 11:40:44 compute-0 podman[553393]: 2025-10-03 11:40:44.8193821 +0000 UTC m=+0.118169317 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  3 11:40:44 compute-0 podman[553400]: 2025-10-03 11:40:44.824613337 +0000 UTC m=+0.114288583 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20250930, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 11:40:44 compute-0 podman[553388]: 2025-10-03 11:40:44.825863286 +0000 UTC m=+0.117314769 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:40:44 compute-0 podman[553386]: 2025-10-03 11:40:44.842313551 +0000 UTC m=+0.154622789 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:40:44 compute-0 podman[553401]: 2025-10-03 11:40:44.856706769 +0000 UTC m=+0.146659294 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:40:44 compute-0 nova_compute[351685]: 2025-10-03 11:40:44.921 2 DEBUG nova.network.neutron [-] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:40:44 compute-0 nova_compute[351685]: 2025-10-03 11:40:44.941 2 INFO nova.compute.manager [-] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Took 1.02 seconds to deallocate network for instance.#033[00m
Oct  3 11:40:44 compute-0 nova_compute[351685]: 2025-10-03 11:40:44.981 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:44 compute-0 nova_compute[351685]: 2025-10-03 11:40:44.982 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:44 compute-0 nova_compute[351685]: 2025-10-03 11:40:44.997 2 DEBUG nova.compute.manager [req-43f5754d-4987-4aee-9823-9d2565e00dca req-80461885-518c-4e87-ae4c-f7812ebfbe38 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received event network-vif-deleted-d6a8cc09-5401-43eb-a552-9e7406a4b201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.052 2 DEBUG oslo_concurrency.processutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:45 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:40:45 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/863655666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.551 2 DEBUG oslo_concurrency.processutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.560 2 DEBUG nova.compute.provider_tree [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.576 2 DEBUG nova.scheduler.client.report [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.599 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.621 2 INFO nova.scheduler.client.report [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Deleted allocations for instance 443e486d-1bf2-4550-a4ae-32f0f8f4af19#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.688 2 DEBUG oslo_concurrency.lockutils [None req-c7912516-e1a1-4875-8eda-cd9a212451dd 8990c210ba8740dc9714739f27702391 ebbd19d68501417398b05a6a4b7c22de - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.833 2 DEBUG nova.compute.manager [req-4323ac8e-215a-4d43-bbc1-0b2ea88eaa41 req-0594c7cb-5123-46be-9cf7-236e64fa8d38 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received event network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.834 2 DEBUG oslo_concurrency.lockutils [req-4323ac8e-215a-4d43-bbc1-0b2ea88eaa41 req-0594c7cb-5123-46be-9cf7-236e64fa8d38 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Acquiring lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.834 2 DEBUG oslo_concurrency.lockutils [req-4323ac8e-215a-4d43-bbc1-0b2ea88eaa41 req-0594c7cb-5123-46be-9cf7-236e64fa8d38 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.834 2 DEBUG oslo_concurrency.lockutils [req-4323ac8e-215a-4d43-bbc1-0b2ea88eaa41 req-0594c7cb-5123-46be-9cf7-236e64fa8d38 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] Lock "443e486d-1bf2-4550-a4ae-32f0f8f4af19-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.835 2 DEBUG nova.compute.manager [req-4323ac8e-215a-4d43-bbc1-0b2ea88eaa41 req-0594c7cb-5123-46be-9cf7-236e64fa8d38 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] No waiting events found dispatching network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  3 11:40:45 compute-0 nova_compute[351685]: 2025-10-03 11:40:45.835 2 WARNING nova.compute.manager [req-4323ac8e-215a-4d43-bbc1-0b2ea88eaa41 req-0594c7cb-5123-46be-9cf7-236e64fa8d38 7ce4adc6eef841fd8cafe781ce721e66 cd7705ad7d694ad59b1a752a2b103da0 - - default default] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Received unexpected event network-vif-plugged-d6a8cc09-5401-43eb-a552-9e7406a4b201 for instance with vm_state deleted and task_state None.#033[00m
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4008: 321 pgs: 321 active+clean; 161 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 KiB/s wr, 48 op/s
Oct  3 11:40:46 compute-0 nova_compute[351685]: 2025-10-03 11:40:46.296 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759491631.2944028, 83fc22ce-d2e4-468a-b166-04f2743fa68d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:40:46 compute-0 nova_compute[351685]: 2025-10-03 11:40:46.297 2 INFO nova.compute.manager [-] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] VM Stopped (Lifecycle Event)#033[00m
Oct  3 11:40:46 compute-0 nova_compute[351685]: 2025-10-03 11:40:46.319 2 DEBUG nova.compute.manager [None req-fca4a8ec-7954-45d9-a283-4add9324a3bd - - - - - -] [instance: 83fc22ce-d2e4-468a-b166-04f2743fa68d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:40:46
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root']
Oct  3 11:40:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:40:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:40:48 compute-0 nova_compute[351685]: 2025-10-03 11:40:48.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4009: 321 pgs: 321 active+clean; 139 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  3 11:40:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:49 compute-0 nova_compute[351685]: 2025-10-03 11:40:49.725 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:40:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4010: 321 pgs: 321 active+clean; 139 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  3 11:40:50 compute-0 nova_compute[351685]: 2025-10-03 11:40:50.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4011: 321 pgs: 321 active+clean; 139 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  3 11:40:53 compute-0 nova_compute[351685]: 2025-10-03 11:40:53.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct  3 11:40:53 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct  3 11:40:53 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct  3 11:40:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:40:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2112605230' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:40:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:40:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2112605230' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:40:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4013: 321 pgs: 321 active+clean; 139 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct  3 11:40:54 compute-0 podman[553542]: 2025-10-03 11:40:54.86262335 +0000 UTC m=+0.100885606 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct  3 11:40:54 compute-0 podman[553544]: 2025-10-03 11:40:54.890051694 +0000 UTC m=+0.113870930 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Oct  3 11:40:54 compute-0 podman[553543]: 2025-10-03 11:40:54.895805277 +0000 UTC m=+0.141029525 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Oct  3 11:40:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:40:55 compute-0 ceph-mon[191783]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 17K writes, 81K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.01 MB/s#012Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1321 writes, 6236 keys, 1321 commit groups, 1.0 writes per commit group, ingest: 8.71 MB, 0.01 MB/s#012Interval WAL: 1321 writes, 1321 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.2      3.28              0.43        62    0.053       0      0       0.0       0.0#012  L6      1/0    8.47 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   4.9     96.0     80.9      6.18              1.90        61    0.101    406K    32K       0.0       0.0#012 Sum      1/0    8.47 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   5.9     62.7     63.6      9.46              2.33       123    0.077    406K    32K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1    115.5    117.6      0.54              0.23        12    0.045     54K   3062       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     96.0     80.9      6.18              1.90        61    0.101    406K    32K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     31.2      3.27              0.43        61    0.054       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.6      0.01              0.00         1    0.009       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7800.1 total, 600.0 interval#012Flush(GB): cumulative 0.100, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.59 GB write, 0.08 MB/s write, 0.58 GB read, 0.08 MB/s read, 9.5 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56005dddb1f0#2 capacity: 304.00 MB usage: 71.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000393 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4511,68.87 MB,22.6556%) FilterBlock(124,1.11 MB,0.366005%) IndexBlock(124,1.64 MB,0.539318%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  3 11:40:55 compute-0 nova_compute[351685]: 2025-10-03 11:40:55.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct  3 11:40:55 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct  3 11:40:55 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4015: 321 pgs: 321 active+clean; 126 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.0 MiB/s wr, 31 op/s
Oct  3 11:40:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct  3 11:40:56 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct  3 11:40:56 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010494374511623363 of space, bias 1.0, pg target 0.3148312353487009 quantized to 32 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:40:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  3 11:40:58 compute-0 nova_compute[351685]: 2025-10-03 11:40:58.056 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759491643.0539038, 443e486d-1bf2-4550-a4ae-32f0f8f4af19 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  3 11:40:58 compute-0 nova_compute[351685]: 2025-10-03 11:40:58.056 2 INFO nova.compute.manager [-] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] VM Stopped (Lifecycle Event)#033[00m
Oct  3 11:40:58 compute-0 nova_compute[351685]: 2025-10-03 11:40:58.087 2 DEBUG nova.compute.manager [None req-da7845d0-8dd7-4bbb-8e68-b1d6b35d1790 - - - - - -] [instance: 443e486d-1bf2-4550-a4ae-32f0f8f4af19] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  3 11:40:58 compute-0 nova_compute[351685]: 2025-10-03 11:40:58.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:40:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4017: 321 pgs: 321 active+clean; 126 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 1.3 MiB/s wr, 110 op/s
Oct  3 11:40:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:40:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct  3 11:40:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct  3 11:40:59 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct  3 11:40:59 compute-0 podman[157165]: time="2025-10-03T11:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:40:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:40:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9139 "" "Go-http-client/1.1"
Oct  3 11:41:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4019: 321 pgs: 321 active+clean; 126 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 3.4 MiB/s wr, 113 op/s
Oct  3 11:41:00 compute-0 nova_compute[351685]: 2025-10-03 11:41:00.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:00 compute-0 nova_compute[351685]: 2025-10-03 11:41:00.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:00 compute-0 nova_compute[351685]: 2025-10-03 11:41:00.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:41:00 compute-0 ovn_controller[88471]: 2025-10-03T11:41:00Z|00209|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:41:00 compute-0 nova_compute[351685]: 2025-10-03 11:41:00.787 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  3 11:41:00 compute-0 nova_compute[351685]: 2025-10-03 11:41:00.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:01 compute-0 openstack_network_exporter[367524]: ERROR   11:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:41:01 compute-0 openstack_network_exporter[367524]: ERROR   11:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:41:01 compute-0 openstack_network_exporter[367524]: ERROR   11:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:41:01 compute-0 openstack_network_exporter[367524]: ERROR   11:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:41:01 compute-0 openstack_network_exporter[367524]: ERROR   11:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:41:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4020: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.9 MiB/s wr, 65 op/s
Oct  3 11:41:02 compute-0 nova_compute[351685]: 2025-10-03 11:41:02.728 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:02 compute-0 nova_compute[351685]: 2025-10-03 11:41:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:02 compute-0 ovn_controller[88471]: 2025-10-03T11:41:02Z|00210|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:41:02 compute-0 nova_compute[351685]: 2025-10-03 11:41:02.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:03 compute-0 nova_compute[351685]: 2025-10-03 11:41:03.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4021: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct  3 11:41:05 compute-0 nova_compute[351685]: 2025-10-03 11:41:05.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4022: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.3 MiB/s wr, 45 op/s
Oct  3 11:41:07 compute-0 ovn_controller[88471]: 2025-10-03T11:41:07Z|00211|binding|INFO|Releasing lport e79720f4-8084-4b6f-a8ef-933cf0e7b8bf from this chassis (sb_readonly=0)
Oct  3 11:41:07 compute-0 nova_compute[351685]: 2025-10-03 11:41:07.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:07 compute-0 nova_compute[351685]: 2025-10-03 11:41:07.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:07 compute-0 nova_compute[351685]: 2025-10-03 11:41:07.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:41:07 compute-0 nova_compute[351685]: 2025-10-03 11:41:07.770 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:41:07 compute-0 nova_compute[351685]: 2025-10-03 11:41:07.771 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:41:07 compute-0 nova_compute[351685]: 2025-10-03 11:41:07.771 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:41:07 compute-0 nova_compute[351685]: 2025-10-03 11:41:07.772 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4023: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.2 MiB/s wr, 3 op/s
Oct  3 11:41:08 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:41:08 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861739264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.330 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.422 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.423 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.423 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.828 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.829 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3627MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.829 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.830 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.898 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.899 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.899 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:41:08 compute-0 nova_compute[351685]: 2025-10-03 11:41:08.952 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:41:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct  3 11:41:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct  3 11:41:09 compute-0 ceph-mon[191783]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct  3 11:41:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:41:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451058963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:41:09 compute-0 nova_compute[351685]: 2025-10-03 11:41:09.528 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:41:09 compute-0 nova_compute[351685]: 2025-10-03 11:41:09.537 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:41:09 compute-0 nova_compute[351685]: 2025-10-03 11:41:09.555 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:41:09 compute-0 nova_compute[351685]: 2025-10-03 11:41:09.583 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:41:09 compute-0 nova_compute[351685]: 2025-10-03 11:41:09.584 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:41:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4025: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Oct  3 11:41:10 compute-0 nova_compute[351685]: 2025-10-03 11:41:10.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4026: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:13 compute-0 nova_compute[351685]: 2025-10-03 11:41:13.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4027: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:14 compute-0 nova_compute[351685]: 2025-10-03 11:41:14.584 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:14 compute-0 nova_compute[351685]: 2025-10-03 11:41:14.585 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:41:15 compute-0 nova_compute[351685]: 2025-10-03 11:41:15.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:15 compute-0 podman[553646]: 2025-10-03 11:41:15.876163582 +0000 UTC m=+0.124349634 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=)
Oct  3 11:41:15 compute-0 podman[553645]: 2025-10-03 11:41:15.878470525 +0000 UTC m=+0.132486263 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Oct  3 11:41:15 compute-0 podman[553648]: 2025-10-03 11:41:15.890976684 +0000 UTC m=+0.122931149 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  3 11:41:15 compute-0 podman[553653]: 2025-10-03 11:41:15.894953091 +0000 UTC m=+0.129079245 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Oct  3 11:41:15 compute-0 podman[553647]: 2025-10-03 11:41:15.901405796 +0000 UTC m=+0.148515594 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  3 11:41:15 compute-0 podman[553666]: 2025-10-03 11:41:15.907503321 +0000 UTC m=+0.133276478 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=iscsid)
Oct  3 11:41:15 compute-0 podman[553658]: 2025-10-03 11:41:15.910213467 +0000 UTC m=+0.142160951 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  3 11:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:41:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:41:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4028: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:18 compute-0 nova_compute[351685]: 2025-10-03 11:41:18.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4029: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:18 compute-0 nova_compute[351685]: 2025-10-03 11:41:18.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:18 compute-0 nova_compute[351685]: 2025-10-03 11:41:18.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:19 compute-0 nova_compute[351685]: 2025-10-03 11:41:19.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4030: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:20 compute-0 nova_compute[351685]: 2025-10-03 11:41:20.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:20 compute-0 nova_compute[351685]: 2025-10-03 11:41:20.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:41:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4031: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:23 compute-0 nova_compute[351685]: 2025-10-03 11:41:23.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4032: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:25 compute-0 nova_compute[351685]: 2025-10-03 11:41:25.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:25 compute-0 podman[553783]: 2025-10-03 11:41:25.840056802 +0000 UTC m=+0.096455184 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:41:25 compute-0 podman[553785]: 2025-10-03 11:41:25.844167084 +0000 UTC m=+0.099266635 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 11:41:25 compute-0 podman[553784]: 2025-10-03 11:41:25.855072611 +0000 UTC m=+0.112112594 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Oct  3 11:41:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4033: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:28 compute-0 nova_compute[351685]: 2025-10-03 11:41:28.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4034: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:29 compute-0 podman[157165]: time="2025-10-03T11:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:41:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:41:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9138 "" "Go-http-client/1.1"
Oct  3 11:41:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4035: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:30 compute-0 nova_compute[351685]: 2025-10-03 11:41:30.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  3 11:41:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 11:41:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:41:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:41:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  3 11:41:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:41:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  3 11:41:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:41:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev be38cd5a-ee7d-463a-b933-383c52cd3ee3 does not exist
Oct  3 11:41:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev b027bb2e-d670-4538-b7e6-847edce21ab6 does not exist
Oct  3 11:41:30 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev cc6fccca-674e-4648-aaa8-36f4fed3985a does not exist
Oct  3 11:41:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  3 11:41:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  3 11:41:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  3 11:41:30 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:41:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:41:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:41:31 compute-0 openstack_network_exporter[367524]: ERROR   11:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:41:31 compute-0 openstack_network_exporter[367524]: ERROR   11:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:41:31 compute-0 openstack_network_exporter[367524]: ERROR   11:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:41:31 compute-0 openstack_network_exporter[367524]: ERROR   11:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:41:31 compute-0 openstack_network_exporter[367524]: ERROR   11:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:41:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  3 11:41:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  3 11:41:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:41:31 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  3 11:41:31 compute-0 podman[554113]: 2025-10-03 11:41:31.910419265 +0000 UTC m=+0.079197926 container create f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  3 11:41:31 compute-0 podman[554113]: 2025-10-03 11:41:31.873991114 +0000 UTC m=+0.042769785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:41:31 compute-0 systemd[1]: Started libpod-conmon-f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92.scope.
Oct  3 11:41:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:41:32 compute-0 podman[554113]: 2025-10-03 11:41:32.066769967 +0000 UTC m=+0.235548638 container init f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  3 11:41:32 compute-0 podman[554113]: 2025-10-03 11:41:32.085335359 +0000 UTC m=+0.254113980 container start f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:41:32 compute-0 podman[554113]: 2025-10-03 11:41:32.093384556 +0000 UTC m=+0.262163187 container attach f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:41:32 compute-0 happy_meitner[554127]: 167 167
Oct  3 11:41:32 compute-0 systemd[1]: libpod-f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92.scope: Deactivated successfully.
Oct  3 11:41:32 compute-0 podman[554113]: 2025-10-03 11:41:32.098326233 +0000 UTC m=+0.267104894 container died f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  3 11:41:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b67da4dd12f910a3c07b052ac7e84ab125960ac8645a3b7d0a93f6b6cf4bcfa-merged.mount: Deactivated successfully.
Oct  3 11:41:32 compute-0 podman[554113]: 2025-10-03 11:41:32.179413497 +0000 UTC m=+0.348192128 container remove f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:41:32 compute-0 systemd[1]: libpod-conmon-f0715e75cb98aecafb09a5e35db2b6b87a60ed35c60db0e4e6e76d12d2df0e92.scope: Deactivated successfully.
Oct  3 11:41:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4036: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:32 compute-0 podman[554150]: 2025-10-03 11:41:32.395557166 +0000 UTC m=+0.054088635 container create a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:41:32 compute-0 systemd[1]: Started libpod-conmon-a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a.scope.
Oct  3 11:41:32 compute-0 podman[554150]: 2025-10-03 11:41:32.376061285 +0000 UTC m=+0.034592754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:41:32 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c46e4ff3e6f6420ac09521779c72c790a8fe97174a1cd409f7e4fb19839cf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c46e4ff3e6f6420ac09521779c72c790a8fe97174a1cd409f7e4fb19839cf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c46e4ff3e6f6420ac09521779c72c790a8fe97174a1cd409f7e4fb19839cf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c46e4ff3e6f6420ac09521779c72c790a8fe97174a1cd409f7e4fb19839cf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c46e4ff3e6f6420ac09521779c72c790a8fe97174a1cd409f7e4fb19839cf5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:32 compute-0 podman[554150]: 2025-10-03 11:41:32.525996003 +0000 UTC m=+0.184527492 container init a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  3 11:41:32 compute-0 podman[554150]: 2025-10-03 11:41:32.54756461 +0000 UTC m=+0.206096069 container start a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  3 11:41:32 compute-0 podman[554150]: 2025-10-03 11:41:32.555307407 +0000 UTC m=+0.213838896 container attach a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  3 11:41:33 compute-0 nova_compute[351685]: 2025-10-03 11:41:33.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:33 compute-0 bold_shannon[554164]: --> passed data devices: 0 physical, 3 LVM
Oct  3 11:41:33 compute-0 bold_shannon[554164]: --> relative data size: 1.0
Oct  3 11:41:33 compute-0 bold_shannon[554164]: --> All data devices are unavailable
Oct  3 11:41:33 compute-0 systemd[1]: libpod-a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a.scope: Deactivated successfully.
Oct  3 11:41:33 compute-0 podman[554150]: 2025-10-03 11:41:33.746993005 +0000 UTC m=+1.405524484 container died a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:41:33 compute-0 systemd[1]: libpod-a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a.scope: Consumed 1.129s CPU time.
Oct  3 11:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-10c46e4ff3e6f6420ac09521779c72c790a8fe97174a1cd409f7e4fb19839cf5-merged.mount: Deactivated successfully.
Oct  3 11:41:33 compute-0 podman[554150]: 2025-10-03 11:41:33.834544136 +0000 UTC m=+1.493075585 container remove a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  3 11:41:33 compute-0 systemd[1]: libpod-conmon-a5ab01ce708794581692e8a572f0c751d2ad81536eed17221a93a9d7cfaf357a.scope: Deactivated successfully.
Oct  3 11:41:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4037: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:34 compute-0 podman[554349]: 2025-10-03 11:41:34.856425953 +0000 UTC m=+0.078654628 container create 917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dijkstra, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  3 11:41:34 compute-0 podman[554349]: 2025-10-03 11:41:34.827446469 +0000 UTC m=+0.049675214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:41:34 compute-0 systemd[1]: Started libpod-conmon-917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db.scope.
Oct  3 11:41:34 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:41:34 compute-0 podman[554349]: 2025-10-03 11:41:34.990504536 +0000 UTC m=+0.212733221 container init 917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:41:35 compute-0 podman[554349]: 2025-10-03 11:41:35.002173408 +0000 UTC m=+0.224402073 container start 917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:41:35 compute-0 podman[554349]: 2025-10-03 11:41:35.007128535 +0000 UTC m=+0.229357200 container attach 917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dijkstra, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  3 11:41:35 compute-0 happy_dijkstra[554365]: 167 167
Oct  3 11:41:35 compute-0 systemd[1]: libpod-917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db.scope: Deactivated successfully.
Oct  3 11:41:35 compute-0 podman[554349]: 2025-10-03 11:41:35.012030691 +0000 UTC m=+0.234259366 container died 917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  3 11:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-630151ce848a749bdc3340e792a55c83a42c46d032c10343077040d665f1c92f-merged.mount: Deactivated successfully.
Oct  3 11:41:35 compute-0 podman[554349]: 2025-10-03 11:41:35.077797318 +0000 UTC m=+0.300025973 container remove 917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  3 11:41:35 compute-0 systemd[1]: libpod-conmon-917d4904d6ef286d6642cfa6c03d8a2bad77f1b4cee77beebe78cadf73d732db.scope: Deactivated successfully.
Oct  3 11:41:35 compute-0 podman[554387]: 2025-10-03 11:41:35.306801086 +0000 UTC m=+0.074593539 container create 85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  3 11:41:35 compute-0 podman[554387]: 2025-10-03 11:41:35.271919804 +0000 UTC m=+0.039712297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:41:35 compute-0 systemd[1]: Started libpod-conmon-85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e.scope.
Oct  3 11:41:35 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233f665278887d8b8099a43fa3035778b168878d8589aedb33a9cdeafd5c2e75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233f665278887d8b8099a43fa3035778b168878d8589aedb33a9cdeafd5c2e75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233f665278887d8b8099a43fa3035778b168878d8589aedb33a9cdeafd5c2e75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233f665278887d8b8099a43fa3035778b168878d8589aedb33a9cdeafd5c2e75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:35 compute-0 podman[554387]: 2025-10-03 11:41:35.432617576 +0000 UTC m=+0.200410059 container init 85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  3 11:41:35 compute-0 podman[554387]: 2025-10-03 11:41:35.46602603 +0000 UTC m=+0.233818483 container start 85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  3 11:41:35 compute-0 podman[554387]: 2025-10-03 11:41:35.470615816 +0000 UTC m=+0.238408319 container attach 85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct  3 11:41:35 compute-0 nova_compute[351685]: 2025-10-03 11:41:35.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4038: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:36 compute-0 priceless_wu[554402]: {
Oct  3 11:41:36 compute-0 priceless_wu[554402]:    "0": [
Oct  3 11:41:36 compute-0 priceless_wu[554402]:        {
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "devices": [
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "/dev/loop3"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            ],
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_name": "ceph_lv0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_size": "21470642176",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=25b10821-47d4-4e0b-9b6d-d16a0463c4d0,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "name": "ceph_lv0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "tags": {
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.block_uuid": "SMaHSj-aWH7-ml6y-cS7a-kdC5-X1Oi-tfZQai",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cluster_name": "ceph",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.crush_device_class": "",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.encrypted": "0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osd_fsid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osd_id": "0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.type": "block",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.vdo": "0"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            },
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "type": "block",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "vg_name": "ceph_vg0"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:        }
Oct  3 11:41:36 compute-0 priceless_wu[554402]:    ],
Oct  3 11:41:36 compute-0 priceless_wu[554402]:    "1": [
Oct  3 11:41:36 compute-0 priceless_wu[554402]:        {
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "devices": [
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "/dev/loop4"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            ],
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_name": "ceph_lv1",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_size": "21470642176",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=16cef594-0067-4499-9298-5d83edf70190,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "name": "ceph_lv1",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "tags": {
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.block_uuid": "4GRqF0-Mfkg-h0aO-Ifpb-fih3-mMw0-7eIg55",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cluster_name": "ceph",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.crush_device_class": "",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.encrypted": "0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osd_fsid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osd_id": "1",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.type": "block",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.vdo": "0"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            },
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "type": "block",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "vg_name": "ceph_vg1"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:        }
Oct  3 11:41:36 compute-0 priceless_wu[554402]:    ],
Oct  3 11:41:36 compute-0 priceless_wu[554402]:    "2": [
Oct  3 11:41:36 compute-0 priceless_wu[554402]:        {
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "devices": [
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "/dev/loop5"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            ],
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_name": "ceph_lv2",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_size": "21470642176",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=9b4e8c9a-5555-5510-a631-4742a1182561,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "lv_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "name": "ceph_lv2",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "tags": {
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.block_uuid": "EH0Asc-tFJO-Kjrx-G5cn-8Wnf-SVU4-44EhbZ",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cephx_lockbox_secret": "",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cluster_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.cluster_name": "ceph",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.crush_device_class": "",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.encrypted": "0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osd_fsid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osd_id": "2",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.type": "block",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:                "ceph.vdo": "0"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            },
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "type": "block",
Oct  3 11:41:36 compute-0 priceless_wu[554402]:            "vg_name": "ceph_vg2"
Oct  3 11:41:36 compute-0 priceless_wu[554402]:        }
Oct  3 11:41:36 compute-0 priceless_wu[554402]:    ]
Oct  3 11:41:36 compute-0 priceless_wu[554402]: }
Oct  3 11:41:36 compute-0 systemd[1]: libpod-85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e.scope: Deactivated successfully.
Oct  3 11:41:36 compute-0 podman[554387]: 2025-10-03 11:41:36.309851922 +0000 UTC m=+1.077644435 container died 85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-233f665278887d8b8099a43fa3035778b168878d8589aedb33a9cdeafd5c2e75-merged.mount: Deactivated successfully.
Oct  3 11:41:36 compute-0 podman[554387]: 2025-10-03 11:41:36.399771728 +0000 UTC m=+1.167564201 container remove 85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:41:36 compute-0 systemd[1]: libpod-conmon-85202e1c26a5e22410cf35c9bd93d1fec4bd8cac7323bdfb61e1c586791e1a4e.scope: Deactivated successfully.
Oct  3 11:41:37 compute-0 podman[554563]: 2025-10-03 11:41:37.480736928 +0000 UTC m=+0.075734185 container create eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  3 11:41:37 compute-0 systemd[1]: Started libpod-conmon-eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f.scope.
Oct  3 11:41:37 compute-0 podman[554563]: 2025-10-03 11:41:37.448620205 +0000 UTC m=+0.043617552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:41:37 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:41:37 compute-0 podman[554563]: 2025-10-03 11:41:37.583355969 +0000 UTC m=+0.178353226 container init eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:41:37 compute-0 podman[554563]: 2025-10-03 11:41:37.593037207 +0000 UTC m=+0.188034444 container start eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  3 11:41:37 compute-0 podman[554563]: 2025-10-03 11:41:37.598559393 +0000 UTC m=+0.193556660 container attach eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct  3 11:41:37 compute-0 practical_lederberg[554579]: 167 167
Oct  3 11:41:37 compute-0 systemd[1]: libpod-eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f.scope: Deactivated successfully.
Oct  3 11:41:37 compute-0 podman[554563]: 2025-10-03 11:41:37.601466785 +0000 UTC m=+0.196464032 container died eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  3 11:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-92e1325e7b1d40a89ebf09be8436849302e5fd0b1a27b1a537630a55b49d9b3e-merged.mount: Deactivated successfully.
Oct  3 11:41:37 compute-0 podman[554563]: 2025-10-03 11:41:37.657696878 +0000 UTC m=+0.252694115 container remove eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lederberg, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  3 11:41:37 compute-0 systemd[1]: libpod-conmon-eac847b9f931c04e1c54619b111227fd8365875cb8c590ad873095bc157c136f.scope: Deactivated successfully.
Oct  3 11:41:37 compute-0 podman[554605]: 2025-10-03 11:41:37.875633393 +0000 UTC m=+0.076457238 container create 8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  3 11:41:37 compute-0 podman[554605]: 2025-10-03 11:41:37.848069215 +0000 UTC m=+0.048893090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  3 11:41:37 compute-0 systemd[1]: Started libpod-conmon-8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102.scope.
Oct  3 11:41:38 compute-0 systemd[1]: Started libcrun container.
Oct  3 11:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1359105ac4bd9fbab6c33b030138530ed83d9a70cb11bb0c24b9e5a2a5ff64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1359105ac4bd9fbab6c33b030138530ed83d9a70cb11bb0c24b9e5a2a5ff64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1359105ac4bd9fbab6c33b030138530ed83d9a70cb11bb0c24b9e5a2a5ff64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad1359105ac4bd9fbab6c33b030138530ed83d9a70cb11bb0c24b9e5a2a5ff64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  3 11:41:38 compute-0 podman[554605]: 2025-10-03 11:41:38.075953018 +0000 UTC m=+0.276776863 container init 8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  3 11:41:38 compute-0 podman[554605]: 2025-10-03 11:41:38.101316416 +0000 UTC m=+0.302140251 container start 8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  3 11:41:38 compute-0 podman[554605]: 2025-10-03 11:41:38.107412799 +0000 UTC m=+0.308236664 container attach 8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  3 11:41:38 compute-0 nova_compute[351685]: 2025-10-03 11:41:38.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4039: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:38 compute-0 ovn_controller[88471]: 2025-10-03T11:41:38Z|00212|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct  3 11:41:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]: {
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:    "16cef594-0067-4499-9298-5d83edf70190": {
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "osd_id": 1,
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "osd_uuid": "16cef594-0067-4499-9298-5d83edf70190",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "type": "bluestore"
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:    },
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:    "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0": {
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "osd_id": 2,
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "osd_uuid": "19fdbf19-5f8e-43a6-9f0c-26b99e42b3f0",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "type": "bluestore"
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:    },
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:    "25b10821-47d4-4e0b-9b6d-d16a0463c4d0": {
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "ceph_fsid": "9b4e8c9a-5555-5510-a631-4742a1182561",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "osd_id": 0,
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "osd_uuid": "25b10821-47d4-4e0b-9b6d-d16a0463c4d0",
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:        "type": "bluestore"
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]:    }
Oct  3 11:41:39 compute-0 heuristic_hamilton[554621]: }
Oct  3 11:41:39 compute-0 systemd[1]: libpod-8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102.scope: Deactivated successfully.
Oct  3 11:41:39 compute-0 systemd[1]: libpod-8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102.scope: Consumed 1.184s CPU time.
Oct  3 11:41:39 compute-0 podman[554655]: 2025-10-03 11:41:39.362741907 +0000 UTC m=+0.049703395 container died 8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  3 11:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad1359105ac4bd9fbab6c33b030138530ed83d9a70cb11bb0c24b9e5a2a5ff64-merged.mount: Deactivated successfully.
Oct  3 11:41:39 compute-0 podman[554655]: 2025-10-03 11:41:39.476160611 +0000 UTC m=+0.163122059 container remove 8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  3 11:41:39 compute-0 systemd[1]: libpod-conmon-8bd0e783aeadf4415c39e5fded322c1df6c60349ff22e4e03c48734886b9c102.scope: Deactivated successfully.
Oct  3 11:41:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  3 11:41:39 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:41:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  3 11:41:39 compute-0 ceph-mon[191783]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:41:39 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev eafcad45-6fdc-4aa3-889c-aa8fd75772b2 does not exist
Oct  3 11:41:39 compute-0 ceph-mgr[192071]: [progress WARNING root] complete: ev 364df61a-0d4d-4966-b5ff-e55b4b10efd8 does not exist
Oct  3 11:41:40 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4040: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:40 compute-0 nova_compute[351685]: 2025-10-03 11:41:40.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:40 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:41:40 compute-0 ceph-mon[191783]: from='mgr.14130 192.168.122.100:0/475841435' entity='mgr.compute-0.vtkhde' 
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.912 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.913 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.913 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f1a94060050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.915 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.916 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.917 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.918 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.919 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.920 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1a92b3a3c0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.921 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b43db93c-a4fe-46e9-8418-eedf4f5c135a', 'name': 'test_0', 'flavor': {'id': 'ada739ee-222b-4269-8d29-62bea534173e', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '37f03e8a-3aed-46a5-8219-fc87e355127e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ee75a4dc6ade43baab6ee923c9cf4cdf', 'user_id': '2f408449ba0f42fcb69f92dbf541f2e3', 'hostId': 'b02159e472b4d67148a1c8eab0ef80aca6e6d7b8ee0e2a8dcff05b85', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.921 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.922 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-10-03T11:41:40.922187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.928 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f1a940600e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.929 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.929 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.929 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.929 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.929 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f1a93fbd790>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-10-03T11:41:40.929657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-10-03T11:41:40.930950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.959 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.959 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.960 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f1a93fbf0e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.960 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.961 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:40 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:40.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-10-03T11:41:40.961079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.006 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f1a93fbf200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.007 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 1351272306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.007 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 240576853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.008 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.latency volume: 113683071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f1a93fbf260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.008 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.008 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-10-03T11:41:41.007587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.009 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f1a93fbf2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-10-03T11:41:41.009285) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.010 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-10-03T11:41:41.010693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.011 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f1a961ce330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.011 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.011 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.011 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95e6c350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.012 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f1a93fbf320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-10-03T11:41:41.012019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-10-03T11:41:41.013277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.013 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f1a94060350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.014 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a94060380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.014 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-10-03T11:41:41.014482) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.042 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.042 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f1a93fbf380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.042 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf3b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 12067482402 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 31229511 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.043 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f1a93fbf3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.044 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 230 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.044 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-10-03T11:41:41.042969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-10-03T11:41:41.044265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f1a93fbf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.045 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-10-03T11:41:41.045954) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.046 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f1a93fbfa10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.046 14 DEBUG ceilometer.polling.manager [-] Skipping pollster network.incoming.bytes.rate; no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f1a93fbf440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.046 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f1a93fbfc80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.047 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-10-03T11:41:41.047068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfcb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.048 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f1a93fbf4a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.048 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf4d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-10-03T11:41:41.048177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-10-03T11:41:41.048984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.049 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.049 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f1a93fbfce0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.050 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.050 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.050 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.051 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-10-03T11:41:41.050685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f1a93fbd760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.051 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a95f5cd40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.052 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/cpu volume: 119320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-10-03T11:41:41.052400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
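The cpu meter is a cumulative counter of guest CPU time in nanoseconds, so the volume logged above (119320000000) is roughly 119.32 seconds of CPU consumed since the instance booted; a utilization percentage only falls out of the delta between two consecutive samples. A worked example, where the previous sample and the 10-second polling period are assumptions for illustration:

    # cpu is a cumulative counter of guest CPU time in nanoseconds.
    curr_ns = 119_320_000_000            # volume logged above -> ~119.32 s total
    prev_ns = curr_ns - 2_000_000_000    # assumed previous sample: 2 s of CPU used since
    interval_s, vcpus = 10.0, 1          # assumed polling period and vCPU count
    cpu_util_pct = (curr_ns - prev_ns) / 1e9 / (interval_s * vcpus) * 100
    print(f"{curr_ns / 1e9:.2f} s total, {cpu_util_pct:.1f}% busy over the interval")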
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f1a93fbfd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.053 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.053 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.054 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f1a93fbfdd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.055 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-10-03T11:41:41.054040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.055 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.055 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes volume: 2552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f1a93fbfe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.057 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-10-03T11:41:41.055771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbfe90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.057 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.057 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-10-03T11:41:41.057528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f1a93fbf6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.058 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.059 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/memory.usage volume: 48.81640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-10-03T11:41:41.058959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f1a93fbfef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f1a93fbf710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbf740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.060 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.060 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.incoming.bytes volume: 2856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f1a93fbff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f1a94177b60>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-10-03T11:41:41.060596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f1a93fbffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.062 14 DEBUG ceilometer.compute.pollsters [-] b43db93c-a4fe-46e9-8418-eedf4f5c135a/network.outgoing.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-10-03T11:41:41.062180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.065 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ceilometer_agent_compute[362317]: 2025-10-03 11:41:41.066 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Oct  3 11:41:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:41:41.721 284328 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  3 11:41:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:41:41.722 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  3 11:41:41 compute-0 ovn_metadata_agent[284320]: 2025-10-03 11:41:41.722 284328 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
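The three ovn_metadata_agent lines are oslo.concurrency's standard acquire/held/release DEBUG trio around ProcessMonitor._check_child_processes. That logging comes for free from the lockutils.synchronized decorator; a minimal sketch of the pattern, with a hypothetical body standing in for the real child-process check:

    from oslo_concurrency import lockutils

    # Produces the same Acquiring / acquired / released DEBUG lines as the
    # ovn_metadata_agent entries above; the body here is hypothetical.
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        print("checking monitored child processes")

    check_child_processes()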
Oct  3 11:41:42 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4041: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:43 compute-0 nova_compute[351685]: 2025-10-03 11:41:43.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:44 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:44 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4042: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:45 compute-0 nova_compute[351685]: 2025-10-03 11:41:45.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4043: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Optimize plan auto_2025-10-03_11:41:46
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] do_upmap
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'images', 'volumes', 'vms']
Oct  3 11:41:46 compute-0 ceph-mgr[192071]: [balancer INFO root] prepared 0/10 changes
Oct  3 11:41:46 compute-0 podman[554722]: 2025-10-03 11:41:46.877156949 +0000 UTC m=+0.110646318 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  3 11:41:46 compute-0 podman[554720]: 2025-10-03 11:41:46.881551439 +0000 UTC m=+0.127843425 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Oct  3 11:41:46 compute-0 podman[554721]: 2025-10-03 11:41:46.886052943 +0000 UTC m=+0.123536378 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible)
Oct  3 11:41:46 compute-0 podman[554735]: 2025-10-03 11:41:46.898424186 +0000 UTC m=+0.108846419 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, org.label-schema.build-date=20250930, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Oct  3 11:41:46 compute-0 podman[554728]: 2025-10-03 11:41:46.932482182 +0000 UTC m=+0.148582806 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct  3 11:41:46 compute-0 podman[554744]: 2025-10-03 11:41:46.934427554 +0000 UTC m=+0.130275423 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid)
Oct  3 11:41:46 compute-0 podman[554742]: 2025-10-03 11:41:46.953280565 +0000 UTC m=+0.160599079 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
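Each podman[...] line in this burst is a health_status event: podman executed the healthcheck configured for the container (the 'test' entry under 'healthcheck' in config_data) and recorded health_status=healthy with health_failing_streak=0. The same check can be triggered by hand; a small sketch using the container name from the first event above:

    import subprocess

    # Runs the healthcheck command configured in the container ('test':
    # '/openstack/healthcheck' above); exit status 0 means healthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")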
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:41:47 compute-0 ceph-mgr[192071]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  3 11:41:48 compute-0 nova_compute[351685]: 2025-10-03 11:41:48.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:48 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4044: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:49 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:50 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4045: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:50 compute-0 nova_compute[351685]: 2025-10-03 11:41:50.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:52 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4046: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:53 compute-0 nova_compute[351685]: 2025-10-03 11:41:53.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  3 11:41:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1019783396' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  3 11:41:54 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  3 11:41:54 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1019783396' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  3 11:41:54 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4047: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:55 compute-0 nova_compute[351685]: 2025-10-03 11:41:55.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4048: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] _maybe_adjust
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000551649390343166 of space, bias 1.0, pg target 0.1654948171029498 quantized to 32 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  3 11:41:56 compute-0 ceph-mgr[192071]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
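The pg_autoscaler output above is reproducible from the logged numbers alone: each pool's raw pg target is its capacity ratio times its bias times a cluster-wide PG budget, and that budget works out to exactly 300 here (consistent with the default mon_target_pg_per_osd of 100 across 3 OSDs, though the log itself states neither number). The raw target is then quantized, and only acted on if it differs enough from the current pg_num, which is why every pool stays where it is:

    # Reproducing the logged raw pg targets: capacity_ratio * bias * budget.
    budget = 300  # inferred: matches every line; e.g. 100 target PGs/OSD * 3 OSDs
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),  # logged target 0.00215...
        "vms":                (0.000551649390343166,  1.0),  # logged target 0.16549...
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),  # logged target 0.00061...
    }
    for pool, (ratio, bias) in pools.items():
        print(pool, ratio * bias * budget)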
Oct  3 11:41:56 compute-0 podman[554858]: 2025-10-03 11:41:56.826620228 +0000 UTC m=+0.087176029 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct  3 11:41:56 compute-0 podman[554859]: 2025-10-03 11:41:56.852557435 +0000 UTC m=+0.090099563 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, distribution-scope=public, vendor=Red Hat, Inc., release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Oct  3 11:41:56 compute-0 podman[554864]: 2025-10-03 11:41:56.891271059 +0000 UTC m=+0.137460142 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 11:41:58 compute-0 nova_compute[351685]: 2025-10-03 11:41:58.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:41:58 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4049: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:41:59 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:41:59 compute-0 podman[157165]: time="2025-10-03T11:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:41:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:41:59 compute-0 podman[157165]: @ - - [03/Oct/2025:11:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9139 "" "Go-http-client/1.1"
Oct  3 11:42:00 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4050: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:00 compute-0 nova_compute[351685]: 2025-10-03 11:42:00.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:01 compute-0 openstack_network_exporter[367524]: ERROR   11:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:42:01 compute-0 openstack_network_exporter[367524]: ERROR   11:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:42:01 compute-0 openstack_network_exporter[367524]: ERROR   11:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:42:01 compute-0 openstack_network_exporter[367524]: ERROR   11:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:42:01 compute-0 openstack_network_exporter[367524]: ERROR   11:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:42:02 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4051: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:02 compute-0 nova_compute[351685]: 2025-10-03 11:42:02.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:02 compute-0 nova_compute[351685]: 2025-10-03 11:42:02.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  3 11:42:02 compute-0 nova_compute[351685]: 2025-10-03 11:42:02.729 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  3 11:42:03 compute-0 nova_compute[351685]: 2025-10-03 11:42:03.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:03 compute-0 nova_compute[351685]: 2025-10-03 11:42:03.574 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  3 11:42:03 compute-0 nova_compute[351685]: 2025-10-03 11:42:03.575 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquired lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  3 11:42:03 compute-0 nova_compute[351685]: 2025-10-03 11:42:03.575 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  3 11:42:03 compute-0 nova_compute[351685]: 2025-10-03 11:42:03.575 2 DEBUG nova.objects.instance [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b43db93c-a4fe-46e9-8418-eedf4f5c135a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  3 11:42:04 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:42:04 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4052: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:05 compute-0 nova_compute[351685]: 2025-10-03 11:42:05.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:05 compute-0 nova_compute[351685]: 2025-10-03 11:42:05.840 2 DEBUG nova.network.neutron [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updating instance_info_cache with network_info: [{"id": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "address": "fa:16:3e:a9:40:5c", "network": {"id": "67eed0ac-d131-40ed-a5fe-0484d04236a0", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.158", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ee75a4dc6ade43baab6ee923c9cf4cdf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8897fbc-9f", "ovs_interfaceid": "a8897fbc-9fd1-4981-b049-6e702bcb7e2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  3 11:42:05 compute-0 nova_compute[351685]: 2025-10-03 11:42:05.859 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Releasing lock "refresh_cache-b43db93c-a4fe-46e9-8418-eedf4f5c135a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  3 11:42:05 compute-0 nova_compute[351685]: 2025-10-03 11:42:05.860 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] [instance: b43db93c-a4fe-46e9-8418-eedf4f5c135a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  3 11:42:05 compute-0 nova_compute[351685]: 2025-10-03 11:42:05.861 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:06 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4053: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:06 compute-0 nova_compute[351685]: 2025-10-03 11:42:06.856 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:08 compute-0 nova_compute[351685]: 2025-10-03 11:42:08.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:08 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4054: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:08 compute-0 nova_compute[351685]: 2025-10-03 11:42:08.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:08 compute-0 nova_compute[351685]: 2025-10-03 11:42:08.771 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:42:08 compute-0 nova_compute[351685]: 2025-10-03 11:42:08.771 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:42:08 compute-0 nova_compute[351685]: 2025-10-03 11:42:08.772 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:42:08 compute-0 nova_compute[351685]: 2025-10-03 11:42:08.772 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  3 11:42:08 compute-0 nova_compute[351685]: 2025-10-03 11:42:08.772 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:42:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:42:09 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:42:09 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/415607950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.289 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.391 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.392 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.396 2 DEBUG nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.788 2 WARNING nova.virt.libvirt.driver [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.790 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.9551887512207GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.790 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.791 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.893 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Instance b43db93c-a4fe-46e9-8418-eedf4f5c135a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.894 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.895 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  3 11:42:09 compute-0 nova_compute[351685]: 2025-10-03 11:42:09.925 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  3 11:42:10 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4055: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:10 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  3 11:42:10 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206709954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  3 11:42:10 compute-0 nova_compute[351685]: 2025-10-03 11:42:10.484 2 DEBUG oslo_concurrency.processutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  3 11:42:10 compute-0 nova_compute[351685]: 2025-10-03 11:42:10.493 2 DEBUG nova.compute.provider_tree [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed in ProviderTree for provider: fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  3 11:42:10 compute-0 nova_compute[351685]: 2025-10-03 11:42:10.510 2 DEBUG nova.scheduler.client.report [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Inventory has not changed for provider fd9cb9b4-441d-4ebb-82e6-0c18eec40f5a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  3 11:42:10 compute-0 nova_compute[351685]: 2025-10-03 11:42:10.513 2 DEBUG nova.compute.resource_tracker [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  3 11:42:10 compute-0 nova_compute[351685]: 2025-10-03 11:42:10.514 2 DEBUG oslo_concurrency.lockutils [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  3 11:42:10 compute-0 nova_compute[351685]: 2025-10-03 11:42:10.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:12 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4056: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:13 compute-0 nova_compute[351685]: 2025-10-03 11:42:13.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:14 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:42:14 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4057: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:15 compute-0 nova_compute[351685]: 2025-10-03 11:42:15.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] scanning for idle connections..
Oct  3 11:42:16 compute-0 ceph-mgr[192071]: [volumes INFO mgr_util] cleaning up connections: []
Oct  3 11:42:16 compute-0 systemd-logind[798]: New session 66 of user zuul.
Oct  3 11:42:16 compute-0 systemd[1]: Started Session 66 of User zuul.
Oct  3 11:42:16 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4058: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:16 compute-0 nova_compute[351685]: 2025-10-03 11:42:16.518 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:16 compute-0 nova_compute[351685]: 2025-10-03 11:42:16.519 2 DEBUG nova.compute.manager [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  3 11:42:18 compute-0 podman[554997]: 2025-10-03 11:42:18.04385195 +0000 UTC m=+0.106783854 container health_status 343e2afd2598e68789ce135e7ca2aca5d601e80be20980b9b37dc2c992fb61a2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Oct  3 11:42:18 compute-0 podman[555001]: 2025-10-03 11:42:18.063182706 +0000 UTC m=+0.111969970 container health_status d1f8d4381794bad4147669a01d5363c62339f11db8258f1f858682c2e9e46d47 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=254c55e1b6431493ec1ac89f73b53751, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20250930)
Oct  3 11:42:18 compute-0 podman[555008]: 2025-10-03 11:42:18.068480944 +0000 UTC m=+0.111573766 container health_status ed858ff0b32fd3e12a4c300d264eebf0debd347b1c5e0128119383c784b028d0 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  3 11:42:18 compute-0 podman[554999]: 2025-10-03 11:42:18.071197051 +0000 UTC m=+0.127874336 container health_status 99c8902f0ab68ddf0f1c4c94019aaf1498445c19cffed623c2565bbc5e2bc8f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  3 11:42:18 compute-0 podman[555000]: 2025-10-03 11:42:18.085959792 +0000 UTC m=+0.132106772 container health_status b6c79c86a4903343538380441ecf69c0cecea3041e77aef6a9c87f0fa9d94a92 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  3 11:42:18 compute-0 podman[554998]: 2025-10-03 11:42:18.088780222 +0000 UTC m=+0.137833754 container health_status 795f67c8e864b94d72538a787d6a9408cc64515222acf89c5aeb5c0286bc7e54 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Oct  3 11:42:18 compute-0 podman[555007]: 2025-10-03 11:42:18.090422405 +0000 UTC m=+0.138282769 container health_status e1a900316d3fdb101b4f8c0e27016714a99d520a17c17ee31d85f9391951ebc4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  3 11:42:18 compute-0 nova_compute[351685]: 2025-10-03 11:42:18.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:18 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4059: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:18 compute-0 nova_compute[351685]: 2025-10-03 11:42:18.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:19 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:42:20 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16075 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:20 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4060: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:20 compute-0 nova_compute[351685]: 2025-10-03 11:42:20.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:20 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16077 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:20 compute-0 nova_compute[351685]: 2025-10-03 11:42:20.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:21 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  3 11:42:21 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272287232' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  3 11:42:21 compute-0 nova_compute[351685]: 2025-10-03 11:42:21.729 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:21 compute-0 nova_compute[351685]: 2025-10-03 11:42:21.730 2 DEBUG oslo_service.periodic_task [None req-f389a046-9967-41c2-a6a9-66bcdc83e777 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  3 11:42:22 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4061: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:23 compute-0 nova_compute[351685]: 2025-10-03 11:42:23.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:24 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:42:24 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4062: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:25 compute-0 ovs-vsctl[555380]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  3 11:42:25 compute-0 nova_compute[351685]: 2025-10-03 11:42:25.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:26 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4063: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:26 compute-0 virtqemud[137656]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  3 11:42:26 compute-0 virtqemud[137656]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  3 11:42:26 compute-0 virtqemud[137656]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  3 11:42:27 compute-0 podman[555614]: 2025-10-03 11:42:27.057891034 +0000 UTC m=+0.091598180 container health_status be6a85bb54cd04ea2944e1a593703f1b32bd7b925faac863097c227a02afaf0a (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, container_name=kepler, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-type=git, io.openshift.expose-services=, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9)
Oct  3 11:42:27 compute-0 podman[555608]: 2025-10-03 11:42:27.077342264 +0000 UTC m=+0.113839859 container health_status 629af6886ed1c73248f366b8892d475880397efc1764efbc16208afabb157e2c (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct  3 11:42:27 compute-0 podman[555617]: 2025-10-03 11:42:27.089511732 +0000 UTC m=+0.117290968 container health_status e6c96d93f9e7b018147a8a16ff379be528602423c07a6e527f13cbd7777768c7 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  3 11:42:27 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: cache status {prefix=cache status} (starting...)
Oct  3 11:42:27 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: client ls {prefix=client ls} (starting...)
Oct  3 11:42:27 compute-0 lvm[555770]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  3 11:42:27 compute-0 lvm[555770]: VG ceph_vg0 finished
Oct  3 11:42:27 compute-0 lvm[555771]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  3 11:42:27 compute-0 lvm[555771]: VG ceph_vg1 finished
Oct  3 11:42:27 compute-0 lvm[555791]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  3 11:42:27 compute-0 lvm[555791]: VG ceph_vg2 finished
Oct  3 11:42:27 compute-0 kernel: block sr0: the capability attribute has been deprecated.
Oct  3 11:42:27 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: damage ls {prefix=damage ls} (starting...)
Oct  3 11:42:28 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: dump loads {prefix=dump loads} (starting...)
Oct  3 11:42:28 compute-0 nova_compute[351685]: 2025-10-03 11:42:28.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:28 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16081 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:28 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  3 11:42:28 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4064: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:28 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  3 11:42:28 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  3 11:42:28 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  3 11:42:28 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16085 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:28 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  3 11:42:28 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646791223' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  3 11:42:28 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  3 11:42:29 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  3 11:42:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:42:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  3 11:42:29 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2354855632' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  3 11:42:29 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: ops {prefix=ops} (starting...)
Oct  3 11:42:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  3 11:42:29 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650791653' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  3 11:42:29 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16093 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:29 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T11:42:29.598+0000 7f321e7b5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  3 11:42:29 compute-0 ceph-mgr[192071]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  3 11:42:29 compute-0 podman[157165]: time="2025-10-03T11:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct  3 11:42:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46267 "" "Go-http-client/1.1"
Oct  3 11:42:29 compute-0 podman[157165]: @ - - [03/Oct/2025:11:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9148 "" "Go-http-client/1.1"
Oct  3 11:42:29 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  3 11:42:29 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2833175364' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  3 11:42:30 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: session ls {prefix=session ls} (starting...)
Oct  3 11:42:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  3 11:42:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2357699458' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  3 11:42:30 compute-0 ceph-mds[219626]: mds.cephfs.compute-0.svanmi asok_command: status {prefix=status} (starting...)
Oct  3 11:42:30 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4065: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:30 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16099 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:30 compute-0 nova_compute[351685]: 2025-10-03 11:42:30.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  3 11:42:30 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  3 11:42:30 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839245778' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  3 11:42:30 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16103 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  3 11:42:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102664891' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  3 11:42:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  3 11:42:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190400388' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  3 11:42:31 compute-0 openstack_network_exporter[367524]: ERROR   11:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:42:31 compute-0 openstack_network_exporter[367524]: ERROR   11:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct  3 11:42:31 compute-0 openstack_network_exporter[367524]: ERROR   11:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct  3 11:42:31 compute-0 openstack_network_exporter[367524]: ERROR   11:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct  3 11:42:31 compute-0 openstack_network_exporter[367524]: ERROR   11:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct  3 11:42:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  3 11:42:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078459116' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  3 11:42:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  3 11:42:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381233978' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  3 11:42:31 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  3 11:42:31 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071211275' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  3 11:42:32 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16115 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:32 compute-0 ceph-mgr[192071]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  3 11:42:32 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T11:42:32.195+0000 7f321e7b5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  3 11:42:32 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4066: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  3 11:42:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857349810' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  3 11:42:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  3 11:42:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/756415500' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  3 11:42:32 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  3 11:42:32 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287183329' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  3 11:42:33 compute-0 nova_compute[351685]: 2025-10-03 11:42:33.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:42:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  3 11:42:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7614714' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  3 11:42:33 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16125 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:33 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  3 11:42:33 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629177912' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  3 11:42:33 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16129 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 24453120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 8065 writes, 29K keys, 8065 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 8065 writes, 1991 syncs, 4.05 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.008       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56167acccdd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
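A stats dump like this reaches syslog as a single record: rsyslog escapes control characters, so each embedded newline appears as #012 (octal for LF) in a raw capture, and long records may additionally be wrapped mid-token. A sketch for restoring the multi-line layout in any saved capture:

# Undo rsyslog's #012 (octal LF) control-character escaping in a saved log.
import sys
for line in open(sys.argv[1]):
    sys.stdout.write(line.replace("#012", "\n"))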
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 24895488 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.692199707s of 600.301635742s, submitted: 90
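The _kv_sync_thread line reports the kv-commit thread idle for 599.69 s of a 600.30 s window while 90 transactions were submitted, i.e. a duty cycle around 0.1%. The arithmetic:

# Duty cycle implied by the _kv_sync_thread utilization line.
idle, window, submitted = 599.692199707, 600.301635742, 90
busy = window - idle
print(f"busy {busy:.2f} s of {window:.0f} s = {100 * busy / window:.2f}%")  # ~0.10%
print(f"{submitted / window:.3f} commits/s")                               # ~0.150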
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99368960 unmapped: 24879104 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 24838144 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99475456 unmapped: 24772608 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99532800 unmapped: 24715264 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 ms_handle_reset con 0x56167c989c00 session 0x56167f7830e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: mgrc ms_handle_reset ms_handle_reset con 0x56167e09d000
Oct  3 11:42:34 compute-0 ceph-osd[207741]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3262515590
Oct  3 11:42:34 compute-0 ceph-osd[207741]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3262515590,v1:192.168.122.100:6801/3262515590]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: mgrc handle_mgr_configure stats_period=5
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99516416 unmapped: 24731648 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 ms_handle_reset con 0x56167ea78400 session 0x56167ebd8f00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 8245 writes, 29K keys, 8245 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 8245 writes, 2081 syncs, 3.96 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140948 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fa290000/0x0/0x4ffc00000, data 0x13017a3/0x13de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 24723456 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 565.936340332s of 566.476196289s, submitted: 90
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100597760 unmapped: 23650304 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144682 data_alloc: 218103808 data_used: 11821056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 144 ms_handle_reset con 0x56167e2d7c00 session 0x56167f01e960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 23625728 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa28f000/0x0/0x4ffc00000, data 0x13017c6/0x13df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100638720 unmapped: 23609344 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 ms_handle_reset con 0x56167e373000 session 0x561680599c20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa28a000/0x0/0x4ffc00000, data 0x1303353/0x13e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154668 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154668 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154668 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154668 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154668 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154668 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 23601152 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.711662292s of 33.863773346s, submitted: 28
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 23552000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 23552000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa287000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 23502848 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153612 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 23461888 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa287000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153612 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa287000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100818944 unmapped: 23429120 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153612 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa287000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153612 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa287000/0x0/0x4ffc00000, data 0x1304ef3/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153612 data_alloc: 218103808 data_used: 11829248
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 23420928 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.419168472s of 23.981992722s, submitted: 90
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 23412736 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x56167c53ec00 session 0x56167eaad680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x5616803a0000 session 0x56167f782960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fa286000/0x0/0x4ffc00000, data 0x13052f3/0x13e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100835328 unmapped: 23412736 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x561680f06000 session 0x561680db94a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x56167c53ec00 session 0x56167e2f4780
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x56167e2d7c00 session 0x56167c28fc20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x5616803a0000 session 0x56167f782f00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x561680f06400 session 0x56167ebfb2c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x561680f06c00 session 0x5616803bba40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x56167c53ec00 session 0x56167c28f2c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 23388160 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 ms_handle_reset con 0x561680f06800 session 0x56167c28fe00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167e373000 session 0x56167e2f5c20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 23363584 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167e2d7c00 session 0x56167ec17860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x5616803a0000 session 0x56167ced1860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173186 data_alloc: 218103808 data_used: 11845632
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167c53ec00 session 0x56167f7832c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167e2d7c00 session 0x56167ec16780
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167e373000 session 0x56167f01dc20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x5616803a0000 session 0x56167cecd860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x561680f06800 session 0x56167d3301e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167c53ec00 session 0x56167d04de00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x561680f06400 session 0x56167ebc61e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167e373000 session 0x56167d04c000
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167e2d7c00 session 0x56167ec02d20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102998016 unmapped: 21250048 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x5616803a0000 session 0x56167eb80f00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103006208 unmapped: 21241856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103006208 unmapped: 21241856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167c53ec00 session 0x56167e4feb40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167e2d7c00 session 0x56167e3bb680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 heartbeat osd_stat(store_statfs(0x4f9b39000/0x0/0x4ffc00000, data 0x1a4c6c3/0x1b34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103006208 unmapped: 21241856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x56167e373000 session 0x56167eb24f00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103006208 unmapped: 21241856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 ms_handle_reset con 0x561680f06400 session 0x56167e4ba3c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226230 data_alloc: 218103808 data_used: 11845632
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103006208 unmapped: 21241856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 21544960 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 21544960 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x561680f07800 session 0x5616803bb2c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9b36000/0x0/0x4ffc00000, data 0x1a4e126/0x1b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9b36000/0x0/0x4ffc00000, data 0x1a4e126/0x1b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102703104 unmapped: 21544960 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9b36000/0x0/0x4ffc00000, data 0x1a4e126/0x1b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 21528576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227668 data_alloc: 218103808 data_used: 11849728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 21528576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 21528576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102719488 unmapped: 21528576 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 21504000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 21504000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9b36000/0x0/0x4ffc00000, data 0x1a4e126/0x1b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239988 data_alloc: 218103808 data_used: 13557760
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 21504000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 21504000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 21504000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 21504000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 21504000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9b36000/0x0/0x4ffc00000, data 0x1a4e126/0x1b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240308 data_alloc: 218103808 data_used: 13590528
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102744064 unmapped: 21504000 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 102825984 unmapped: 21422080 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 105545728 unmapped: 18702336 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9b36000/0x0/0x4ffc00000, data 0x1a4e126/0x1b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x561680598960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e2d7c00 session 0x56168107f0e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 105545728 unmapped: 18702336 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279988 data_alloc: 218103808 data_used: 18329600
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.697580338s of 29.090560913s, submitted: 72
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9b36000/0x0/0x4ffc00000, data 0x1a4e126/0x1b37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e373000 session 0x56167d09a5a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 20217856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0aa000/0x0/0x4ffc00000, data 0x14dc116/0x15c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 20217856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199570 data_alloc: 218103808 data_used: 12705792
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 20217856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0aa000/0x0/0x4ffc00000, data 0x14dc116/0x15c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 20217856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0aa000/0x0/0x4ffc00000, data 0x14dc116/0x15c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 20217856 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199570 data_alloc: 218103808 data_used: 12705792
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 20209664 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0aa000/0x0/0x4ffc00000, data 0x14dc116/0x15c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 20209664 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.722612381s of 11.736725807s, submitted: 4
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x561680f06400 session 0x56167c80d0e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x561680f07800 session 0x56168107f2c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x56167e4d94a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e2d7c00 session 0x56167ec03860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e373000 session 0x56167ec02000
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103538688 unmapped: 20709376 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16133 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103538688 unmapped: 20709376 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x561680f06400 session 0x56167cee85a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x561680f07800 session 0x56167ec170e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x56167c80af00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e2d7c00 session 0x561680db8960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222379 data_alloc: 218103808 data_used: 12705792
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e373000 session 0x56167ec02b40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x561680f06400 session 0x56167ebd81e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x561680f07c00 session 0x56167c80bc20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x56167ebd9a40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 20488192 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e2d7c00 session 0x56167c80a5a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 20488192 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1f27126/0x2010000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 20488192 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e373000 session 0x56167d32ed20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x561680f06400 session 0x5616803ba5a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064fc00 session 0x5616805985a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 20488192 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288453 data_alloc: 218103808 data_used: 12705792
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x56167f783680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e2d7c00 session 0x56167cebf0e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e373000 session 0x56167d330000
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 17817600 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064f800 session 0x56167cf22000
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064f000 session 0x561680598d20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 106946560 unmapped: 17301504 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8cbe000/0x0/0x4ffc00000, data 0x28c6136/0x29b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.056495667s of 10.598365784s, submitted: 104
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 105160704 unmapped: 19087360 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 18022400 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 15900672 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412883 data_alloc: 218103808 data_used: 17870848
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111108096 unmapped: 13139968 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 11108352 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8c40000/0x0/0x4ffc00000, data 0x2944136/0x2a2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113139712 unmapped: 11108352 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8c40000/0x0/0x4ffc00000, data 0x2944136/0x2a2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 10838016 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1449171 data_alloc: 234881024 data_used: 23175168
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 10838016 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x56167d34c780
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e2d7c00 session 0x56167ec17680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8c20000/0x0/0x4ffc00000, data 0x2964136/0x2a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,1])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e373000 session 0x56167eb812c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112435200 unmapped: 11812864 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8f20000/0x0/0x4ffc00000, data 0x2664136/0x274e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112435200 unmapped: 11812864 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405011 data_alloc: 218103808 data_used: 20029440
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112435200 unmapped: 11812864 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8f20000/0x0/0x4ffc00000, data 0x2664136/0x274e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112451584 unmapped: 11796480 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8f20000/0x0/0x4ffc00000, data 0x2664136/0x274e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112451584 unmapped: 11796480 heap: 124248064 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064f800 session 0x56167ebd83c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064ec00 session 0x56167ced05a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x56167cd1a1e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e2d7c00 session 0x56167e4bb680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.873032570s of 16.159612656s, submitted: 55
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e373000 session 0x56167eb24d20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112713728 unmapped: 21512192 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064f800 session 0x56167e4d9680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064e000 session 0x56167ecbcf00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064e000 session 0x561680db9680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x56167eb81a40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112721920 unmapped: 21504000 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504827 data_alloc: 218103808 data_used: 20029440
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112730112 unmapped: 21495808 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8152000/0x0/0x4ffc00000, data 0x3431146/0x351c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112730112 unmapped: 21495808 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8152000/0x0/0x4ffc00000, data 0x3431146/0x351c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112730112 unmapped: 21495808 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1504871 data_alloc: 218103808 data_used: 20029440
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e2d7c00 session 0x5616805992c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112754688 unmapped: 21471232 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 21454848 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8151000/0x0/0x4ffc00000, data 0x3431169/0x351d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112771072 unmapped: 21454848 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 21446656 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1508137 data_alloc: 218103808 data_used: 20033536
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 21446656 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8151000/0x0/0x4ffc00000, data 0x3431169/0x351d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 21446656 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8151000/0x0/0x4ffc00000, data 0x3431169/0x351d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112779264 unmapped: 21446656 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167e373000 session 0x56167d3305a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.552539825s of 14.703654289s, submitted: 21
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56168064f800 session 0x56167c6bc780
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 21544960 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 ms_handle_reset con 0x56167c53ec00 session 0x56167cd1b0e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 21544960 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412887 data_alloc: 218103808 data_used: 20037632
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112680960 unmapped: 21544960 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112689152 unmapped: 21536768 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f8f1c000/0x0/0x4ffc00000, data 0x2667136/0x2751000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 21520384 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 114860032 unmapped: 19365888 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1458507 data_alloc: 218103808 data_used: 20434944
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f89c8000/0x0/0x4ffc00000, data 0x2bae136/0x2c98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 18833408 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f89c8000/0x0/0x4ffc00000, data 0x2bae136/0x2c98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 18833408 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f89c8000/0x0/0x4ffc00000, data 0x2bae136/0x2c98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 19275776 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112869376 unmapped: 21356544 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.552525520s of 11.021986961s, submitted: 101
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112369664 unmapped: 21856256 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467632 data_alloc: 218103808 data_used: 20361216
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e2d7c00 session 0x56168074bc20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112369664 unmapped: 21856256 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f89b4000/0x0/0x4ffc00000, data 0x2bcccd6/0x2cb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112410624 unmapped: 21815296 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e3b8c00 session 0x561680db9c20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e3b9c00 session 0x5616803bb680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167ea79400 session 0x56167eb80780
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167c53ec00 session 0x56167e2f5860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e2d7c00 session 0x56167e2f4b40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112484352 unmapped: 21741568 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x561680f07000 session 0x56167f01f860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x561680f07400 session 0x56167cd1af00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e3b8c00 session 0x56167c80dc20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 22429696 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21f2c64/0x22dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381302 data_alloc: 218103808 data_used: 17756160
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 22429696 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21f2c64/0x22dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 22429696 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 22421504 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8fe6000/0x0/0x4ffc00000, data 0x21f2c64/0x22dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167c53ec00 session 0x5616803ba3c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 22421504 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e2d7c00 session 0x56167d09a3c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 22421504 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x561680f07000 session 0x56167eb25680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381302 data_alloc: 218103808 data_used: 17756160
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.108545303s of 11.337874413s, submitted: 37
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x561680f07400 session 0x56167ec032c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8fe5000/0x0/0x4ffc00000, data 0x21f2c74/0x22de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 22421504 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8fe5000/0x0/0x4ffc00000, data 0x21f2c74/0x22de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 22421504 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112050176 unmapped: 22175744 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45000 session 0x56167eb24b40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45000 session 0x56167c6d8780
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167c53ec00 session 0x56167eaac5a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e2d7c00 session 0x56167c80be00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x561680f07000 session 0x56167e4ff4a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 112967680 unmapped: 21258240 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1443190 data_alloc: 218103808 data_used: 18305024
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 21168128 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8c61000/0x0/0x4ffc00000, data 0x2921c74/0x2a0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 21168128 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x561680f07400 session 0x56167c80a3c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 20668416 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167c53ec00 session 0x56167e4bba40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e2d7c00 session 0x56167d335a40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45000 session 0x56167d09b2c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb44c00 session 0x56167cebeb40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1529585 data_alloc: 218103808 data_used: 19918848
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8c61000/0x0/0x4ffc00000, data 0x2921c74/0x2a0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 20684800 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.563384056s of 11.790323257s, submitted: 33
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb44800 session 0x56167c80d860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 20332544 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82d0000/0x0/0x4ffc00000, data 0x32b2c74/0x339e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb44c00 session 0x56167c6b61e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 20332544 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535368 data_alloc: 218103808 data_used: 19988480
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45000 session 0x56167cf23680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 115310592 unmapped: 18915328 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e0d3000 session 0x56167d32e960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 17473536 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x561680f07000 session 0x56167cec2960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 13475840 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 121044992 unmapped: 13180928 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 13172736 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1599725 data_alloc: 234881024 data_used: 27455488
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82a2000/0x0/0x4ffc00000, data 0x32ddca7/0x33cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123281408 unmapped: 10944512 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x561680f06400 session 0x56167e3bab40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168064f400 session 0x56168107e000
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 126976000 unmapped: 7249920 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e0d3000 session 0x56167e4d8f00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128720896 unmapped: 5505024 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.516874313s of 11.963299751s, submitted: 32
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128753664 unmapped: 5472256 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82a3000/0x0/0x4ffc00000, data 0x32ddca7/0x33cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128786432 unmapped: 5439488 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665789 data_alloc: 234881024 data_used: 35467264
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128786432 unmapped: 5439488 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82a3000/0x0/0x4ffc00000, data 0x32ddca7/0x33cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128958464 unmapped: 5267456 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128999424 unmapped: 5226496 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 129007616 unmapped: 5218304 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667389 data_alloc: 234881024 data_used: 35651584
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82a3000/0x0/0x4ffc00000, data 0x32ddca7/0x33cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 129007616 unmapped: 5218304 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82a3000/0x0/0x4ffc00000, data 0x32ddca7/0x33cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 129007616 unmapped: 5218304 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82a3000/0x0/0x4ffc00000, data 0x32ddca7/0x33cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 5185536 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667389 data_alloc: 234881024 data_used: 35651584
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 5185536 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82a3000/0x0/0x4ffc00000, data 0x32ddca7/0x33cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 5185536 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f82a3000/0x0/0x4ffc00000, data 0x32ddca7/0x33cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 5185536 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667869 data_alloc: 234881024 data_used: 35659776
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 5169152 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.246124268s of 17.300064087s, submitted: 2
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 3137536 heap: 134225920 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7ea1000/0x0/0x4ffc00000, data 0x36dfca7/0x37cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132521984 unmapped: 2752512 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132136960 unmapped: 3137536 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132284416 unmapped: 2990080 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1711737 data_alloc: 234881024 data_used: 36798464
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 133480448 unmapped: 1794048 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7e32000/0x0/0x4ffc00000, data 0x3746ca7/0x3834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 1777664 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7e32000/0x0/0x4ffc00000, data 0x3746ca7/0x3834000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 1777664 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 133496832 unmapped: 1777664 heap: 135274496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 134250496 unmapped: 5218304 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1788877 data_alloc: 251658240 data_used: 37220352
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 135872512 unmapped: 3596288 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 3383296 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.433458328s of 11.277626038s, submitted: 124
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 4562944 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7523000/0x0/0x4ffc00000, data 0x405dca7/0x414b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 4562944 heap: 139468800 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7523000/0x0/0x4ffc00000, data 0x405dca7/0x414b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 139149312 unmapped: 2416640 heap: 141565952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1854027 data_alloc: 251658240 data_used: 38567936
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 139182080 unmapped: 2383872 heap: 141565952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 3637248 heap: 141565952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f6e3d000/0x0/0x4ffc00000, data 0x4743ca7/0x4831000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138010624 unmapped: 3555328 heap: 141565952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 139247616 unmapped: 2318336 heap: 141565952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 2056192 heap: 141565952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1869085 data_alloc: 251658240 data_used: 39149568
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 2056192 heap: 141565952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 2048000 heap: 141565952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124b000 session 0x56167ebfad20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124b400 session 0x56167cec34a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124b800 session 0x56167c636d20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.545746803s of 10.141256332s, submitted: 122
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e0d3000 session 0x56167ec02b40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168064f400 session 0x56167ec03e00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138543104 unmapped: 23101440 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5eb2000/0x0/0x4ffc00000, data 0x56ceca7/0x57bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 23674880 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5eb2000/0x0/0x4ffc00000, data 0x56ceca7/0x57bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 23674880 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1989476 data_alloc: 251658240 data_used: 39145472
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 23674880 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 23674880 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5eb2000/0x0/0x4ffc00000, data 0x56ceca7/0x57bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137969664 unmapped: 23674880 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124b000 session 0x56167c80d860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137773056 unmapped: 23871488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 23855104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1990764 data_alloc: 251658240 data_used: 39145472
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 137871360 unmapped: 23773184 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5e8e000/0x0/0x4ffc00000, data 0x56f2ca7/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 22708224 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 142139392 unmapped: 19505152 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 13852672 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 147800064 unmapped: 13844480 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.560036659s of 12.733132362s, submitted: 13
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077532 data_alloc: 251658240 data_used: 49840128
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 147800064 unmapped: 13844480 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5e8e000/0x0/0x4ffc00000, data 0x56f2ca7/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5e8e000/0x0/0x4ffc00000, data 0x56f2ca7/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 13803520 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5e8e000/0x0/0x4ffc00000, data 0x56f2ca7/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 12754944 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 12754944 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 12754944 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5e8e000/0x0/0x4ffc00000, data 0x56f2ca7/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077356 data_alloc: 251658240 data_used: 49840128
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 12754944 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 12754944 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5e8d000/0x0/0x4ffc00000, data 0x56f2ca7/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148889600 unmapped: 12754944 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 12746752 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 12730368 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077148 data_alloc: 251658240 data_used: 49840128
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 12730368 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5e8e000/0x0/0x4ffc00000, data 0x56f2ca7/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 12730368 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.716558456s of 12.033885002s, submitted: 7
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148938752 unmapped: 12705792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148996096 unmapped: 12648448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f5e8e000/0x0/0x4ffc00000, data 0x56f2ca7/0x57e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb44c00 session 0x56167eaad0e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45000 session 0x56167cebe3c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124a000 session 0x56167cecd2c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 148996096 unmapped: 12648448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124a400 session 0x56167ebc6f00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1923548 data_alloc: 251658240 data_used: 43515904
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f6b2d000/0x0/0x4ffc00000, data 0x4a47c87/0x4b33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e0d3000 session 0x56167e4bbe00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb44c00 session 0x56167cec25a0
Oct  3 11:42:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/354806602' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1713688 data_alloc: 234881024 data_used: 33050624
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7bdd000/0x0/0x4ffc00000, data 0x39a7c54/0x3a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7bdd000/0x0/0x4ffc00000, data 0x39a7c54/0x3a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1713688 data_alloc: 234881024 data_used: 33050624
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7bdd000/0x0/0x4ffc00000, data 0x39a7c54/0x3a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7bdd000/0x0/0x4ffc00000, data 0x39a7c54/0x3a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7bdd000/0x0/0x4ffc00000, data 0x39a7c54/0x3a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138567680 unmapped: 23076864 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1714168 data_alloc: 234881024 data_used: 33062912
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7bdd000/0x0/0x4ffc00000, data 0x39a7c54/0x3a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 138575872 unmapped: 23068672 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.761955261s of 19.092134476s, submitted: 79
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 142622720 unmapped: 19021824 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 142622720 unmapped: 19021824 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f72cf000/0x0/0x4ffc00000, data 0x42a7c54/0x4391000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 143155200 unmapped: 18489344 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 141197312 unmapped: 20447232 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1803340 data_alloc: 234881024 data_used: 34070528
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 141205504 unmapped: 20439040 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f72a2000/0x0/0x4ffc00000, data 0x42dcc54/0x43c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 141205504 unmapped: 20439040 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 141205504 unmapped: 20439040 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 141205504 unmapped: 20439040 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124b400 session 0x56167c80d4a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124b800 session 0x56168074b2c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 33767424 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e0d3000 session 0x56167e4ba1e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f877c000/0x0/0x4ffc00000, data 0x29f8c54/0x2ae2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486082 data_alloc: 234881024 data_used: 16850944
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 33767424 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 33767424 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f877c000/0x0/0x4ffc00000, data 0x29f8c54/0x2ae2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 33767424 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e3b9c00 session 0x56167ebc72c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45c00 session 0x56167eb801e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 33767424 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f877c000/0x0/0x4ffc00000, data 0x29f8c54/0x2ae2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 33767424 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486082 data_alloc: 234881024 data_used: 16850944
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.421999931s of 14.332877159s, submitted: 131
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f877c000/0x0/0x4ffc00000, data 0x29f8c54/0x2ae2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb44c00 session 0x56167e4fe3c0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 36585472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 36585472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 36585472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 36585472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124395520 unmapped: 37249024 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 37240832 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 37240832 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 37240832 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 37240832 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 37240832 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 37240832 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 37240832 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124403712 unmapped: 37240832 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 37232640 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 37232640 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 37232640 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 37232640 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 37232640 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 37232640 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 37232640 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 37232640 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 37224448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 37224448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 37224448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 37224448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 37224448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 37224448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 37224448 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 37216256 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 37216256 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 37216256 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 37216256 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 37216256 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 37216256 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 37216256 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 37216256 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 37208064 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 37208064 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 37208064 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 37208064 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 37208064 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 37208064 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 37208064 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 37199872 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 37199872 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 37191680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 37183488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124461056 unmapped: 37183488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 37175296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 37175296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 37175296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 37175296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 37175296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 37175296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 37175296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124469248 unmapped: 37175296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124477440 unmapped: 37167104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1403499 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 37158912 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124485632 unmapped: 37158912 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 38019072 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8dfd000/0x0/0x4ffc00000, data 0x2378c44/0x2461000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123625472 unmapped: 38019072 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 110.179321289s of 110.219848633s, submitted: 8
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 133242880 unmapped: 28401664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e0d3000 session 0x56167c6bc780
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e3b9c00 session 0x56167eb81680
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45c00 session 0x56167c6b61e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466950 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124b800 session 0x56167c451c20
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124a000 session 0x56167cd1ba40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 37748736 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 37748736 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 37748736 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e0d3000 session 0x56167cd1af00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 37748736 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e3b9c00 session 0x56167c637a40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8687000/0x0/0x4ffc00000, data 0x2aeec44/0x2bd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 37756928 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466950 data_alloc: 218103808 data_used: 14045184
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45c00 session 0x56167cebe5a0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124b800 session 0x56167eb80960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 37748736 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 37748736 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 37748736 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 37740544 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [1])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 37740544 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1509368 data_alloc: 234881024 data_used: 19501056
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522648 data_alloc: 234881024 data_used: 21372928
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522648 data_alloc: 234881024 data_used: 21372928
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522648 data_alloc: 234881024 data_used: 21372928
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 37117952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 37109760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522648 data_alloc: 234881024 data_used: 21372928
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 37109760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 37109760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 37109760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 37109760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 37109760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522648 data_alloc: 234881024 data_used: 21372928
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 37109760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124534784 unmapped: 37109760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 37101568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 37101568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 37101568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8686000/0x0/0x4ffc00000, data 0x2aeec54/0x2bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522648 data_alloc: 234881024 data_used: 21372928
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 124542976 unmapped: 37101568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.648380280s of 41.815612793s, submitted: 25
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 29073408 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f8056000/0x0/0x4ffc00000, data 0x3116c54/0x3200000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 29065216 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132669440 unmapped: 28975104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132669440 unmapped: 28975104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582940 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 28876800 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 28876800 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 28876800 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 28876800 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 28876800 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582956 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 28876800 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 28876800 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 28868608 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 28868608 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 28868608 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582956 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 28868608 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 28868608 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fd3000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 28860416 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 28860416 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 132784128 unmapped: 28860416 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582956 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.388793945s of 19.706190109s, submitted: 71
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131194880 unmapped: 30449664 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.049583435s of 13.056660652s, submitted: 1
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167ea78c00 session 0x56167c4510e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131203072 unmapped: 30441472 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131211264 unmapped: 30433280 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131219456 unmapped: 30425088 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131219456 unmapped: 30425088 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131219456 unmapped: 30425088 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131219456 unmapped: 30425088 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131219456 unmapped: 30425088 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131219456 unmapped: 30425088 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1575084 data_alloc: 234881024 data_used: 21598208
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 47.213550568s of 47.223968506s, submitted: 1
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576684 data_alloc: 234881024 data_used: 21753856
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576684 data_alloc: 234881024 data_used: 21753856
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576684 data_alloc: 234881024 data_used: 21753856
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131227648 unmapped: 30416896 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131235840 unmapped: 30408704 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.1 total, 600.0 interval
Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
Cumulative WAL: 10K writes, 2847 syncs, 3.57 writes per sync, written: 0.03 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1928 writes, 7500 keys, 1928 commit groups, 1.0 writes per commit group, ingest: 9.29 MB, 0.02 MB/s
Interval WAL: 1928 writes, 766 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 40.990829468s of 40.995243073s, submitted: 1
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131244032 unmapped: 30400512 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 30392320 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 30384128 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 30384128 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 30384128 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 30384128 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 30384128 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131268608 unmapped: 30375936 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131276800 unmapped: 30367744 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 30359552 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131293184 unmapped: 30351360 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 30343168 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167ea79000 session 0x56167ec030e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 30343168 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 30343168 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 30343168 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 30343168 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 30343168 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 30343168 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576844 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 30343168 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 75.497352600s of 75.502746582s, submitted: 1
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131424256 unmapped: 30220288 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131424256 unmapped: 30220288 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131424256 unmapped: 30220288 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131448832 unmapped: 30195712 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131448832 unmapped: 30195712 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131457024 unmapped: 30187520 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131530752 unmapped: 30113792 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 30105600 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131547136 unmapped: 30097408 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131547136 unmapped: 30097408 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576668 data_alloc: 234881024 data_used: 21757952
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: last tune_memory message repeated 56 more times (mapped rose in 8192-byte steps from 131547136 to 131596288, unmapped fell from 30097408 to 30048256; target, heap, and old/new mem unchanged)
Oct  3 11:42:34 compute-0 ceph-osd[207741]: last commit_cache_size pair (0.285714 / 0.0555556) and last _resize_shards message repeated 11 more times each, all fields unchanged
Oct  3 11:42:34 compute-0 ceph-osd[207741]: last heartbeat osd_stat message repeated 18 more times, unchanged
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 105.092544556s of 105.745857239s, submitted: 108
Oct  3 11:42:34 compute-0 ceph-osd[207741]: last tune_memory message repeated 36 more times (mapped 131596288 -> 131612672, unmapped 30048256 -> 30031872)
Oct  3 11:42:34 compute-0 ceph-osd[207741]: last commit_cache_size pair and last _resize_shards message repeated 7 more times each (meta_used 1576668 -> 1578268 -> 1578428, data_used 21757952 -> 22097920 -> 22102016; allocations unchanged)
Oct  3 11:42:34 compute-0 ceph-osd[207741]: last heartbeat osd_stat message repeated 10 more times, unchanged
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.933090210s of 35.950458527s, submitted: 2
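
The two _kv_sync_thread utilization lines (this one and the one earlier in the same second) are the only messages in this window that do not repeat, and both describe an almost completely idle BlueStore commit thread: idle 105.092544556 s of 105.745857239 s elapsed is ~99.4 % idle across 108 submitted transactions, and idle 35.933090210 s of 35.950458527 s is ~99.95 % across 2, consistent with how little the surrounding counters move.
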
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131612672 unmapped: 30031872 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131612672 unmapped: 30031872 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131620864 unmapped: 30023680 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131629056 unmapped: 30015488 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 30007296 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131645440 unmapped: 29999104 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131653632 unmapped: 29990912 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 29982720 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 29982720 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 29982720 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 29982720 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 29982720 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 29982720 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 29982720 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 29982720 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131670016 unmapped: 29974528 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131670016 unmapped: 29974528 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131670016 unmapped: 29974528 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131670016 unmapped: 29974528 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131670016 unmapped: 29974528 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131670016 unmapped: 29974528 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131678208 unmapped: 29966336 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131678208 unmapped: 29966336 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131678208 unmapped: 29966336 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131678208 unmapped: 29966336 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131678208 unmapped: 29966336 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131678208 unmapped: 29966336 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 29958144 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 29958144 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 29958144 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 29958144 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 29958144 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 29958144 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131694592 unmapped: 29949952 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 29941760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 29941760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 29941760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 29941760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 29941760 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 29933568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 29933568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 29933568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 234881024 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 29933568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 29933568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 29933568 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 29925376 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 29925376 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 29925376 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 29925376 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 29925376 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 29925376 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 29925376 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 29925376 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 29917184 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 29917184 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 29917184 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 29917184 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 29917184 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 29917184 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 29917184 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131735552 unmapped: 29908992 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131735552 unmapped: 29908992 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131735552 unmapped: 29908992 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131735552 unmapped: 29908992 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131735552 unmapped: 29908992 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1578604 data_alloc: 218103808 data_used: 22102016
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131735552 unmapped: 29908992 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f7fdb000/0x0/0x4ffc00000, data 0x3199c54/0x3283000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 131735552 unmapped: 29908992 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167c53ec00 session 0x56167e4d8960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 217.116027832s of 217.122406006s, submitted: 1
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167e2d7c00 session 0x56167ecbc000
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 32940032 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45c00 session 0x561680db8f00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 32940032 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392994 data_alloc: 218103808 data_used: 15437824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 32940032 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9048000/0x0/0x4ffc00000, data 0x212cc54/0x2216000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 32940032 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9048000/0x0/0x4ffc00000, data 0x212cc54/0x2216000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 32940032 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392994 data_alloc: 218103808 data_used: 15437824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 32940032 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9048000/0x0/0x4ffc00000, data 0x212cc54/0x2216000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 32940032 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.604131699s of 11.737083435s, submitted: 22
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168124a400 session 0x56167c6b6960
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56168064f400 session 0x56167d34c1e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 128704512 unmapped: 32940032 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392770 data_alloc: 218103808 data_used: 15437824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167c53ec00 session 0x56167ecbd860
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 40927232 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9e69000/0x0/0x4ffc00000, data 0x130bc54/0x13f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 40927232 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9e6a000/0x0/0x4ffc00000, data 0x130bc44/0x13f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 40927232 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9e6a000/0x0/0x4ffc00000, data 0x130bc44/0x13f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 40927232 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229619 data_alloc: 218103808 data_used: 7233536
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 40927232 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9e6a000/0x0/0x4ffc00000, data 0x130bc44/0x13f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 40927232 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229867 data_alloc: 218103808 data_used: 7233536
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 ms_handle_reset con 0x56167fb45c00 session 0x56167eb25a40
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.531622887s of 10.673888206s, submitted: 27
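The _kv_sync_thread line above is a duty-cycle report: over a 10.67-second window the RocksDB sync thread sat idle for 10.53 s while committing 27 transactions, i.e. it was busy only about 1.3% of the time, consistent with a near-idle cluster. The arithmetic, using the two figures from the line:

    idle, window = 10.531622887, 10.673888206  # seconds, from the log line above
    print(f"busy {1 - idle / window:.1%}")      # busy 1.3%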
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 150 ms_handle_reset con 0x56167e2d7c00 session 0x56167ec02f00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 40927232 heap: 161644544 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 57712640 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 151 ms_handle_reset con 0x56168124a400 session 0x56167d34de00
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9662000/0x0/0x4ffc00000, data 0x1b0f38b/0x1bf9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120717312 unmapped: 57712640 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 152 ms_handle_reset con 0x56168124b800 session 0x56168074b0e0
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9660000/0x0/0x4ffc00000, data 0x1b10f6c/0x1bfc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120741888 unmapped: 57688064 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 57679872 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240973 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 57679872 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 57679872 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120750080 unmapped: 57679872 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 57671680 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 153 heartbeat osd_stat(store_statfs(0x4f9e5e000/0x0/0x4ffc00000, data 0x13129eb/0x13ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 57671680 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243275 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 57671680 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 57671680 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 57671680 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 153 heartbeat osd_stat(store_statfs(0x4f9e5e000/0x0/0x4ffc00000, data 0x13129eb/0x13ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 57671680 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 57671680 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1243275 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
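The handle_osd_map messages trace osd.2 catching up with the monitor's map history within this one second, advancing from epoch 149 to 154 one or two epochs at a time; each message names the epoch range just received, the epoch the OSD currently has, and the range the source holds. A small sketch that extracts that progression from journal text (the regex is an assumption fitted to the format above):

    import re

    MAP_RE = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")

    lines = [
        "osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]",
        "osd.2 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]",
        "osd.2 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]",
    ]
    for line in lines:
        first, last, have = map(int, MAP_RE.search(line).groups())
        print(f"have {have}, received [{first},{last}] -> should reach {last}")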
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.209228516s of 15.539533615s, submitted: 60
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246249 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246249 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246249 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246249 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246249 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246249 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246249 data_alloc: 218103808 data_used: 7241728
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 57655296 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120791040 unmapped: 57638912 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:34 compute-0 ceph-osd[207741]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:34 compute-0 ceph-osd[207741]: bluestore.MempoolThread(0x56167adabb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246409 data_alloc: 218103808 data_used: 7245824
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 57630720 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120913920 unmapped: 57516032 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'config diff' '{prefix=config diff}'
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'config show' '{prefix=config show}'
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'counter dump' '{prefix=counter dump}'
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'counter schema' '{prefix=counter schema}'
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  3 11:42:34 compute-0 ceph-osd[207741]: osd.2 154 heartbeat osd_stat(store_statfs(0x4f9e5b000/0x0/0x4ffc00000, data 0x131444e/0x1402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 57466880 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 57966592 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: prioritycache tune_memory target: 4294967296 mapped: 120725504 unmapped: 57704448 heap: 178429952 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:34 compute-0 ceph-osd[207741]: do_command 'log dump' '{prefix=log dump}'
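The do_command entries above are the OSD's admin socket answering queries; config diff, config show, counter dump, counter schema, and log dump are the same commands an operator can issue by hand with ceph daemon against this daemon. A hedged sketch of driving that interface from Python via the CLI, assuming the ceph client is installed on this host with access to osd.2's admin socket (the command names come straight from the log; most admin-socket replies are JSON, which is assumed here):

    import json
    import subprocess

    def admin_socket(daemon, *cmd):
        # Shell out to the ceph CLI and decode the admin-socket JSON reply.
        out = subprocess.check_output(["ceph", "daemon", daemon, *cmd])
        return json.loads(out)

    # The same queries the log shows being dispatched against osd.2.
    diff = admin_socket("osd.2", "config", "diff")
    schema = admin_socket("osd.2", "counter", "schema")
    print(len(diff), len(schema))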
Oct  3 11:42:34 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4067: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:34 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16135 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:34 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  3 11:42:34 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2219001619' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  3 11:42:35 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16139 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  3 11:42:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561358310' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  3 11:42:35 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16143 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:35 compute-0 nova_compute[351685]: 2025-10-03 11:42:35.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:42:35 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  3 11:42:35 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3581000377' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  3 11:42:35 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  3 11:42:36 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  3 11:42:36 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3540617896' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  3 11:42:36 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16151 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  3 11:42:36 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4068: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:36 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16155 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  3 11:42:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  3 11:42:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542357452' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  3 11:42:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  3 11:42:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3262840816' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  3 11:42:37 compute-0 ceph-mgr[192071]: log_channel(audit) log [DBG] : from='client.16163 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  3 11:42:37 compute-0 ceph-mgr[192071]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  3 11:42:37 compute-0 ceph-9b4e8c9a-5555-5510-a631-4742a1182561-mgr-compute-0-vtkhde[192067]: 2025-10-03T11:42:37.561+0000 7f321e7b5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
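The manager rejects healthcheck history ls with ENOTSUP (95) because that command belongs to the prometheus mgr module, which is not enabled, and the error text itself names the fix. Assuming admin rights, the remedy is the command quoted in the message, wrapped here in the same subprocess style as above:

    import subprocess

    # Enable the module the error message asks for, then retry the query.
    subprocess.check_call(["ceph", "mgr", "module", "enable", "prometheus"])
    subprocess.check_call(["ceph", "healthcheck", "history", "ls"])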
Oct  3 11:42:37 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  3 11:42:37 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4209439925' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  3 11:42:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  3 11:42:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225161366' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  3 11:42:38 compute-0 nova_compute[351685]: 2025-10-03 11:42:38.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  3 11:42:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  3 11:42:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977697454' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  3 11:42:38 compute-0 ceph-mgr[192071]: log_channel(cluster) log [DBG] : pgmap v4069: 321 pgs: 321 active+clean; 118 MiB data, 321 MiB used, 60 GiB / 60 GiB avail
Oct  3 11:42:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  3 11:42:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820510911' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  3 11:42:38 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  3 11:42:38 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1765664296' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
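Reading the heartbeat's store_statfs triple as available/reserved/total bytes in hex (an assumption based on magnitudes: total 0x4ffc00000 is just under 20 GiB, and three such OSDs match the pgmap's 60 GiB cluster), osd.1 still has about 19.92 GiB of its ~20 GiB free, roughly 99.6%. Checking the arithmetic:

    avail, total = 0x4faaa0000, 0x4ffc00000  # hex fields from the line above
    print(f"{avail / 2**30:.2f} of {total / 2**30:.2f} GiB free "
          f"({avail / total:.1%})")          # 19.92 of 20.00 GiB (99.6%)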
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98590720 unmapped: 17735680 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 9884 writes, 35K keys, 9884 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 9884 writes, 2638 syncs, 3.75 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5650663e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.684997559s of 600.302673340s, submitted: 90
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98598912 unmapped: 17727488 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 17702912 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 17645568 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 17637376 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98697216 unmapped: 17629184 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 17620992 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 ms_handle_reset con 0x565067cf5400 session 0x56506b995860
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 ms_handle_reset con 0x565068c85c00 session 0x565068c48960
Oct  3 11:42:38 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 ms_handle_reset con 0x56506a8a6400 session 0x565068b421e0
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 ms_handle_reset con 0x565068039000 session 0x565069ed45a0
Oct  3 11:42:38 compute-0 ceph-osd[206733]: mgrc ms_handle_reset ms_handle_reset con 0x56506a881000
Oct  3 11:42:38 compute-0 ceph-osd[206733]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3262515590
Oct  3 11:42:38 compute-0 ceph-osd[206733]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3262515590,v1:192.168.122.100:6801/3262515590]
Oct  3 11:42:38 compute-0 ceph-osd[206733]: mgrc handle_mgr_configure stats_period=5
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 ms_handle_reset con 0x565068c96c00 session 0x565068c490e0
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 ms_handle_reset con 0x565069b6bc00 session 0x565067e1fa40
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
[... 133 duplicate ceph-osd[206733] lines omitted: the same prioritycache tune_memory, osd.1 heartbeat osd_stat, rocksdb commit_cache_size, and bluestore _resize_shards messages repeating verbatim at Oct  3 11:42:38 ...]
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 ms_handle_reset con 0x56506a536400 session 0x565068c49c20
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097920 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 6600.1 total, 600.0 interval
    Cumulative writes: 10K writes, 35K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
    Cumulative WAL: 10K writes, 2728 syncs, 3.69 writes per sync, written: 0.02 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xaf9ae9/0xbce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 17612800 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 565.841674805s of 566.463256836s, submitted: 90
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 17604608 heap: 116326400 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185382 data_alloc: 218103808 data_used: 4943872
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 34357248 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 144 ms_handle_reset con 0x56506a536400 session 0x5650698634a0
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 34332672 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 34299904 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 ms_handle_reset con 0x5650698fcc00 session 0x565067f27a40
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91ba000/0x0/0x4ffc00000, data 0x23db699/0x24b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 34299904 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 34299904 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280842 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 34299904 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 34291712 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 34291712 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 34291712 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 34291712 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280842 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 34291712 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 34291712 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 34291712 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 34291712 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280842 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280842 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280842 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280842 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f91b6000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 34283520 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.712635040s of 33.881046295s, submitted: 10
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98844672 unmapped: 34267136 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98844672 unmapped: 34267136 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:38 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:38 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279114 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 34201600 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98951168 unmapped: 34160640 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279114 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279114 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98959360 unmapped: 34152448 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279114 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f8da8000/0x0/0x4ffc00000, data 0x23dd216/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279114 data_alloc: 218103808 data_used: 4952064
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98967552 unmapped: 34144256 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.373878479s of 23.966392517s, submitted: 90
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98983936 unmapped: 34127872 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 ms_handle_reset con 0x5650698fd000 session 0x565068c49680
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f8da1000/0x0/0x4ffc00000, data 0x23dede8/0x24bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 ms_handle_reset con 0x565069e1bc00 session 0x565067cb83c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 ms_handle_reset con 0x56506b33cc00 session 0x56506a4081e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 ms_handle_reset con 0x565069e1bc00 session 0x565069ed4d20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 98992128 unmapped: 34119680 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 ms_handle_reset con 0x565069e1a400 session 0x565067cb7860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 ms_handle_reset con 0x5650698fd000 session 0x5650679f01e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 ms_handle_reset con 0x56506a8a5c00 session 0x56506a82eb40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 ms_handle_reset con 0x56506a8a4400 session 0x56506a82f860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 107388928 unmapped: 25722880 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x5650698fcc00 session 0x56506b994780
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313231 data_alloc: 218103808 data_used: 11714560
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x5650698fd000 session 0x5650698baf00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 108462080 unmapped: 24649728 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x565069e1a400 session 0x5650679f12c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x565069e1bc00 session 0x565068bc23c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x56506a8a5c00 session 0x56506a82fa40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x5650698fcc00 session 0x5650698a4780
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x5650698fd000 session 0x565067e1ef00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x565069e1bc00 session 0x565069ed45a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x565069e1a400 session 0x56506a603860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x565068b73000 session 0x565069ba2000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x565069c08400 session 0x565068b421e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x5650698fcc00 session 0x565068026f00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 24109056 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 24109056 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7be9000/0x0/0x4ffc00000, data 0x35969c6/0x3673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 24109056 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109002752 unmapped: 24109056 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7be9000/0x0/0x4ffc00000, data 0x35969c6/0x3673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f7be9000/0x0/0x4ffc00000, data 0x35969c6/0x3673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1460898 data_alloc: 218103808 data_used: 11714560
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x5650698fd000 session 0x56506988d2c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109355008 unmapped: 23756800 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 ms_handle_reset con 0x56506983e400 session 0x565068772d20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109371392 unmapped: 23740416 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.816375732s of 10.445452690s, submitted: 111
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506983e000 session 0x56506a4085a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109379584 unmapped: 23732224 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506983e400 session 0x56506804ed20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x5650698fcc00 session 0x565068772960
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109387776 unmapped: 23724032 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109387776 unmapped: 23724032 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471543 data_alloc: 218103808 data_used: 11735040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7bba000/0x0/0x4ffc00000, data 0x35c247f/0x36a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109395968 unmapped: 23715840 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 109395968 unmapped: 23715840 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 108830720 unmapped: 24281088 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 113008640 unmapped: 20103168 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7bbb000/0x0/0x4ffc00000, data 0x35c247f/0x36a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 19234816 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  3 11:42:39 compute-0 ceph-mon[191783]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390972488' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533463 data_alloc: 234881024 data_used: 19668992
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 19234816 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7bbb000/0x0/0x4ffc00000, data 0x35c247f/0x36a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7bbb000/0x0/0x4ffc00000, data 0x35c247f/0x36a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 19226624 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 19226624 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7bbb000/0x0/0x4ffc00000, data 0x35c247f/0x36a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 19226624 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 19226624 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1534583 data_alloc: 234881024 data_used: 19783680
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 114475008 unmapped: 18636800 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 15081472 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121364480 unmapped: 11747328 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 10887168 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7bbb000/0x0/0x4ffc00000, data 0x35c247f/0x36a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 122224640 unmapped: 10887168 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.076580048s of 18.181915283s, submitted: 25
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1593651 data_alloc: 234881024 data_used: 27959296
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x5650698fd000 session 0x565067e38f00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565069c08400 session 0x565067a4b0e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7bbb000/0x0/0x4ffc00000, data 0x35c247f/0x36a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 10838016 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506a2ec400 session 0x56506803f2c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f851d000/0x0/0x4ffc00000, data 0x2c613ea/0x2d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1459464 data_alloc: 234881024 data_used: 19460096
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1459464 data_alloc: 234881024 data_used: 19460096
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f851d000/0x0/0x4ffc00000, data 0x2c613ea/0x2d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f851d000/0x0/0x4ffc00000, data 0x2c613ea/0x2d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118145024 unmapped: 14966784 heap: 133111808 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506983e400 session 0x565067cb7c20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x5650698fcc00 session 0x565067e11860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x5650698fd000 session 0x565069ed5680
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565069c08400 session 0x5650697f25a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.852344513s of 12.180787086s, submitted: 65
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 118407168 unmapped: 20529152 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506a2ec800 session 0x5650697e03c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506983e400 session 0x565069e110e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x5650698fcc00 session 0x565068bc5860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x5650698fd000 session 0x56506804f0e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565069c08400 session 0x56506804e5a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 21069824 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7b44000/0x0/0x4ffc00000, data 0x363b3fa/0x371a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 21069824 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1546494 data_alloc: 234881024 data_used: 19460096
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 21061632 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067f74400 session 0x565068b42d20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f76a5000/0x0/0x4ffc00000, data 0x3ada3fa/0x3bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 21012480 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 21004288 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f76a5000/0x0/0x4ffc00000, data 0x3ada3fa/0x3bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 21004288 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f76a5000/0x0/0x4ffc00000, data 0x3ada3fa/0x3bb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506983e400 session 0x56506872da40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 21004288 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067a35800 session 0x5650697e1680
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1581962 data_alloc: 234881024 data_used: 19460096
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 21004288 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067ffac00 session 0x56506987d860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067ffa000 session 0x565067e1fe00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 120209408 unmapped: 18726912 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067ffb400 session 0x565067e1d860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6eff000/0x0/0x4ffc00000, data 0x42803fa/0x435f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 120668160 unmapped: 18268160 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.295460701s of 10.764211655s, submitted: 85
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 19357696 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 119635968 unmapped: 19300352 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1663794 data_alloc: 234881024 data_used: 20992000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 19275776 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6ed2000/0x0/0x4ffc00000, data 0x42ad3fa/0x438c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [1])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 127393792 unmapped: 11542528 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 129581056 unmapped: 9355264 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 8069120 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 8003584 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1761302 data_alloc: 251658240 data_used: 33341440
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 8003584 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067a35800 session 0x565069e043c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067ffa000 session 0x5650698ba1e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 130932736 unmapped: 8003584 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f78aa000/0x0/0x4ffc00000, data 0x38d63ea/0x39b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,1])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506983e400 session 0x56506804e5a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 13320192 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 13320192 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 13320192 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1609152 data_alloc: 234881024 data_used: 23392256
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 13320192 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f78d4000/0x0/0x4ffc00000, data 0x38ac3ea/0x398a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 13320192 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 13320192 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f78d4000/0x0/0x4ffc00000, data 0x38ac3ea/0x398a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 125616128 unmapped: 13320192 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.960973740s of 16.128227234s, submitted: 37
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565069c09000 session 0x565067d5ad20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 14581760 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1630856 data_alloc: 234881024 data_used: 23392256
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 14581760 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 14581760 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 14581760 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f76f6000/0x0/0x4ffc00000, data 0x3a8a3ea/0x3b68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 14581760 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f76f6000/0x0/0x4ffc00000, data 0x3a8a3ea/0x3b68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565069c08400 session 0x565068028780
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 14581760 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f76f6000/0x0/0x4ffc00000, data 0x3a8a3ea/0x3b68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067a35800 session 0x56506a82ef00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1630856 data_alloc: 234881024 data_used: 23392256
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 14581760 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067ffa000 session 0x565067e1f4a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x56506983e400 session 0x5650698ba3c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 14958592 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 14958592 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f76d2000/0x0/0x4ffc00000, data 0x3aae3ea/0x3b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 14958592 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 14958592 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1632700 data_alloc: 234881024 data_used: 23396352
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123977728 unmapped: 14958592 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 14950400 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123985920 unmapped: 14950400 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f76d2000/0x0/0x4ffc00000, data 0x3aae3ea/0x3b8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.465065002s of 14.524572372s, submitted: 13
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565069c08400 session 0x565068b42960
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565069c09000 session 0x565067e045a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 15032320 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123936768 unmapped: 14999552 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 ms_handle_reset con 0x565067a35800 session 0x56506804fe00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1613360 data_alloc: 234881024 data_used: 23392256
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123944960 unmapped: 14991360 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f78d4000/0x0/0x4ffc00000, data 0x38ac3ea/0x398a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 14983168 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f78d4000/0x0/0x4ffc00000, data 0x38ac3ea/0x398a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 14983168 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f78d4000/0x0/0x4ffc00000, data 0x38ac3ea/0x398a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 14983168 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 14983168 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1654460 data_alloc: 234881024 data_used: 23392256
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 127680512 unmapped: 11255808 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 127688704 unmapped: 11247616 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 127688704 unmapped: 11247616 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.664110184s of 10.005697250s, submitted: 72
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 126156800 unmapped: 12779520 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f734d000/0x0/0x4ffc00000, data 0x3e333ea/0x3f11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 126156800 unmapped: 12779520 heap: 138936320 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6b4c000/0x0/0x4ffc00000, data 0x46333fa/0x4712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1805103 data_alloc: 234881024 data_used: 24457216
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 126230528 unmapped: 29491200 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067ffa000 session 0x565067e0e1e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 126230528 unmapped: 29491200 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 135716864 unmapped: 20004864 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506983e400 session 0x565069858b40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069c08400 session 0x565067e11680
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069c08800 session 0x56506aa1c3c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067a35800 session 0x5650699210e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067ffa000 session 0x5650698d0000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1a400 session 0x565068bc4780
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1bc00 session 0x565067f08000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 126517248 unmapped: 29204480 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506983e400 session 0x565067e1f860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f54e1000/0x0/0x4ffc00000, data 0x5c9bfd9/0x5d7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1697123 data_alloc: 234881024 data_used: 14495744
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f64a8000/0x0/0x4ffc00000, data 0x4c70fb6/0x4d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f64a8000/0x0/0x4ffc00000, data 0x4c70fb6/0x4d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1697123 data_alloc: 234881024 data_used: 14495744
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067a35800 session 0x56506a534b40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.923872948s of 13.605126381s, submitted: 105
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506983e400 session 0x5650679f1a40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1a400 session 0x56506b9945a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f64a8000/0x0/0x4ffc00000, data 0x4c70fb6/0x4d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 34250752 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1bc00 session 0x5650697e14a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069c08400 session 0x565067e1cd20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 134963200 unmapped: 20758528 heap: 155721728 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f650d000/0x0/0x4ffc00000, data 0x4c70fb6/0x4d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067a35800 session 0x565069e05c20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 14843904 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506983e400 session 0x56506804eb40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069c08400 session 0x565067e1e5a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1a400 session 0x56506a61a000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1bc00 session 0x565068b13a40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067a35800 session 0x565068abfa40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1869211 data_alloc: 234881024 data_used: 31899648
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 135774208 unmapped: 24150016 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137797632 unmapped: 22126592 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 21995520 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 21995520 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5a02000/0x0/0x4ffc00000, data 0x577a028/0x585c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137928704 unmapped: 21995520 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1946855 data_alloc: 251658240 data_used: 36106240
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506983e400 session 0x565068b42f00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 22233088 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069c08400 session 0x56506aa2b860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1a400 session 0x56506b9941e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137691136 unmapped: 22233088 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5596000/0x0/0x4ffc00000, data 0x5be6028/0x5cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069c08800 session 0x56506aa2a5a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.163011551s of 10.508476257s, submitted: 62
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137707520 unmapped: 22216704 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067a35800 session 0x56506a83f0e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 22167552 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137756672 unmapped: 22167552 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1a400 session 0x56506a83f680
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1939694 data_alloc: 251658240 data_used: 36110336
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 137789440 unmapped: 22134784 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593400 session 0x56506a83e000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 139902976 unmapped: 20021248 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a592c00 session 0x565067e1d2c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 144449536 unmapped: 15474688 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a592800 session 0x565067e1c000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,1])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146546688 unmapped: 13377536 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146571264 unmapped: 13352960 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2013579 data_alloc: 251658240 data_used: 46252032
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147013632 unmapped: 12910592 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147005440 unmapped: 12918784 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067ffac00 session 0x565069920d20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 148987904 unmapped: 10936320 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 10567680 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 10567680 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2044939 data_alloc: 251658240 data_used: 47882240
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 10567680 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149389312 unmapped: 10534912 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149487616 unmapped: 10436608 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149651456 unmapped: 10272768 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149651456 unmapped: 10272768 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2045259 data_alloc: 251658240 data_used: 47919104
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149659648 unmapped: 10264576 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149667840 unmapped: 10256384 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149667840 unmapped: 10256384 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149667840 unmapped: 10256384 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149667840 unmapped: 10256384 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2045259 data_alloc: 251658240 data_used: 47919104
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149667840 unmapped: 10256384 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149667840 unmapped: 10256384 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5594000/0x0/0x4ffc00000, data 0x5be605b/0x5cca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 10240000 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 10231808 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 10231808 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2045259 data_alloc: 251658240 data_used: 47919104
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 10231808 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.766679764s of 29.143795013s, submitted: 14
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 149692416 unmapped: 10231808 heap: 159924224 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f53fe000/0x0/0x4ffc00000, data 0x5d7c05b/0x5e60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,1,15,16])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 160579584 unmapped: 6168576 heap: 166748160 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158007296 unmapped: 8740864 heap: 166748160 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158105600 unmapped: 8642560 heap: 166748160 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2157425 data_alloc: 251658240 data_used: 49262592
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158195712 unmapped: 8552448 heap: 166748160 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f473d000/0x0/0x4ffc00000, data 0x6a3d05b/0x6b21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 159088640 unmapped: 7659520 heap: 166748160 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f46b9000/0x0/0x4ffc00000, data 0x6ac105b/0x6ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 159088640 unmapped: 7659520 heap: 166748160 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f46b9000/0x0/0x4ffc00000, data 0x6ac105b/0x6ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 159088640 unmapped: 7659520 heap: 166748160 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 159088640 unmapped: 7659520 heap: 166748160 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2203597 data_alloc: 251658240 data_used: 49405952
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162488320 unmapped: 7413760 heap: 169902080 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 7176192 heap: 169902080 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.543340683s of 11.100098610s, submitted: 245
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 161316864 unmapped: 8585216 heap: 169902080 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f3883000/0x0/0x4ffc00000, data 0x78f605b/0x79da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,1])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162275328 unmapped: 7626752 heap: 169902080 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162275328 unmapped: 7626752 heap: 169902080 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2352313 data_alloc: 268435456 data_used: 50384896
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 163119104 unmapped: 7839744 heap: 170958848 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f3049000/0x0/0x4ffc00000, data 0x812305b/0x8207000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 163168256 unmapped: 7790592 heap: 170958848 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 163504128 unmapped: 7454720 heap: 170958848 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2fac000/0x0/0x4ffc00000, data 0x81b805b/0x829c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162193408 unmapped: 8765440 heap: 170958848 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2fc2000/0x0/0x4ffc00000, data 0x81b805b/0x829c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162209792 unmapped: 8749056 heap: 170958848 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2364903 data_alloc: 268435456 data_used: 50790400
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162234368 unmapped: 8724480 heap: 170958848 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162242560 unmapped: 8716288 heap: 170958848 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.228363991s of 10.003221512s, submitted: 126
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 165691392 unmapped: 11567104 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593400 session 0x5650698bb860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593000 session 0x5650698a4780
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162553856 unmapped: 14704640 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162553856 unmapped: 14704640 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2bb5000/0x0/0x4ffc00000, data 0x85c40bd/0x86a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2398308 data_alloc: 268435456 data_used: 50790400
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2bb5000/0x0/0x4ffc00000, data 0x85c40bd/0x86a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 14671872 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2bb5000/0x0/0x4ffc00000, data 0x85c40bd/0x86a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 14639104 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067f73c00 session 0x565067f08960
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 14639104 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x5650698fd000 session 0x565067f09680
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067f73c00 session 0x565067e041e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 14639104 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2bb2000/0x0/0x4ffc00000, data 0x85c70bd/0x86ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067ffac00 session 0x56506a61b2c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162643968 unmapped: 14614528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2398989 data_alloc: 268435456 data_used: 50790400
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 14606336 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162652160 unmapped: 14606336 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baf000/0x0/0x4ffc00000, data 0x85ca0bd/0x86af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 162865152 unmapped: 14393344 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baf000/0x0/0x4ffc00000, data 0x85ca0bd/0x86af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164257792 unmapped: 13000704 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164257792 unmapped: 13000704 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2427481 data_alloc: 268435456 data_used: 52166656
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.778897285s of 13.251462936s, submitted: 39
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164257792 unmapped: 13000704 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2bad000/0x0/0x4ffc00000, data 0x85cc0bd/0x86b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164265984 unmapped: 12992512 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baa000/0x0/0x4ffc00000, data 0x85ce0bd/0x86b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164274176 unmapped: 12984320 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164274176 unmapped: 12984320 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baa000/0x0/0x4ffc00000, data 0x85ce0bd/0x86b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164282368 unmapped: 12976128 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2428569 data_alloc: 268435456 data_used: 52166656
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164282368 unmapped: 12976128 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164282368 unmapped: 12976128 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baa000/0x0/0x4ffc00000, data 0x85ce0bd/0x86b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164315136 unmapped: 12943360 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baa000/0x0/0x4ffc00000, data 0x85ce0bd/0x86b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164315136 unmapped: 12943360 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baa000/0x0/0x4ffc00000, data 0x85ce0bd/0x86b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164315136 unmapped: 12943360 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2428745 data_alloc: 268435456 data_used: 52166656
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164323328 unmapped: 12935168 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baa000/0x0/0x4ffc00000, data 0x85ce0bd/0x86b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164323328 unmapped: 12935168 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.494370461s of 12.032884598s, submitted: 8
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164323328 unmapped: 12935168 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f2baa000/0x0/0x4ffc00000, data 0x85ce0bd/0x86b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164323328 unmapped: 12935168 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 164323328 unmapped: 12935168 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a592c00 session 0x56506803fe00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2429189 data_alloc: 268435456 data_used: 52187136
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1a400 session 0x565068bc3c20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067a35800 session 0x5650680532c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 160505856 unmapped: 16752640 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506b33e800 session 0x565069920000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067f73c00 session 0x565067e1fc20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4284000/0x0/0x4ffc00000, data 0x6ef50bd/0x6fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2179281 data_alloc: 251658240 data_used: 41869312
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4284000/0x0/0x4ffc00000, data 0x6ef50bd/0x6fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4284000/0x0/0x4ffc00000, data 0x6ef50bd/0x6fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4284000/0x0/0x4ffc00000, data 0x6ef50bd/0x6fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2179281 data_alloc: 251658240 data_used: 41869312
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4284000/0x0/0x4ffc00000, data 0x6ef50bd/0x6fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2179281 data_alloc: 251658240 data_used: 41869312
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 18366464 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.793655396s of 19.099811554s, submitted: 56
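The _kv_sync_thread utilization line reads most naturally as a busy percentage; worked out from the two durations it reports (an informal calculation, not Ceph output), the sync thread was active for well under a second of the roughly 19 s window:

    idle, window = 18.793655396, 19.099811554   # seconds, from the line above
    busy = window - idle
    print(f"kv_sync busy {busy:.3f}s of {window:.3f}s "
          f"({100 * busy / window:.1f}%) across 56 submitted transactions")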
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 18063360 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 161816576 unmapped: 15441920 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f39a8000/0x0/0x4ffc00000, data 0x77d10bd/0x78b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 159416320 unmapped: 17842176 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2252907 data_alloc: 251658240 data_used: 42627072
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 159440896 unmapped: 17817600 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f39a0000/0x0/0x4ffc00000, data 0x77d90bd/0x78be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 159440896 unmapped: 17817600 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593400 session 0x565067e050e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593000 session 0x565068bc2d20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f39a0000/0x0/0x4ffc00000, data 0x77d90bd/0x78be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2126937 data_alloc: 251658240 data_used: 37675008
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565069e1a400 session 0x565069920d20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 22568960 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4659000/0x0/0x4ffc00000, data 0x6b2005b/0x6c04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 22568960 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.560870171s of 12.645262718s, submitted: 119
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067ffa000 session 0x565067a4ad20
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 22560768 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2126097 data_alloc: 251658240 data_used: 37675008
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 154730496 unmapped: 22528000 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067f73c00 session 0x56506a535e00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146259968 unmapped: 30998528 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146268160 unmapped: 30990336 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146268160 unmapped: 30990336 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146268160 unmapped: 30990336 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146268160 unmapped: 30990336 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146276352 unmapped: 30982144 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146276352 unmapped: 30982144 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146276352 unmapped: 30982144 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146284544 unmapped: 30973952 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146284544 unmapped: 30973952 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 30965760 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 30965760 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 30965760 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 30965760 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 30965760 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146292736 unmapped: 30965760 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146300928 unmapped: 30957568 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146300928 unmapped: 30957568 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146300928 unmapped: 30957568 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146309120 unmapped: 30949376 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146309120 unmapped: 30949376 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146309120 unmapped: 30949376 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146317312 unmapped: 30941184 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146317312 unmapped: 30941184 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146317312 unmapped: 30941184 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146325504 unmapped: 30932992 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146325504 unmapped: 30932992 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146325504 unmapped: 30932992 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146325504 unmapped: 30932992 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 30924800 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 30924800 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 30924800 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146333696 unmapped: 30924800 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146341888 unmapped: 30916608 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 30908416 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 30908416 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146350080 unmapped: 30908416 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146358272 unmapped: 30900224 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146358272 unmapped: 30900224 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146358272 unmapped: 30900224 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5f5a000/0x0/0x4ffc00000, data 0x521fff9/0x5303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 146358272 unmapped: 30900224 heap: 177258496 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593000 session 0x56506a5a52c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593400 session 0x565067d5a3c0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506b33e800 session 0x565067d5a5a0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506b33e800 session 0x565068abfa40
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1867455 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 110.723114014s of 112.060089111s, submitted: 47
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067f73c00 session 0x565069920780
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 145555456 unmapped: 47448064 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565067ffa000 session 0x5650698d10e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5581000/0x0/0x4ffc00000, data 0x5bf9ff9/0x5cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 145563648 unmapped: 47439872 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593000 session 0x5650698d0780
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1937703 data_alloc: 234881024 data_used: 26071040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 145563648 unmapped: 47439872 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593400 session 0x565068052000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a593400 session 0x5650671cd860
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5581000/0x0/0x4ffc00000, data 0x5bf9ff9/0x5cdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 145899520 unmapped: 47104000 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 145899520 unmapped: 47104000 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1978099 data_alloc: 234881024 data_used: 30863360
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 145899520 unmapped: 47104000 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2017939 data_alloc: 251658240 data_used: 36405248
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2017939 data_alloc: 251658240 data_used: 36405248
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2017939 data_alloc: 251658240 data_used: 36405248
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2017939 data_alloc: 251658240 data_used: 36405248
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 45162496 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 45154304 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 45154304 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2017939 data_alloc: 251658240 data_used: 36405248
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 45154304 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 45154304 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 45154304 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f5556000/0x0/0x4ffc00000, data 0x5c2401c/0x5d08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 rsyslogd[187556]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 45154304 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 45154304 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2017939 data_alloc: 251658240 data_used: 36405248
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147857408 unmapped: 45146112 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 147857408 unmapped: 45146112 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.581501007s of 41.749134064s, submitted: 16
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151724032 unmapped: 41279488 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151969792 unmapped: 41033728 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a97000/0x0/0x4ffc00000, data 0x66e301c/0x67c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2115467 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a70000/0x0/0x4ffc00000, data 0x670a01c/0x67ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2115683 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 40992768 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 40984576 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 40984576 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2115683 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 40984576 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 40984576 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 40984576 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 40984576 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 40984576 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2115683 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 40984576 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-mon[191783]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2115683 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2115683 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 40976384 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152035328 unmapped: 40968192 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152035328 unmapped: 40968192 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152035328 unmapped: 40968192 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.389015198s of 32.670913696s, submitted: 59
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a881800 session 0x56506b994000
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506c01a000 session 0x56506a82e780
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 41967616 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 41959424 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 41959424 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 41959424 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 41959424 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 41959424 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 41959424 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 41959424 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 41959424 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151052288 unmapped: 41951232 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565068c85c00 session 0x565069921680
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2114275 data_alloc: 251658240 data_used: 36823040
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151060480 unmapped: 41943040 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 47.122699738s of 47.131389618s, submitted: 1
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151093248 unmapped: 41910272 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151093248 unmapped: 41910272 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151093248 unmapped: 41910272 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151093248 unmapped: 41910272 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2116179 data_alloc: 251658240 data_used: 37015552
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151093248 unmapped: 41910272 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151093248 unmapped: 41910272 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a6e000/0x0/0x4ffc00000, data 0x670c01c/0x67f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151191552 unmapped: 41811968 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151191552 unmapped: 41811968 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151191552 unmapped: 41811968 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2116807 data_alloc: 251658240 data_used: 37015552
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151191552 unmapped: 41811968 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a55000/0x0/0x4ffc00000, data 0x672501c/0x6809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151191552 unmapped: 41811968 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a55000/0x0/0x4ffc00000, data 0x672501c/0x6809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2116807 data_alloc: 251658240 data_used: 37015552
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a55000/0x0/0x4ffc00000, data 0x672501c/0x6809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a55000/0x0/0x4ffc00000, data 0x672501c/0x6809000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117287 data_alloc: 251658240 data_used: 37027840
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151199744 unmapped: 41803776 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 151207936 unmapped: 41795584 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.1 total, 600.0 interval
Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
Cumulative WAL: 12K writes, 3634 syncs, 3.41 writes per sync, written: 0.03 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2316 writes, 9075 keys, 2316 commit groups, 1.0 writes per commit group, ingest: 10.78 MB, 0.02 MB/s
Interval WAL: 2316 writes, 906 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.843839645s of 21.885229111s, submitted: 5
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 40656896 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 40656896 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117775 data_alloc: 251658240 data_used: 37027840
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152428544 unmapped: 40574976 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152428544 unmapped: 40574976 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152428544 unmapped: 40574976 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506c01a400 session 0x56506872c960
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117775 data_alloc: 251658240 data_used: 37027840
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x56506a536000 session 0x565068c481e0
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x565068c96c00 session 0x56506aa2be00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117775 data_alloc: 251658240 data_used: 37027840
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 40566784 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117775 data_alloc: 251658240 data_used: 37027840
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.176465988s of 19.201866150s, submitted: 3
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117775 data_alloc: 251658240 data_used: 37027840
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117775 data_alloc: 251658240 data_used: 37027840
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.226624489s of 10.238464355s, submitted: 2
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 40558592 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152453120 unmapped: 40550400 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152453120 unmapped: 40550400 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117951 data_alloc: 251658240 data_used: 37027840
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152453120 unmapped: 40550400 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152453120 unmapped: 40550400 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152453120 unmapped: 40550400 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152453120 unmapped: 40550400 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f4a3f000/0x0/0x4ffc00000, data 0x673b01c/0x681f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152453120 unmapped: 40550400 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  3 11:42:39 compute-0 ceph-osd[206733]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 40542208 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 40534016 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 40525824 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: osd.1 149 ms_handle_reset con 0x5650698fc800 session 0x56506a82fe00
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 65.300338745s of 65.308837891s, submitted: 1
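The _kv_sync_thread line quantifies how busy the thread that commits transactions to RocksDB has been: idle for 65.300 s of a 65.309 s window with a single submitted batch, meaning this OSD is essentially unloaded. The busy fraction works out to about 0.013%:

    # Busy fraction for the _kv_sync_thread utilization report above.
    idle, total = 65.300338745, 65.308837891
    busy = total - idle
    print(f"busy {busy*1000:.2f} ms of {total:.1f} s -> {busy/total:.4%}")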
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152485888 unmapped: 40517632 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: bluestore.MempoolThread(0x5650664c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2117423 data_alloc: 251658240 data_used: 37031936
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 40509440 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 40493056 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 40484864 heap: 193003520 old mem: 2845415832 new mem: 2845415832
Oct  3 11:42:39 compute-0 ceph-osd[206733]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 40460288 heap: 193003520 old mem: 2845415832 new mem: 2845415832
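Across the tune_memory lines above, mapped creeps from 152453120 to 152543232 bytes, about 88 KiB over the sampling window. A quick check of the deltas (values copied from the log) shows they are all multiples of 8 KiB, consistent with ordinary page-granularity heap growth rather than anything the tuner would need to react to:

    # Step sizes between successive mapped values in the tune_memory
    # lines above, copied from the log in order of appearance.
    mapped = [152453120, 152461312, 152469504, 152477696, 152485888,
              152494080, 152510464, 152518656, 152543232]
    deltas = [b - a for a, b in zip(mapped, mapped[1:])]
    print(deltas)   # [8192, 8192, 8192, 8192, 8192, 16384, 8192, 24576]
    assert all(d % 8192 == 0 for d in deltas)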
